AWS Announces Two New Capabilities to Move Toward a Zero-ETL Future on AWS
At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), today announced two new integrations that make it easier for customers to connect and analyze data across data stores without having to move data between services. Today’s announcement enables customers to analyze Amazon Aurora data with Amazon Redshift in near real time, eliminating the need to extract, transform, and load (ETL) data between services. Customers can also now easily run Apache Spark applications on Amazon Redshift data using AWS analytics and machine learning (ML) services (e.g., Amazon EMR, AWS Glue, and Amazon SageMaker). Together, these new capabilities help customers move toward a zero-ETL future on AWS. To learn more about unlocking the value of data using AWS, visit aws.amazon.com/data.
“The vastness and complexity of data that customers manage today means they cannot analyze and explore it with a single technology or even a small set of tools. Many of our customers rely on multiple AWS database and analytics services to extract value from their data, and ensuring they have access to the right tool for the job is important to their success,” said Swami Sivasubramanian, vice president of Databases, Analytics, and Machine Learning at AWS. “The new capabilities announced today help us move customers toward a zero-ETL future on AWS, reducing the need to manually move or transform data between services. By eliminating ETL and other data movement tasks for our customers, we are freeing them to focus on analyzing data and driving new insights for their business—regardless of the size and complexity of their organization and data.”
Data is at the center of every application, process, and business decision, and is the cornerstone of almost every organization’s digital transformation. But real-world data systems are often sprawling and complex, with diverse data dispersed across multiple services and on-premises systems. Many organizations are sitting on a treasure trove of data and want to maximize the value they get out of it. AWS provides a range of purpose-built tools, such as Amazon Aurora for storing transactional data in MySQL- and PostgreSQL-compatible relational databases, and Amazon Redshift for running high-performance data warehousing and analytics workloads on petabytes of data. But to truly maximize the value of data, customers need these tools to work together seamlessly. That is why AWS has invested in zero-ETL capabilities like Amazon Aurora ML and Amazon Redshift ML, which let customers take advantage of Amazon SageMaker for ML-powered use cases without moving data between services. Additionally, AWS provides seamless data ingestion from AWS streaming services (e.g., Amazon Kinesis and Amazon MSK) into a wide range of AWS data stores, such as Amazon Simple Storage Service (Amazon S3) and Amazon OpenSearch Service, so customers can analyze data as soon as it is available. Today’s announcement builds on the strength and deep integrations of AWS’s database and analytics portfolio to make it faster, easier, and more cost-effective for customers to access and analyze data across data stores on AWS.