Amazon U7i instances now available in AWS Europe (Ireland) Region

Starting today, Amazon EC2 High Memory U7i instances with 12TB of memory (u7i-12tb.224xlarge) are available in the AWS Europe (Ireland) Region. U7i-12tb instances are part of the AWS 7th-generation instance family and are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-12tb instances offer 12TB of DDR5 memory, enabling customers to scale transaction-processing throughput in fast-growing data environments. U7i-12tb instances offer 896 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers running mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
Source: aws.amazon.com

Amazon EC2 M8i and M8i-flex instances are now available in Asia Pacific (Mumbai) Region

Starting today, Amazon EC2 M8i and M8i-flex instances are available in the Asia Pacific (Mumbai) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep-learning recommendation models. M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources. M8i instances are a great choice for all general-purpose workloads, especially those that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances come in 13 sizes, including two bare-metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i and M8i-flex page or the AWS News Blog.
Source: aws.amazon.com

Amazon Redshift now supports writing to Apache Iceberg tables

Amazon Redshift today announces the general availability of write capability for Apache Iceberg tables, enabling users to run both read and write analytics queries for append-only workloads on Apache Iceberg tables within Amazon Redshift. Amazon Redshift is a petabyte-scale, enterprise-grade cloud data warehouse service used by tens of thousands of customers. Whether your data is stored in operational data stores, data lakes, streaming engines, or within your data warehouse, Amazon Redshift helps you quickly ingest and securely share data, and achieve the best performance at the best price. The Apache Iceberg open table format has been used by many customers to simplify data processing on rapidly expanding and evolving tables stored in data lakes. Customers have been using Amazon Redshift to run queries on data lake tables in various file and table formats, scaling across data warehouse and data lake workloads. Data lake use cases continue to evolve and become increasingly sophisticated, requiring capabilities like transactional consistency for record-level updates and deletes alongside seamless schema and partition evolution. With this milestone, Amazon Redshift now supports SQL DDL (data definition language) operations to CREATE an Apache Iceberg table, SHOW the table definition SQL, and DROP the table, as well as DML (data manipulation language) operations such as INSERT. You can continue to use Amazon Redshift to read from your Apache Iceberg tables in the AWS Glue Data Catalog and perform write operations on those tables while other users or applications safely run DML operations on them. Apache Iceberg support in Amazon Redshift is available in all AWS Regions where Amazon Redshift is available. To get started, visit the Amazon Redshift Management Guide documentation.
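As a rough illustration of the append-only workflow described above, the following Python sketch builds the CREATE / INSERT / SHOW / DROP statement shapes. The schema, table, and column names are hypothetical, and exact Iceberg DDL options may vary, so consult the Redshift documentation; in practice you would run these statements from a Redshift SQL client or the Redshift Data API.

```python
# Sketch of the append-only Iceberg workflow: DDL to create and drop the
# table, DML (INSERT) to append rows, and SHOW to inspect the definition.
# All identifiers below are illustrative placeholders.

def iceberg_statements(schema: str, table: str) -> list[str]:
    """Build the CREATE / INSERT / SHOW / DROP statements for an Iceberg table."""
    qualified = f"{schema}.{table}"
    return [
        # DDL: create an Apache Iceberg table (schema registered in Glue Data Catalog)
        f"CREATE TABLE {qualified} (event_id BIGINT, payload VARCHAR(256))",
        # DML: append-only writes, e.g. INSERT
        f"INSERT INTO {qualified} VALUES (1, 'first event')",
        # Inspect the generated table definition SQL
        f"SHOW TABLE {qualified}",
        # DDL: clean up
        f"DROP TABLE {qualified}",
    ]

for stmt in iceberg_statements("lakehouse_schema", "events"):
    print(stmt)
```

The sketch only assembles statement text so it can be reviewed before submission; other readers and writers can continue to operate on the same Iceberg table while these statements run.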
Source: aws.amazon.com

Amazon RDS Blue/Green deployments now support Aurora Global Database

Amazon RDS Blue/Green deployments now support safer, simpler, and faster updates for your Aurora Global Databases. With just a few clicks, you can create a staging (green) environment that mirrors your production (blue) Aurora Global Database, including the primary and all secondary Regions. When you’re ready to make your staging environment the new production environment, perform a blue/green switchover. This operation transitions your primary and all secondary Regions to the green environment, which then serves as the active production environment. Your application begins accessing it immediately without any configuration changes, minimizing operational overhead. With Global Database, a single Aurora cluster can span multiple AWS Regions, providing disaster recovery for your applications in case of a single-Region impairment and enabling fast local reads for globally distributed applications. With this launch, you can perform critical database operations, including major and minor version upgrades, OS updates, parameter modifications, instance type validations, and schema changes, with minimal downtime. During a blue/green switchover, Aurora automatically renames clusters, instances, and endpoints to match the original production environment, enabling applications to continue operating without any modifications. You can use this capability through the AWS Management Console, SDK, or CLI. It is available in the Amazon Aurora MySQL-Compatible Edition and Amazon Aurora PostgreSQL-Compatible Edition versions that support the Aurora Global Database configuration, and in all commercial AWS Regions and AWS GovCloud (US) Regions. Start planning your next Global Database upgrade using RDS Blue/Green deployments by following the steps in the blog. For more details, refer to our documentation.
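A minimal sketch of driving this flow with boto3. The `create_blue_green_deployment` and `switchover_blue_green_deployment` API names exist in the RDS SDK, but the ARN, names, and engine version below are placeholders, and treating the global cluster ARN as the `Source` is an assumption to verify against the documentation; the AWS calls are left in comments so the sketch runs without an AWS account.

```python
# Hypothetical parameter sketch for an Aurora Global Database blue/green
# deployment. All identifiers are placeholders.

def blue_green_params(source_arn: str, target_engine_version: str) -> dict:
    """Parameters for rds.create_blue_green_deployment(**params)."""
    return {
        "BlueGreenDeploymentName": "global-db-upgrade",      # placeholder name
        # The production (blue) environment to mirror; assumed to be the
        # global cluster ARN for a Global Database.
        "Source": source_arn,
        # The staging (green) environment is created on the new engine version.
        "TargetEngineVersion": target_engine_version,
    }

params = blue_green_params(
    "arn:aws:rds::123456789012:global-cluster:example-global",  # placeholder ARN
    "8.0.mysql_aurora.3.08.0",                                  # placeholder version
)
# rds = boto3.client("rds")
# deployment = rds.create_blue_green_deployment(**params)
# ...after validating the green environment:
# rds.switchover_blue_green_deployment(
#     BlueGreenDeploymentIdentifier=deployment["BlueGreenDeployment"][
#         "BlueGreenDeploymentIdentifier"]
# )
print(params["BlueGreenDeploymentName"])
```

After the switchover, the automatic renaming of clusters, instances, and endpoints means the application connection strings stay unchanged.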
Source: aws.amazon.com

AWS IoT Services expand support of VPC endpoints and IPv6 connectivity

AWS IoT Core, AWS IoT Device Management, and AWS IoT Device Defender have expanded support for Virtual Private Cloud (VPC) endpoints and IPv6. Developers can now use AWS PrivateLink to establish VPC endpoints for all data plane operations, management APIs, and the credential provider. This enhancement allows IoT workloads to operate entirely within virtual private clouds without traversing the public internet, helping strengthen the security posture of IoT deployments. Additionally, IPv6 support for both VPC and public endpoints gives developers the flexibility to connect IoT devices and applications using either IPv6 or IPv4. This helps organizations meet local requirements for IPv6 while maintaining compatibility with existing IPv4 infrastructure. These features can be configured through the AWS Management Console, AWS CLI, and AWS CloudFormation. The functionality is now generally available in all AWS Regions where the relevant AWS IoT services are offered. For more information about IPv6 and VPC endpoint support, visit the AWS IoT technical documentation pages. For information about PrivateLink pricing, visit the AWS PrivateLink pricing page.
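A sketch of what creating such an interface endpoint could look like with boto3. The `com.amazonaws.<region>.iot.data` service-name pattern and the resource IDs are assumptions to check against the endpoint service list for your Region; the actual call is left in a comment so the sketch runs offline.

```python
# Hypothetical parameters for an interface VPC endpoint to the AWS IoT Core
# data plane. Service name pattern and resource IDs are assumptions.

def iot_vpc_endpoint_params(region: str, vpc_id: str, subnet_ids: list[str]) -> dict:
    """Parameters for ec2.create_vpc_endpoint(**params)."""
    return {
        "VpcEndpointType": "Interface",
        # Assumed naming pattern for the IoT Core data-plane endpoint service.
        "ServiceName": f"com.amazonaws.{region}.iot.data",
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        # Resolve the service's DNS names to private IPs inside the VPC.
        "PrivateDnsEnabled": True,
    }

params = iot_vpc_endpoint_params(
    "eu-west-1",                       # placeholder Region
    "vpc-0123456789abcdef0",           # placeholder VPC ID
    ["subnet-0123456789abcdef0"],      # placeholder subnet IDs
)
# ec2 = boto3.client("ec2", region_name="eu-west-1")
# ec2.create_vpc_endpoint(**params)
print(params["ServiceName"])
```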
Source: aws.amazon.com

Amazon SageMaker Catalog now supports read and write access to Amazon S3

Amazon SageMaker Catalog now supports read and write access to Amazon S3 general purpose buckets. This capability helps data scientists and analysts search for unstructured data, process it alongside structured datasets, and share transformed datasets with other teams. Data publishers gain additional controls to support analytics and generative AI workflows within SageMaker Unified Studio while maintaining security and governance controls over shared data. 
When approving subscription requests or directly sharing S3 data within the SageMaker Catalog, data producers can choose to grant read-only or read-and-write access. With read-and-write access, data consumers can process datasets in SageMaker and store the results back to the S3 bucket or folder. The data can then be published and made automatically discoverable by other teams. This capability is now available in all AWS Regions where Amazon SageMaker Unified Studio is supported. To get started, log in to SageMaker Unified Studio, or use the Amazon DataZone API, SDK, or AWS CLI. To learn more, see the SageMaker Unified Studio guide.
Source: aws.amazon.com

Amazon ECS improves Service Availability during Rolling deployments

Amazon Elastic Container Service (Amazon ECS) now includes enhancements that improve service availability during rolling deployments. These enhancements help maintain availability when new application version tasks are failing, when current tasks are unexpectedly terminated, or when scale-out is triggered during deployments.
Previously, when tasks in your currently running version became unhealthy or were terminated during a rolling deployment, ECS would attempt to replace them with the new version to prioritize deployment progress. If the new version could not launch successfully—such as when new tasks fail health checks or fail to start—these replacements would fail and your service availability could drop. ECS now replaces unhealthy or terminated tasks using the same service revision they belong to. Unhealthy tasks in your currently running version are replaced with healthy tasks from that same version, independent of the new version’s status. Additionally, when Application Auto Scaling triggers during a rolling deployment, ECS applies scale-out to both service revisions, ensuring your currently running version can handle increased load even if the new version is failing.
These improvements respect your service’s maximumPercent and minimumHealthyPercent settings. They are enabled by default for all services using the rolling deployment strategy and are available in all AWS Regions. To learn more about rolling-update deployments, refer to the Amazon ECS Developer Guide.
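The availability behavior above is governed by the service's deployment configuration. A minimal sketch of the relevant settings follows; the cluster and service names are placeholders, and the boto3 call is commented out so the sketch runs offline.

```python
# Sketch of the deployment configuration that bounds task counts during a
# rolling update. Names below are placeholders.

def rolling_update_config(max_percent: int = 200, min_healthy_percent: int = 100) -> dict:
    """deploymentConfiguration for ecs.update_service(**params)."""
    return {
        # Up to 2x desiredCount tasks may run during a deployment, giving
        # headroom to start new-revision tasks before old ones stop.
        "maximumPercent": max_percent,
        # Never drop below desiredCount healthy tasks during the rollout.
        "minimumHealthyPercent": min_healthy_percent,
    }

params = {
    "cluster": "prod-cluster",      # placeholder
    "service": "web-service",       # placeholder
    "deploymentConfiguration": rolling_update_config(),
}
# ecs = boto3.client("ecs")
# ecs.update_service(**params)
print(params["deploymentConfiguration"])
```

With these bounds in place, the new replacement behavior keeps the running revision at or above minimumHealthyPercent even while a failing new revision is being rolled out.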
Source: aws.amazon.com

AWS Network Firewall is now available in the AWS New Zealand (Auckland) Region

Starting today, AWS Network Firewall is available in the AWS New Zealand (Auckland) Region, enabling customers to deploy essential network protections for all their Amazon Virtual Private Clouds (VPCs). AWS Network Firewall is a managed firewall service that is easy to deploy. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up and maintain the underlying infrastructure. It is integrated with AWS Firewall Manager to provide you with central visibility and control over your firewall policies across multiple AWS accounts. To see which regions AWS Network Firewall is available in, visit the AWS Region Table. For more information, please see the AWS Network Firewall product page and the service documentation.
Source: aws.amazon.com

Amazon EventBridge introduces enhanced visual rule builder

Amazon EventBridge introduces a new console-based visual rule builder with a comprehensive event catalog for discovering and subscribing to events from custom applications and over 200 AWS services. The new rule builder integrates the EventBridge Schema Registry with an updated event catalog and an intuitive drag-and-drop canvas that simplifies building event-driven applications. With the enhanced rule builder, developers can browse and search through events with readily available sample payloads and schemas, eliminating the need to find and reference individual service documentation. The schema-aware visual builder guides developers through creating event filter patterns and rules, reducing syntax errors and development time. The EventBridge enhanced rule builder is available today in all AWS Regions where the Schema Registry is available. Developers can get started through the Amazon EventBridge console at no additional cost beyond standard EventBridge usage charges. For more information, visit the EventBridge documentation.
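Under the hood, the visual builder produces a JSON event pattern like those the console's pattern editor shows. A sketch of an equivalent pattern built programmatically, using a standard EC2 state-change event as the example; the rule name is a placeholder and the `put_rule` call is commented out so the sketch runs offline.

```python
import json

# Build an event pattern matching EC2 instance state-change events, the kind
# of filter the visual rule builder generates from its drag-and-drop canvas.

def ec2_state_change_pattern(states: list[str]) -> str:
    """JSON event pattern for EC2 instance state-change notifications."""
    return json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        # Match only the listed states, e.g. ["running", "stopped"].
        "detail": {"state": states},
    })

pattern = ec2_state_change_pattern(["running", "stopped"])
# events = boto3.client("events")
# events.put_rule(Name="ec2-state-rule", EventPattern=pattern)  # placeholder name
print(pattern)
```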
Source: aws.amazon.com

Announcing EventBridge agreement notifications for AWS Marketplace

AWS Marketplace now delivers purchase agreement events via Amazon EventBridge, transitioning from Amazon Simple Notification Service (SNS) notifications for the Software as a Service and Professional Services product types. This enhancement simplifies event-driven workflows for both sellers and buyers by enabling seamless integration of AWS Marketplace agreements, reducing operational overhead, and improving event monitoring and automation. Marketplace sellers (Independent Software Vendors and Channel Partners) and buyers receive notifications for all events in the lifecycle of their Marketplace agreements, including when they are created, terminated, amended, replaced, renewed, cancelled, or expired. Additionally, ISVs receive license-specific events to manage customer entitlements. With EventBridge integration, you can route these events to various AWS services such as AWS Lambda, Amazon S3, Amazon CloudWatch, AWS Step Functions, and Amazon SNS, maintaining compatibility with existing SNS-based workflows while gaining advanced routing capabilities. EventBridge notifications are generally available and can be created in the AWS US East (N. Virginia) Region. To learn more about AWS Marketplace event notifications, see the AWS Marketplace documentation. You can start using EventBridge notifications today by visiting the Amazon EventBridge console and enabling the ‘aws.agreement-marketplace’ event source.
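A sketch of subscribing to these events with a rule on the ‘aws.agreement-marketplace’ source named in the announcement. The rule name and Lambda target ARN are placeholders, and matching on source alone (narrowing by detail-type once events flow) is a design choice of this sketch; the AWS calls are commented out so it runs offline.

```python
import json

# Minimal event pattern matching all AWS Marketplace agreement events by
# source; a real rule could additionally filter on detail-type per lifecycle
# event (created, terminated, amended, ...).

def agreement_event_pattern() -> dict:
    return {"source": ["aws.agreement-marketplace"]}

pattern = json.dumps(agreement_event_pattern())
# events = boto3.client("events", region_name="us-east-1")  # N. Virginia only
# events.put_rule(Name="marketplace-agreement-events", EventPattern=pattern)
# events.put_targets(
#     Rule="marketplace-agreement-events",
#     Targets=[{
#         "Id": "agreement-handler",
#         # Placeholder Lambda ARN for downstream processing:
#         "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handle-agreements",
#     }],
# )
print(pattern)
```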
Source: aws.amazon.com