Amazon DocumentDB now supports Internet Protocol Version 6 (IPv6)

Amazon DocumentDB now offers customers the option to use Internet Protocol version 6 (IPv6) addresses on new and existing clusters. Customers moving to IPv6 can simplify their network stack by running their databases on a dual-stack network that supports both IPv4 and IPv6. Because IPv6 greatly increases the number of available addresses, customers no longer need to manage overlapping IPv4 address spaces in their VPCs (Virtual Private Clouds). Customers can standardize their applications on the new version of the Internet Protocol by moving to dual-stack mode with a few clicks in the AWS Management Console or directly through the AWS CLI.

Amazon DocumentDB is a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon DocumentDB support for IPv6 is generally available on versions 4.0 and 5.0 in the AWS Regions listed in Dual-stack mode Region and version availability. To learn more about configuring your environment for IPv6, refer to Amazon VPC and Amazon DocumentDB.
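For customers using the AWS CLI, the switch might look like the following sketch; the `--network-type` parameter is an assumption based on the RDS-family API and should be verified against the current DocumentDB CLI reference:

```shell
# Hypothetical sketch: move an existing cluster to dual-stack (IPv4 + IPv6).
# The --network-type parameter is an assumption mirroring the RDS-family API;
# the cluster identifier is a placeholder.
aws docdb modify-db-cluster \
    --db-cluster-identifier my-docdb-cluster \
    --network-type DUAL \
    --apply-immediately
```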
Source: aws.amazon.com

Amazon EC2 now supports CPU options optimization for license-included instances

Amazon EC2 now allows customers to modify an instance’s CPU options to optimize the licensing costs of Microsoft Windows license-included workloads. You can now customize the number of vCPUs and/or disable hyperthreading on Windows Server and SQL Server license-included instances to save on vCPU-based licensing costs.

This enhancement is particularly valuable for database workloads like Microsoft SQL Server that require high memory and IOPS but lower vCPU counts. By modifying CPU options, you can reduce vCPU-based licensing costs while maintaining memory and IOPS performance, achieve higher memory-to-vCPU ratios, and customize CPU settings to match your specific workload requirements. For example, on an r7i.8xlarge instance running Windows Server and SQL Server license-included, you can turn off hyperthreading to reduce the default 32 vCPUs to 16, saving 50% on licensing costs while still getting the 256 GiB of memory and 40,000 IOPS that come with the instance.

This feature is available in all commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more, see CPU options in the Amazon EC2 User Guide and read this blog post.
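The savings in the r7i.8xlarge example are simple arithmetic, sketched below; only the vCPU counts come from the announcement, and the per-vCPU rate is a made-up placeholder, not a real price:

```python
# Back-of-the-envelope check of the licensing example in the announcement:
# disabling hyperthreading halves the vCPU count, and vCPU-based license
# cost scales linearly with vCPUs.

def license_cost(vcpus: int, price_per_vcpu_hour: float) -> float:
    """Hourly vCPU-based licensing cost (rate is a placeholder, not a real price)."""
    return vcpus * price_per_vcpu_hour

default_vcpus = 32        # r7i.8xlarge default (16 cores x 2 threads per core)
tuned_vcpus = 16          # hyperthreading disabled: 1 thread per core
placeholder_rate = 0.10   # hypothetical $/vCPU-hour, for illustration only

before = license_cost(default_vcpus, placeholder_rate)
after = license_cost(tuned_vcpus, placeholder_rate)
savings = 1 - after / before
print(f"licensing savings: {savings:.0%}")  # 50%, matching the example
```

Memory (256 GiB) and IOPS (40,000) are unchanged by this tuning, which is why the memory-to-vCPU ratio doubles.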
Source: aws.amazon.com

Amazon Location Service Introduces New Map Styling Features for Enhanced Customization

Today, AWS announced enhanced map styling features for Amazon Location Service, enabling users to further customize maps with terrain visualization, contour lines, real-time traffic data, and transportation-specific routing information. Developers can create more detailed and informative maps tailored to use cases such as outdoor navigation, logistics planning, and traffic management by using parameters like terrain, contour-density, traffic, and travel-mode through the GetStyleDescriptor API.

With these styling capabilities, users can overlay real-time traffic conditions, visualize transportation-specific routing information such as transit and truck routes, and display topographic features through elevation shading. For instance, developers can display current traffic conditions for optimized route planning, show truck-specific routing restrictions for logistics applications, or create maps that highlight physical terrain details for hiking and outdoor activities. These features can be combined for richer, more informative map visualizations across diverse use cases.

These new map styling features are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To learn more, please visit the Developer Guide.
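As a rough sketch of how these parameters might be passed, the snippet below composes a style-descriptor request URL; only the parameter names come from the announcement, while the endpoint, style name, and overall URL shape are assumptions to check against the Developer Guide:

```python
# Illustrative sketch only: builds a style-descriptor URL carrying the new
# styling parameters (terrain, traffic, etc.). The endpoint and path shape
# are assumptions, not the documented wire format.
from urllib.parse import urlencode

def style_descriptor_url(base: str, style: str, **params: str) -> str:
    """Compose a hypothetical GetStyleDescriptor URL with query parameters."""
    return f"{base}/styles/{style}/descriptor?{urlencode(params)}"

url = style_descriptor_url(
    "https://maps.geo.us-east-1.amazonaws.com/v2",  # hypothetical endpoint
    "Standard",                                     # hypothetical style name
    terrain="enabled",
    traffic="enabled",
)
print(url)
```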
Source: aws.amazon.com

Amazon Timestream now supports InfluxDB 3

Amazon Timestream for InfluxDB now offers support for InfluxDB 3, so application developers and DevOps teams can run InfluxDB 3 databases as a managed service. InfluxDB 3 uses a new architecture for the InfluxDB database engine, built on Apache Arrow for in-memory data processing, Apache DataFusion for query execution, and the columnar Parquet storage format with data persistence in Amazon S3, delivering fast performance on high-cardinality data and large-scale analytical workloads. With Amazon Timestream for InfluxDB 3, customers get improved query performance and resource utilization for data-intensive use cases while benefiting from virtually unlimited storage capacity through S3-based object storage.

The service is available in two editions: Core, the open-source version of InfluxDB 3, for near-real-time workloads focused on recent data, and Enterprise for production workloads requiring high availability, multi-node deployments, and essential compaction capabilities for long-term storage. The Enterprise edition supports multi-node cluster configurations with up to 3 nodes initially, providing enhanced availability, improved performance for concurrent queries, and greater system resilience.

Amazon Timestream for InfluxDB 3 is available in all Regions where Timestream for InfluxDB is available; see here for a full listing. To get started, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.
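Writes to InfluxDB, including InfluxDB 3, use the line protocol text format (measurement, tag set, field set, timestamp). The helper below sketches that format without any client library, so it runs standalone; the measurement, tag, and field names are made-up examples:

```python
# Minimal sketch of InfluxDB line protocol, the text format used for writes:
#   measurement,tag1=v1,tag2=v2 field1=v1 timestamp_ns
# Client-library and network calls are omitted; this only builds the string.

def line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = line_protocol(
    "cpu",                                      # example measurement
    {"host": "server01", "region": "us-west"},  # example tags
    {"usage_idle": 87.2},                       # example field
    1700000000000000000,                        # nanosecond timestamp
)
print(point)  # cpu,host=server01,region=us-west usage_idle=87.2 1700000000000000000
```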
Source: aws.amazon.com

AWS Security Hub CSPM now supports CIS AWS Foundations Benchmark v5.0

AWS Security Hub Cloud Security Posture Management (CSPM) now supports the Center for Internet Security (CIS) AWS Foundations Benchmark v5.0. This industry-standard benchmark provides security configuration best practices for AWS with clear implementation and assessment procedures. The new standard includes 40 controls that perform automated checks against AWS resources to evaluate compliance with the latest version 5.0 requirements.

The standard is now available in all AWS Regions where Security Hub CSPM is currently available, including the AWS GovCloud (US) and the China Regions. To quickly enable the standard across your AWS environment, we recommend that you use Security Hub CSPM central configuration. With this approach, you can enable the standard in all or only some of your organization’s accounts and across all AWS Regions that are linked to Security Hub CSPM with a single action.

To learn more, see CIS v5.0 in the AWS Security Hub CSPM User Guide. To receive notifications about new Security Hub CSPM features and controls, subscribe to the Security Hub CSPM SNS topic. You can also try Security Hub at no cost for 30 days with the AWS Free Tier offering.
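For a single account, enabling the standard from the CLI could look like the following sketch; `batch-enable-standards` is an existing Security Hub operation, but the exact v5.0 standards ARN below is an assumption and should be confirmed with `aws securityhub describe-standards` before use:

```shell
# Hypothetical sketch: enable CIS AWS Foundations Benchmark v5.0 in one
# account and Region. The standards ARN is an assumed value; confirm it
# with `aws securityhub describe-standards`.
aws securityhub batch-enable-standards \
    --standards-subscription-requests \
    'StandardsArn=arn:aws:securityhub:us-east-1::standards/cis-aws-foundations-benchmark/v/5.0.0'
```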
Source: aws.amazon.com

AWS Global Accelerator now supports endpoints in two additional AWS Regions

Starting today, AWS Global Accelerator supports application endpoints in two additional AWS Regions, the Asia Pacific (Thailand) and Asia Pacific (Taipei) Regions, expanding the number of supported AWS Regions to thirty-three.
AWS Global Accelerator is a service that is designed to improve the availability, security, and performance of your internet-facing applications. By using the congestion-free AWS network, end-user traffic to your applications benefits from increased availability, DDoS protection at the edge, and higher performance relative to the public internet. Global Accelerator provides static IP addresses that act as fixed entry points for your application resources in one or more AWS Regions, such as your Application Load Balancers, Network Load Balancers, Amazon EC2 instances, or Elastic IPs. Global Accelerator continually monitors the health of your application endpoints and offers deterministic failover for multi-Region workloads without any DNS dependencies.
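A minimal CLI sketch of wiring an application endpoint in one of the newly supported Regions might look like the following; the listener and load balancer ARNs are placeholders, the intermediate `create-listener` step is omitted, and the Thailand Region code (`ap-southeast-7`) is an assumption to verify:

```shell
# Sketch: create an accelerator, then attach an endpoint group in a newly
# supported Region. <listener-arn> and <alb-arn> are placeholders; a
# create-listener call between these two steps is omitted for brevity.
aws globalaccelerator create-accelerator \
    --name my-app-accelerator \
    --ip-address-type IPV4

aws globalaccelerator create-endpoint-group \
    --listener-arn <listener-arn> \
    --endpoint-group-region ap-southeast-7 \
    --endpoint-configurations EndpointId=<alb-arn>,Weight=100
```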
To get started, visit the AWS Global Accelerator website and review its documentation.
Source: aws.amazon.com

Amazon EC2 C8gn instances are now available in additional Regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the AWS Asia Pacific (Malaysia, Sydney, Thailand) Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th-generation AWS Nitro Cards and offer up to 600 Gbps network bandwidth, the highest among network-optimized EC2 instances.

Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference. For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.

C8gn instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon, N. California), Europe (Frankfurt, Stockholm), and Asia Pacific (Singapore, Malaysia, Sydney, Thailand). To learn more, see Amazon C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
Source: aws.amazon.com

Amazon Aurora PostgreSQL zero-ETL integration with Amazon SageMaker is now available

Amazon Aurora PostgreSQL-Compatible Edition now supports zero-ETL integration with Amazon SageMaker, enabling near-real-time data availability for analytics workloads. This integration automatically extracts and loads data from PostgreSQL tables into your lakehouse, where it is immediately accessible through various analytics engines and machine learning tools. The data synced into the lakehouse is compatible with Apache Iceberg open standards, so you can use your preferred analytics tools and query engines, such as SQL, Apache Spark, BI, and AI/ML tools.

Through a simple no-code interface, you can create and maintain an up-to-date replica of your PostgreSQL data in your lakehouse without impacting production workloads. The integration features comprehensive, fine-grained access controls that are consistently enforced across all analytics tools and engines, ensuring secure data sharing throughout your organization. As a complement to the existing zero-ETL integrations with Amazon Redshift, this solution reduces operational complexity while enabling you to derive immediate insights from your operational data.

Amazon Aurora PostgreSQL zero-ETL integration with Amazon SageMaker is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Stockholm) AWS Regions. To learn more, visit What is zero-ETL. To begin using this new integration, visit the zero-ETL documentation for Aurora PostgreSQL.
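As a hedged illustration of the analytics side, once the integration is active the replicated Iceberg-compatible tables can be queried from any supporting engine; the catalog, schema, and table names below are placeholders, not names the service creates:

```sql
-- Hypothetical example: aggregate operational data replicated into the
-- lakehouse via zero-ETL. All identifiers are illustrative placeholders.
SELECT order_status, COUNT(*) AS order_count
FROM lakehouse_catalog.sales_db.orders
GROUP BY order_status
ORDER BY order_count DESC;
```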
Source: aws.amazon.com

AWS SAM CLI adds Finch support, expanding local development tool options for serverless applications

AWS Serverless Application Model Command Line Interface (SAM CLI) now supports Finch as an alternative to Docker for local development and testing of serverless applications. This gives developers greater flexibility in choosing their preferred local development environment when building and testing serverless applications with SAM CLI.

Developers building serverless applications spend significant time in their local development environments. SAM CLI is a command-line tool that lets you build, test, debug, and package serverless applications locally before deploying to the AWS Cloud. To provide this local environment, SAM CLI relies on a tool that can run containers on your local device; previously, Docker was the only supported option.

Starting today, SAM CLI also supports Finch, an open-source tool developed and supported by AWS for local container development. You can now choose between Docker and Finch as your preferred container tool and use SAM CLI to invoke Lambda functions locally, test API endpoints, and debug serverless applications with the same experience you would have in the AWS Cloud. With Finch support, SAM CLI automatically detects and uses Finch when Docker is not available, and you can also set Finch as your preferred container tool. This feature supports all core SAM CLI commands, including sam build, sam local invoke, sam local start-api, and sam local start-lambda.

To learn more about using SAM CLI with Finch, visit the SAM CLI developer guide.
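The day-to-day workflow is unchanged; the sketch below uses the commands named in the announcement, with a placeholder function name and event file:

```shell
# Core SAM CLI workflow. When Docker is not available, SAM CLI now detects
# and uses Finch automatically, so no flags change. "MyFunction" and
# events/event.json are placeholders for your own resources.
sam build --use-container                              # build inside a container
sam local invoke MyFunction --event events/event.json  # run one function locally
sam local start-api                                    # serve API routes locally
```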
Source: aws.amazon.com

Amazon WorkSpaces Core Managed Instances is now available in 5 additional AWS Regions

AWS today announced Amazon WorkSpaces Core Managed Instances availability in US East (Ohio), Asia Pacific (Malaysia), Asia Pacific (Hong Kong), Middle East (UAE), and Europe (Spain), bringing Amazon WorkSpaces capabilities to these AWS Regions for the first time. WorkSpaces Core Managed Instances in these Regions is supported by partners including Citrix, Workspot, Leostream, and Dizzion.

Amazon WorkSpaces Core Managed Instances simplifies virtual desktop infrastructure (VDI) migrations with highly customizable instance configurations. It provisions resources in your AWS account, handling infrastructure lifecycle management for both persistent and non-persistent workloads, and provides flexibility for organizations requiring specific compute, memory, or graphics configurations. With WorkSpaces Core Managed Instances, you can use existing discounts, Savings Plans, and other features like On-Demand Capacity Reservations (ODCRs), with the operational simplicity of WorkSpaces – all within the security and governance boundaries of your AWS account. This solution is ideal for organizations migrating from on-premises VDI environments or existing AWS customers seeking enhanced cost optimization without sacrificing control over their infrastructure configurations. You can use a broad selection of instance types, including accelerated graphics instances, while your Core partner solution handles desktop and application provisioning and session management through familiar administrative tools.

Customers incur standard compute costs along with an hourly fee for WorkSpaces Core. See the WorkSpaces Core pricing page for more information. To learn more about Amazon WorkSpaces Core Managed Instances, visit the product page. For technical documentation and getting-started guides, see the Amazon WorkSpaces Core Documentation.
Source: aws.amazon.com