AWS Glue launches native REST API connector for universal data integration

AWS Glue now offers a native REST-based connector that enables customers to easily read data from any source with a REST-based API. Customers can now create custom connectors to any REST-enabled data source and seamlessly integrate that data into their AWS Glue ETL (Extract, Transform, and Load) jobs. This capability extends AWS Glue’s existing connectivity to 100+ non-AWS data sources through 60+ native connectors and additional options on AWS Marketplace.

Previously, connecting to proprietary systems or emerging platforms required customers to build custom connectors by providing specialized JARs with the necessary libraries. The new native REST API connector eliminates this complexity: it reduces operational overhead by removing the need to install, update, or manage custom libraries; it enhances flexibility, enabling organizations to quickly adapt to new data sources as business needs evolve; and it streamlines ETL management by letting data engineers focus on data transformation and business logic rather than building and maintaining connector infrastructure.

The AWS Glue REST API connector is available in all AWS commercial Regions where AWS Glue is available. You can start using it through the AWS Glue APIs, the AWS Command Line Interface (CLI), or the AWS SDKs. To get started, see the AWS Glue documentation.
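As a rough illustration, creating such a connector programmatically would go through Glue's connection APIs. The sketch below assembles a `ConnectionInput` payload for `create_connection`; note that the connection type string `"REST"` and the property key names are illustrative assumptions, not confirmed values from the announcement — check the AWS Glue documentation for the exact syntax.

```python
# Hedged sketch: build a ConnectionInput payload for AWS Glue's
# create_connection API. The connection type "REST" and the property
# key names below are illustrative assumptions -- verify them against
# the AWS Glue documentation for the REST connector.

def build_rest_connection_input(name: str, base_url: str,
                                auth_type: str = "NONE") -> dict:
    """Assemble the ConnectionInput dict for glue.create_connection()."""
    return {
        "Name": name,
        "ConnectionType": "REST",             # assumed type identifier
        "ConnectionProperties": {
            "BASE_URL": base_url,             # assumed property key
            "AUTHENTICATION_TYPE": auth_type, # assumed property key
        },
    }

payload = build_rest_connection_input("my-rest-source",
                                      "https://api.example.com/v1")
# The payload would then be submitted with:
#   import boto3
#   boto3.client("glue").create_connection(ConnectionInput=payload)
```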
Source: aws.amazon.com

Amazon WorkSpaces launches Graphics G6, Gr6, and G6f bundles

Today, Amazon WorkSpaces announces the availability of 12 new Graphics G6, Gr6, and G6f WorkSpaces bundles built on the Amazon EC2 G6 family. These bundles expand customers’ options for running graphics-intensive and GPU-accelerated workloads, and are available on both Amazon WorkSpaces Personal and Amazon WorkSpaces Core.
The new bundles are designed to support a wide range of performance, memory, and cost requirements:
- G6 bundles include five sizes with 1:4 vCPU-to-memory configurations, suitable for graphic design, CAD/CAM, and ML model training workloads.
- Gr6 bundles include two sizes with memory-optimized 1:8 vCPU-to-memory configurations, designed for higher-memory workloads such as 3D rendering, seismic visualization, and GIS processing.
- G6f bundles include five sizes and offer fractional GPU options (1/8, 1/4, and 1/2 GPU), enabling cost-effective access to GPU acceleration for workloads that do not require a full GPU.

All Graphics G6, Gr6, and G6f WorkSpaces support Windows Server 2022 and allow customers to bring their own Windows desktop licenses for Windows 11.
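To make the vCPU-to-memory ratios concrete, the small helper below computes the memory a bundle size implies under the announced 1:4 (G6) and 1:8 (Gr6) ratios. The vCPU counts used in the example are illustrative, not an official bundle list.

```python
# Illustrative helper: memory implied by the announced vCPU-to-memory
# ratios (G6 = 1:4, Gr6 = 1:8). The example vCPU counts are assumptions,
# not actual WorkSpaces bundle sizes.

RATIOS = {"G6": 4, "Gr6": 8}  # GiB of memory per vCPU

def memory_gib(family: str, vcpus: int) -> int:
    """Memory in GiB for a given bundle family and vCPU count."""
    return vcpus * RATIOS[family]

print(memory_gib("G6", 8))   # 8 vCPUs at 1:4 -> 32 GiB
print(memory_gib("Gr6", 8))  # 8 vCPUs at 1:8 -> 64 GiB
```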
These bundles are available in 13 AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Paris, Frankfurt, London), Asia Pacific (Tokyo, Mumbai, Sydney, Seoul), South America (São Paulo), and AWS GovCloud (US-West and US-East).
To get started, create a Graphics G6, Gr6, or G6f WorkSpace using the Amazon WorkSpaces console. For pay-as-you-go pricing details, see the Amazon WorkSpaces Pricing Page and the Amazon WorkSpaces Core Pricing Page.
Source: aws.amazon.com

AWS Builder ID now supports Sign in with Apple

AWS Builder ID, your profile for accessing AWS applications including AWS Builder Center, AWS Training and Certification, AWS re:Post, AWS Startups, and Kiro, now supports Sign in with Apple as a social login provider. This expansion of sign-in options builds on the existing Sign in with Google capability, providing Apple users with a streamlined way to access AWS resources without managing separate credentials on AWS.
With Sign in with Apple integration, developers and builders can now access their AWS Builder ID profile using their Apple Account credentials. This enhancement eliminates password management complexity, reduces forgotten-password issues, and provides a frictionless experience for both new user registration and returning user sign-ins. Whether you’re accessing development resources in AWS Builder Center, enrolling in certification programs, participating in community discussions on AWS re:Post, exploring startup resources, or using Kiro to code your next app, your Apple Account now serves as a secure gateway to your AWS builder journey.
To learn more about AWS Builder ID and get started with Sign in with Apple, visit the AWS Builder ID documentation.

Source: aws.amazon.com

Amazon EC2 G6e instances now available in Dubai region

Starting today, Amazon EC2 G6e instances powered by NVIDIA L40S Tensor Core GPUs are available in the Middle East (UAE) Region. G6e instances can be used for a wide range of machine learning and spatial computing use cases.
Customers can use G6e instances to deploy large language models (LLMs) and diffusion models for generating images, video, and audio. Additionally, G6e instances unlock customers’ ability to create larger, more immersive 3D simulations and digital twins for spatial computing workloads. G6e instances feature up to 8 NVIDIA L40S Tensor Core GPUs with 48 GB of memory per GPU and third-generation AMD EPYC processors. They also support up to 192 vCPUs, up to 400 Gbps of network bandwidth, up to 1.536 TB of system memory, and up to 7.6 TB of local NVMe SSD storage.

Amazon EC2 G6e instances are available today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo, Seoul), Middle East (UAE), and Europe (Frankfurt, Spain, Stockholm) Regions. Customers can purchase G6e instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, use the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS SDKs. To learn more, visit the G6e instance page.
Source: aws.amazon.com

Apache Spark lineage now available in Amazon SageMaker Unified Studio for IDC-based domains

Amazon SageMaker announces the general availability of Data Lineage for Apache Spark jobs executed on Amazon EMR and AWS Glue in SageMaker Unified Studio for IDC-based domains. Data Lineage provides the information you need to identify the root cause of complex issues and understand the impact of changes. This feature supports lineage capture of schema and transformations of data assets and columns from Spark executions in Amazon EMR on EC2, EMR Serverless, EMR on EKS, and AWS Glue.

You can explore this lineage visually as a graph in SageMaker Unified Studio or query it using APIs. You can also use lineage to compare transformations across a Spark job’s history. Spark lineage is available in all existing SageMaker Unified Studio Regions. For detailed information on how to get started with lineage using these new features, refer to the documentation.
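For the API route, lineage in SageMaker Unified Studio is surfaced through the underlying Amazon DataZone lineage APIs. The sketch below builds the request parameters for what is assumed here to be a `get_lineage_node` call; the operation name and parameter keys are assumptions to confirm against the documentation.

```python
# Hedged sketch: parameters for querying Spark lineage through the
# Amazon DataZone APIs that underpin SageMaker Unified Studio. The
# operation name (get_lineage_node) and the parameter keys are
# assumptions -- confirm them in the SageMaker Unified Studio docs.

def lineage_request(domain_id: str, node_id: str) -> dict:
    """Build kwargs for a datazone get_lineage_node call (assumed operation)."""
    return {
        "domainIdentifier": domain_id,  # assumed parameter name
        "identifier": node_id,          # assumed parameter name
    }

params = lineage_request("dzd_example", "lineage-node-123")
# Would be invoked roughly as:
#   import boto3
#   boto3.client("datazone").get_lineage_node(**params)
```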
Source: aws.amazon.com

Structured outputs now available in Amazon Bedrock

Amazon Bedrock now supports structured outputs, a capability that provides consistent, machine-readable responses from foundation models that adhere to your defined JSON schemas. Instead of prompting for valid JSON and adding extra checks in your application, you can specify the format you want and receive responses that match it, making production workflows more predictable and resilient.

Structured outputs helps with common production tasks such as extracting key fields and powering workflows that use APIs or tools, where small formatting errors can break downstream systems. By ensuring schema compliance, it reduces the need for custom validation logic and lowers operational overhead through fewer failed requests and retries, so you can confidently deploy AI applications that require predictable, machine-readable outputs. You can use structured outputs in two ways: define a JSON schema that describes the response format you want, or use strict tool definitions to ensure a model’s tool calls match your specifications.

Structured outputs is generally available for Anthropic Claude 4.5 models and select open-weight models across the Converse, ConverseStream, InvokeModel, and InvokeModelWithResponseStream APIs in all commercial AWS Regions where Amazon Bedrock is supported. To learn more about structured outputs and the supported models, visit the Amazon Bedrock documentation.
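To illustrate the first approach, here is the kind of JSON schema you would supply, paired with a tiny structural check that shows the guarantee schema-conformant output gives you: the response parses and validates without custom repair logic. The field names are examples, and the exact Converse request parameter for attaching the schema is not shown here — see the Bedrock documentation for the supported syntax.

```python
import json

# Example JSON schema of the kind you would supply for structured outputs.
# Field names are illustrative; the exact request parameter that attaches
# this schema to a Converse call is documented by Amazon Bedrock.
INVOICE_SCHEMA = {
    "type": "object",
    "properties": {
        "vendor": {"type": "string"},
        "total": {"type": "number"},
        "currency": {"type": "string"},
    },
    "required": ["vendor", "total"],
}

def conforms(doc: dict, schema: dict) -> bool:
    """Tiny structural check (a stand-in for a full JSON Schema validator)."""
    type_map = {"string": str, "number": (int, float)}
    if not all(key in doc for key in schema.get("required", [])):
        return False
    for key, spec in schema["properties"].items():
        if key in doc and not isinstance(doc[key], type_map[spec["type"]]):
            return False
    return True

# A schema-conformant model response parses and validates cleanly:
response_text = '{"vendor": "ACME", "total": 19.99, "currency": "USD"}'
print(conforms(json.loads(response_text), INVOICE_SCHEMA))  # True
```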
Source: aws.amazon.com

Amazon EC2 G7e instances now available in US West (Oregon) region

Starting today, Amazon EC2 G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are available in the US West (Oregon) Region. G7e instances offer up to 2.3x the inference performance of G6e instances.
Customers can use G7e instances to deploy large language models (LLMs), agentic AI models, multimodal generative AI models, and physical AI models. G7e instances offer the highest performance for spatial computing workloads as well as workloads that require both graphics and AI processing capabilities. G7e instances feature up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, with 96 GB of memory per GPU, and 5th Generation Intel Xeon processors. They support up to 192 virtual CPUs (vCPUs) and up to 1600 Gbps of networking bandwidth. G7e instances support NVIDIA GPUDirect Peer to Peer (P2P) that boosts performance for multi-GPU workloads. Multi-GPU G7e instances also support NVIDIA GPUDirect Remote Direct Memory Access (RDMA) with EFA in EC2 UltraClusters, reducing latency for small-scale multi-node workloads.
G7e instances are available in the following AWS Regions: US West (Oregon), US East (N. Virginia), and US East (Ohio). You can purchase G7e instances as On-Demand Instances, Spot Instances, or as part of Savings Plans.
To get started, use the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS SDKs. To learn more, visit the G7e instance page.
Source: aws.amazon.com

Cartesia Sonic 3 text-to-speech model is now available on Amazon SageMaker JumpStart

Cartesia’s Sonic 3 model is now available in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. Sonic 3 is Cartesia’s latest state space model (SSM) for streaming text-to-speech (TTS), delivering high naturalness, accurate transcript following, and industry-leading latency with fine-grained control over volume, speed, and emotion.
Sonic 3 supports 42 languages and provides advanced controllability through API parameters and SSML tags for volume, speed, and emotion adjustments. The model includes natural laughter support, stable voices optimized for voice agents, and emotive voices for expressive characters. With sub-100ms latency, Sonic 3 enables real-time conversational AI that captures human speech nuances, including emotions and tonal shifts.

With SageMaker JumpStart, customers can deploy Sonic 3 with just a few clicks to address their voice AI use cases. To get started with this model, navigate to the SageMaker JumpStart model catalog in SageMaker Studio or use the SageMaker Python SDK to deploy the model to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
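Since the announcement mentions SSML tags for volume, speed, and emotion, here is a sketch of building such markup for a synthesis request. The element and attribute names below are illustrative assumptions — consult Cartesia's documentation for the actual SSML vocabulary Sonic 3 accepts.

```python
# Hedged sketch: wrap text in SSML-style controls of the kind the Sonic 3
# announcement describes (volume, speed, emotion). Tag and attribute
# names are assumptions; check Cartesia's docs for the real vocabulary.

def build_ssml(text: str, volume: str = "medium", speed: str = "normal",
               emotion: str = "neutral") -> str:
    """Compose an SSML-like string with volume, speed, and emotion controls."""
    return (
        f'<speak><prosody volume="{volume}" rate="{speed}">'
        f'<emotion name="{emotion}">{text}</emotion>'  # assumed tag
        f"</prosody></speak>"
    )

markup = build_ssml("Welcome back!", speed="fast", emotion="excited")
print(markup)
```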
Source: aws.amazon.com

AWS Batch now supports unmanaged compute environments for Amazon EKS

AWS Batch now extends its job scheduling capabilities to unmanaged compute environments on Amazon EKS. With unmanaged EKS compute environments, you can leverage AWS Batch’s job orchestration while maintaining full control over your Kubernetes infrastructure for security, compliance, or operational requirements. You can create unmanaged compute environments through the CreateComputeEnvironment API or the AWS Batch console by selecting your existing EKS cluster and specifying a Kubernetes namespace, then associate your EKS nodes with the compute environment using kubectl labeling.

AWS Batch supports developers, scientists, and engineers in running efficient batch processing for ML model training, simulations, and analysis at any scale. Unmanaged compute environments on Amazon EKS are available today in all AWS Regions where AWS Batch is available. For more information, see the AWS Batch User Guide.
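The flow above can be sketched as a CreateComputeEnvironment request payload. The key names mirror the existing Batch EKS API (`eksConfiguration` with `eksClusterArn` and `kubernetesNamespace`), but whether the unmanaged variant uses exactly these fields, and the node label key shown in the comment, are assumptions to verify against the AWS Batch User Guide.

```python
# Hedged sketch: request payload for AWS Batch's CreateComputeEnvironment
# API targeting an unmanaged EKS compute environment. Key names mirror
# the existing Batch EKS API; the label key in the comment below is an
# assumption -- verify both against the AWS Batch User Guide.

def unmanaged_eks_ce(name: str, cluster_arn: str, namespace: str) -> dict:
    """Build kwargs for batch.create_compute_environment()."""
    return {
        "computeEnvironmentName": name,
        "type": "UNMANAGED",
        "state": "ENABLED",
        "eksConfiguration": {
            "eksClusterArn": cluster_arn,
            "kubernetesNamespace": namespace,
        },
    }

request = unmanaged_eks_ce(
    "my-unmanaged-ce",
    "arn:aws:eks:us-east-1:111122223333:cluster/my-cluster",
    "batch-jobs",
)
# Would be submitted with:
#   import boto3
#   boto3.client("batch").create_compute_environment(**request)
# Nodes are then associated via kubectl labeling, e.g. (label key assumed):
#   kubectl label node <node-name> batch.amazonaws.com/compute-environment=my-unmanaged-ce
```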
Source: aws.amazon.com

Amazon RDS now provides an enhanced console experience to connect to a database

Amazon RDS now provides an enhanced console experience that consolidates all relevant information needed to connect to a database in one place, making it easier to connect to your RDS databases. The new console experience provides ready-made code snippets for Java, Python, Node.js, and other programming languages, as well as tools like the psql command line utility. These code snippets are automatically adjusted based on your database’s authentication settings. For example, if your cluster uses IAM authentication, the generated code snippets will use token-based authentication to connect to the database. The console experience also includes integrated CloudShell access, offering the ability to connect to your databases directly from within the RDS console.

This feature is available for the Amazon Aurora PostgreSQL, Amazon Aurora MySQL, Amazon RDS for PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for MariaDB database engines across all commercial AWS Regions. Get started with the new console experience for database connectivity through the Amazon RDS console. To learn more, see the Amazon RDS and Aurora user guides.
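For flavor, a token-based Python snippet of the kind the console generates might look like the sketch below. The hostname and user are placeholders; the `generate_db_auth_token` call is a real boto3 RDS client method, but it is shown only in a comment (it needs AWS credentials), and the psycopg2 call is likewise commented so the sketch stays self-contained.

```python
# Hedged sketch: the kind of IAM-authentication snippet the new RDS
# console experience generates for Python. Host and user values are
# placeholders; network-dependent calls are shown as comments.

def iam_connect_kwargs(host: str, port: int, user: str, token: str,
                       dbname: str = "postgres") -> dict:
    """Assemble keyword arguments for psycopg2.connect()."""
    return {
        "host": host,
        "port": port,
        "user": user,
        "password": token,     # short-lived IAM auth token, not a stored password
        "dbname": dbname,
        "sslmode": "require",  # IAM database authentication requires SSL
    }

# Token generation (requires AWS credentials; real boto3 API):
#   import boto3
#   token = boto3.client("rds").generate_db_auth_token(
#       DBHostname=host, Port=5432, DBUsername="app_user")
kwargs = iam_connect_kwargs("mydb.cluster-abc.us-east-1.rds.amazonaws.com",
                            5432, "app_user", "TOKEN_PLACEHOLDER")
# import psycopg2
# conn = psycopg2.connect(**kwargs)
```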
Source: aws.amazon.com