AWS Lambda announces enhanced error handling capabilities for Kafka event processing

AWS Lambda launches enhanced error handling capabilities for Amazon Managed Streaming for Apache Kafka (MSK) and self-managed Apache Kafka event sources. These capabilities let customers build custom retry configurations, optimize retries of failed messages, and send failed events to a Kafka topic as an on-failure destination, enabling resilient Kafka workloads with robust error handling strategies.

Customers use Kafka event source mappings (ESM) with their Lambda functions to build mission-critical Kafka applications. Kafka ESM already offers robust handling of failed events by retrying them with exponential backoff and retaining them in on-failure destinations such as Amazon SQS, Amazon S3, and Amazon SNS. However, customers need customized error handling to meet stringent business and performance requirements. With this launch, developers can exercise precise control over failed event processing and use Kafka topics as an additional on-failure destination when using Provisioned mode for Kafka ESM. Customers can now define specific retry limits and time boundaries for retries, automatically sending failed records that exceed these limits to a customer-specified destination. They can also enable automatic retries of failed records within a batch and enhance their function code to report individual failed messages, optimizing the retry process.

This feature is available in all AWS Commercial Regions where AWS Lambda’s Provisioned mode for Kafka ESM is available. To enable these capabilities, provide configuration parameters for your Kafka ESM through the ESM API, AWS Console, or AWS CLI. To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
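The sketch below illustrates both halves of the launch under stated assumptions: the retry-limit and destination parameter names mirror those that stream-based ESMs already expose (their exact form for Kafka ESM, the Kafka-topic destination ARN format, and the itemIdentifier contract should be confirmed in the Lambda ESM documentation), and all ARNs and UUIDs are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# 1) Bound retries by count and age, and send records that exhaust them to an
#    on-failure destination (with this launch, a Kafka topic in Provisioned mode).
#    Parameter names mirror stream-based ESMs and are an assumption for Kafka ESM.
lambda_client.update_event_source_mapping(
    UUID="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",   # placeholder ESM UUID
    MaximumRetryAttempts=5,                        # discard after 5 retries...
    MaximumRecordAgeInSeconds=3600,                # ...or once a record is older than 1h
    DestinationConfig={
        # placeholder destination ARN
        "OnFailure": {"Destination": "arn:aws:kafka:us-east-1:111122223333:topic/my-dlq"}
    },
)

# 2) Report individual failed messages so that only those records are retried
#    (the partial-batch-response pattern this launch brings to Kafka ESM).
def process(record):
    """Placeholder business logic; raise to mark the record as failed."""

def handler(event, context):
    failures = []
    for records in event["records"].values():      # Kafka events group records by topic-partition
        for record in records:
            try:
                process(record)
            except Exception:
                # The exact itemIdentifier format for Kafka records is an
                # assumption; confirm it in the Lambda ESM documentation.
                failures.append({"itemIdentifier": str(record["offset"])})
    return {"batchItemFailures": failures}
```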
Source: aws.amazon.com

OpenSearch Service Enhances Log Analytics with New PPL Experience

Today, AWS announces enhanced log analytics capabilities in Amazon OpenSearch Service, making Piped Processing Language (PPL) and natural language the default experience in OpenSearch UI’s Observability workspace. This update combines proven pipeline syntax with simplified workflows to deliver an intuitive observability experience, helping customers analyze growing data volumes while controlling costs. The new experience includes 35+ new commands for deep analysis, faceted exploration, and natural language querying to help customers gain deeper insights across infrastructure, security, and business metrics.

With this enhancement, customers can streamline their log analytics workflows using familiar pipeline syntax while leveraging advanced analytics capabilities. The solution includes enterprise-grade query capabilities, supporting advanced event correlation using natural language to help teams uncover meaningful patterns faster. Users can seamlessly move from query to visualization within a single interface, reducing mean time to detect and resolve issues. Admins can quickly stand up an end-to-end OpenTelemetry solution using OpenSearch’s Get Started workflow in the AWS console. The unified workflow includes out-of-the-box OpenSearch Ingestion pipelines for OpenTelemetry data, making it easier for teams to get started quickly.

Amazon OpenSearch UI is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Osaka), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Europe (Zurich), South America (São Paulo), and Canada (Central). To learn more about the new OpenSearch log analytics experience, visit the OpenSearch Service observability documentation and start using these enhanced capabilities today in OpenSearch UI.
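As a rough illustration of the pipe syntax, the sketch below runs a PPL query against a domain through the SQL/PPL plugin’s _plugins/_ppl endpoint; the index name, domain endpoint, and authentication setup are placeholders, not part of the announcement.

```python
from opensearchpy import OpenSearch

# Placeholder endpoint; configure authentication (e.g. SigV4) as your domain requires.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

# A classic PPL pipeline: filter, aggregate over time buckets, rank.
ppl = """
source = app_logs
| where status >= 500
| stats count() as errors by span(timestamp, 5m), service
| sort - errors
"""

resp = client.transport.perform_request("POST", "/_plugins/_ppl", body={"query": ppl})
print(resp)
```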
Source: aws.amazon.com

EC2 Image Builder now supports auto-versioning and enhances Infrastructure as Code experience

Amazon EC2 Image Builder now supports automatic versioning for recipes and automatic build-version incrementing for components, reducing the overhead of managing versions manually. This enables you to increment versions automatically and dynamically reference the latest compatible versions in your pipelines without manual updates.

With automatic versioning, you no longer need to manually track and increment version numbers when creating new versions of your recipes. You simply place a single ‘x’ placeholder in any position of the version number, and Image Builder detects the latest existing version and automatically increments that position. For components, Image Builder automatically increments the build version when you create a component with the same name and semantic version. When referencing resources in your configurations, wildcard patterns automatically resolve to the highest available version matching the specified pattern, ensuring your pipelines always use the latest versions.

Auto-versioning is available in all AWS Regions, including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions. You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK. Refer to the documentation to learn more about recipes, components, and semantic versioning.
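A minimal sketch of what auto-versioning could look like from the API, assuming the ‘x’ placeholder is accepted in a recipe’s semanticVersion as described; all names and ARNs are placeholders.

```python
import boto3

ib = boto3.client("imagebuilder")

ib.create_image_recipe(
    name="my-app-ami",
    # 'x' in the patch position: Image Builder finds the latest existing 1.0.*
    # recipe version and increments that position automatically.
    semanticVersion="1.0.x",
    parentImage="arn:aws:imagebuilder:us-east-1:aws:image/amazon-linux-2023-x86/x.x.x",
    components=[
        {
            # Wildcards resolve to the highest matching version at build time,
            # so the pipeline always picks up the latest component.
            "componentArn": "arn:aws:imagebuilder:us-east-1:111122223333:component/install-my-app/x.x.x"
        }
    ],
)
```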
Source: aws.amazon.com

Announcing a Fully Managed Appium Endpoint for AWS Device Farm

AWS Device Farm enables mobile and web developers to test their apps using real mobile devices and desktop browsers. Starting today, you can connect to a fully managed Appium endpoint using only a few lines of code and run interactive tests on multiple physical devices directly from your IDE or local machine. This feature also seamlessly works with third-party tools such as Appium Inspector — both hosted and local versions — for all actions including element inspection.
Support for live video and log streaming enables you to get faster test feedback within your local workflow. It complements our existing server-side execution, which gives you the scale and control to run secure, enterprise-grade workloads. Taken together, Device Farm now offers you the ability to author, inspect, debug, test, and release mobile apps faster, whether from your IDE, the AWS Console, or other environments.
To learn more, see Appium Testing in the AWS Device Farm Developer Guide.
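A hedged sketch of what “a few lines of code” might look like with the Appium Python client; the endpoint URL, app reference, and capabilities are placeholders, and the exact connection and authentication details are covered in the Developer Guide.

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.set_capability("appium:app",
                       "arn:aws:devicefarm:us-west-2:111122223333:upload/EXAMPLE")  # placeholder

driver = webdriver.Remote(
    command_executor="https://<your-managed-appium-endpoint>",  # placeholder endpoint
    options=options,
)
try:
    # Interact with a real device as with any Appium session; Appium Inspector
    # can attach to the same endpoint for element inspection.
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login-button").click()
finally:
    driver.quit()
```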
Source: aws.amazon.com

AWS Payment Cryptography announces support for post-quantum cryptography to secure data in transit

Today, AWS Payment Cryptography announces support for hybrid post-quantum (PQ) TLS to secure API calls. With this launch, customers can future-proof transmissions of sensitive data and commands using ML-KEM post-quantum cryptography.

Enterprises operating highly regulated workloads want to reduce post-quantum risks from “harvest now, decrypt later” attacks: long-lived data in transit can be recorded today, then decrypted in the future once a sufficiently capable quantum computer becomes available. With today’s launch, AWS Payment Cryptography joins data protection services such as AWS Key Management Service (KMS) in addressing this concern by supporting PQ-TLS.

To get started, ensure that your application depends on a version of the AWS SDK or a browser that supports PQ-TLS. For detailed guidance by language and platform, visit the PQ-TLS enablement documentation. Customers can also validate that ML-KEM was used to secure the TLS session for an API call by reviewing tlsDetails for the corresponding CloudTrail event in the console or a configured CloudTrail trail.

These capabilities are generally available in all AWS Regions at no added cost. To get started with PQ-TLS and Payment Cryptography, see our post-quantum TLS guide. For more information about PQC at AWS, please see PQC shared responsibility.
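One way to perform that validation programmatically, sketched below with placeholders: look up recent events for the service in CloudTrail and print their tlsDetails. The event-source name and which tlsDetails field surfaces ML-KEM are assumptions to verify against your own events.

```python
import json

import boto3

ct = boto3.client("cloudtrail")

# Event-source name is an assumption; confirm it in your CloudTrail events.
events = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventSource",
                       "AttributeValue": "payment-cryptography.amazonaws.com"}],
    MaxResults=10,
)

for e in events["Events"]:
    detail = json.loads(e["CloudTrailEvent"])
    tls = detail.get("tlsDetails", {})
    # A hybrid ML-KEM handshake should surface in the negotiated TLS parameters.
    print(e["EventName"], tls.get("tlsVersion"), tls.get("cipherSuite"))
```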
Source: aws.amazon.com

Amazon Athena for Apache Spark is now available in Amazon SageMaker notebooks

Amazon SageMaker now supports Amazon Athena for Apache Spark, bringing a new notebook experience and a fast, serverless Spark experience together within a unified workspace. Now, data engineers, analysts, and data scientists can easily query data, run Python code, develop jobs, train models, visualize data, and work with AI from one place, with no infrastructure to manage and per-second billing. Athena for Apache Spark scales in seconds to support any workload, from interactive queries to petabyte-scale jobs. Athena for Apache Spark now runs on Spark 3.5.6, the same high-performance Spark engine available across AWS, optimized for open table formats including Apache Iceberg and Delta Lake. It brings you new debugging features, real-time monitoring in the Spark UI, and secure interactive cluster communication through Spark Connect. As you use these capabilities to work with your data, Athena for Spark now enforces table-level access controls defined in AWS Lake Formation.
Athena for Apache Spark is now available with Amazon SageMaker notebooks in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, visit Apache Spark engine version 3.5, read the AWS News Blog or visit Amazon SageMaker documentation. Visit the Getting Started guide to try it from Amazon SageMaker notebooks.
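As a flavor of the notebook experience, here is a minimal sketch assuming the pre-initialized `spark` session that Athena for Apache Spark notebooks provide; the database and table names are placeholders, and Lake Formation permissions govern what is readable.

```python
# Runs inside an Athena for Apache Spark notebook cell, where a `spark`
# session is already initialized. Table names are placeholders.
df = spark.sql("""
    SELECT customer_id, SUM(amount) AS total
    FROM sales_db.orders              -- e.g. an Apache Iceberg table
    GROUP BY customer_id
    ORDER BY total DESC
    LIMIT 10
""")
df.show()
```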
Source: aws.amazon.com

Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview)

Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview). With Spark 4.0.1, you can build and maintain data pipelines more easily with ANSI SQL and the VARIANT data type, strengthen compliance and governance frameworks with the Apache Iceberg v3 table format, and deploy new real-time applications faster with enhanced streaming capabilities. This enables your teams to reduce technical debt and iterate more quickly, while ensuring data accuracy and consistency.

With Spark 4.0.1, you can build data pipelines with standard ANSI SQL, making them accessible to a larger set of users who don’t know programming languages like Python or Scala. Spark 4.0.1 natively supports JSON and semi-structured data through the VARIANT data type, providing flexibility for handling diverse data formats. You can strengthen compliance and governance through the Apache Iceberg v3 table format, which provides transaction guarantees and tracks how your data changes over time, creating the audit trails you need for regulatory requirements. You can deploy real-time applications faster with improved streaming controls that let you manage complex stateful operations and monitor streaming jobs more easily. With this capability, you can support use cases like fraud detection and real-time personalization.

Apache Spark 4.0.1 is available in preview in all AWS Regions where EMR Serverless is available, excluding the China and AWS GovCloud (US) Regions. To learn more about Apache Spark 4.0.1 on Amazon EMR, visit the Amazon EMR Serverless release notes, or get started by creating an EMR application with Spark 4.0.1 from the AWS Management Console.
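The VARIANT workflow might look like the following sketch, using the parse_json and variant_get functions introduced in Spark 4.0; the column names and JSON payload are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("variant-demo").getOrCreate()

# A single string column holding semi-structured JSON.
df = spark.createDataFrame(
    [('{"user": {"id": 42, "plan": "pro"}, "events": [1, 2, 3]}',)],
    ["raw"],
)

# parse_json turns the string into a VARIANT value.
parsed = df.select(F.parse_json(F.col("raw")).alias("v"))

parsed.select(
    # variant_get extracts typed values by JSON path from a VARIANT column.
    F.variant_get("v", "$.user.id", "int").alias("user_id"),
    F.variant_get("v", "$.user.plan", "string").alias("plan"),
).show()
```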
Source: aws.amazon.com

AWS announces Flexible Cost Allocation on AWS Transit Gateway

AWS announces general availability of Flexible Cost Allocation on AWS Transit Gateway, enhancing how you can distribute Transit Gateway costs across your organization.
Previously, Transit Gateway supported only a sender-pays model, where the source attachment account owner was responsible for all data-usage-related costs. The new Flexible Cost Allocation (FCA) feature provides more versatile cost allocation options through a central metering policy. Using an FCA metering policy, you can choose to allocate all of your Transit Gateway data processing and data transfer usage to the source attachment account, the destination attachment account, or the central Transit Gateway account. FCA metering policies can be configured at attachment-level or individual flow-level granularity. FCA also supports middle-box deployment models, enabling you to allocate data processing usage on middle-box appliances such as AWS Network Firewall to the original source or destination attachment owners. This flexibility allows you to implement multiple cost allocation models on a single Transit Gateway, accommodating various chargeback scenarios within your AWS network infrastructure.

Flexible Cost Allocation is available in all commercial AWS Regions where Transit Gateway is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and the AWS Software Development Kit (SDK). There is no additional charge for using FCA on Transit Gateway. For more information, see the Transit Gateway documentation pages.
Source: aws.amazon.com

Amazon Connect launches monitoring of contacts queued for callback

Amazon Connect now provides you with the ability to monitor which contacts are queued for callback. This feature enables you to search for contacts queued for callback and view additional details, such as the customer’s phone number and how long the contact has been queued, within the Connect UI and APIs. You can now proactively route contacts that are at risk of exceeding the callback timelines communicated to customers to available agents. Businesses can also identify customers who have already successfully connected with agents and clear them from the callback queue to remove duplicative work. This feature is available in all AWS Regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.
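A hedged sketch of searching for queued callbacks via the API, assuming the SearchContacts SearchCriteria accepts an initiation-method filter for callbacks as this launch implies; the instance ID, time range, and field names are placeholders to verify against the documentation.

```python
from datetime import datetime, timedelta, timezone

import boto3

connect = boto3.client("connect")

now = datetime.now(timezone.utc)
resp = connect.search_contacts(
    InstanceId="12345678-aaaa-bbbb-cccc-EXAMPLE11111",   # placeholder instance ID
    TimeRange={
        "Type": "INITIATION_TIMESTAMP",
        "StartTime": now - timedelta(hours=4),
        "EndTime": now,
    },
    SearchCriteria={
        "InitiationMethods": ["CALLBACK"],               # assumed filter per this launch
        "Channels": ["VOICE"],
    },
)

for contact in resp["Contacts"]:
    # Duration in queue can be derived from the initiation timestamp.
    print(contact["Id"], contact.get("InitiationTimestamp"))
```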
Source: aws.amazon.com

Second-generation AWS Outposts racks now supported in the AWS Asia Pacific (Tokyo) Region

Second-generation AWS Outposts racks are now supported in the AWS Asia Pacific (Tokyo) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience. Organizations from startups to enterprises and the public sector, in and outside of Japan, can now order their Outposts racks connected to this newly supported region, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low-latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to. To learn more about second-generation Outposts racks, read this blog post and the user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.
Source: aws.amazon.com