Amazon Athena for Apache Spark is now available in Amazon SageMaker notebooks

Amazon SageMaker now supports Amazon Athena for Apache Spark, bringing a modern notebook experience and a fast, serverless Spark engine together in a unified workspace. Data engineers, analysts, and data scientists can query data, run Python code, develop jobs, train models, visualize data, and work with AI from one place, with no infrastructure to manage and per-second billing. Athena for Apache Spark scales in seconds to support any workload, from interactive queries to petabyte-scale jobs.

Athena for Apache Spark now runs on Spark 3.5.6, the same high-performance Spark engine available across AWS, optimized for open table formats including Apache Iceberg and Delta Lake. It brings new debugging features, real-time monitoring in the Spark UI, and secure interactive cluster communication through Spark Connect. As you use these capabilities to work with your data, Athena for Apache Spark now enforces table-level access controls defined in AWS Lake Formation.
Athena for Apache Spark is now available with Amazon SageMaker notebooks in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, see Apache Spark engine version 3.5, read the AWS News Blog, or visit the Amazon SageMaker documentation. Visit the Getting Started guide to try it from Amazon SageMaker notebooks.
Source: aws.amazon.com

Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview)

Amazon EMR Serverless now supports Apache Spark 4.0.1 in preview. With Spark 4.0.1, you can build and maintain data pipelines more easily using ANSI SQL and VARIANT data types, strengthen compliance and governance frameworks with the Apache Iceberg v3 table format, and deploy new real-time applications faster with enhanced streaming capabilities. This lets your teams reduce technical debt and iterate more quickly, while ensuring data accuracy and consistency.

With Spark 4.0.1, you can build data pipelines in standard ANSI SQL, making them accessible to a larger set of users who don't know programming languages like Python or Scala. Spark 4.0.1 natively supports JSON and semi-structured data through VARIANT data types, providing flexibility for handling diverse data formats. You can strengthen compliance and governance through the Apache Iceberg v3 table format, which provides transaction guarantees and tracks how your data changes over time, creating the audit trails you need for regulatory requirements. You can also deploy real-time applications faster with improved streaming controls that let you manage complex stateful operations and monitor streaming jobs more easily, supporting use cases like fraud detection and real-time personalization.

Apache Spark 4.0.1 is available in preview in all AWS Regions where EMR Serverless is available, excluding the China and AWS GovCloud (US) Regions. To learn more about Apache Spark 4.0.1 on Amazon EMR, visit the Amazon EMR Serverless release notes, or get started by creating an EMR Serverless application with Spark 4.0.1 from the AWS Management Console.
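As a sketch of what the new VARIANT support looks like in practice, here are illustrative Spark SQL statements (held as Python strings). `variant_get()` is a Spark 4.0 built-in for extracting typed values from VARIANT columns; the table, column, and path names are invented for this example.

```python
# Illustrative Spark 4.0 SQL using the VARIANT type for semi-structured data.
# Table name, column names, and JSON paths are made up for this sketch;
# variant_get(value, path, type) is a Spark 4.0 built-in function.

# A table with a VARIANT column can store arbitrary JSON per row.
events_ddl = """
CREATE TABLE events (id BIGINT, payload VARIANT) USING iceberg
"""

# Queries extract typed fields from the VARIANT column by JSON path.
variant_query = """
SELECT id,
       variant_get(payload, '$.device.type', 'string') AS device_type
FROM events
WHERE variant_get(payload, '$.status', 'int') = 200
"""

print(variant_query.strip())
```

Because the schema of `payload` is not fixed at table-creation time, producers can evolve their JSON shape without DDL changes, while consumers still get typed extraction at query time.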
Source: aws.amazon.com

AWS announces Flexible Cost Allocation on AWS Transit Gateway

AWS announces general availability of Flexible Cost Allocation on AWS Transit Gateway, enhancing how you can distribute Transit Gateway costs across your organization.
Previously, Transit Gateway used only a sender-pay model, where the source attachment account owner was responsible for all data-usage-related costs. The new Flexible Cost Allocation (FCA) feature provides more versatile cost allocation options through a central metering policy. Using an FCA metering policy, you can choose to allocate all of your Transit Gateway data processing and data transfer usage to the source attachment account, the destination attachment account, or the central Transit Gateway account. FCA metering policies can be configured at attachment-level or individual flow-level granularity. FCA also supports middle-box deployment models, enabling you to allocate data processing usage on middle-box appliances such as AWS Network Firewall to the original source or destination attachment owners. This flexibility allows you to implement multiple cost allocation models on a single Transit Gateway, accommodating various chargeback scenarios within your AWS network infrastructure.

Flexible Cost Allocation is available in all commercial AWS Regions where Transit Gateway is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. There is no additional charge for using FCA on Transit Gateway. For more information, see the Transit Gateway documentation pages.
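To make the allocation choices concrete, here is a hypothetical metering-policy document. The field names and allowed values below are invented for illustration only (the announcement does not specify the API shape); consult the Transit Gateway documentation for the actual policy schema.

```python
# Hypothetical FCA metering policy sketch. All field names and values here
# are assumptions for illustration, not the real Transit Gateway API shapes.
import json

metering_policy = {
    "TransitGatewayId": "tgw-0123456789abcdef0",  # placeholder ID
    "Rules": [
        {
            # Bill traffic over this attachment to the receiving side.
            "MatchAttachmentId": "tgw-attach-aaaa1111",
            "MeteredAccount": "destination-attachment-owner",
        },
        {
            # Everything else stays on the sender, the previous default model.
            "MatchAttachmentId": "*",
            "MeteredAccount": "source-attachment-owner",
        },
    ],
}

print(json.dumps(metering_policy, indent=2))
```

The point of the sketch is the rule structure: a single gateway can carry several rules, each choosing a different payer, which is how multiple chargeback models coexist on one Transit Gateway.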
Source: aws.amazon.com

Amazon Connect launches monitoring of contacts queued for callback

Amazon Connect now provides you with the ability to monitor which contacts are queued for callback. This feature enables you to search for contacts queued for callback and view additional details, such as the customer's phone number and how long the contact has been queued, within the Connect UI and APIs. You can now proactively route contacts that are at risk of exceeding the callback timelines communicated to customers to available agents. Businesses can also identify customers who have already connected with agents and clear them from the callback queue to remove duplicate work. This feature is available in all AWS Regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.
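A minimal sketch of what such a search request might look like, assuming the Connect `SearchContacts` API with an initiation-method filter. The instance ID is a placeholder, the `"CALLBACK"` filter value and the exact criteria shape are assumptions; check the Amazon Connect API reference for the real request fields.

```python
# Sketch of a SearchContacts-style request for contacts queued for callback.
# The filter value "CALLBACK" and the criteria layout are assumptions for
# illustration; the real boto3 call also expects datetime objects, not strings.
import json

search_params = {
    "InstanceId": "12345678-aaaa-bbbb-cccc-123456789012",  # placeholder
    "TimeRange": {
        "Type": "INITIATION_TIMESTAMP",
        "StartTime": "2024-01-01T00:00:00Z",
        "EndTime": "2024-01-02T00:00:00Z",
    },
    "SearchCriteria": {
        # Restrict results to contacts that entered the queue via callback.
        "InitiationMethods": ["CALLBACK"],
    },
}

print(json.dumps(search_params, indent=2))
```

From the returned contacts, a workflow could compare the queued duration against the callback timeline promised to the customer and re-prioritize routing for contacts nearing that limit.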
Source: aws.amazon.com

Second-generation AWS Outposts racks now supported in the AWS Asia Pacific (Tokyo) Region

Second-generation AWS Outposts racks are now supported in the AWS Asia Pacific (Tokyo) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience. Organizations from startups to enterprises and the public sector, in and outside of Japan, can now order their Outposts racks connected to this newly supported Region, optimizing for their latency and data residency needs.

Outposts allows customers to run workloads that need low-latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers' Outposts can connect to.

To learn more about second-generation Outposts racks, read this blog post and user guide. For the most up-to-date list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.
Source: aws.amazon.com

AWS IoT Core enhances IoT rules-SQL with variable setting and error handling capabilities

AWS IoT Core now supports a SET clause in IoT rules-SQL, which lets you set and reuse variables across SQL statements. This new feature provides a simpler SQL experience and ensures consistent values when variables are used multiple times. Additionally, a new get_or_default() function improves failure handling by returning default values when encountering data-encoding or external-dependency issues, ensuring IoT rules continue to execute successfully. AWS IoT Core is a fully managed service that securely connects millions of IoT devices to the AWS Cloud. Rules for AWS IoT is a component of AWS IoT Core that enables you to filter, process, and decode IoT device data using SQL-like statements and route the data to more than 20 AWS and third-party services. As you define an IoT rule, these new capabilities help you eliminate complicated SQL statements and make it easier to handle IoT rules-SQL failures.
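A hypothetical rule SQL statement combining both features might look like the following (held as a Python string). The topic filter and field names are invented, and the clause ordering and the get_or_default() signature are assumptions for illustration; the developer guides define the exact syntax.

```python
# Illustrative IoT rules-SQL using the new SET clause and get_or_default().
# Topic, field names, and the assumed get_or_default(source, field, default)
# signature are placeholders; see the AWS IoT rules-SQL developer guide.
rule_sql = (
    # Bind the variable once; fall back to 0 if the field cannot be read.
    "SET temp = get_or_default(payload, 'temperature', 0) "
    # Reuse the same variable in several select expressions consistently.
    "SELECT temp AS temperature_c, temp * 1.8 + 32 AS temperature_f "
    "FROM 'sensors/+/telemetry'"
)

print(rule_sql)
```

Without SET, the extraction expression would have to be repeated in each select item; binding it once keeps the values consistent and the statement short.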
These new features are available in all AWS Regions where AWS IoT Core is available, including the AWS GovCloud (US) and Amazon Web Services China Regions. To learn more and get started, visit the developer guides on the SET clause and the get_or_default() function.
Source: aws.amazon.com

Automated Reasoning checks now include natural language test Q&A generation

AWS announces the launch of natural language test Q&A generation for Automated Reasoning checks in Amazon Bedrock Guardrails. Automated Reasoning checks use formal verification techniques to validate the accuracy and policy compliance of outputs from generative AI models. They deliver up to 99% accuracy at detecting correct responses from LLMs, giving you provable assurance in detecting AI hallucinations while also assisting with ambiguity detection in model responses.

To get started with Automated Reasoning checks, customers create and test Automated Reasoning policies using natural language documents and sample Q&As. Automated Reasoning checks generates up to N test Q&As for each policy using content from the input document, reducing the work required to go from initial policy generation to a production-ready, refined policy.

Test generation for Automated Reasoning checks is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris) Regions. Customers can access the service through the Amazon Bedrock console as well as the Amazon Bedrock Python SDK. To learn more about Automated Reasoning checks and how you can integrate them into your generative AI workflows, please read the Amazon Bedrock documentation, review the tutorials on the AWS AI blog, and visit the Bedrock Guardrails webpage.
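To illustrate what a generated test Q&A conceptually contains, here is a sketch of such a pair. The structure and field names below are invented for illustration and are not the Bedrock API's actual schema; the question, answer, and verdict values are made-up sample content.

```python
# Conceptual sketch of a natural-language test Q&A for an Automated Reasoning
# policy. Field names and values are illustrative, not the Bedrock schema.
test_case = {
    # A question a user might plausibly ask about the policy's source document.
    "query": "Can an employee carry over unused vacation days to next year?",
    # A candidate answer the guardrail should be able to judge.
    "candidate_answer": "Yes, up to five days may be carried over.",
    # The verdict the check is expected to return for this pair.
    "expected_result": "VALID",
}

for field, value in test_case.items():
    print(f"{field}: {value}")
```

Running a batch of such generated pairs against a draft policy surfaces gaps or ambiguities in the policy rules before the guardrail is put in front of production traffic.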
Source: aws.amazon.com

Amazon OpenSearch Serverless adds AWS PrivateLink for management console

Amazon OpenSearch Serverless now supports AWS PrivateLink for secure, private connectivity to the management console. With AWS PrivateLink, you can establish a private connection between your virtual private cloud (VPC) and Amazon OpenSearch Serverless to create, manage, and configure your OpenSearch Serverless resources without using the public internet. By enabling private network connectivity, this enhancement eliminates the need to use public IP addresses or to rely solely on firewall rules to access OpenSearch Serverless.

With this release, OpenSearch Serverless management operations can be securely accessed through PrivateLink. Data ingestion and query operations on collections still require the OpenSearch Serverless-provided VPC endpoint configuration for private connectivity, as described in the OpenSearch Serverless VPC developer guide.

You can use PrivateLink connections in all AWS Regions where Amazon OpenSearch Serverless is available. Creating VPC endpoints on AWS PrivateLink incurs additional charges; refer to the AWS PrivateLink pricing page for details. You can get started by creating an AWS PrivateLink interface endpoint for Amazon OpenSearch Serverless using the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, AWS Cloud Development Kit (CDK), or AWS CloudFormation. To learn more, refer to the documentation on creating an interface VPC endpoint for the management console. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
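As a sketch, creating the interface endpoint boils down to an EC2 `CreateVpcEndpoint` request like the parameter set below. The VPC, subnet, and especially the service name are placeholders and assumptions; look up the actual OpenSearch Serverless service name in the documentation before using this.

```python
# Sketch of CreateVpcEndpoint parameters for reaching OpenSearch Serverless
# privately. VPC/subnet IDs are placeholders and the ServiceName is an
# assumed value; verify the real service name in the OpenSearch Serverless docs.
import json

endpoint_params = {
    "VpcEndpointType": "Interface",            # PrivateLink uses interface endpoints
    "VpcId": "vpc-0abc12345def67890",          # placeholder VPC
    "SubnetIds": ["subnet-0abc12345def67890"], # placeholder subnet
    "ServiceName": "com.amazonaws.us-east-1.aoss",  # assumed service name
    "PrivateDnsEnabled": True,                 # resolve the service DNS privately
}

print(json.dumps(endpoint_params, indent=2))
```

With private DNS enabled, SDK and CLI calls to the management APIs from inside the VPC resolve to the endpoint's private IPs, so no public route is needed.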
Source: aws.amazon.com

Recycle Bin adds support for Amazon EBS Volumes

Recycle Bin for Amazon EBS, which helps you recover accidentally deleted snapshots and EBS-backed AMIs, now supports EBS volumes. If you accidentally delete a volume, you can now recover it directly from Recycle Bin instead of restoring from a snapshot, reducing your recovery point objective with no data loss between the last snapshot and the deletion. A recovered volume immediately achieves full performance, without waiting for data to download from snapshots.

To use Recycle Bin, you set a retention period for deleted volumes, and you can recover any volume within that period. Recovered volumes are immediately available and retain all attributes: tags, permissions, and encryption status. Volumes that are not recovered are permanently deleted when the retention period expires. You create retention rules to enable Recycle Bin for all volumes or for specific volumes, using tags to target which volumes to protect. EBS volumes in Recycle Bin are billed at the same price as regular EBS volumes; read more on the pricing page.

To get started, read the documentation. The feature is now available through the AWS Command Line Interface (CLI), AWS SDKs, and the AWS Console in all AWS commercial, China, and AWS GovCloud (US) Regions.
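A minimal sketch of a retention rule covering tagged volumes follows, modeled on the existing Recycle Bin `CreateRule` request. The `"EBS_VOLUME"` resource-type value is an assumption inferred from the announcement (existing values include `"EBS_SNAPSHOT"` and `"EC2_IMAGE"`); verify it against the Recycle Bin API reference.

```python
# Sketch of Recycle Bin CreateRule parameters for EBS volumes. The
# ResourceType value "EBS_VOLUME" is assumed from the announcement; the tag
# key/value are placeholders choosing which volumes the rule protects.
import json

rule_params = {
    "Description": "Keep deleted volumes recoverable for 7 days",
    "RetentionPeriod": {
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
    "ResourceType": "EBS_VOLUME",
    # Without ResourceTags the rule would apply to all volumes in the Region;
    # with tags, only matching volumes enter Recycle Bin on deletion.
    "ResourceTags": [
        {"ResourceTagKey": "protect", "ResourceTagValue": "true"},
    ],
}

print(json.dumps(rule_params, indent=2))
```

Omitting `ResourceTags` is the simpler "protect everything" configuration; the tag-scoped form shown here limits Recycle Bin charges to volumes you actually care about.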
Source: aws.amazon.com

Validate and enforce required tags in CloudFormation, Terraform and Pulumi with Tag Policies

AWS Organizations Tag Policies announces Reporting for Required Tags, a new validation check that proactively ensures your CloudFormation, Terraform, and Pulumi deployments include the required tags critical to your business. Your infrastructure-as-code (IaC) operations can now be automatically validated against tag policies to ensure tagging consistency across your AWS environments. You can ensure compliance for your IaC deployments in two simple steps: 1) define your tag policy, and 2) enable validation in each IaC tool.

Tag Policies enables you to enforce consistent tagging across your AWS accounts with proactive compliance, governance, and control. With this launch, you can specify mandatory tag keys in your tag policies and enforce guardrails for your IaC deployments. For example, you can define a tag policy requiring that all EC2 instances in your IaC templates carry the "Environment", "Owner", and "Application" tag keys. You start validation by activating the AWS::TagPolicies::TaggingComplianceValidator Hook in CloudFormation, adding validation logic to your Terraform plan, or activating the aws-organizations-tag-policies pre-built policy pack in Pulumi. Once configured, all CloudFormation, Terraform, and Pulumi deployments in the target account are automatically validated and/or enforced against your tag policies, ensuring that resources such as EC2 instances include the required "Environment", "Owner", and "Application" tags.

You can use the Reporting for Required Tags feature via the AWS Management Console, AWS Command Line Interface, and AWS SDKs. This feature is available with AWS Organizations Tag Policies in the AWS Regions where Tag Policies is available. To learn more, visit the Tag Policies documentation. To learn how to set up validation and enforcement, see the user guides for CloudFormation and Terraform, and the blog post for Pulumi.
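As a sketch of step 1, here is a tag policy document built in Python that declares the three example tag keys for EC2 instances. The `tag_key`/`enforced_for`/`@@assign` operators follow the documented tag-policy syntax; whether Reporting for Required Tags needs any additional fields is not shown here, so treat this as a starting point and check the Tag Policies user guide.

```python
# Sketch of a tag policy requiring "Environment", "Owner", and "Application"
# on EC2 instances. Operators follow the tag-policy JSON syntax; any extra
# fields specific to Reporting for Required Tags are omitted (assumption).
import json

required_keys = ["Environment", "Owner", "Application"]

tag_policy = {
    "tags": {
        key.lower(): {
            # Pin the exact capitalization of the tag key.
            "tag_key": {"@@assign": key},
            # Apply the policy to EC2 instances in this example.
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
        for key in required_keys
    }
}

print(json.dumps(tag_policy, indent=2))
```

Step 2 is then per-tool: attach this policy in AWS Organizations, and activate the CloudFormation Hook, Terraform plan check, or Pulumi policy pack named above so deployments are validated against it.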
Source: aws.amazon.com