Amazon CloudWatch Composite Alarms adds threshold-based alerting

Amazon CloudWatch now enables you to create more flexible alerting policies by triggering notifications when a specific subset of your monitored resources needs attention. Using CloudWatch composite alarms, you can create a rule that takes action only when a certain combination of alarms is activated. This enhancement lets you receive alerts only when a certain number of resources are impacted, helping you focus on meaningful incidents.

The new threshold function in composite alarms allows you to eliminate unnecessary alerts for minor issues while ensuring quick notification of significant problems. IT operations teams can configure alerts to trigger when, for instance, at least two out of four storage volumes are running low on capacity, or when 50% of hosts in a cluster show high CPU utilization. The feature supports both fixed counts and percentages, making it easy to maintain effective monitoring even as your infrastructure grows or changes.

This capability is now available in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the China Regions. To create a threshold-based condition in a composite alarm, use the AT_LEAST function in the alarm’s rule condition. Composite alarm pricing applies; see CloudWatch pricing for details. To learn more about the threshold function’s parameters, visit the Amazon CloudWatch documentation for composite alarms.
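As a sketch, the "two of four volumes" example above might be expressed as a composite alarm rule like the following. The exact AT_LEAST expression grammar is an assumption based on this announcement and should be verified against the CloudWatch documentation; the alarm names are hypothetical.

```python
# Sketch: build an AT_LEAST composite-alarm rule expression.
# The AT_LEAST(...) syntax shown is an assumption; verify it against
# the CloudWatch composite alarm documentation.

def at_least_rule(threshold, alarm_names):
    """Rule that fires when at least `threshold` child alarms are in ALARM.
    `threshold` may be a count (int) or, per the announcement, a
    percentage such as "50%"."""
    children = ", ".join(f'ALARM("{name}")' for name in alarm_names)
    return f"AT_LEAST({threshold}, {children})"

# Hypothetical child alarms for four storage volumes:
rule = at_least_rule(2, [f"vol-{i}-low-space" for i in range(1, 5)])

# Creating the composite alarm would then look roughly like:
# import boto3
# boto3.client("cloudwatch").put_composite_alarm(
#     AlarmName="storage-capacity-degraded",
#     AlarmRule=rule,
# )
```

PutCompositeAlarm is the existing CloudWatch API for composite alarms; only the threshold expression inside the alarm rule is new.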
Source: aws.amazon.com

AWS Parallel Computing Service (PCS) now supports Slurm CLI Filter plugins

AWS Parallel Computing Service (PCS) now supports Slurm CLI Filter plugins, enabling you to extend and modify how Slurm schedules and processes your high performance computing (HPC) workloads without modifying Slurm directly. Using CLI Filter plugins, you can define custom policies for job submission to your clusters. For example, you can verify certain flags or fields when users submit jobs, automatically reject jobs submitted without specific attributes, or even modify job parameters.

PCS is a managed service that makes it easier for you to run and scale your HPC workloads and build scientific and engineering models on AWS using Slurm. You can use PCS to build complete environments that integrate compute, storage, networking, and visualization. PCS simplifies cluster operations with managed updates and built-in observability features, helping to remove the burden of maintenance, so you can work in a familiar environment and focus on your research and innovation instead of infrastructure.

This feature is now available in all AWS Regions where PCS is available. To learn more about using Slurm CLI Filter plugins with PCS, see the PCS User Guide.
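The pre-submit policy logic such a plugin enforces can be sketched as follows. Note this is plain Python for illustration only: real Slurm CLI filter plugins are written against Slurm's cli_filter interface (commonly in Lua), and every field name and policy here is hypothetical.

```python
# Illustration of cli_filter-style pre-submit policy logic: validate
# flags, reject non-conforming jobs, or rewrite job parameters.
# Plain-Python sketch, not a real Slurm plugin; fields are hypothetical.

MAX_MINUTES = 24 * 60  # example policy: cap wall time at 24 hours

def pre_submit(job):
    """Return (possibly modified job, status message); job=None means rejected."""
    # Reject jobs submitted without an accounting project.
    if not job.get("account"):
        return None, "rejected: an --account flag is required"
    # Modify parameters: clamp oversized wall-time requests.
    if job.get("time_limit", 0) > MAX_MINUTES:
        job = {**job, "time_limit": MAX_MINUTES}
    return job, "accepted"

accepted, status = pre_submit({"account": "proj-42", "time_limit": 4000})
rejected, why = pre_submit({"time_limit": 30})
```

In a real deployment, the equivalent hook runs on the submission host each time a user invokes sbatch or srun, before the job reaches the scheduler.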
Source: aws.amazon.com

Supporting Viksit Bharat: Announcing our newest AI investments in India

India’s developer community, vibrant startup ecosystem, and leading enterprises are embracing AI with incredible speed. To meet this moment, we are investing in powerful, locally available tools in India that can help foster a diverse ecosystem and ensure our platform delivers the controls you need for compliance and AI sovereignty.
Today, we’re announcing a significant expansion of our local AI hardware capacity for customers in India. This increase in local compute, powered by Google’s AI Hypercomputer architecture with the latest Trillium TPUs, will help more businesses and public sector organizations train and serve their most advanced Gemini models in India. By unlocking new opportunities for high-performance, low-latency AI applications, it will also help customers meet India’s data residency and sovereignty requirements.
Enabling models and control: AI tools built for India’s context
While infrastructure is the foundation of digital sovereignty, true sovereignty also requires control over the data and the models built on that infrastructure. We’re committed to bringing our latest AI advancements to India faster than ever, with the controls you need.
Our new services enable you to build, tune, and deploy models that understand India’s unique business logic and rich cultural context.

Next-generation models, here in India: Earlier this year, Google Cloud made Gemini available to regulated Indian customers by deploying Gemini 2.5 Flash with local machine-learning processing support. Now, we’re opening early testing for our latest and most advanced Gemini models to Indian customers. We’re also committing to launching the most powerful Gemini models in India with full data residency support. This is a first for Google Cloud, and a direct response to help meet the needs of our Indian customers.

More AI capabilities, available locally: We’re providing additional consumption models and pre-built AI-powered applications tailored for local context by launching a suite of new capabilities with data residency support in India:

Batch support for Gemini 2.5 Flash: Now generally available, this allows organizations to run high-volume, non-real-time AI tasks at a lower cost, all in India.

Document AI: Now in preview, we’re providing local support to help Indian businesses automate document processing.

More local context in your AI: Grounding on Google Maps is a new capability that grounds model responses in real-time information from Google Maps, ensuring AI applications can provide accurate, location-aware answers.

A sovereign AI ecosystem: Building for India, with India
The most durable and decisive factor for long-term digital sovereignty lies in cultivating the “human element” — the skilled talent and innovation ecosystem. A sovereign AI future depends on building a strong local ecosystem.
Our strategy is to support India’s ecosystem-led approach by investing in the researchers, developers, and startups who are building for India’s specific needs.
Collaboration with IIT Madras: Google Cloud and Google DeepMind are thrilled to collaborate with IIT Madras to support the launch of Indic Arena. Run independently by the renowned AI4Bharat center at IIT Madras, this platform will allow users from all over India to anonymously evaluate and rank AI models on tasks unique to India’s rich multilingual landscape. To support this initiative, we are providing cloud credits to power this critical, community-driven resource.
“At AI4Bharat, our mission is to build AI for India’s specific needs. A critical part of this is having a neutral, standardized benchmark to understand how models are performing across our many languages,” said Mitesh Khapra, associate professor, IIT Madras. “Indic Arena will be that platform. We are delighted to have Google Cloud’s support to provide the initial compute power to bring this independent, public-facing project to life for the entire Indian AI community.”
We encourage all developers, researchers, and organizations in India to explore the Indic Arena platform and contribute to building a more inclusive AI future.
We invite the entire Indian ecosystem, from startups and universities to government bodies and enterprises, to take advantage of this new, dedicated capacity for Gemini in Vertex AI and our sovereign-ready infrastructure to build the next generation of AI that is built by Indians, for Indians.
Source: Google Cloud Platform

Amazon Braket notebook instances now support CUDA-Q natively

Amazon Braket notebook instances now come with native support for CUDA-Q, streamlining access to NVIDIA’s quantum computing platform for hybrid quantum-classical applications. This enhancement is enabled by upgrading the underlying operating system to Amazon Linux 2023, which delivers improved performance, security, and compatibility for quantum development workflows. Quantum researchers and developers can now seamlessly build and test hybrid quantum-classical algorithms using CUDA-Q’s GPU-accelerated quantum circuit simulation alongside access to quantum processing units (QPUs) from IonQ, Rigetti, and IQM, all within a single managed environment. With this release, developers can now access CUDA-Q directly within the managed notebook environment, simplifying workflows that previously required local deployment or needed to be run via Hybrid Jobs. CUDA-Q support in Amazon Braket notebook instances is available in all AWS Regions where Amazon Braket is available. To get started, see the Amazon Braket Developer Guide and visit the Amazon Braket product page to learn more about quantum computing on AWS.
Source: aws.amazon.com

Amazon S3 Express One Zone now supports Internet Protocol version 6 (IPv6)

Amazon S3 Express One Zone now supports Internet Protocol version 6 (IPv6) addresses for gateway Virtual Private Cloud (VPC) endpoints. S3 Express One Zone is a high-performance storage class designed for latency-sensitive applications. Organizations are adopting IPv6 networks to mitigate IPv4 address exhaustion in their private networks or to comply with regulatory requirements. You can now access your data in S3 Express One Zone over IPv6-only or dual-stack VPC endpoints, with no additional infrastructure needed to handle IPv6-to-IPv4 address translation. S3 Express One Zone support for IPv6 is available in all AWS Regions where the storage class is available, at no additional cost. You can set up IPv6 for new and existing VPC endpoints using the AWS Management Console, AWS CLI, AWS SDKs, or AWS CloudFormation. To get started using IPv6 with S3 Express One Zone, visit the S3 User Guide.
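For example, switching an existing gateway endpoint to dual-stack via the EC2 ModifyVpcEndpoint API might look roughly like the following. The endpoint ID is a placeholder, and the parameter name and accepted values are assumptions to confirm against the EC2 API reference.

```python
# Sketch: parameters for switching an existing gateway VPC endpoint to
# dual-stack addressing. The endpoint ID is a placeholder; the
# IpAddressType name/values are assumptions to check against EC2 docs.

params = {
    "VpcEndpointId": "vpce-0123456789abcdef0",  # placeholder ID
    "IpAddressType": "dualstack",  # assumed value; "ipv6" for IPv6-only
}

# The call itself would then be roughly:
# import boto3
# boto3.client("ec2").modify_vpc_endpoint(**params)
```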
Source: aws.amazon.com