Amazon CloudWatch adds visual agent configuration to the EC2 console

Amazon CloudWatch now provides a visual configuration editor for the CloudWatch agent directly in the Amazon EC2 console, enabling you to set up and manage observability for your EC2 instances without hand-editing JSON. The CloudWatch agent collects infrastructure and application metrics, logs, and traces from EC2 instances and sends them to CloudWatch and AWS X-Ray. With the new visual editor, you can build agent configurations graphically, selecting metrics, log sources, and deployment targets, and deploy with a single click.
From the EC2 console, you can select one or more instances, install the CloudWatch agent, or create tag-based policies for automated fleet-wide management. From the instance detail page, you can view agent status, update configurations, and troubleshoot agent health. Tag-based policies apply the correct monitoring settings to every new instance automatically, including instances launched by Auto Scaling.
To get started, navigate to the Amazon EC2 console, select an instance, and choose the EC2 monitoring tab to access the CloudWatch agent management experience. CloudWatch in-console agent management is available in all AWS Commercial Regions at no additional cost. Standard CloudWatch pricing applies for metrics, logs, and other telemetry collected by the agent.
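Under the hood, the visual editor produces the same JSON configuration file the CloudWatch agent has always consumed. A minimal sketch of such a configuration, built as a Python dict (the `metrics`/`logs` section names follow the documented agent schema; the file path and log group name are illustrative placeholders):

```python
import json

# A minimal CloudWatch agent configuration of the kind the visual
# editor generates: one memory metric plus one log file to ship.
agent_config = {
    "metrics": {
        "metrics_collected": {
            # Collect memory utilization as a percentage.
            "mem": {"measurement": ["mem_used_percent"]},
        }
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        # Placeholder path and log group -- replace with
                        # your application's actual values.
                        "file_path": "/var/log/app/app.log",
                        "log_group_name": "my-app-logs",
                    }
                ]
            }
        }
    },
}

# The agent reads this as a JSON file (e.g. amazon-cloudwatch-agent.json).
print(json.dumps(agent_config, indent=2))
```

The visual editor spares you from assembling this nesting by hand, but the generated file remains inspectable and versionable like any other agent configuration.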
Source: aws.amazon.com

Paraphrase-multilingual-MiniLM-L12-v2, Table Transformer Detection, and Bielik-11B-v3.0-Instruct are now available in Amazon SageMaker JumpStart

Today, AWS announced the availability of paraphrase-multilingual-MiniLM-L12-v2, Microsoft Table Transformer Detection, and Bielik-11B-v3.0-Instruct in Amazon SageMaker JumpStart.
Paraphrase-multilingual-MiniLM-L12-v2 from Sentence Transformers is a lightweight semantic similarity model that maps sentences and paragraphs to a 384-dimensional dense vector space across 50+ languages. It is well suited for finding semantically similar content within and across languages, making it ideal for cross-lingual semantic search, multilingual document clustering, and sentence similarity scoring without requiring language-specific configuration.
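Embedding models like this are typically used by comparing their output vectors with cosine similarity: sentences with the same meaning land close together regardless of language. A minimal sketch with toy three-dimensional stand-ins (in practice the 384-dimensional vectors would come from the model itself, e.g. via the sentence-transformers library):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for model embeddings; the real model maps each
# sentence to a 384-dimensional vector across 50+ languages.
english  = [0.8, 0.1, 0.2]  # "Where is the station?"
german   = [0.7, 0.2, 0.1]  # "Wo ist der Bahnhof?"
offtopic = [0.0, 0.9, 0.1]  # "I like pancakes."

# Cross-lingual pair scores high; the unrelated sentence scores lower.
print(cosine_similarity(english, german))
print(cosine_similarity(english, offtopic))
```

The same scoring function underpins cross-lingual semantic search and clustering: embed everything once, then rank candidates by similarity to a query vector.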
Microsoft Table Transformer Detection is a DETR-based object detection model trained on the PubTables-1M dataset, purpose-built for detecting tables in unstructured documents such as PDFs and scanned images. It is well suited for document digitization pipelines and automated data extraction workflows that require reliably locating tabular content at scale across research papers, financial reports, and other document types.
Bielik-11B-v3.0-Instruct is an 11-billion-parameter generative language model developed by SpeakLeash and ACK Cyfronet AGH, trained on multilingual corpora spanning 32 European languages with a strong emphasis on Polish. It excels at Polish and European language dialogue, STEM and mathematical reasoning, logic and tool-use tasks, and enterprise applications requiring deep linguistic understanding across European languages.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
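Once a model is deployed to an endpoint, invoking it comes down to posting a JSON payload to the SageMaker runtime. A hedged sketch of building such a request for a text-generation model like Bielik-11B-v3.0-Instruct (the endpoint name is a hypothetical placeholder, and payload parameter names vary by serving container, so check the JumpStart model card for the exact schema):

```python
import json

# Hypothetical endpoint name -- use whatever name your JumpStart
# deployment actually created.
ENDPOINT_NAME = "bielik-11b-v3-instruct-endpoint"

# A typical request body for a JumpStart text-generation endpoint.
# "inputs"/"parameters" is a common convention, not a guarantee.
payload = {
    "inputs": "Napisz krótkie powitanie po polsku.",
    "parameters": {"max_new_tokens": 128, "temperature": 0.7},
}
body = json.dumps(payload)

# Invoking the endpoint would use the SageMaker runtime client, e.g.:
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName=ENDPOINT_NAME,
#     ContentType="application/json",
#     Body=body,
# )
print(body)
```

The embedding and table-detection models follow the same invoke pattern with their own input formats (raw sentences and images, respectively).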
Source: aws.amazon.com

Gemma 4 models are now available in Amazon SageMaker JumpStart

Today, AWS announced the availability of Gemma 4 E4B, Gemma 4 26B-A4B, and Gemma 4 31B in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three instruction-tuned models from Google DeepMind bring multimodal capabilities with configurable reasoning, native function calling, and multilingual support across 140+ languages, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure.
All three models share a common set of capabilities that address a broad range of enterprise AI use cases:
Thinking – Built-in reasoning mode that lets the model think step-by-step before answering
Image Understanding – Object detection, document and PDF parsing, screen and UI understanding, chart comprehension, OCR including multilingual, and handwriting recognition
Video Understanding – Analyze video content by processing sequences of frames
Interleaved Multimodal Input – Freely mix text and images in any order within a single prompt
Function Calling – Native support for structured tool use, enabling agentic workflows
Coding – Code generation, completion, and correction
Multilingual – Out-of-the-box support for 35+ languages, pre-trained on 140+ languages
Customers can choose the model that best fits their workload. Gemma 4 E4B additionally supports audio input for automatic speech recognition (ASR) and speech translation across multiple languages.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases. To get started with these models, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
Source: aws.amazon.com

Amazon CloudFront now supports invalidation by cache tag

Amazon CloudFront now allows you to invalidate cached objects by cache tag, enabling you to remove groups of related content from CloudFront edge locations with a single invalidation request. Cache tag invalidation simplifies common operational workflows such as updating product information across multiple pages, managing legal takedown requests, handling regulatory compliance requests, and refreshing content across multi-tenant platforms. Previously, invalidating related objects that didn’t share a common URL path required tracking individual URLs or using broad wildcard patterns that could unnecessarily clear unrelated content.
With invalidation by cache tag, developers and site reliability engineers tag cached objects by including a designated header with comma-separated tag values in the HTTP responses that return them. When needed, they can invalidate all objects sharing a tag in one request, maintaining high cache hit ratios while ensuring end users see fresh content within seconds. You can configure the header name through the Amazon CloudFront console, AWS CLI, or API, and assign multiple tags per object for flexible, precise cache management.
Over the years, CloudFront has improved invalidation propagation times: invalidations currently take effect in under 5 seconds at P95, and end-to-end completion, which includes reporting the invalidation status back, takes under 25 seconds at P95.
Amazon CloudFront invalidation by cache tag is available in all AWS Regions where CloudFront is offered except China (Beijing, operated by Sinnet) and China (Ningxia, operated by NWCD). Each cache tag is priced as one path. To learn more, view the Invalidations By Cache Tag documentation; for details on pricing, refer to the CloudFront pricing page.
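The tagging side of this workflow lives entirely at the origin: each response carries the configurable header with a comma-separated list of tags. A minimal sketch, assuming `Cache-Tag` as the configured header name (the name is whatever you set in CloudFront, and the tag values below are illustrative):

```python
# Assumed header name -- configurable per distribution in the
# CloudFront console, AWS CLI, or API.
CACHE_TAG_HEADER = "Cache-Tag"

def tag_response(headers, tags):
    """Attach cache tags to an origin HTTP response's headers dict.

    CloudFront reads the comma-separated values and associates each
    tag with the cached object.
    """
    headers[CACHE_TAG_HEADER] = ",".join(tags)
    return headers

# Example: a product page tagged by product, category, and tenant.
response_headers = tag_response(
    {"Content-Type": "text/html"},
    ["product-1234", "category-shoes", "tenant-acme"],
)
print(response_headers[CACHE_TAG_HEADER])
```

A later invalidation request naming the tag `product-1234` would then purge every cached object carrying that tag in one operation; the exact request shape is covered in the Invalidations By Cache Tag documentation.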
Source: aws.amazon.com

Amazon DocumentDB (with MongoDB compatibility) is now available in the Canada West (Calgary) Region

Amazon DocumentDB (with MongoDB compatibility) is now available in the Canada West (Calgary) Region, adding to the list of AWS Regions where you can use Amazon DocumentDB.
Amazon DocumentDB is a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon DocumentDB is designed to give you the scalability and durability you need when operating mission-critical MongoDB workloads. Storage scales automatically up to 128 TiB without any impact on your application. In addition, Amazon DocumentDB natively integrates with AWS Database Migration Service (DMS), Amazon CloudWatch, AWS CloudTrail, AWS Lambda, AWS Backup, and more. Amazon DocumentDB supports millions of requests per second and can be scaled out to 15 low-latency read replicas in minutes with no application downtime.
To learn more about Amazon DocumentDB, please visit the Amazon DocumentDB product page and pricing page. You can create an Amazon DocumentDB cluster from the AWS Management Console, AWS Command Line Interface (CLI), or SDK.
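Because Amazon DocumentDB is MongoDB-compatible, clients in the new Region connect with standard MongoDB drivers over a TLS connection string. A sketch of assembling such a string (the cluster endpoint, user, and password are hypothetical placeholders; DocumentDB requires the Amazon RDS CA bundle and its documentation recommends `retryWrites=false`):

```python
# Hypothetical cluster endpoint in the Canada West (Calgary) Region
# (ca-west-1) -- replace host and credentials with your own.
HOST = "mycluster.cluster-example.ca-west-1.docdb.amazonaws.com"

# DocumentDB clusters enforce TLS by default; global-bundle.pem is the
# CA bundle AWS publishes for verifying the cluster certificate.
uri = (
    f"mongodb://myuser:mypassword@{HOST}:27017/"
    "?tls=true&tlsCAFile=global-bundle.pem"
    "&replicaSet=rs0&readPreference=secondaryPreferred"
    "&retryWrites=false"
)
print(uri)

# Connecting would then use any MongoDB driver, e.g. pymongo:
# from pymongo import MongoClient
# client = MongoClient(uri)
# client.mydb.mycollection.insert_one({"hello": "Calgary"})
```

Setting `readPreference=secondaryPreferred` routes reads to the low-latency read replicas mentioned above, keeping the primary free for writes.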
Source: aws.amazon.com