AWS Elastic Beanstalk launches Deployments tab with in-progress deployment logs

AWS Elastic Beanstalk now provides a Deployments tab in the environment dashboard, giving customers a consolidated view of their deployment history and real-time deployment progress with step-by-step deployment logs. Previously, customers had to wait until a deployment completed before retrieving logs, and then correlate events across multiple sources to understand what happened. With this launch, customers can view deployment status, events, and detailed logs in a single interface directly from the Elastic Beanstalk console, even while a deployment is still in progress.
The Deployments tab displays a history of recent deployments for an environment, including application deployments, configuration updates, and environment launches. Each deployment includes a detailed view with deployment events and a new consolidated log that captures each step of the deployment process, including dependency installation, application builds, .ebextensions, platform hooks, and application startup output.
This feature is supported across all Elastic Beanstalk Linux-based platform branches. It is available in all AWS Commercial Regions and AWS GovCloud (US) Regions where Elastic Beanstalk is available. For a complete list of supported Regions, see AWS Regions.
To learn more, see the AWS Elastic Beanstalk Developer Guide. For additional information, visit the AWS Elastic Beanstalk product page.
Source: aws.amazon.com

Amazon Neptune Database is now available in Asia Pacific (Hyderabad) region

Amazon Neptune Database is now available in the AWS Asia Pacific (Hyderabad) region. You can now create Neptune clusters using R5, R5d, R6g, R6i, X2iedn, T4g, and T3 instance types in the AWS Asia Pacific (Hyderabad) region.
Amazon Neptune Database is a fast, reliable, and fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or using the SPARQL query language on the W3C Resource Description Framework (RDF) model. Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production. Amazon Neptune supports Neptune Global Database, designed for globally distributed applications, allowing a single Neptune database to span multiple AWS Regions.
To get started, you can create a new Neptune cluster using the AWS Management Console, AWS CLI, or a quickstart AWS CloudFormation template. For more information on pricing and Region availability, refer to the Neptune pricing page and the AWS Region Table.
Source: aws.amazon.com

Amazon Connect now provides integrated workflows for managers to coach agents

Amazon Connect now delivers integrated agent coaching workflows that enable contact center managers to provide timely, targeted feedback directly within the Connect UI. When managers identify improvement opportunities through evaluation scorecards, they can immediately create coaching plans with specific customer interaction examples. For example, a manager can share interactions with an agent where they excelled at problem-solving but could show more customer empathy, with examples of empathetic language to use going forward. After coaching sessions, agents acknowledge feedback and add notes to confirm understanding of expectations and next steps. Both managers and agents access all coaching history on a single page, enabling systematic progress tracking and improved coaching effectiveness. This integrated approach eliminates coaching delays and creates accountability throughout the agent development process, accelerating performance improvement across contact center operations.
This feature is available in all AWS Regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.
Source: aws.amazon.com

Amazon EC2 High Memory U7i instances now available in additional regions

Amazon EC2 High Memory U7i instances with 8TB of memory (u7i-8tb.112xlarge) are now available in AWS Asia Pacific (Hyderabad), and U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in AWS Europe (Spain). U7i instances are part of the AWS 7th generation of EC2 instances and are powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-8tb instances offer 8TiB of DDR5 memory and U7i-12tb instances offer 12TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.
U7i-8tb instances offer 448 vCPUs and U7i-12tb instances offer 896 vCPUs. Both instance types support up to 100 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth for faster data loading and backups, up to 100 Gbps of network bandwidth, and ENA Express. 
U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
To learn more about U7i instances, visit the High Memory instances page.
Source: aws.amazon.com

Amazon EC2 R7gd instances are now available in South America (Sao Paulo) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are available in the South America (Sao Paulo) Region. R7gd instances are powered by AWS Graviton3 processors with DDR5 memory and are built on the AWS Nitro System. They are ideal for memory-intensive workloads such as open-source databases, in-memory caches, and real-time big data analytics, and are a great fit for applications that need access to high-speed, low-latency local storage, including those that need temporary storage of data for scratch space, temporary files, and caches. To learn more, see Amazon R7gd Instances. To get started, see the AWS Management Console.
Source: aws.amazon.com

Amazon EC2 C8gd and M8gd instances are now available in additional AWS Regions

Amazon Elastic Compute Cloud (Amazon EC2) C8gd and M8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in additional Regions. C8gd instances are now available in South America (Sao Paulo), and M8gd instances are now available in Europe (Ireland).
These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They have up to 40% higher performance for I/O-intensive database workloads, and up to 20% faster query results for I/O-intensive real-time data analytics, than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low-latency local storage.
Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using the EC2 instance bandwidth weighting configuration, providing greater flexibility in allocating bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on the 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.
To learn more, see Amazon C8gd Instances and Amazon M8gd Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
Source: aws.amazon.com

Amazon EC2 C8id instances are now available in Europe (Spain)

Amazon Elastic Compute Cloud (Amazon EC2) C8id instances, powered by custom Intel Xeon 6 processors, feature up to 384 vCPUs, 768 GiB of memory, and 22.8 TB of NVMe SSD storage, and deliver up to 43% higher performance and 3.3x more memory bandwidth compared to previous-generation C6id instances. Starting today, C8id instances are available in the Europe (Spain) Region.
These instances deliver up to 46% higher performance for I/O-intensive database workloads, and up to 30% faster query results for I/O-intensive real-time data analytics, than previous sixth-generation instances. Additionally, these instances support Instance Bandwidth Configuration, allowing 25% flexible allocation between network and EBS bandwidth so resources can be allocated optimally for each workload. C8id instances are ideal for compute-intensive workloads such as high-performance web servers, batch processing, distributed analytics, ad serving, video encoding, and gaming servers.
C8id instances are available in the US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt, Spain), and Asia Pacific (Tokyo) Regions. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 instance type page.
Source: aws.amazon.com

AWS Builder ID now supports Sign in with GitHub and Amazon

AWS Builder ID, your profile for accessing AWS applications including AWS Builder Center, AWS Training and Certification, and Kiro, now supports two new social logins: GitHub and Amazon. This expansion of sign-in options builds on the existing Google and Apple social sign-in capabilities, providing GitHub and Amazon users with a streamlined way to access AWS resources without managing separate credentials on AWS.
With the Sign in with GitHub and Amazon integration, developers and builders can now access their AWS Builder ID profile using their GitHub or Amazon account credentials. This enhancement eliminates password management complexity, reduces forgotten-password issues, and provides a frictionless experience for both new user registration and returning user sign-ins. Whether you’re accessing development resources in AWS Builder Center, enrolling in certification programs, or using Kiro to code your next app, your GitHub and Amazon accounts can now serve as a secure gateway to your AWS builder journey.
To learn more about AWS Builder ID and get started with Sign in with GitHub and Amazon, visit the AWS Builder ID documentation.
Source: aws.amazon.com

Amazon Bedrock now supports observability of First Token Latency and Quota Consumption

Amazon Bedrock is a fully managed service for building generative AI applications using high-performing foundation models from leading AI providers. It now supports two new CloudWatch metrics, TimeToFirstToken and EstimatedTPMQuotaUsage, giving you deeper visibility into inference performance and quota consumption.
TimeToFirstToken measures the latency from when a request is sent to when the first token is received, for streaming APIs (ConverseStream and InvokeModelWithResponseStream). You can use this metric to set CloudWatch alarms that monitor latency degradation and establish SLA baselines, without any client-side instrumentation. EstimatedTPMQuotaUsage tracks your estimated Tokens Per Minute (TPM) quota consumption, including cache write tokens and output burndown multipliers, across all inference APIs (Converse, InvokeModel, ConverseStream, and InvokeModelWithResponseStream). You can use this metric to set proactive alarms before reaching your quota limit, track quota consumption across your models, and request quota increases before usage is rate limited.
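As a concrete illustration, an alarm on the new latency metric can be defined in a few lines. This is a minimal sketch only: the TimeToFirstToken metric name comes from this announcement, but the namespace, "ModelId" dimension, threshold, and evaluation settings are illustrative assumptions; check the Bedrock monitoring documentation for the exact dimensions your account emits. The helper just builds the parameter set for CloudWatch's PutMetricAlarm API, and the actual boto3 call is left commented out.

```python
# Minimal sketch: build PutMetricAlarm parameters for Bedrock's new
# TimeToFirstToken metric. The namespace, "ModelId" dimension, threshold,
# and evaluation settings are illustrative assumptions, not confirmed values.

def build_ttft_alarm_params(model_id: str, threshold_ms: float) -> dict:
    """Parameters for a CloudWatch alarm on average first-token latency."""
    return {
        "AlarmName": f"bedrock-ttft-{model_id.replace('.', '-')}",
        "Namespace": "AWS/Bedrock",   # assumed namespace
        "MetricName": "TimeToFirstToken",
        "Dimensions": [{"Name": "ModelId", "Value": model_id}],  # assumed dimension
        "Statistic": "Average",
        "Period": 60,                 # the metrics are updated every minute
        "EvaluationPeriods": 5,       # alarm after five consecutive slow minutes
        "Threshold": threshold_ms,
        "ComparisonOperator": "GreaterThanThreshold",
        "TreatMissingData": "notBreaching",
    }

params = build_ttft_alarm_params("anthropic.claude-3-5-sonnet", 1500.0)
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)  # requires AWS credentials
```

The same pattern would apply to EstimatedTPMQuotaUsage, with the threshold set safely below your account's TPM quota so an increase can be requested before requests are throttled.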
Both metrics are supported in all commercial Bedrock Regions for models available via cross-region inference profiles and in-region inference, and are updated every minute for successfully completed requests. These metrics are available in CloudWatch out of the box; you pay only for the underlying model inference you consume, with no API changes or opt-in required.
To learn more about TimeToFirstToken and EstimatedTPMQuotaUsage, see our documentation page on Monitoring Amazon Bedrock.
Source: aws.amazon.com

Amazon Bedrock AgentCore Runtime now supports stateful MCP server features

Amazon Bedrock AgentCore Runtime now supports stateful Model Context Protocol (MCP) server features, enabling developers to build MCP servers that leverage elicitation, sampling, and progress notifications alongside existing support for resources, prompts, and tools. These capabilities allow MCP servers deployed to AgentCore Runtime to collect user input interactively during tool execution, request LLM-generated content from clients, and provide real-time progress updates for long-running operations.
With stateful MCP sessions, each user session runs in a dedicated microVM with isolated resources, and the server maintains session context across multiple interactions using an Mcp-Session-Id header.
Elicitation enables server-initiated, multi-turn conversations to gather information such as user preferences. Sampling allows servers to request AI-powered text generation from the client for tasks like personalized recommendations. Progress notifications keep clients informed during operations such as searching for flights or processing bookings. These features work together to support complex, interactive agent workflows that go beyond simple request-response patterns.
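To make the session mechanics concrete, here is a small client-side sketch of how the Mcp-Session-Id header ties requests to one stateful session. The header name comes from this announcement (and the MCP streamable-HTTP transport); the SessionTracker helper itself is hypothetical, not part of any SDK, and real MCP client libraries handle this bookkeeping for you.

```python
# Hypothetical helper illustrating stateful MCP session handling: the server
# assigns an Mcp-Session-Id on the first response, and the client replays it
# on every later request so the server routes them to the same session.

class SessionTracker:
    """Capture the Mcp-Session-Id from a response and replay it on requests."""

    HEADER = "Mcp-Session-Id"

    def __init__(self) -> None:
        self.session_id: str | None = None

    def outgoing_headers(self) -> dict:
        # The first request carries no session id; the server assigns one.
        headers = {"Content-Type": "application/json"}
        if self.session_id:
            headers[self.HEADER] = self.session_id
        return headers

    def record_response(self, response_headers: dict) -> None:
        # Remember the id so follow-up calls (elicitation answers, progress
        # polling, sampling results) land in the same server-side session.
        sid = response_headers.get(self.HEADER)
        if sid:
            self.session_id = sid
```

Because the session is sticky, features like elicitation and progress notifications can span several HTTP exchanges while the server keeps the conversation state in one place.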
Stateful MCP server features are supported in AgentCore Runtime across fourteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm).
To learn more, see Stateful MCP server features in the Amazon Bedrock AgentCore documentation.
Source: aws.amazon.com