Amazon GameLift Servers enhances the AWS Console for game developers with AI-powered assistance

Today, Amazon GameLift Servers is launching AI-powered assistance in the AWS Console, leveraging Amazon Q Developer to provide tailored guidance for game developers. This new feature integrates specialized GameLift Servers knowledge to help customers navigate complex workflows, troubleshoot issues, and optimize their game server deployments more efficiently. Developers can now access AI-assisted recommendations for game server integration, fleet configuration, and performance optimization directly within the Amazon GameLift Servers pages of the AWS Console. This enhancement aims to streamline decision-making, reduce troubleshooting time, and improve overall resource utilization, leading to cost savings and better player experiences. AI-powered assistance is now available in all AWS Regions where Amazon GameLift Servers is supported, except the AWS China Regions. To learn more about this new feature, visit the Amazon GameLift Servers documentation.
Source: aws.amazon.com

Amazon EC2 C8gn instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the AWS US East (Ohio) and Middle East (UAE) Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th-generation AWS Nitro Cards and offer up to 600 Gbps of network bandwidth, the highest among network-optimized EC2 instances. Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference. For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm), Asia Pacific (Singapore, Malaysia, Sydney, Thailand), and Middle East (UAE). To learn more, see Amazon EC2 C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
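Getting started from the CLI can be sketched as follows. This is a minimal sketch, assuming AWS CLI v2 with credentials already configured; the AMI, key pair, and subnet IDs are placeholders you would replace with your own, and since C8gn is Graviton4-based, the AMI must be an arm64 image:

```shell
# Minimal sketch: launch a single c8gn.xlarge instance in US East (Ohio).
# All resource IDs below are placeholders, not real values.
aws ec2 run-instances \
  --region us-east-2 \
  --instance-type c8gn.xlarge \
  --image-id ami-0123456789abcdef0 \
  --key-name my-key-pair \
  --subnet-id subnet-0123456789abcdef0 \
  --count 1
```

For tightly coupled cluster workloads on the sizes listed above, EFA is enabled at launch by attaching a network interface whose interface type is set to EFA rather than the default.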
Source: aws.amazon.com

Docker Joins the Agentic AI Foundation

Today, the Linux Foundation launched the Agentic AI Foundation with three founding projects: Anthropic’s Model Context Protocol (MCP), Block’s goose agent framework, and OpenAI’s AGENTS.md standard.

The foundation brings together the companies building the infrastructure layer for agents: Anthropic, Block, OpenAI, Amazon, Google, Microsoft, Cloudflare, and Bloomberg, alongside key tooling and platform companies. 

Docker is joining as a Gold member.

From Open Source to Production

The timing reflects how quickly the space has matured. A year ago, MCP launched as an open source project from Anthropic, solving a specific problem: how AI systems connect to tools and data. It’s now running on 10,000+ public servers and adopted across Claude, ChatGPT, Cursor, Copilot, VS Code, and Gemini.

Six months ago, companies started deploying agents that take real actions: triggering builds, accessing databases, modifying infrastructure, executing workflows. That shift from prototype to production created new questions around protocols and governance.

Today, foundational protocols that helped answer those questions, protocols like MCP, are moving to the Linux Foundation under the same governance structure that stewards Linux and PyTorch.

Why Neutral Governance Matters

When infrastructure becomes critical, developers won’t build on protocols that could change arbitrarily. And larger teams and enterprises want shared standards.

Over the past year we’ve partnered with Anthropic, Block, and other key players in the AI ecosystem to help create and embrace standards like MCP, Goose, and AGENTS.md. The Agentic AI Foundation creates a structure for the industry to unite behind these standards, building an ecosystem of interoperable tools that benefit developers.

Docker is excited to join as an active Gold member to drive innovation in developer-first, secure tools across our ecosystem.

What Happens Next

The protocols exist. Adoption is happening. The foundation ensures these protocols evolve transparently, with input from everyone building on them.

Docker helped build that structure for applications. Now we’re doing it for agents.

Learn more at aaif.io
Source: https://blog.docker.com/feed/

Amazon RDS and Aurora now support resource tagging for automated backups

Amazon RDS and Aurora now support resource tagging for automated backups and cluster automated backups. You can now tag your automated backups separately from the parent DB instance or DB cluster, enabling Attribute-Based Access Control (ABAC) and simplifying resource management and cost tracking.
With this launch, you can tag automated backups in the same way as other RDS resources using the AWS Management Console, API, or SDK. Use these tags with IAM policies to control access and permissions to automated backups. Additionally, these tags can help you categorize your resources by application, project, department, environment, and more, as well as manage, organize, and track the costs of your automated backups. For example, create application-specific tags to control permissions for describing, deleting, or restoring automated backups and to organize and track the backup costs of the application.
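As a concrete illustration of the ABAC pattern described above, here is a minimal sketch of an IAM policy document that gates actions on automated backups by tag. The tag key `Project`, its value `checkout`, the specific actions chosen, and the assumption that the generic `aws:ResourceTag` condition key applies to tagged automated backups are all illustrative, not taken from the announcement:

```python
import json

# Sketch: an ABAC-style IAM policy allowing deletion of an RDS automated
# backup (and point-in-time restore) only when the resource carries the
# tag Project=checkout. Tag key/value and action list are illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds:DeleteDBInstanceAutomatedBackup",
                "rds:RestoreDBInstanceToPointInTime",
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/Project": "checkout"}
            },
        }
    ],
}

# Emit the policy as JSON, ready to paste into an IAM policy editor.
print(json.dumps(policy, indent=2))
```

The same tag would then be attached to the automated backup itself (for example via the console or the RDS tagging API), so only principals scoped to that project can act on it.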
This capability is available in all AWS Regions, including the AWS GovCloud (US) Regions where Aurora and RDS are available.
To learn more about tagging Aurora and RDS automated backups, see the Amazon documentation on Tagging Amazon Aurora resources, Tagging Amazon RDS resources, and Using tags for attribute-based access control.
Source: aws.amazon.com

AWS Partner Central now includes opportunity deal sizing

Today, AWS announces a deal sizing capability in AWS Partner Central. This new feature, available within APN Customer Engagements (ACE) Opportunities, uses AI to provide deal size estimates and AWS service recommendations. The deal sizing capability allows Partners to save time on deal management by simplifying the process of estimating AWS monthly recurring revenue (MRR) when creating or updating opportunities. Partners can optionally import AWS Pricing Calculator URLs to automatically populate AWS service selections and corresponding spend estimates into their opportunities, reducing the need for manual re-entry. When a Pricing Calculator URL is provided, deal sizing delivers enhanced insights, including pricing strategy optimization recommendations, potential cost savings analysis, Migration Acceleration Program (MAP) eligibility indicators, and modernization pathway analysis. These enhanced insights help Partners refine their technical approach and strengthen funding applications, accelerating the funding approval process. Deal sizing is now available in AWS Partner Central worldwide. The feature is accessible through both AWS Partner Central and the AWS Partner Central API for Selling, which is available in the US East (N. Virginia) Region. To get started, log in to AWS Partner Central to create or update opportunities and view deal sizing insights. For API integration with your CRM system, see the AWS Partner Central API documentation. To learn more about deal sizing, visit the Partner Central Sales Guide.
Source: aws.amazon.com