Amazon EC2 C8gd, M8gd, and R8gd instances are now available in additional AWS Regions

Amazon Elastic Compute Cloud (Amazon EC2) C8gd instances are now available in the Europe (London) and Canada (Central) AWS Regions. Additionally, M8gd instances are available in South America (Sao Paulo) and R8gd instances are available in Europe (London). These instances feature up to 11.4 TB of local NVMe-based SSD block-level storage and are powered by AWS Graviton4 processors, delivering up to 30% better performance than Graviton3-based instances. They offer up to 40% higher performance for I/O-intensive database workloads and up to 20% faster query results for I/O-intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low-latency local storage. Each instance type is available in 12 sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using the EC2 instance bandwidth weighting configuration, providing greater flexibility in allocating bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on the 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes. To learn more, see the Amazon EC2 C8gd, M8gd, and R8gd instance pages. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and the Porting Advisor for Graviton. To get started, visit the AWS Management Console.
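As an illustration of the bandwidth weighting configuration mentioned above: when launching through the AWS CLI, run-instances accepts a network performance option that shifts bandwidth toward Amazon EBS or toward the network. This is a sketch rather than an official snippet – the AMI ID is a placeholder, and the parameter spelling should be verified against the EC2 documentation:

# Launch a C8gd instance with bandwidth shifted toward EBS (AMI ID is a placeholder)
aws ec2 run-instances \
  --region ca-central-1 \
  --instance-type c8gd.4xlarge \
  --image-id ami-0123456789abcdef0 \
  --network-performance-options "BandwidthWeighting=ebs-1"

Per the announcement above, the adjustment is 25%; a vpc-1 value would shift bandwidth in the other direction, toward the network.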
Source: aws.amazon.com

Amazon EC2 C6id and R6id instances are now available in additional AWS Regions

Amazon EC2 C6id instances are now available in the Europe (Milan) AWS Region, and R6id instances are now available in the Africa (Cape Town) AWS Region. These instances are powered by 3rd generation Intel Xeon Scalable (Ice Lake) processors with an all-core turbo frequency of 3.5 GHz and offer up to 7.6 TB of local NVMe-based SSD block-level storage. C6id and R6id are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security. Customers can take advantage of access to high-speed, low-latency local storage to scale the performance of applications such as video encoding, image manipulation and other forms of media processing, data logging, distributed web-scale in-memory caches, in-memory databases, and real-time big data analytics. Customers can purchase the new instances as On-Demand or Spot Instances, as Reserved Instances, or via Savings Plans. To get started, use the AWS Command Line Interface (CLI) or the AWS SDKs. To learn more, visit our product pages for C6id and R6id.
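For example, a minimal sketch of launching one of the new instances from the CLI (the AMI ID is a placeholder; eu-south-1 is the Europe (Milan) Region):

# Launch a single C6id instance in Europe (Milan)
aws ec2 run-instances \
  --region eu-south-1 \
  --instance-type c6id.xlarge \
  --image-id ami-0123456789abcdef0 \
  --count 1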
Source: aws.amazon.com

Connect to Remote MCP Servers with OAuth in Docker

In just a year, the Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and external systems. The Docker MCP Catalog now hosts hundreds of containerized local MCP servers, enabling developers to quickly experiment and prototype locally.

We have now added support for remote MCP servers to the Docker MCP Catalog. These servers function like local MCP servers but run over the internet, making them easier to access from any environment without the need for local configuration.

With the latest update, the Docker MCP Toolkit now supports remote MCP servers with OAuth, making it easier than ever to securely connect to external apps like Notion and Linear, right from your Docker environment. Plus, the Docker MCP Catalog just grew by 60+ new remote MCP servers, giving you an even wider range of integrations to power your workflows and accelerate how you build, collaborate, and automate.

As remote MCP servers gain popularity, we’re excited to make this capability available to millions of developers building with Docker.

In this post, we’ll explore what this means for developers, why OAuth support is a game-changer, and how you can get started with remote MCP servers with just two simple commands.

Connect to Remote MCP Servers: Securely, Easily, Seamlessly

Goodbye Manual Setup, Hello OAuth Magic

Figuring out how to find and generate API tokens for a service is often tedious, especially for beginners. Tokens also tend to expire unpredictably, breaking existing MCP connections and requiring reconfiguration.

With OAuth built directly into Docker MCP, you’ll no longer need to juggle tokens or manually configure connections. You can securely connect to remote MCP servers in seconds – all while keeping your credentials safe. 

60+ New Remote MCP Servers, Instantly Available

From project management to documentation and issue tracking, the expanded MCP Catalog now includes integrations for Notion, Linear, and dozens more. Whatever tools your team depends on, they’re now just a command away. We will continue to expand the catalog as new remote servers become available.

Figure 1: Some examples of remote MCP servers that are now part of the Docker MCP Catalog

Easy to use via the CLI or Docker Desktop 

No new setup. No steep learning curve. Just use your existing Docker CLI and get going. Enabling and authorizing remote MCP servers is fully integrated into the familiar command-line experience you already love. You can also enable servers with one click in Docker Desktop.

Two Commands to Connect and Authorize Remote MCP Servers: It's That Simple

Using Docker CLI

Step 1: Enable Your Remote MCP Server

Pick your server, and enable it with one line:

docker mcp server enable notion-remote

This registers the remote server and prepares it for OAuth authorization.

Step 2: Authorize Securely with OAuth

Next, authorize your connection with:

docker mcp oauth authorize notion-remote

This launches your browser with an OAuth authorization page.
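Once the browser flow completes, the connection is ready to use. You can sanity-check it from the same terminal; the listing subcommands below are our assumption about the toolkit's CLI surface, so confirm them with docker mcp --help if they differ:

# Show enabled servers and the tools they expose
docker mcp server list
docker mcp tools list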

Using Docker Desktop

Step 1: Enable Your Remote MCP Server

If you prefer to use Docker Desktop instead of the command line, open the Catalog tab and search for the server you want to use. The cloud icon indicates that it’s a remote server. Click the “+” button to enable the server.

Figure 2: Enabling the Linear remote MCP server is just one click.

Step 2: Authorize Securely with OAuth

Open the OAuth tab and click the “Authorize” button next to the MCP Server you want to authenticate with.

Figure 3: Built-in OAuth flow for Linear remote MCP servers. 

Once authorized, your connection is live. You can now interact with Notion, Linear, or any other supported MCP server directly through your Docker MCP environment.

Why This Update Matters for Developers

Unified Access Across Your Ecosystem

Developers rely on dozens of tools every day across different MCP clients. The Docker MCP Toolkit brings them together under one secure, unified interface – helping you move faster without manually configuring each MCP client. This means you don’t need to log in to the same service multiple times across Cursor, Claude Code, and other clients you may use.

Unlock AI-Powered Workflows

Remote MCP servers make it easy to bridge data, tools, and AI. They are always up to date with the latest tools, and they are faster to get started with because they don't run any code on your computer. With OAuth support, your connected apps can now securely provide context to AI models, unlocking powerful new automation possibilities.

Building the Future of Developer Productivity

This update is more than just an integration boost – it’s the foundation for a more connected, intelligent, and automated developer experience. And this is only the beginning.

Conclusion

The addition of OAuth for remote MCP servers makes Docker MCP Toolkit the most powerful way to securely connect your tools, workflows, and AI-powered automations.

With 60+ new remote servers now available and growing, developers can bring their favorite services – like Notion and Linear – directly into the Docker MCP Toolkit.

Learn more

Head over to our docs to learn more

Explore the MCP Catalog: Discover containerized, security-hardened MCP servers

Open Docker Desktop and get started with the MCP Toolkit (Requires version 4.48 or newer to launch the MCP Toolkit automatically)

Source: https://blog.docker.com/feed/

Docker Engine v29: Foundational Updates for the Future

This post is for Linux users running Docker Engine (Community Edition) directly on their hosts. Docker Desktop users don’t need to take any action — Engine updates are included automatically in future Desktop releases.

Docker Engine v29 is a foundational release that sets the stage for the future of the Docker platform. While it may not come with flashy new features, it introduces a few significant under-the-hood changes that simplify our architecture and improve ecosystem alignment:

Minimum API version update

Containerd image store as the default for new installations

Migration to Go modules

Experimental support for nftables

These changes improve maintainability, developer experience, and interoperability across the container ecosystem.

Minimum API Version Update

Docker versions older than v25 are now end of life, and as such, we have increased the minimum API version to 1.44 (Moby v25).

If you are getting the following error, you will need to update to a newer client or follow the mitigation steps below to override the minimum API version:

Error response from daemon: client version 1.43 is too old. Minimum supported API version is 1.44, please upgrade your client to a newer version

Override the minimum API version

There are two methods to launch dockerd with a lower minimum API version. Additional information can be found on docs.docker.com.

Using an environment variable when starting dockerd

Launch dockerd with the DOCKER_MIN_API_VERSION environment variable set to the previous value. For example:

DOCKER_MIN_API_VERSION=1.24 dockerd
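If dockerd is managed by systemd, as on most distributions, you can set the same variable in a drop-in unit instead; this is a sketch using standard systemd mechanics rather than anything Docker-specific:

# /etc/systemd/system/docker.service.d/min-api.conf
[Service]
Environment=DOCKER_MIN_API_VERSION=1.24

Then reload and restart: sudo systemctl daemon-reload && sudo systemctl restart docker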

Using a JSON configuration file — daemon.json

Set min-api-version in your daemon.json file.

{
  "min-api-version": "1.24"
}
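A daemon.json change only takes effect after the daemon restarts. To restart and then confirm what the daemon advertises (we assume the MinAPIVersion field name here; docker version --format '{{json .Server}}' shows the full structure if it differs):

sudo systemctl restart docker
docker version --format '{{.Server.MinAPIVersion}}'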

Containerd Image Store Becomes the Default

Why We Made This Change

The containerd runtime originated as a core component of Docker Engine and was later split out and donated to the Cloud Native Computing Foundation (CNCF). It now serves as the industry-standard container runtime, powering Kubernetes and many other platforms.

While Docker introduced containerd for container execution years ago, we continued using the graph driver storage backend for managing image layers. Meanwhile, containerd evolved its own image content store and snapshotter framework, designed for modularity, performance, and ecosystem alignment.

To ensure stability, Docker has been gradually migrating to the containerd image store over time. Docker Desktop has already used the containerd image store as the default for most of the past year. With Docker Engine v29, this migration takes the next step by becoming the default in the Moby engine.

What It Is

As of Docker Engine v29, the containerd image store becomes the default for image layer and content management for new installs.

Legacy graph drivers are still available, but are now deprecated. New installs can still opt out of the containerd image store if they run into any issues.

Why This Matters

Simplified architecture: Both execution and storage now use containerd, reducing duplication and internal complexity.

Unlock new feature possibilities, such as:

Snapshotter innovations

Lazy pulling of image content

Remote content stores

Peer-to-peer distribution

Ecosystem alignment: Brings Docker Engine in sync with containerd-based platforms like Kubernetes, improving interoperability.

Future-proofing: Enables faster innovation in image layer handling and runtime behaviour.

We appreciate that this change may cause some disruption, as the containerd image store takes a different approach to content and layer management compared to the existing storage drivers.

However, this shift is a positive one. It enables a more consistent, modular, and predictable container experience.

Migration Path

To be clear, these changes only affect new installs; existing users will not be forced onto the containerd image store. However, you can opt in now and start your migration.

We are working on a migration guide to help teams transition and move their existing content to the containerd image store.

What’s next

The graph driver backend will be removed in a future release.

Docker will continue evolving the image store experience, leveraging the full capabilities of containerd’s ecosystem.

Expect to see enhanced content management, multi-snapshotter support, and faster pull/push workflows in the future.

Moby Migrates to Go Modules

Why We Made This Change

Go modules have been the community standard since 2019, but until now, the Moby project used a legacy vendoring system. Avoiding Go modules was creating:

Constant maintenance churn to work around tooling assumptions

Confusing workflows for contributors

Compatibility issues with newer Go tools and ecosystem practices

Simply put, continuing to resist Go modules was making life harder for everyone.

What It Is

The Moby codebase is now fully module-aware using go.mod.

This means cleaner dependency management and better interoperability for tools and contributors.

External clients, API libraries, and SDKs will find the Moby codebase easier to consume and integrate with.

What It’s Not

This is not a user-facing feature—you won’t see a UI or command change.

However, it does affect developers who consume Docker’s Go APIs.

Important for Go Developers

If you’re consuming the Docker client or API packages in your own Go projects:

The old module path github.com/docker/docker will no longer receive updates.

To stay current with Docker Engine releases, you must switch to importing from github.com/moby/moby.
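A minimal sketch of that switch, assuming the client package keeps its relative import path under the new module (verify the exact module layout against the v29 release notes):

# Pull in the new module path and tidy up the old one
go get github.com/moby/moby/client@latest
go mod tidy

# Then update imports in your source, for example:
#   github.com/docker/docker/client  ->  github.com/moby/moby/client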

Experimental support for nftables

Why We Made This Change

For bridge and overlay networks on Linux, Docker Engine currently creates firewall rules using “iptables” and “ip6tables”.

In most cases, these commands are linked to "iptables-nft" and "ip6tables-nft", so Docker's rules are translated to nftables behind the scenes.

However, OS distributions are beginning to deprecate support for iptables. It’s past time for Docker Engine to create its own nftables rules directly.

What It Is

Opt-in support for creating nftables rules instead of iptables.

The rules are functionally equivalent, but there are some differences to be aware of, particularly if you make use of the “DOCKER-USER” chain in iptables.

On a host that uses “firewalld”, iptables rules are created via firewalld’s deprecated “direct” interface. That’s not necessary for nftables because rules are organised into separate tables, each with its own base chains. Docker will still set up firewalld zones and policies for its devices, but it creates nftables rules directly, just as it does on a host without firewalld.

What It’s Not

In this initial version, nftables support is “experimental”. Please be cautious about deploying it in a production environment.

Swarm support is planned for a future release. At present, it’s not possible to enable Docker Engine’s nftables support on a node with Swarm enabled.

In a future release, nftables will become the default firewall backend and iptables support will be deprecated.

Future Work

In addition to adding planned Swarm support, there’s scope for efficiency improvements.

For example, the rules themselves could make more use of nftables features, particularly sets of ports.

These changes will be prioritised based on the feedback received. If you would like to contribute, do let us know!

Try It Out

Start dockerd with the --firewall-backend=nftables option to enable nftables support. After a reboot, you may find you need to enable IP forwarding on the host. If you're using the "DOCKER-USER" iptables chain, it will need to be migrated. For more information, see https://docs.docker.com/engine/network/firewall-nftables

We're looking for feedback. If you find issues, let us know at https://github.com/moby/moby/issues.
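If you configure the daemon through daemon.json rather than flags, the equivalent setting should look like the following; we are assuming the key mirrors the flag name, as it does for other dockerd options, so check the documentation linked above if it differs:

{
  "firewall-backend": "nftables"
}

Restart the daemon afterwards, for example with sudo systemctl restart docker.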

Getting Started with Engine v29

As mentioned, this post is for Linux users running Docker Engine (Community Edition) directly on their hosts. Docker Desktop users don’t need to take any action — Engine updates are included automatically in the upcoming Desktop releases.

To install Docker Engine on your host or update an existing installation, please follow the guide for your specific OS.

For additional information about this release:

Release notes for Engine v29

Documentation

Source: https://blog.docker.com/feed/

Amazon EC2 I7i instances now available in additional AWS Regions

Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the Asia Pacific (Hyderabad) and Canada (Central) AWS Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these new instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45 TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances. I7i instances offer the best compute and storage performance among x86-based storage optimized instances in Amazon EC2, making them ideal for I/O-intensive, latency-sensitive workloads that demand very high random IOPS performance and real-time access to small to medium size (multi-TB) datasets. Additionally, the torn write prevention feature supports up to 16 KB block sizes, enabling customers to eliminate database performance bottlenecks. I7i instances also support real-time, high-resolution performance statistics for the NVMe instance store volumes attached to them; to learn more, visit the detailed NVMe performance statistics page. I7i instances are available in eleven sizes – nine virtual sizes up to 48xlarge and two bare metal sizes – delivering up to 100 Gbps of network bandwidth and 60 Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.
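To check which I7i sizes a given Region offers before migrating, one option is the standard describe-instance-types call, shown here for Canada (Central):

# List all I7i instance types available in ca-central-1
aws ec2 describe-instance-types \
  --region ca-central-1 \
  --filters "Name=instance-type,Values=i7i.*" \
  --query "InstanceTypes[].InstanceType" \
  --output table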
Source: aws.amazon.com