Celebrating Women in AI: 3 Questions with Cecilia Liu on Leading Docker’s MCP Strategy

To celebrate International Women’s Day, we sat down with Cecilia Liu, Senior Product Manager at Docker, for three questions about the vision and strategy behind Docker’s MCP solutions. From shaping product direction to driving AI innovation, Cecilia plays a key role in defining how Docker enables secure, scalable AI tooling.

Cecilia leads product management for Docker’s MCP Catalog and Toolkit, our solution for running MCP servers securely and at scale through containerization. She drives Docker’s AI strategy across both enterprise and developer ecosystems, helping organizations deploy MCP infrastructure with confidence while empowering individual developers to seamlessly discover, integrate, and use MCP in their workflows. With a technical background in AI frameworks and an MBA from NYU Stern, Cecilia bridges the worlds of AI infrastructure and developer tools, turning complex challenges into practical, developer-first solutions.

What products are you responsible for?

I own Docker’s MCP solution. At its core, it’s about solving the problems that anyone working with MCP runs into: how do you find the right MCP servers, how do you actually use them without a steep learning curve, and how do you deploy and manage them reliably across a team or organization.

How does Docker’s MCP solution benefit developers and enterprise customers?

Dev productivity is where my heart is. I want to build something that meaningfully helps developers at every stage of their development cycle — and that’s exactly how I think about Docker’s MCP solution.

For end-user developers and vibe coders, the goal is simple: you shouldn’t need to understand the underlying infrastructure to get value from MCP. As long as you’re working with AI, we make it easy to discover, configure, and start using MCP servers without any of the usual setup headaches. One thing I kept hearing in user feedback was that people couldn’t even tell if their setup was actually working. That pushed us to ship in-product setup instructions that walk you through not just configuration, but how to verify everything is running correctly. It sounds small, but it made a real difference.

For developers building MCP servers and integrating them into agents, I’m focused on giving them the right creation and testing tools so they can ship faster and with more confidence. That’s a big part of where we’re headed.

And for security and enterprise admins, we’re solving real deployment pain, making it faster and cheaper to roll out and manage MCP across an entire organization. Custom catalogs, role-based access controls, audit logging, policy enforcement. The goal is to give teams the visibility and control they need to adopt AI tooling confidently at scale.

Customers love us for all of the above, and there’s one more thing that ties it together: the security that comes built-in with Docker. That trust doesn’t happen overnight, and it’s something we take seriously across everything we ship.

What are you excited about when it comes to the future of MCP?

What excites me most is honestly the pace of change itself. The AI landscape is shifting constantly, and with every new tool that makes AI more powerful, there’s a whole new set of developers who need a way to actually use it productively. That’s a massive opportunity.

MCP is where that’s happening right now, and the adoption we’re seeing tells me the need is real. But what gets me out of bed is knowing the problems we’re solving: discoverability, usability, deployment. They are all going to matter just as much for whatever comes next. We’re not just building for today’s tools. We’re building the foundation that developers will reach for every time something new emerges.

Cecilia is speaking about scaling MCP for enterprises at the MCP Dev Summit in NYC on April 3, 2026. If you’re attending, be sure to stop by Docker’s booth (D/P9).

Learn more

Explore Docker’s MCP Catalog and Toolkit on our website.

Dive into our documentation to get started quickly.

Ready to go hands-on? Open Docker Desktop or the CLI and start using MCP to streamline and automate your development workflows.

Source: https://blog.docker.com/feed/

Amazon Redshift Serverless now maintains datashare permissions during restore

Amazon Redshift Serverless now preserves datashare permissions when you restore a snapshot to the same namespace, simplifying data sharing workflows and reducing administrative overhead. Previously, restoring a serverless namespace from a snapshot required administrators to manually re-grant datashare permissions to consumer clusters and recreate consumer databases, even when restoring to the same namespace.
With this enhancement, datashare permissions are automatically maintained when you restore a snapshot to the same producer namespace, provided the datashare permission existed both when the snapshot was taken and on the current namespace. For consumer namespaces, datashare access remains unchanged after restore, eliminating the need for producer administrators to re-grant permissions. This streamlines disaster recovery and testing workflows by reducing manual configuration steps and potential errors.

Amazon Redshift also provides EventBridge notifications to alert you when datashares are dropped, consumer access is revoked, or public accessibility changes during restore operations. This feature is available in all AWS Regions that support Amazon Redshift. To learn more, see the Amazon Redshift Management Guide.
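The EventBridge notifications mentioned above lend themselves to simple automated handling. The sketch below is hypothetical: the event names are illustrative placeholders, not the documented detail types, and the real event schema should be taken from the Amazon Redshift Management Guide.

```python
# Hypothetical sketch: triaging Redshift restore-related notifications
# received via EventBridge. The eventName values below are placeholders,
# not AWS's documented detail types.
def needs_regrant(event: dict) -> bool:
    """Return True if the event suggests consumers lost datashare access."""
    name = event.get("detail", {}).get("eventName")
    return name in {"DatashareDropped", "ConsumerAccessRevoked"}

print(needs_regrant({"detail": {"eventName": "DatashareDropped"}}))  # True
print(needs_regrant({"detail": {"eventName": "SnapshotRestored"}}))  # False
```

A rule like this would typically run in a Lambda target on the EventBridge rule, paging an administrator only for the events that actually require manual re-granting.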
Source: aws.amazon.com

Amazon EC2 R8g instances now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in the AWS Middle East (UAE), AWS Mexico (Central), and AWS Europe (Zurich) Regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPUs (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. They are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to R7g instances. R8g instances are available in 12 different sizes, including two bare metal sizes, and offer up to 50 Gbps of enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
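The “up to 3x” vCPU and memory claim follows directly from the published maximum sizes of the two families, as quick arithmetic shows:

```python
# Illustrative arithmetic behind the "up to 3x" claim: published maximums
# for each family (r7g tops out at 16xlarge, r8g at 48xlarge / 1.5 TiB).
r7g_max = {"size": "r7g.16xlarge", "vcpu": 64, "mem_gib": 512}
r8g_max = {"size": "r8g.48xlarge", "vcpu": 192, "mem_gib": 1536}

vcpu_ratio = r8g_max["vcpu"] / r7g_max["vcpu"]
mem_ratio = r8g_max["mem_gib"] / r7g_max["mem_gib"]
print(vcpu_ratio, mem_ratio)  # 3.0 3.0
```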
Source: aws.amazon.com

Azure IaaS series: Explore new resources for building a stronger, more efficient infrastructure

Why a modern cloud infrastructure foundation is critical to your business

Infrastructure has always been foundational to running business-critical cloud workloads, but today it has become a strategic driver of innovation, resilience, and growth. As organizations accelerate digital transformation, infrastructure decisions increasingly shape how quickly teams can adopt AI, how reliably applications operate at global scale, and how effectively businesses respond to constant change.

To help customers navigate this shift, we’re introducing the Azure IaaS (Infrastructure as a Service) Resource Center: a centralized destination that brings together guidance, resources, demos, architectures, and best practices to support infrastructure design, optimization, and operations across compute, storage, and networking.

How does IaaS provide scalable cloud infrastructure?

AI adoption is accelerating faster than most organizations can operationalize it, and the pace and complexity of this shift are unprecedented. Applications are becoming more distributed and data intensive, while expectations for performance, availability, and security continue to rise. At the same time, leaders face growing pressure to optimize costs and ensure infrastructure investments align to tangible business outcomes.

These pressures are showing up in real, day-to-day infrastructure decisions:

Designing for continuity as environments grow more distributed and interdependent.

Strengthening security and compliance in an increasingly sophisticated threat landscape.

Achieving the performance required for data-intensive, latency-sensitive, and AI-driven workloads.

Keeping infrastructure flexible as workload patterns evolve and business priorities change.

Optimizing spend while ensuring infrastructure decisions are aligned with actual workload requirements.

This is exactly where a more intentional infrastructure strategy becomes critical. What has changed is not just the scale of infrastructure, but the need for system-level design across compute, storage, and networking. Infrastructure can no longer be optimized in isolation or managed reactively. It must operate as a cohesive platform, where performance, resiliency, security, scalability, and cost efficiency reinforce one another.

Azure IaaS has been designed for this reality, providing the foundation to run your most important cloud workloads today, while giving you the flexibility to adapt as needs evolve. To help organizations navigate this shift with clarity and confidence, the new Azure IaaS Resource Center offers a centralized destination to explore the guidance, resources, demos, architectures, and best practices needed to design, optimize, and operate infrastructure with confidence across every layer of the stack.


A modern infrastructure platform engineered for performance, security, and global scale

Azure IaaS brings together a comprehensive portfolio of compute, storage, and networking services to support a wide range of workloads, from line-of-business applications and databases to analytics platforms, AI training clusters, and global consumer applications.

Built with a system-level approach, Azure IaaS unifies specialized hardware, intelligent software, high-capacity networking, and platform orchestration to deliver consistent performance, strong security protections, and flexible scaling. Backed by more than 70 regions worldwide, a private global fiber backbone, hardware acceleration, integrated resiliency, and multilayer security, Azure provides an infrastructure foundation ready for modern and future business demands.

Resilient by design to help keep your business running

Azure’s infrastructure is built from the ground up for resilience, ensuring applications remain available even when the unexpected occurs. With a broad portfolio of infrastructure options spanning zonal redundancy, regional redundancy, and globally distributed architectures, organizations can architect for continuity at every layer.

Azure’s compute, storage, and networking platforms are engineered to withstand failure through intelligent load balancing, fast failover mechanisms, and integrated data protection. This resilient foundation empowers organizations to operate with confidence, whether running mission-critical systems that demand continuous uptime or scaling AI-driven applications that cannot tolerate disruption.

By combining proactive fault isolation, automated recovery, and multilayer redundancy, Azure IaaS helps organizations maintain operations through outages, recover rapidly, and safeguard the business against uncertainty.

With Azure, resilience isn’t an add-on; it’s the architecture that helps your infrastructure keep pace with your most ambitious goals.

High-performance Azure IaaS for your most demanding workloads—from databases to AI clusters

With a comprehensive portfolio of Azure Virtual Machine series—including memory-optimized, compute-optimized, GPU-accelerated, and storage-optimized options—customers can match infrastructure precisely to their workload needs, whether running mission-critical databases or training advanced AI models. The latest VM families leverage cutting-edge processors and high-speed networking, enabling ultra-low latency and massive throughput for data-intensive and AI-driven applications. This flexibility empowers organizations to match their infrastructure choices to their specific workload needs, harnessing the same platform for both everyday business operations and the most demanding AI workloads. As a result, Azure IaaS provides the foundation for innovation to help ensure your infrastructure keeps pace with your boldest goals.

Built-in security and compliance on Azure IaaS to help reduce risk

Security on Azure IaaS is a top priority, engineered into the platform across compute, storage, and networking. From the underlying hardware to the workloads it supports, Azure applies a defense-in-depth approach designed to protect infrastructure as threats continue to evolve.

At the foundation, Azure security includes secure supply chain practices, a rigorous secure development lifecycle (SDL), encryption, and identity and access management with Microsoft Entra ID.

Networking security helps reduce exposure through isolation, segmentation, and private connectivity, using virtual networks, Network Security Groups, and Private Link to limit public access. Services such as Azure Firewall and DDoS Protection add protection and control at scale.

Storage security enforces encryption by default, provides identity-based access controls, and includes safeguards such as soft delete, versioning, and immutability to reduce the risk of loss or tampering.

Compute security is rooted in hardware-based trust, starting with server-level secure boot and attestation, VM-level capabilities like Trusted Launch, secure VM boot, and a virtual Trusted Platform Module, and Azure confidential computing to help protect workloads and sensitive data in use.

Together, these integrated protections help organizations reduce risk, meet compliance requirements, and run critical infrastructure securely—without slowing innovation.

Scale infrastructure with flexibility to support changing workload needs

Modern workloads place uneven and evolving demands on infrastructure. Capacity must expand quickly, scale independently across layers, and extend globally.

Azure IaaS enables this flexibility by providing extensive solutions to scale compute, storage, and networking independently based on actual workload requirements. Teams can scale compute vertically by increasing VM sizes and performance levels, or horizontally by intelligently distributing workloads across multiple VM types, availability zones, and regions. Storage capacity and performance can be adjusted separately to support data growth and throughput needs, while high-capacity networking enables low-latency connectivity across distributed environments.

With more than 70 regions worldwide, Azure IaaS provides a variety of solutions that support geographic expansion and proximity to users and data. Azure IaaS continues to innovate on deployment and capacity management solutions that give users increased scalability and decreased overhead. Global networking and region-to-region connectivity make it possible to scale applications while maintaining consistent performance and availability.

Together, elastic infrastructure, global reach, and adaptive architectural patterns help organizations expand capacity, respond to demand shifts, and support growth.

Build a cost-efficient cloud infrastructure strategy with Azure IaaS

Cost optimization in the cloud is about reducing spend while making informed infrastructure decisions that balance efficiency, performance, and business value. As workloads grow more complex and data-intensive, organizations are looking not only to lower costs, but to ensure every dollar invested in infrastructure delivers measurable impact.

Azure IaaS is designed to support this balance. It gives organizations the flexibility to optimize costs based on real workload requirements, whether that means right-sizing compute resources, aligning storage performance to actual usage, or selecting networking options that meet throughput needs without overprovisioning. By matching infrastructure capabilities to demand, teams can reduce unnecessary spend while maintaining the performance and reliability their applications require.

Optimal cost efficiency on Azure is not a one-time exercise either. Built-in tooling and guidance help teams continuously evaluate usage patterns, identify inefficiencies, and adapt as workloads evolve. Flexible pricing options such as reservations and savings plans enable predictable cost control for steady-state workloads, while elastic scaling models support dynamic environments where demand fluctuates.

Azure IaaS also helps organizations optimize costs by reducing operational overhead. Managed services, automation, and integrated monitoring simplify infrastructure management, allowing teams to focus on improving utilization and performance rather than managing complexity. For organizations modernizing or migrating workloads, Azure provides purpose-built tools that help transition data and applications efficiently, creating opportunities to reduce long-term costs while improving operational consistency.

Whether supporting core business systems, scaling global applications, or enabling AI innovation, with Azure IaaS you can reduce costs, improve price-performance, and continuously optimize infrastructure investments. Cost efficiency becomes not a constraint on innovation, but a foundation that enables it.

Your infrastructure for the AI era starts with Azure

AI is changing the demands placed on infrastructure. Teams are moving beyond experimentation to operationalizing AI across the business: training models, running inference at scale, and integrating AI into line-of-business applications and decision workflows. That shift requires more than raw computing power. It depends on an infrastructure platform that can deliver the right combination of performance, resiliency, security, scalability, and cost efficiency—together.

Azure IaaS is designed to support the full spectrum of AI workloads, helping organizations bring AI workloads closer to users and data—reducing latency and improving responsiveness. With integrated resiliency capabilities and multi-layered security, Azure supports the continuity and protection required for business-critical AI scenarios. And with flexible infrastructure choices and optimization models, organizations can scale AI responsibly while maintaining control over spend.

As AI requirements evolve quickly, the ability to make infrastructure decisions with clarity matters. The Azure IaaS Resource Center can help you navigate those decisions to connect the guidance, best practices, and practical resources needed to move from planning to production with confidence.

Build confidently, run efficiently, and innovate boldly with Azure IaaS

Whether you’re modernizing mission-critical systems, supporting global applications, optimizing hybrid and multi-cloud environments, or preparing your organization for AI innovation, Azure IaaS provides the trusted infrastructure platform to help you move forward—without trading off performance, resiliency, security, scalability, or cost efficiency.

The Azure IaaS Resource Center is your central destination to explore best practices, learn from experts, and find the right guidance for every stage of your infrastructure journey across compute, storage, and networking.

Build in the cloud with Azure
Visit the Azure IaaS Resource Center to start building a stronger, more efficient infrastructure today.

Get started with Azure

The post Azure IaaS series: Explore new resources for building a stronger, more efficient infrastructure appeared first on Microsoft Azure Blog.
Source: Azure

Amazon EC2 I8ge instances now generally available in Europe (Ireland) AWS Region

Amazon Web Services (AWS) announces the availability of Amazon EC2 I8ge instances in the Europe (Ireland) AWS Region. Designed for large, storage I/O-intensive workloads, these new instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I8ge instances offer up to 120TB of local NVMe storage density—the highest available in the cloud for storage-optimized instances—and deliver up to twice as many vCPUs and memory compared to prior-generation instances.

Powered by 3rd generation AWS Nitro SSDs, these instances achieve up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. Additionally, the 16KB torn write prevention feature enables customers to eliminate performance bottlenecks for database workloads.

I8ge instances are high-density storage-optimized instances for workloads that demand rapid local storage with high random read/write performance and consistently low latency when accessing large data sets. These versatile instances are offered in eleven different sizes, including two bare metal sizes, providing flexibility to match customers’ computational needs. They deliver up to 180 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth to Amazon Elastic Block Store (Amazon EBS), ensuring fast and efficient data transfer for the most demanding applications.

To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs. To learn more, visit the I8ge instances page.
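To see why 16KB torn write prevention helps database workloads, consider the doublewrite pattern used by engines such as InnoDB: each 16 KiB page is normally written twice so a torn (partial) write can be recovered. With atomic 16 KiB writes, the second copy becomes unnecessary. The numbers below are hypothetical and only illustrate the I/O halving:

```python
# Illustrative sketch: atomic 16 KiB writes let a database disable its
# doublewrite buffer, roughly halving page-write I/O. Workload numbers
# here are hypothetical.
page_kib = 16
pages_flushed = 10_000

io_with_doublewrite = pages_flushed * page_kib * 2  # page + doublewrite copy
io_atomic_writes = pages_flushed * page_kib         # torn-write prevention on

print(io_with_doublewrite // io_atomic_writes)  # 2
```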
Source: aws.amazon.com