The Year in Google Cloud — 2025

In the AI era, when one year can feel like 10, you’re forgiven for forgetting what happened last month, much less what happened all the way back in January. To jog your memory, we pulled the readership data for top product and company news of 2025. And because we publish a lot of great thought leadership and customer stories, we pulled that data too. Long story short: the most popular stories largely mapped to our biggest announcements. But not always — there were more than a few sleeper hits on this year’s list. Read on to relive this huge year, and perhaps discover a few gems that you may have missed. 

Building tomorrow, today: 2025 customer AI innovation highlights with Google Cloud

January
2025 started strong with important new virtual machine offerings, foundational AI tooling, and tools for both Kubernetes and data professionals. We also launched our “How Google Does It” series, looking at the internal systems and engineering principles behind how we run a modern threat-detection pipeline. We showed developers how to get started with JAX and made AI predictions for the year ahead. Readers were excited to learn about how L’Oréal built its MLOps platform and Deutsche Börse’s pioneering work on cloud-native financial trading.
Product news

Simplify the developer experience on Kubernetes with KRO

Blackwell is here — new A4 VMs powered by NVIDIA B200 now in preview

Introducing Vertex AI RAG Engine: Scale your Vertex AI RAG pipeline with confidence

Introducing BigQuery metastore, a unified metadata service with Apache Iceberg support

C4A, the first Google Axion Processor, now GA with Titanium SSD

Thought leadership

How Google Does It: Making threat detection high-quality, scalable, and modern

2025 and the Next Chapter(s) of AI

Customer stories

How L’Oréal Tech Accelerator built its end-to-end MLOps platform

Trading in the Cloud: Lessons from Deutsche Börse Group’s cloud-native trading engine

February
There are AI products, and then there are products enhanced by AI. This month’s top launch, Gen AI Toolbox for Databases, falls into the latter category. This was also the month readers got serious about learning, with blogs about upskilling, resources, and certifications topping the charts. The fruits of our partnership with Anthropic made an appearance in our best-read list, and engineering leaders detailed Google’s extensive efforts to optimize AI system energy consumption. Execs ate up an opinion piece about how agents will unlock insights into unstructured data (which makes up 90% of enterprises’ information assets), and digested a sobering report on AI and cybercrime. During the Mobile World Congress event, we saw considerable interest in our work with telco leaders like Vodafone Italy and Amdocs.

Product and company news

Announcing public beta of Gen AI Toolbox for Databases

Get Google Cloud certified in 2025—and see why the latest research says it matters

Discover Google Cloud careers and credentials in our new Career Dreamer

Announcing Claude 3.7 Sonnet, Anthropic’s first hybrid reasoning model, is available on Vertex AI

Thought leadership

Designing sustainable AI: A deep dive into TPU efficiency and lifecycle emissions

From dark data to bright insights: How AI agents make data simple

New AI, cybercrime reports underscore need for security best practices

Customer stories

Transforming data: How Vodafone Italy modernized its data architecture in the cloud

AI-powered network optimization: Unlocking 5G’s potential with Amdocs

March
Back when we announced it, our intent to purchase cybersecurity startup Wiz was Google’s largest deal ever, and the biggest tech deal of the year. We built on that security momentum with the launch of AI Protection. We also spread our wings to the Nordics with a new region, and announced the Gemma 3 open model on Vertex AI. Meanwhile, we explained the threat that North Korean IT workers pose to employers, gave readers a peek under the hood of the Colossus file system, and reminisced about what we’ve learned over 25 years of building data centers. Readers were interested in Levi’s approach to data and weaving it into future AI efforts, and in honor of the GDC Festival of Gaming, our AI partners shared some new perspectives on “living games.”

Product and company news

Google + Wiz: Strengthening Multicloud Security

Announcing AI Protection: Security for the AI era

Hej Sverige! Google Cloud launches new region in Sweden

Announcing Gemma 3 on Vertex AI

Thought leadership

The ultimate insider threat: North Korean IT workers

Colossus under the hood: How we deliver SSD performance at HDD prices

3 key lessons from 25 years of warehouse scale computing

Customer stories

Levi’s seamless data strategy: How tailor-made AI keeps an icon from getting hemmed in

Co-op mode: New partners driving the future of gaming with AI

April
With April came Google Cloud Next, our flagship annual conference. From Firebase Studio, Ironwood TPUs, and Google Agentspace, to Vertex AI, Cloud WAN, and Gemini 2.5, it’s hard to limit ourselves to just a few stories; there were so many bangers (for the whole list, there’s always the event recap). Meanwhile, our systems team discussed innovations to keep data center infrastructure’s thermal envelope in check. And at the RSA Conference, we unveiled our vision for the agentic security operations center of the future. On the customer front, we highlighted the startups who played a starring role at Next, and took a peek behind the curtain of The Wizard of Oz at Sphere.

Product and company news

Introducing Firebase Studio and agentic developer tools to build with Gemini

Introducing Ironwood TPUs and new innovations in AI Hypercomputer

Vertex AI offers new ways to build and manage multi-agent systems

Scale enterprise search and agent adoption with Google Agentspace

Cloud WAN: Connect your global enterprise with a network built for the AI era

Gemini 2.5 brings enhanced reasoning to enterprise use cases

The dawn of agentic AI in security operations at RSAC 2025

Thought leadership

AI infrastructure is hot. New power distribution and liquid cooling infrastructure can help

3 new ways to use AI as your security sidekick

Customer stories

Global startups are building the future of AI on Google Cloud

The AI magic behind Sphere’s upcoming ‘The Wizard of Oz’ experience

May
School was almost out, but readers got back into learning mode to get certified as generative AI leaders. You were also excited about new gen AI media models in Vertex AI and the availability of Anthropic’s Claude Opus 4 and Claude Sonnet 4. We also learned that you’re very excited to use AI to generate SQL code, and about using Cloud Run as a destination for your AI apps. We outlined the steps for building a well-defined data strategy, and showed governments how AI can actually improve their security posture. And on the customer front, we launched our “Cool Stuff Customers Built” round-ups, and ran stories from Formula E and MLB.

Product and company news

Google Cloud announces first-of-its-kind generative AI leader certification

Expanding Vertex AI with the next wave of generative AI media models

Announcing Anthropic’s Claude Opus 4 and Claude Sonnet 4 on Vertex AI

Thought leadership

Getting AI to write good SQL: Text-to-SQL techniques explained

AI deployments made easy: Deploy to Cloud Run from AI Studio or any MCP client

Building a data strategy for the AI era

How governments can use AI to improve threat detection and reduce cost

Customer stories

Cool Stuff Customers Built: May Edition

Pushing the limits of electric mobility: Formula E’s Mountain Recharge

Tuning in with AI: How MLB My Daily Story creates truly personalized highlight videos

June
Up until this point, the promise of generative AI was largely around text and code. The launch of Veo 3 changed all that. Developers writing and deploying AI apps saw the availability of GPUs on Cloud Run as a big win, and we continued our steady drumbeat of Gemini innovation with 2.5 Flash and Flash-Lite. We also shared our thoughts on securing AI agents. And to learn how to actually build these agents, readers turned to stories about Box, the British asset manager Schroders, and French luxury conglomerate LVMH (home of Louis Vuitton, Sephora, and more).

Product and company news

You dream it, Veo creates it: Veo 3 is now available for everyone in public preview on Vertex AI

Cloud Run GPUs, now GA, makes running AI workloads easier for everyone

Gemini momentum continues with launch of 2.5 Flash-Lite and general availability of 2.5 Flash and Pro on Vertex AI

Thought leadership

Ask OCTO: Making sense of AI agents

Cloud CISO Perspectives: How Google secures AI agents

Customer stories

The secret to document intelligence: Box builds Enhanced Extract Agents with A2A framework

How Schroders built its multi-agent financial analysis research assistant

Inside LVMH’s perfectly manicured data estate, where luxury AI agents are taking root

July
Readers took a break from reading about AI to read about network infrastructure — the new Sol transatlantic cable, to be precise. Then it was back to AI: new video generation models in Vertex; a crucial component for building stateful, context-aware agents; and a new toolset for connecting BigQuery data to Agent Development Kit (ADK) and Model Context Protocol (MCP) environments. Developers cheered the integration between Cloud Run and Docker Compose, and executive audiences enjoyed a listicle on actionable, real-world uses for AI agents.

On the security front, we took a back-to-basics approach this month, exploring the persistence of some cloud security problems. And then, back to AI again, with our Big Sleep agent. Readers were also interested in how AI is alleviating record-keeping for nurses at HCA Healthcare, Ulta Beauty’s data warehousing and mobile record keeping initiatives, and how SmarterX migrated from Snowflake to BigQuery.

Product and company news

Strengthening network resilience with the Sol transatlantic cable

Veo 3 and Veo 3 Fast are now generally available on Vertex AI

Announcing Vertex AI Agent Engine Memory Bank available for everyone in preview

BigQuery meets ADK & MCP: Accelerate agent development with BigQuery’s new first-party toolset

From localhost to launch: Simplify AI app deployment with Cloud Run and Docker Compose

Thought leadership

Secure cloud. Insecure use. (And what you can do about it)

Our Big Sleep agent makes a big leap

Customer stories

How nurses are charting the future of AI at America’s largest hospital network, HCA Healthcare

Ulta Beauty redefines beauty retail with BigQuery

SmarterX’s migration from Snowflake to BigQuery accelerated model building and cut costs in half

August
AI is compute- and energy-intensive; in a new technical paper, we released concrete numbers about our AI infrastructure’s power consumption. Then people went [nano] bananas for Gemini 2.5 Flash Image on Vertex AI, and developers got a jump on their AI projects with a wealth of technical blueprints to work from. The summer doldrums didn’t stop our security experts from tackling the serious challenge of cyber-enabled fraud. We also took a closer look at the specific agentic tools empowering workers at Wells Fargo, and how Keeta processes 11 million blockchain transactions per second with Spanner.

Product and company news

How much energy does Google’s AI use? We did the math

Building next-gen visuals with Gemini 2.5 Flash Image (aka nano-banana) on Vertex AI

101+ gen AI use cases with technical blueprints

Thought leadership

New Threat Horizons report details evolving risks — and defenses

How CISOs and boards of directors can help fight cyber-enabled fraud

How AI-powered weather forecasting can transform energy operations

Customer stories

How Wells Fargo is using Google Cloud AI to empower its workforce with agentic tools

How Keeta processes 11 million financial transactions per second on the blockchain with Spanner

September
AI is cool tech, but how do you monetize it? One answer is the Agent Payments Protocol, or AP2. Developers and data scientists preparing for AI flocked to blogs about new Data Cloud offerings, the 2025 DORA Report, and new trainings. Executives took in our thoughts on building an agentic data strategy, and took notes on the best prompts with which to kickstart their AI usage. And because everybody is impacted by the AI era, including business leaders, we explained what it means to be “bilingual” in AI and security. Then, at Google’s AI Builders Forum, startups described how Google’s AI, infrastructure, and services are supporting their growth. Not to be left out, enterprises like Target and Mr. Cooper also showed off their AI chops.

Product and company news

Powering AI commerce with the new Agent Payments Protocol (AP2)

The new data scientist: From analyst to agentic architect

Announcing the 2025 DORA Report: State of AI-Assisted Software Development

Back to AI school: New Google Cloud training to future-proof your AI skills

Thought leadership

Building better data platforms, for AI and beyond

Boards should be ‘bilingual’ in AI, security to gain advantage

A leader’s guide to five essential AI prompts

Customer stories

How Google Cloud’s AI tech stack powers today’s startups

From query to cart: Inside Target’s search bar overhaul with AlloyDB AI

How Mr. Cooper assembled a “team” of AI agents to handle complex mortgage questions

October
Welcome to the Gemini Enterprise era, which brings enhanced security, data control, and advanced agent capabilities to large organizations. To help you prepare, we launched a variety of enhancements to our learning platform, and added new commerce and security programs. And while developers versed themselves on the finer points of Veo prompts, we discussed securing the AI supply chain, building AI agents for cybersecurity and defense, and a new vision on economic threat modeling. We partnered with PayPal to enable commerce in AI chats, Germany’s Max Planck Institute showed how AI can help share deep scientific expertise, and DZ Bank pioneered ways to make blockchain-based finance more reliable.

Product and company news

Introducing Gemini Enterprise

Google Skills: Your new home for cloud learning

Enabling a safe agentic web with reCAPTCHA

Partners powering the Gemini Enterprise agent ecosystem

Thought leadership

The ultimate prompting guide for Veo 3.1

How you can secure your AI supply chain

How Google Does It: Building AI agents for cybersecurity and defense

Customer stories

Introducing an agentic commerce solution for merchants from PayPal and Google Cloud

How the Max Planck Institute is sharing expert skills through multimodal agents

The oracles of DeFi: How DZ Bank builds trustworthy data feeds for decentralized applications

November
Whether it was Gemini 3, Nano Banana Pro, or our seventh-generation Ironwood TPUs, this was the month that we gave enterprise customers access to all our latest and greatest AI tech. We also did a deep dive on how we built the largest-ever Kubernetes cluster, clocking in at a massive 130,000 nodes, and we announced a new collaboration with AWS to improve connectivity between clouds.

Meanwhile, we updated our findings on the adversarial misuse of AI by threat actors and on the ROI of AI for security, and executives vibed out on our piece about vibe coding. Then, just in time for the holidays, we took a look at how Mattel is using AI tools to revamp its toys, and Waze showed how it uses Memorystore to keep the holiday traffic flowing.

Product and company news

Bringing Gemini 3 to Enterprise

How Google Does It: Building the largest known Kubernetes cluster, with 130,000 nodes

Announcing Nano Banana Pro for every builder and business

Announcing Ironwood TPUs General Availability and new Axion VMs to power the age of inference

AWS and Google Cloud collaborate to simplify multicloud networking

Thought leadership

Recent advances in how threat actors use AI tools

Beyond the hype: Analyzing new data on ROI of AI in security

How vibe coding can help leaders move faster

Customer stories

Mattel’s game changer: How AI is turning customer feedback into real-time product updates

Waze keeps traffic flowing with 1M+ real-time reads per second on Memorystore

December
The year is winding down, but we still have lots to say. Early returns show that you were interested in how to mitigate the React2Shell vulnerability, support for MCP across Google services, and the early access launch of AlphaEvolve. And let’s not forget Gemini 3 Flash, which is turning heads with its high-level reasoning, plus amazing speed and a flexible cost profile.

What does this all mean for you and your future? It’s important to contextualize these technology developments, especially AI. For example, the DORA team put together a guide on how high-performing platform teams can integrate AI capabilities into their workflows, we discussed what it looks like to have an AI-ready workforce, and our Office of the CISO colleagues put out their 2026 cybersecurity predictions. More to the point (guard), you could do like Golden State Warriors guard Stephen Curry and turn to Gemini to analyze your game as you prepare for the year ahead. We’ll be watching on Christmas Day to see how Steph is faring with Gemini’s advice.

Product and company news

Responding to React2Shell (CVE-2025-55182): Secure your React and Next.js workloads

Announcing Model Context Protocol (MCP) support for Google services

AlphaEvolve on Google Cloud: AI for agentic discovery and optimization

Introducing Gemini 3 Flash: Intelligence and speed for enterprises

Thought leadership

From adoption to impact: Putting the DORA AI Capabilities Model to work

Is AI fluency the ingredient or the result of an AI-ready workforce?

Our 2026 Cybersecurity Forecast report

Customer stories

What Stephen Curry learned about his game from a custom Gemini agent

The Curry sibling rivalry is going strong

And that’s a wrap on 2025! Thanks for reading, and see you next year!
Source: Google Cloud Platform

Microsoft’s commitment to supporting cloud infrastructure demand in the United States

Today, we are sharing progress on our infrastructure expansions across the United States, which are supporting the tremendous growth in customer demand for cloud and AI services. Recently, we announced Fairwater sites in Wisconsin and Atlanta; now, we are expanding further with the launch of our East US 3 region in the Greater Atlanta Metro area in early 2027 and the expansion of five existing datacenter regions across the United States.

Learn about Microsoft’s investment in datacenter infrastructure

New Cloud Region to open in Atlanta in early 2027

Microsoft’s global cloud network serves as the foundation that underpins daily life, innovation, and economic growth. With more regions than any other cloud provider, Microsoft’s global cloud infrastructure includes more than 70 regions, over 400 datacenters worldwide, over 370,000 miles of terrestrial and subsea fiber, and over 190 edge sites, making it one of the largest, most trusted, and most secure in the world.

Our datacenter footprint in the Greater Atlanta Metro area is already running the most advanced AI supercomputers on the planet, and in early 2027, this footprint will expand to support all our customer workloads out of the East US 3 datacenter region. This region will be designed to support the most advanced Azure workloads, on a foundation of trust for all organizations.

Get started with Azure today

The East US 3 region will offer additional resiliency capabilities through Availability Zones, which are unique physical datacenter locations equipped with independent power, networking, and cooling. Availability Zones provide organizations with peace of mind, knowing their applications can be designed with increased tolerance to failures by incorporating functionality such as zone-redundant storage.
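For teams that want to adopt zone redundancy ahead of the new region, the setup is scriptable. Below is a minimal sketch using the Azure SDK for Python, assuming the azure-identity and azure-mgmt-storage packages; the subscription ID, resource group, and account name are placeholders (and East US 3 itself will not accept deployments until the region opens).

```python
# Minimal sketch: create a zone-redundant (ZRS) storage account with the
# Azure SDK for Python. All names below are placeholders.
# Requires: pip install azure-identity azure-mgmt-storage
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Standard_ZRS replicates data synchronously across three Availability Zones,
# so the loss of a single zone does not take the account offline.
poller = client.storage_accounts.begin_create(
    resource_group_name="demo-rg",        # hypothetical resource group
    account_name="demozrsaccount01",      # must be globally unique
    parameters={
        "location": "eastus2",            # any AZ-enabled region works today
        "kind": "StorageV2",
        "sku": {"name": "Standard_ZRS"},  # zone-redundant storage
    },
)
account = poller.result()
print(account.name, account.sku.name)
```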

Microsoft’s datacenter community pledge is to build and operate digital infrastructure that addresses societal challenges and creates benefits for the communities in which we operate and where our employees live and work. The East US 3 region is being designed to meet Microsoft’s carbon, water, waste, and sustainability commitments. In developing the East US 3 region, we have water conservation and replenishment top of mind. The region in Georgia is designed to achieve LEED Gold certification, a framework for healthy, highly efficient, and cost-saving buildings that offers environmental, social, and governance benefits.

Delivering a resilient cloud infrastructure

We’re empowering all organizations to adopt a resilient cloud strategy that enables them to take advantage of the full capabilities of the cloud. The cloud is not a single region or location but a network of regions across the United States and the world that enables access to Azure services, resources, and capacity across a broader set of geographic areas.

Our infrastructure projects in the United States are driven by the need for greater resiliency, agility, and flexibility in today’s dynamic cloud environment. With six datacenter regions with Availability Zones (AZs) already in operation, we will be adding AZs in the United States to the following existing regions:

North Central US by the end of 2026.

West Central US in early 2027.

US Gov Arizona in early 2026.

In 2026, we will add Availability Zones in regions where we already have three, including East US 2 in Virginia and South Central US in Texas. The expansion of our Availability Zone footprint will provide additional Azure infrastructure capacity, giving customers in these regions room to grow with confidence and more options when considering a multi-region cloud architecture. Leveraging a multi-region cloud architecture with any of our United States regions further improves application performance and latency, and strengthens the overall resilience and availability of cloud applications.

Organizations are already using Azure to transform their applications in the era of AI, with a resilient cloud foundation:

The University of Miami: The University of Miami is a leading-edge teaching and research institution located on Florida’s southern tip, part of a region known as Hurricane Alley. With a steady threat of extreme weather–related outages, the University looked to Microsoft Azure to improve its disaster recovery capabilities and shift key on-premises assets to the cloud. Pursuing a well-architected strategy, the University now takes advantage of Azure availability zones to safeguard against outages, stay operational during maintenance and improvements, and help ensure resilience and reliability. Additionally, the University is realizing greater agility, faster response time to business needs, and reduced costs by continuing to pursue Azure-backed solutions.

The State of Alaska: The State of Alaska is reducing costs by consolidating infrastructure and decommissioning legacy systems. It is improving resiliency and reliability while strengthening security by migrating systems to Azure, where geography is no longer a challenge. 

Supporting our government customers

We remain committed to enabling resilient, compliant cloud strategies for our government customers. In early 2026, we will expand our Azure Government footprint with the addition of three Availability Zones in the US Government Arizona region, giving agencies and partners more options for zone-redundant architectures that improve recovery time objectives (RTO) and recovery point objectives (RPO) and support mission continuity aligned with CMMC and NIST guidance.

This expansion supports growing demand for segmented, resilient architectures that isolate sensitive workloads while meeting regulatory requirements for availability and security. The US Government datacenter region in Arizona offers customers in the Defense Industrial Base (DIB) additional options, with benefits in proximity, latency, and mission alignment, and provides an alternative to US Government Virginia for new deployments.

These investments complement the Azure for US Government Secret cloud region launched earlier this year, reinforcing our commitment to secure, compliant, and mission-ready cloud solutions. Discover how Microsoft is advancing AI and infrastructure innovation in our H200: Accelerating AI at Scale blog.

Discover what Azure can do for you

Boost your cloud strategy

Use the Cloud Adoption Framework to achieve your cloud goals with best practices, documentation, and tools for business and technology strategies.

Use the Well-Architected Framework to optimize workloads with guidance for building reliable, secure, and performant solutions on Azure.

By choosing to deploy services through any of our Azure regions, customers can leverage the diverse and robust infrastructure that Microsoft is developing across the United States. This approach not only offers resilience and flexibility but also paves the way for innovative solutions that drive economic growth and a more connected future.

Where to find more resources:

Take a virtual tour of Microsoft datacenters

Learn more about Microsoft’s global infrastructure

Microsoft Datacenters: Illuminating the unseen power of the cloud

Learn about Georgia—Microsoft Local

Learn how Microsoft is driving next-generation AI and infrastructure innovation

Source: Azure

Actioning agentic AI: 5 ways to build with news from Microsoft Ignite 2025

Energy at Microsoft Ignite was electric. Over 20,000 attendees gathered in San Francisco, with 200,000 joining us digitally to explore the future of cloud and AI. What continues to inspire me most is the response online and the conversations happening in our technical communities: how quickly you’re turning these announcements into action and building solutions for billions of people, which will ultimately shape our future.

Join us at Microsoft AI Dev Days (Dec 10-11, 2025) and start building

As someone who lives and breathes technical audience marketing across Microsoft Azure, Foundry, Fabric, databases, and developer tools, I saw firsthand that our Azure platform announcements resonated because they help solve real problems. And now, the work begins.

So, what is everyone saying about the top news? And where do we go from here? Let’s reflect on the top five cloud and AI stories from Microsoft Ignite across the web right now and then unpack how these innovations can be put into practice across Microsoft AI Dev Days, Microsoft AI Tour and more.

1. Claude comes to Microsoft Foundry: Choice for builders

The technical community lit up about what Claude models in Microsoft Foundry unlock. I really like how this eWeek article describes the significance as a “partnership [that] removes one of the biggest historical blockers to adopting new AI tools: vendor complexity.”

Developers told us they wanted access to Claude Sonnet and Claude Opus alongside OpenAI’s GPT models. They wanted the ability to select the right models for their use cases, and the tools to evaluate for tone, safety, performance, and more. Now Azure is the only cloud supporting access to both Claude and GPT frontier models for its customers.

The response from the community is clear: model diversity matters. When you’re building AI apps and agents, having options means you can optimize for what matters most to your users. Microsoft Foundry gives you flexibility while maintaining enterprise-grade security, compliance, and governance.

My favorite watches and reads:

Microsoft Exec talks OpenAI, AI Bubble, Data Centers, AI Safety, and more

Everything you need to build AI apps & agents

Foundry: The Top AI Announcement from Microsoft Ignite 2025

Microsoft Brings Anthropic’s Claude Opus 4.5 to Foundry Preview

Deploy and compare models in Microsoft Foundry

2. IQ Revolution: Semantic understanding that works

The new portfolio of Microsoft IQ offerings has data engineers and architects buzzing. One blogger captured it perfectly: “This is Microsoft rewiring the connective tissue between productivity apps, analytics platforms, and AI development environments to create something that’s been missing from the enterprise AI conversation.” Knowledge is how the shift to agentic AI becomes practical rather than theoretical.

Foundry IQ streamlines knowledge retrieval from multiple sources including SharePoint, Fabric, and the web. Powered by Azure AI Search, it delivers policy-aware retrieval without having to build complex custom RAG pipelines. Developers get pre-configured knowledge bases and agentic retrieval in a single API that “just works,” while also respecting user permissions, which is what I heard resonating on the ground.
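Foundry IQ’s own preview surface may evolve, but because it’s powered by Azure AI Search, the underlying retrieval pattern is easy to picture. Here is a minimal sketch using the azure-search-documents Python package; the endpoint, index name, and key are placeholders for your own resources.

```python
# Minimal retrieval sketch against Azure AI Search, the engine behind
# Foundry IQ. Requires: pip install azure-search-documents
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="enterprise-knowledge",          # hypothetical index name
    credential=AzureKeyCredential("<api-key>"),
)

# A grounding query an agent might run before answering a user question.
results = client.search(search_text="data residency policy for EU customers", top=3)
for doc in results:
    # Each result behaves like a dict; "@search.score" is the relevance score.
    print(doc["@search.score"], doc.get("title"))
```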

Designed with Foundry IQ integration, Fabric IQ creates a semantic intelligence layer that unifies analytics, time-series, and operational data around shared business concepts, letting you build and deploy agents that reason consistently across domains while cutting down the schema-mapping, data wrangling, and prompt engineering that normally eat the most time.

More must-reads:

CIO Talk: Microsoft Gets IQ

Microsoft’s Fabric IQ teaches AI agents to understand business operations, not just data patterns

Microsoft Ignite 2025: How Data-Driven Intelligence Powers the Age of AI Agents

3. Azure HorizonDB: PostgreSQL power

PostgreSQL developers are celebrating the preview of Azure HorizonDB, which you can sign up for here. This fully managed, Postgres-compatible database service is designed from the ground up for modern cloud-native and AI workloads.

The technical community embraced it wholeheartedly, seeing their priorities reflected. Azure HorizonDB delivers up to 3x more throughput than open-source Postgres for transactional workloads, with auto-scaling storage up to 128TB and scale-out compute supporting up to 3,072 vCores. Sub-millisecond multi-zone commit latencies support apps that are both fast and resilient.

What really got developers excited was built-in vector indexing with advanced filtering using DiskANN, which brings AI intelligence directly to where your data lives. This helps developers build semantic search and RAG patterns without the complexity and latency of managing separate vector stores or moving data across systems. Integration with Microsoft Foundry also streamlines setup and AI app development.
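HorizonDB’s preview surface may differ in detail, but since it is Postgres-compatible, the vector workflow should feel familiar. The sketch below assumes a pg_diskann-style extension as exposed today in Azure Database for PostgreSQL; the server credentials, table, and column names are placeholders.

```python
# Sketch: DiskANN vector indexing on a Postgres-compatible database.
# Assumes the vector/pg_diskann extensions as in Azure Database for
# PostgreSQL; HorizonDB's preview may differ. Requires: pip install psycopg
import psycopg

conn = psycopg.connect("host=<server> dbname=appdb user=<user> password=<pw>")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("CREATE EXTENSION IF NOT EXISTS pg_diskann;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS docs (
            id bigserial PRIMARY KEY,
            body text,
            embedding vector(1536)  -- size of your embedding model's output
        );
    """)
    # Approximate nearest-neighbor index so similarity search stays fast
    # without a separate vector store.
    cur.execute("""
        CREATE INDEX IF NOT EXISTS docs_embedding_idx
        ON docs USING diskann (embedding vector_cosine_ops);
    """)
    # Retrieve the 5 documents closest to a query embedding (placeholder
    # all-zeros vector; pass a real embedding in practice).
    query_vec = "[" + ",".join(["0"] * 1536) + "]"
    cur.execute(
        "SELECT id, body FROM docs ORDER BY embedding <=> %s::vector LIMIT 5;",
        (query_vec,),
    )
    print(cur.fetchall())
```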

And for those migrating from Oracle, GitHub Copilot-powered migration tools in the PostgreSQL Extension for VS Code make the transition smoother than ever. The community has spoken: they want PostgreSQL flexibility combined with Azure enterprise capabilities, and Azure HorizonDB delivers.

Be sure to check these out:

Announcing Azure HorizonDB

Water, Water Everywhere: How Microsoft Ignite 2025 Turned Data Into Intelligence

Microsoft Ignite 2025: AI + Databases = The Next Big Shift with Microsoft’s Shireesh Thota

Azure HorizonDB: Microsoft goes big with PostgreSQL

4. Azure Copilot: Agents change the game

The announcement of the new Azure Copilot gives IT professionals a new reason to come to the cloud. Now supporting the full cloud operations lifecycle, Azure Copilot features a collection of specialized agents for migration, deployment, observability, optimization, resiliency, and troubleshooting. A star within this new experience for IT pros is the migration agent, helping turn weeks of manual discovery into rapid acceleration by scanning environments, identifying legacy workloads, and auto-generating infrastructure-as-code templates so migrations are fast and clean.

With Azure Copilot, migrating and modernizing become far more manageable: it surfaces cost improvements, right-sizes environments, and diagnoses issues across containers, virtual machines, and databases, while honoring role-based access control (RBAC) policies and compliance guardrails. Available at no extra cost in the Azure Portal, CLI, and the new Operations Center, this new agentic interface in Azure transforms modernization and gives IT teams the ability to be more proactive as they build on the Azure foundation.

Smart takes on the news:

Making Sense of Microsoft Ignite 2025 for Azure and AI Architects

How Azure Copilot’s New Agents Automate DevOps and SecOps

Azure Update – IGNITE SPECIAL – 21st November 2025

Microsoft’s Azure Copilot to support agentic cloud operations at scale with new AI agents

Azure Copilot Agents Launch in Private Preview

5. Azure hardware: Limitless power and security

Performance is everything when you’re training large models or running inference at scale, and that’s why the hardware museum behind our ‘Frontier Street’ activation at Ignite captured the community’s imagination.

When you stood in front of a blade from our Azure AI infrastructure server, with NVIDIA Blackwell Ultra GPUs presented like a museum piece, your excitement made it feel even more like stepping into an art gallery—with a spotlight on Cobalt, Maia, and Microsoft’s unique NVIDIA partnership.

And it didn’t stop there. Microsoft’s custom Azure silicon now includes the Azure Boost DPU, the first in-house data processing unit, and Azure Integrated HSM for top-notch security. We can’t wait to keep bringing these innovations directly to you.

See what others are saying:

Announcing Cobalt 200: Azure’s next cloud-native CPU

Microsoft has Designed its Own 132 Core Processor: Azure Cobalt 200

Microsoft’s Azure Cobalt 200 ARM Chip Delivers 50% Performance Boost

Powering Modern Cloud Workloads with Azure Boost: Ignite 2025

Your keyboard, your impact

Here’s what makes this moment special: announcements at Ignite aren’t endpoints; they’re starting points. You’re the next gen creators who will take these tools and build new agentic experiences we can’t yet imagine. Your implementations surface insights that shape how Azure evolves. Your real-world patterns are what drive product decisions. The relationship between announcement and innovation is a partnership, and the technical community drives that process forward.

Create the future with us:

Tune into Microsoft AI Dev Days: December 10-11. Starting today, we’re hosting two days of live-streamed technical content on the Reactor, broadcast across all dev channels. These sessions are designed for developers who want to go deep on building with the technologies announced at Ignite. Mark your calendars and join the community for hands-on workshops and technical deep dives that will bring you to the cutting edge of AI innovation.

Join us at a Microsoft AI Tour location near you. We’re coming to your city with hands-on technical workshops. These one-day, free events focus on getting you keyboard time with the technologies announced at Ignite. We’re continuing to hit the road in 2026, with 30 more locations to come.

Catch up with your tech community. Ignite delivered incredible technical content across hundreds of sessions. What makes the following three sessions special is how deep they go into the tech with information about how to start implementing in your own environments.

Community Theater: Ask Me Anything with Scott Hanselman

Community Theater: Learn Infrastructure-as-Code through Minecraft

Community Theater: Cloud Perspectives: Cloud Management & Ops Platform Team Insights

Take your learning to the next level with curated skilling plans. Whether you’re new to these technologies or looking to deepen your expertise, Microsoft skilling plans can accelerate your career from fundamentals to advanced implementation.

What are you building with the latest technologies announced at Ignite? Join the conversation in Azure’s technical community.

Join Microsoft Ignite 2026 early!
Sign up now to join the Microsoft Ignite early-access list and be eligible to receive limited-edition swag at the event.

Save the date

Source: Azure

Azure Storage innovations: Unlocking the future of data

Microsoft is redefining what’s possible in the public cloud and driving the next wave of AI-powered transformation for organizations. Whether you’re pushing the boundaries with AI, improving the resilience of mission-critical workloads, or modernizing legacy systems with cloud-native solutions, Azure Storage has a solution for you.

Learn more about Azure Storage tools and products

At Microsoft Ignite 2025 and KubeCon North America last month, we showcased the latest innovations in Azure Storage, powering your workloads. Here is a recap of those releases and advancements.

Innovating for the future with AI

Azure Blob Storage provides a unified storage foundation for the entire AI lifecycle, powering everything from ingestion and preparation to checkpoint management and model deployment.

To enable customers to rapidly train, fine-tune, and deploy AI models, we evolved the Azure Blob Storage architecture to scale and deliver exabytes of capacity, tens of Tbps of throughput, and millions of IOPS to GPUs. In this video, you can see a single storage account scaling to over 50 Tbps of read throughput. Azure Blob Storage is also the foundation that enables OpenAI to train and serve models at unprecedented speed and scale.

Fig 1. Storage-centric view of AI training and fine-tuning

For customers handling terabyte or petabyte scale AI training data, Azure Managed Lustre (AMLFS) is a high-performance parallel file system delivering massive throughput and parallel I/O to keep GPUs continuously fed with data. AMLFS 20 (preview) supports 25 PiB namespaces and up to 512 GBps throughput. Hierarchical Storage Management (HSM) integration enhances AMLFS scalability by enabling seamless data movement between AMLFS and your exabyte-scale datasets in Azure Blob Storage. Auto-import (preview) allows you to pull only required datasets into AMLFS, and auto-export sends trained models to long-term storage or inferencing.

Rakuten is accelerating the training of Japanese large language models on Microsoft Azure, leveraging Azure Managed Lustre, Azure Blob Storage, and Azure Kubernetes Service to maximize GPU utilization and simplify scaling.
Natalie Mao, VP, AI & Data Division, Rakuten Group

Once models are trained and fine-tuned, inferencing takes center stage, delivering real-time predictions and insights. Azure Blob Storage provides best-in-class storage for Microsoft AI services, including Microsoft Foundry Agent Knowledge (preview) and AI Search retrieval agents (preview), enabling customers to bring their own storage accounts for full flexibility and control, ensuring that enterprise data remains secure and ready for retrieval-augmented generation (RAG).

Additionally, Premium Blob Storage delivers consistently low latency and up to 3X faster retrieval performance, which is critical for RAG agents. For customers that prefer open-source AI frameworks, Azure Storage built the LangChain Azure Blob Loader, which delivers granular security, memory-efficient loading of millions of objects, and up to 5x faster performance compared to prior community implementations.

Fig 2. Storage-centric view of AI inference with enterprise data
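As an illustration of the blob-loading pattern above, here is a sketch using the long-standing community loader from langchain-community; the new Azure-built loader ships separately and its entry point may differ, and the connection string and container name below are placeholders.

```python
# Illustrative only: the community Azure blob loader for LangChain.
# The new Azure-built loader mentioned above may expose a different API.
# Requires: pip install langchain-community azure-storage-blob unstructured
from langchain_community.document_loaders import AzureBlobStorageContainerLoader

loader = AzureBlobStorageContainerLoader(
    conn_str="<connection-string>",  # placeholder credentials
    container="training-corpus",     # hypothetical container name
)
docs = loader.load()                 # one Document per parsed blob
print(len(docs), docs[0].metadata if docs else None)
```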

Azure Storage is evolving to be an integrated, intelligent AI-driven platform simplifying management of exabyte-scale AI data. Storage Discovery and Copilot work together to help you analyze and understand how your data estate is evolving over time using dashboards and questions in natural language. With Storage Discovery and Storage Actions, you can optimize costs, protect your data and govern large datasets with hundreds of billions of objects used for training, and fine-tuning.

Optimizing modern applications with Cloud Native

Modern cloud-native applications demand agility. Two principles consistently stand out: elasticity and flexibility. Your storage should scale seamlessly with dynamic workloads—without operational overhead. The innovations below are designed for the cloud, enabling you to auto-scale, optimize costs intelligently, and deliver the performance needed by modern applications.

Azure Elastic SAN provides cloud-native block storage built for scale, with tight Kubernetes integration for fast scaling and multi-tenancy that optimizes cost. With new auto-scaling support, Elastic SAN automatically expands resources as needed, making it easier to manage storage footprints across workloads. Early next year, we’ll extend Kubernetes integration via Azure Container Storage for Azure Kubernetes Service (AKS) to general availability (GA). These enhancements let you maintain familiar hosting environments while layering in cloud-native capabilities.

Cloud-native agility is also critical for modern applications built on object storage, with the need to optimize costs and performance for dynamic and unpredictable traffic patterns. Smart Tier (preview) on Azure Blob Storage continuously analyzes access patterns, moving data between tiers automatically.

New data starts in the hot tier. After 30 days of inactivity, it moves to cool, and after 90 days, to cold. If an object is accessed again, it’s promoted back to hot, which keeps data in the most cost-effective tier automatically. You can optimize costs without sacrificing performance, simplifying data management at scale and keeping your focus on building.
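Smart Tier removes the need to manage tiers by hand, but the manual equivalent helps make the mechanics concrete. Here is a small sketch with the Azure Blob SDK for Python; the connection string, container, and blob names are placeholders.

```python
# Manual tiering with the Blob SDK; Smart Tier automates this movement.
# Requires: pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient, StandardBlobTier

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="logs", blob="2025/archive.json")

# Demote rarely read data to the cool tier; Smart Tier would do this
# automatically after 30 days of inactivity (and move to cold after 90),
# then promote the blob back to hot when it is accessed again.
blob.set_standard_blob_tier(StandardBlobTier.COOL)
print(blob.get_blob_properties().blob_tier)
```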

Hosting mission-critical workloads

Enterprises today run mission-critical workloads that require block storage and deliver predictable performance and uncompromising business continuity. Azure Ultra Disk is our highest-performance block storage offering, purpose-built for workloads like high-frequency trading, ecommerce platforms, transactional databases, and electronic health record systems that demand exceptional speed, reliability, and scalability.

With Azure Ultra Disk, we can confidently scale our platform globally, knowing that performance and resilience will meet enterprise expectations, that consistency allows our teams to focus on AI innovation and workflow automation rather than infrastructure.
Charles McDaniels, Director of Systems Engineering Management for Global Cloud Services, ServiceNow

We know performance, cost, and business continuity remain the top priorities for our customers and we are raising the bar in every category:

Performance: We have further improved the average latency for Azure Ultra Disk by 30%, with average latency well under 0.5 ms for small IOs on virtual machines (VMs) with Azure Boost. A single Azure Ultra Disk can deliver industry-leading performance of 400K IOPS and 10 GBps throughput. In addition, with Ebsv6 VMs, both Premium SSD v2 and Azure Ultra Disk can deliver industry-leading VM performance scale of 800K IOPS and 14 GBps throughput for the most demanding applications.

Cost: Flexible provisioning for Azure Ultra Disk reduces total cost of ownership by up to 50%, letting you scale capacity, IOPS, and MBps independently at finer granularity.

Business continuity: Instant Access Snapshots (preview) lets you back up and restore your workloads instantly, with exceptional performance on rehydration. This differentiated experience for Premium SSD v2 and Azure Ultra Disk helps eliminate the operational overhead of monitoring snapshot readiness or pre-warming resources, while reducing recovery, refresh, and scale-out times from hours to seconds.

Azure NetApp Files (ANF) is designed to deliver low latency, high performance, and data management at scale. Its large-volume capabilities have been significantly expanded, providing an over 3x increase in single-volume capacity, to 7.2 PiB, and a 4x increase in throughput, to 50 GiBps. Cache volumes bring data and files closer to where users need rapid access, in a space-efficient footprint. These capabilities make ANF suitable for several high-performance computing workloads such as Electronic Design Automation (EDA), Seismic Interpretation and Visualization, Reservoir Simulations, and Risk Modeling. Microsoft is not only positioning ANF for mission-critical applications but also using ANF for in-house silicon design.

Breaking barriers—migrating your storage infrastructure

Every organization’s cloud journey is unique. Whether you need to move existing environments to the cloud with minimal disruption or plan a full modernization, Azure Storage offers solutions for you. Storage Migration Solution Advisor in Copilot can provide recommendations to help streamline the decision-making process for these migrations. 

Azure Data Box and Storage Mover simplify the migration journey from on-premises and other clouds to Azure. The next-generation Azure Data Box is now GA. Storage Mover is our fully managed data migration service that is secure, efficient, and scalable, with new capabilities: migrating on-premises NFS shares to Azure Files NFS 4.1, on-premises SMB shares to Azure Blob Storage, and cloud-to-cloud transfers.

For users ready to migrate their NAS data estates, Azure Files now makes this easier than ever. We have introduced a new management model, making it easier and more cost-effective to use file shares. Additionally, Azure Files now enables you to eliminate complex on-premises Active Directory or domain controller infrastructure, with Entra-only identities for SMB shares. With cloud-native identity support, you can now manage your user permissions directly in Azure, including external identities for applications like Azure Virtual Desktop (AVD).

Entra-only identities support with Azure Files transforms SLB’s Petrel workflows by removing dependencies on on-premises domain controllers, simplifying identity management and storage infrastructure for globally distributed teams working on complex exploration and reservoir characterization. This cloud-native architecture allows customers to access SMB shares in an easy and secure manner without complex VPN or hybrid infrastructure setups.
Swapnil Daga, Storage Architect for Tenant Infrastructure, SLB

ANF Migration Assistant simplifies moving ONTAP workloads from on-premises or other clouds to Azure. Behind the scenes, the Migration Assistant uses NetApp’s SnapMirror replication technology, providing efficient, full fidelity, block-level incremental transfers. You can now leverage large datasets without impacting production workloads.

For customers running on-premises partner solutions who want to migrate to Azure using the same partner-provided technology, Azure has recently introduced Azure Native offers with Pure Storage and Dell PowerScale.

To make migrations easier, Azure Storage’s Migration Program connects you with a robust ecosystem of experts and tools. Trusted partners like Atempo, Cirata, Cirrus Data, and Komprise can accelerate migration of SAN and NAS workloads. This program offers secure, low-risk transfers of files, objects, and block storage to help enterprises unlock the full potential of Azure.

Start your next chapter with Azure Storage

The era of AI-powered transformation is here. Begin your journey by exploring Azure’s advanced storage offerings and migration tools, designed to accelerate AI adoption, cloud migration, and modernization. Take the next step today and unlock new possibilities with Azure Storage as the foundation for your AI initiatives.

For any questions, reach out at azurestoragefeedback@microsoft.com.

Get started with Azure Storage
Secure, high-performance, reliable, and scalable cloud storage.

Start exploring

Source: Azure

Introducing GPT-5.2 in Microsoft Foundry: The new standard for enterprise AI

The age of AI small talk is over. Enterprise applications demand more than clever chat. They require a reliable, reasoning partner capable of solving the most ambiguous, high-stakes problems, including planning multi-agent workflows and delivering auditable code.

Azure is the foundation for solving these challenges. Today, OpenAI’s GPT-5.2 becomes generally available in Microsoft Foundry, introducing a new frontier model series purpose-built to meet the needs of enterprise developers and technical leaders, and setting a new standard for a new era.

Explore GPT-5.2 in Foundry today

GPT-5.1 vs. GPT-5.2: Key upgrades for developers to know

The GPT-5.2 series introduces deeper logical chains, richer context handling, and agentic execution that produces shippable artifacts: for example, design docs, runnable code, unit tests, and deployment scripts can be generated with fewer iterations. The series is built on a new architecture, delivering superior performance, efficiency, and reasoning depth compared to prior generations. It’s also trained on the proven GPT-5.1 dataset and further enhanced with improved safety and integrations. GPT-5.2 leaps beyond previous models with substantial performance improvements across core metrics.

Today, we’re shipping GPT-5.2 and GPT-5.2-Chat. Each is greatly improved from its predecessor, and together they raise the bar for everyday professional work.

GPT-5.2: The most advanced reasoning model, solving harder problems more effectively and with more polish. An example of this is information work, where great thinking is now complemented by better communication skills and improved formatting in spreadsheet and slideshow creation.

GPT-5.2-Chat: A powerful yet efficient workhorse for everyday work and learning, with clear improvements in info-seeking questions, how-to’s and walk-throughs, technical writing, and translation. It’s also more effective at supporting studying and skill-building, as well as offering clearer job and career guidance.
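Calling the new models from code follows the same pattern as earlier Azure OpenAI/Foundry deployments. Here is a minimal sketch with the openai Python package; the endpoint, API version, and the deployment name "gpt-5.2" are placeholders for your own deployment.

```python
# Minimal chat-completion sketch; endpoint, key, api_version, and the
# deployment name are placeholders for your own Foundry resources.
# Requires: pip install openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-10-21",
)

response = client.chat.completions.create(
    model="gpt-5.2",  # your deployment name in Foundry
    messages=[
        {"role": "system", "content": "You are a precise planning assistant."},
        {"role": "user", "content": "Draft a rollout plan for a zonal failover test."},
    ],
)
print(response.choices[0].message.content)
```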

Why GPT-5.2 sets a new standard for enterprise AI

For long-term success in complex professional tasks, teams need structured outputs, reliable tool use, and enterprise guardrails. GPT-5.2 is optimized for these agent scenarios within Foundry’s enterprise-grade platform, offering a consistent developer experience across reasoning, chat, and coding.

Multi-Step Logical Chains: Decomposes complex tasks, justifies decisions, and produces explainable plans.

Context-Aware Planning: Ingests large inputs (project briefs, codebases, meeting notes) for holistic, actionable output.

Agentic Execution: Coordinates tasks end-to-end, across design, implementation, testing, and deployment, reducing iteration cycles and manual oversight.

Safety and Governance: Enterprise-grade controls, managed identities, and policy enforcement for secure, compliant AI adoption.

GPT-5.2’s deep reasoning capabilities, expanded context handling, and agentic patterns make it the smart choice for building AI agents that can tackle long-running, complex tasks across industries, including financial services, healthcare, manufacturing, and customer support.

Analytics and Decision Support: Useful for “wind tunnel” what-if scenarios, explaining trade-offs, and producing defensible plans for stakeholders.

Application Modernization: Make rapid progress in refactoring services, generating tests, and producing migration plans with risk and rollback criteria.

Data and Pipelines: Audit ETL, recommend monitors/SLAs, and generate validation SQL for data integrity.

Customer Experiences: Build context-aware assistants and agentic workflows that integrate into existing apps.

The results? Agents that maintain reliability through complex workflows and agent services, while producing structured, auditable outputs that scale confidently in Microsoft Foundry.

GPT-5.2 deployment and pricing

Pricing for GPT-5.2 model deployments (USD per million tokens):

GPT-5.2, Standard Global: Input $1.75, Cached Input $0.175, Output $14.00

GPT-5.2, Standard Data Zones (US): Input $1.925, Cached Input $0.193, Output $15.40

GPT-5.2-Chat, Standard Global: Input $1.75, Cached Input $0.175, Output $14.00

Start building with GPT-5.2 today
Build in Microsoft Foundry, where enterprise agents go from vision to production.

Start your next project

Source: Azure

Docker Hardened Images: Security Independently Validated by SRLabs

Earlier this week, we took a major step forward for the industry. Docker Hardened Images (DHI) is now available at no cost, bringing secure-by-default development to every team, everywhere. Anyone can now start from a secure, minimal, production-ready foundation from the first pull, without a subscription.  

With that decision comes a responsibility: if Docker Hardened Images become the new starting point for modern development, then developers must be able to trust them completely. Not because we say they’re secure, but because they prove it: under scrutiny, under pressure, and through independent validation.

Security threats evolve constantly. Supply chains grow more complex. Attackers get smarter. The only way DHI stays ahead is by continuously pushing our security forward. That’s why we partnered with SRLabs, one of the world’s leading cybersecurity research groups, known for uncovering high-impact vulnerabilities in highly sensitive systems.

This review included threat modeling, architecture analysis, and grey-box testing using publicly available artifacts. At Docker, we understand that trust is not earned through claims; it is earned through testing, validation, and a commitment to do this continuously.

Phase One: Grey Box Assessment

SRLabs started with a grey box assessment focused on how we build, sign, scan, and distribute hardened images. They validated our provenance chain, our signing practices, and our vulnerability management workflow.

One of the first things they called out was the strength of our verifiability model. Every artifact in DHI carries SLSA Build Level 3 provenance and Cosign signatures, all anchored in transparency logs via Rekor. This gives users a clear, cryptographically verifiable trail for where every hardened image came from and how it was built. As SRLabs put it:

“Docker incorporates signed provenance with Cosign, providing a verifiable audit trail aligned with SLSA level 3 standards.”
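You don’t have to take that audit trail on faith: cosign’s keyless verification checks the signature and the Rekor transparency log entry in one step. Here is a sketch that drives the cosign CLI from Python; the image reference and the identity/issuer values are hypothetical and will vary by repository.

```python
# Sketch: keyless verification of an image signature with cosign (v2+),
# which also validates the Rekor transparency log entry. The image name
# and identity/issuer values below are hypothetical placeholders.
import subprocess

image = "docker/dhi-python:3.13"  # hypothetical hardened image reference
result = subprocess.run(
    [
        "cosign", "verify",
        "--certificate-identity-regexp", r"https://github\.com/docker/.*",
        "--certificate-oidc-issuer", "https://token.actions.githubusercontent.com",
        image,
    ],
    capture_output=True,
    text=True,
)
print(result.returncode)
print(result.stdout[:400] or result.stderr[:400])
```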

They also highlighted the speed and clarity of our vulnerability management process. Every image includes an SBOM and VEX data, and our automated rebuild system responds quickly when new CVEs appear. SRLabs noted:

“Fast patching. Docker promises a 7 day patch SLA, greatly reducing vulnerability exposure windows.”

They validated the impact of our minimization strategy as well. Non root by default, reduced footprint, and the removal of unnecessary utilities dramatically reduce what an attacker could exploit inside a container. Their assessment:

“Non root, minimal container images significantly reduce attack vectors compared to traditional images.”

After three weeks of targeted testing, including adversarial modeling and architectural probing, SRLabs came back with a clear message: no critical vulnerabilities, no high-severity exploitation paths, just a medium residual risk driven by industry-wide challenges like key stewardship and upstream trust. And the best part? The architecture is already set up to reach even higher assurance without needing a major redesign. In their words:

“Docker Hardened Images deliver on their public security promises for today’s threat landscape.”

“No critical or high-severity breakouts were identified.”

And 

“By implementing recommended hardening steps, Docker can raise assurance to the level expected of a reference implementation for supply chain security without major re-engineering.”

Throughout the assessment, our engineering teams worked closely with SRLabs. Several findings, such as a labeling issue and a race condition, were resolved during the engagement. Others, including a prefix-hijacking edge case, moved into remediation quickly. For SRLabs, this responsiveness showed more than secure technology; it demonstrated a security-first culture where issues are triaged fast, fixes land quickly, and collaboration is part of the process. 

SRLabs pointed to places where raising the bar would make DHI even stronger, and we are already acting on them. They told us our signing keys should live in Hardware Security Modules with quorum controls, and that we should move toward a keyless Fulcio flow, so we have started that work right away. They pointed out that offline environments need better protection against outdated or revoked signatures, and we are updating our guidance and exploring freshness checks to close that gap. They also flagged that privileged builds weaken reproducibility and SBOM accuracy. Several of those builds have already been removed or rebuilt, and the rest are being redesigned to meet our hardening standards.

You can read more about the findings from the report here.

Phase Two: Full White Box Assessment

Grey box testing is just the beginning. 

This next phase goes much deeper. SRLabs will step into the role of an insider-level attacker. They’ll dig through code paths, dependency chains, and configuration logic. They’ll map every trust boundary, hunt for logic flaws, and stress-test every assumption baked into the hardened image pipeline. We expect to share that report in the coming months.

SRLabs showed us how DHI performs under pressure, but validation in the lab is only half the story. The real question is: what happens when teams put Docker at the center of their daily work? The good news is, we have the data. When organizations adopt Docker, the impact reaches far beyond reducing vulnerabilities. New research from theCUBE, based on a survey of 393 IT, platform, and engineering leaders, reveals that 95 percent improved vulnerability detection and remediation, 93 percent strengthened policy and compliance, and 81 percent now meet most or all of their security goals across the entire SDLC. You can read about it in the report linked above.

By combining independent validation, continuous security testing, and transparent attestations and provenance, Docker is raising the baseline for what secure software supply chains should look like.

The full white-box report from SRLabs will be shared when complete, and every new finding, good or bad, will shape how we continue improving DHI. Being secure-by-default is something we aim to prove, continuously.
Source: https://blog.docker.com/feed/

From the Captain’s Chair: Igor Aleksandrov

Docker Captains are leaders from the developer community who are both experts in their field and passionate about sharing their Docker knowledge with others. “From the Captain’s Chair” is a blog series where we take a closer look at one Captain to learn more about them and their experiences.

Today we are interviewing Igor Aleksandrov. Igor is the CTO and co-founder of JetRockets, a Ruby on Rails development agency based in NYC, bringing over 20 years of software engineering experience and a deep commitment to the Rails ecosystem that dates back to 2008. He’s an open-source contributor to projects like the Crystal programming language and Kamal, and a regular conference speaker on topics ranging from container orchestration to migrating from React to Hotwire.

Can you share how you first got involved with Docker? What inspired you to become a Docker Captain?

Looking back at my journey to becoming a Docker Captain, it all started with a very practical problem that many Rails teams face: dependency hell. 

By 2018, JetRockets had been building Ruby on Rails applications for years. I’d been working with Rails since version 2.2 back in 2009, and we had established solid development practices. But as our team grew and our projects became more complex, we kept running into the same frustrating issues:

“It works on my machine” became an all-too-common phrase during deployments

Setting up new developer environments was a time-consuming process fraught with version mismatches

Our staging and production environments occasionally behaved differently despite our best efforts

Managing system-level dependencies across different projects was becoming increasingly complex

We needed a unified way to manage application dependencies that would work consistently across development, staging, and production environments.

Unlike many teams that start with Docker locally and gradually move to production, we decided to implement Docker in production and staging first. This might sound risky, but it aligned perfectly with our goal of achieving true environment parity.

We chose our first Rails application to containerize and started writing our first Dockerfile. Those early Dockerfiles were much simpler than the highly optimized ones we create today, but they solved our core problem: every environment now ran the same container with the same dependencies.
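For context, an early-style Rails Dockerfile of the kind Igor describes might have looked roughly like this; the Ruby version and commands are illustrative, not taken from JetRockets’ actual setup:

FROM ruby:2.5
WORKDIR /app
# Install gems first so this layer is cached between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install
# Copy the application code and start the server
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]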

Even though AWS Elastic Beanstalk has never been a developer-friendly solution, we reached our goal – we had achieved true environment consistency, and the mental overhead of managing different configurations across environments had virtually disappeared.

That initial Docker adoption in 2018 sparked a journey that would eventually lead to me becoming a Docker Captain. What began with a simple need for dependency management evolved into deep expertise in container optimization, advanced deployment strategies with tools like Kamal, and ultimately contributing back to the Docker community.

Today, I write extensively about Rails containerization best practices, from image slimming techniques to sophisticated CI/CD pipelines. But it all traces back to that moment in 2018 when we decided to solve our dependency challenges with Docker.

What are some of your personal goals for the next year?

I want to speak at more conferences and meetups, sharing the expertise I’ve built over the years. Living in the Atlanta area, I would like to become more integrated into the local tech community. Atlanta has such a vibrant IT scene, and I think there’s a real opportunity to contribute here. Whether that’s organizing Docker meetups, participating in Rails groups, or just connecting with other CTOs and technical leaders who are facing similar challenges.

If you weren’t working in tech, what would you be doing instead?

If I weren’t working in tech, I think I’d be doing woodworking. There’s something deeply satisfying about creating things with your hands, and woodworking offers that same creative problem-solving that draws me to programming – except you’re working with natural materials and traditional tools instead of code.

I truly enjoy working with my hands and seeing tangible results from my efforts. In many ways, building software and building furniture aren’t that different – you’re taking raw materials, applying craftsmanship and attention to detail, and creating something functional and beautiful.

If not woodworking, I’d probably pursue diving. I’m already a PADI certified rescue diver, and I truly like the ocean. There’s something about the underwater world that’s entirely different from our digital lives – it’s peaceful, challenging, and always surprising. Getting my diving instructor certification and helping others discover that underwater world would be incredibly rewarding.

Can you share a memorable story from collaborating with the Docker community?

One of the most rewarding aspects of being a Docker Captain is our regular Captains meetings, and honestly, I enjoy each one of them. These aren’t just typical corporate meetings – they’re genuine collaborations with some of the most passionate and knowledgeable people in the containerization space.

What makes these meetings special is the diversity of perspectives. You have Captains from completely different backgrounds – some focused on enterprise Kubernetes deployments, others working on AI, developers like me optimizing Rails applications, and people solving problems I’ve never even thought about.

What’s your favorite Docker product or feature right now, and why?

Currently, I’m really excited about the Build Debugging feature that was recently integrated into VS Code. As someone who spends a lot of time optimizing Rails Dockerfiles and writing about containerization best practices, I’ve found this feature to be a game-changer for my development workflow.

When you’re crafting complex multi-stage builds for Rails applications – especially when you’re trying to optimize image size, manage build caches, and handle dependencies like Node.js and Ruby gems – debugging build failures used to be a real pain.

Can you walk us through a tricky technical challenge you solved recently?

Recently, I was facing a really frustrating development workflow issue that I think many Rails developers can relate to. We had a large database dump file, about 150GB, that we needed to use as a template for local development. The problem was that restoring this SQL dump into PostgreSQL was taking up to an hour every time we needed to reset our development database to a clean state.

For a development team, this was killing our productivity. Every time someone had to test a migration rollback, debug data-specific issues, or just start fresh, they’d have to wait an hour for the database restore. That’s completely unacceptable.

Initially, we were doing what most teams do: running pg_restore against the SQL dump file directly. But with a 150GB database, this involves PostgreSQL parsing the entire dump, executing thousands of INSERT statements, rebuilding indexes, and updating table statistics. It’s inherently slow because the database engine has to do real work.

I realized the bottleneck wasn’t the data itself – it was the database restoration process. So I wrote a Bash script that takes an entirely different approach:

1. Create a template volume: Start with a fresh Docker volume and spin up a PostgreSQL container

2. One-time restoration: Restore the SQL dump into this template database (this still takes an hour, but only once)

3. Volume snapshot: Use a BusyBox container to copy the entire database volume at the filesystem level

4. Instant resets: When developers need a fresh database, just copy the template volume to a new working volume

The magic is in step 4. Instead of restoring from SQL, we’re essentially copying files at the Docker volume level. This takes seconds instead of an hour because we’re just copying the already-processed PostgreSQL data files.

Docker volumes are just filesystem directories under the hood. PostgreSQL stores its data in a very specific directory structure with data files, indexes, and metadata. By copying the entire volume, we’re getting a perfect snapshot of the database in its “ready to use” state.

The script handles all the orchestration – creating volumes, managing container lifecycles, and ensuring the copied database starts up cleanly. What used to be a one-hour reset cycle is now literally 5-10 seconds. Developers can experiment freely, test destructive operations, and reset their environment without hesitation. It’s transformed how our team approaches database-dependent development.
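The core of such a script can be sketched in a few lines of Bash. Everything below – volume names, the dump file, the PostgreSQL image tag – is a placeholder standing in for the real script:

TEMPLATE_VOL=pg_template
WORK_VOL=pg_dev
# One-time: restore the dump into a template volume (slow, but runs only once)
docker volume create "$TEMPLATE_VOL"
docker run -d --name pg_seed -e POSTGRES_PASSWORD=dev \
  -v "$TEMPLATE_VOL":/var/lib/postgresql/data postgres:16
sleep 10  # a real script would poll pg_isready instead of sleeping
docker exec -i pg_seed psql -U postgres < dump.sql  # use pg_restore for custom-format dumps
docker stop pg_seed && docker rm pg_seed  # quiesce the data files before copying
# Instant reset: clone the template volume at the filesystem level
docker volume create "$WORK_VOL"
docker run --rm -v "$TEMPLATE_VOL":/from -v "$WORK_VOL":/to \
  busybox sh -c 'cd /from && cp -a . /to'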

What’s one Docker tip you wish every developer knew?

If something looks weird in your Dockerfile, you are doing it wrong. This is the single most important lesson I’ve learned from years of optimizing Rails Dockerfiles. I see this constantly when reviewing other developers’ container setups – there’s some convoluted RUN command, a bizarre COPY pattern, or a workaround that just feels off.

Your Dockerfile should read like clean, logical instructions. If you find yourself writing something like:

RUN apt-get update && apt-get install -y wget && \
    wget some-random-script.sh && chmod +x some-random-script.sh && \
    ./some-random-script.sh && rm some-random-script.sh

…you’re probably doing it wrong.

The best Dockerfiles are almost boring in their simplicity and clarity. Every line should have a clear purpose, and the overall flow should make sense to anyone reading it. If you’re adding odd hacks, unusual file permissions, or complex shell gymnastics, step back and ask why.

This principle has saved me countless hours of debugging. Instead of trying to make unusual things work, I’ve learned to redesign the approach. Usually, there’s a cleaner, more standard way to achieve what you’re trying to do.
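To show the contrast, here is one hedged example of the cleaner, more standard shape that the convoluted RUN command above could take; the script path is hypothetical:

# Keep the script in the build context: transparent, cacheable, and reviewable
COPY scripts/setup.sh /tmp/setup.sh
RUN /tmp/setup.sh && rm /tmp/setup.sh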

If you could containerize any non-technical object in real life, what would it be and why?

If I could containerize any non-technical object, it would definitely be knowledge itself. Imagine being able to package up skills, experiences, and expertise into portable containers that you could load and unload from your mind as needed. As someone who’s constantly learning new technologies and teaching others, I’m fascinated by how we acquire and transfer knowledge. Currently, if I want to dive deep into a new programming language like I did with Crystal, or master a deployment tool like Kamal, it takes months of dedicated study and practice.

But what if knowledge worked like Docker containers? You could have a “Ruby 3.3 expertise” container, an “Advanced Kubernetes” container, or even a “Woodworking joinery techniques” container. Need to debug a complex Rails application? Load the container. Working on a diving certification course? Swap in the marine biology knowledge base.

The real power would be in the consistency and portability – just like how Docker containers ensure your application runs the same way everywhere, knowledge containers would give you the same depth of understanding regardless of context. No more forgetting syntax, no more struggling to recall that one debugging technique you learned years ago.

Plus, imagine the collaborative possibilities. Experienced developers could literally package their hard-earned expertise and share it with the community. It would democratize learning in the same way Docker democratized deployment.

Of course, the human experience of learning and growing would be lost, but from a pure efficiency standpoint? That would be incredible.

Where can people find you online? (talks, blog posts, or open source projects, etc)

I am always active on X (@igor_alexandrov) and on LinkedIn. I try to give at least 2-3 talks at tech conferences and meetups each year, and I also have a personal blog.

Rapid Fire Questions

Cats or Dogs?

Dogs

Morning person or night owl?

Both

Favorite comfort food?

Dumplings

One word friends would use to describe you?

Perfectionist

A hobby you picked up recently?

Cycling

Source: https://blog.docker.com/feed/

Amazon WorkSpaces Applications now supports Microsoft Windows Server 2025

Amazon WorkSpaces Applications now offers images powered by Microsoft Windows Server 2025, enabling customers to launch streaming instances with the latest features and enhancements from Microsoft’s newest server operating system. This update ensures your application streaming environment benefits from improved security, performance, and modern capabilities. With Windows Server 2025 support, you can deliver the Microsoft Windows 11 desktop experience to your end users, giving you greater flexibility in choosing the right operating system for your specific application and desktop streaming needs. Whether you’re running business-critical applications or providing remote access to specialized software, you now have expanded options to align your infrastructure decisions with your unique workload requirements and organizational standards. You can select from AWS-provided public images or create custom images tailored to your requirements using Image Builder. Support for Microsoft Windows Server 2025 is now generally available in all AWS Regions where Amazon WorkSpaces Applications is offered. To get started with Microsoft Windows Server 2025 images, visit the Amazon WorkSpaces Applications documentation. For pricing details, see the Amazon WorkSpaces Applications Pricing page.
Source: aws.amazon.com

Amazon RDS enhances observability for snapshot exports to Amazon S3

Amazon Relational Database Service (RDS) now offers enhanced observability for your snapshot exports to Amazon S3, providing detailed insights into export progress, failures, and performance for each task. These notifications enable you to monitor your exports with greater granularity and make your export operations more predictable. With snapshot export to S3, you can export data from your RDS database snapshots to Apache Parquet format in your Amazon S3 bucket. This launch introduces four new event types, including current export progress and table-level notifications for long-running tables, giving you more granular visibility into snapshot export performance along with recommendations for troubleshooting export issues. Additionally, you can view export progress, such as the number of tables exported and pending, along with exported data sizes, enabling you to better plan your operations and workflows. You can subscribe to these events through Amazon Simple Notification Service (SNS) to receive notifications, and view the export events through the AWS Management Console, AWS CLI, or SDK. This feature is available for RDS PostgreSQL, RDS MySQL, and RDS MariaDB engines in all Commercial Regions where RDS is generally available. To learn more about the new event types, see Event categories in RDS.
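As an illustration, checking the progress of an export task from the AWS CLI might look like the following; the task identifier is a placeholder:

# Show the status and percent progress of a snapshot export task
aws rds describe-export-tasks --export-task-identifier my-export \
  --query 'ExportTasks[0].{Status:Status,Progress:PercentProgress}'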
Source: aws.amazon.com