Introducing Azure Accelerate: Fueling transformation with experts and investments across your cloud and AI journey

As technology continues to reshape every industry, organizations are transforming with cloud, data, and AI—ultimately resulting in smarter operations, better outcomes, and more impactful customer experiences.

From global enterprises to fast-scaling startups, customers are putting Azure to work in powerful ways. To date, more than 26,000 customer projects have leveraged our offerings—Azure Migrate and Modernize and Azure Innovate—to drive faster time to value.

Introducing Azure Accelerate

As customer cloud and AI adoption needs change, we’re evolving our offerings to match your business priorities. Today, we’re excited to introduce Azure Accelerate, a new simplified offering designed to fuel transformation with experts and investments across the cloud and AI journey. Azure Accelerate brings together the full capabilities of Azure Migrate and Modernize, Azure Innovate, and Cloud Accelerate Factory in one place to assist customers from initial planning to full implementation. With Azure Accelerate, customers can:

Access trusted experts: The deep expertise of Azure’s specialized partner ecosystem ensures your projects launch smoothly and scale efficiently. You can also choose to augment your project by taking advantage of a new benefit within Azure Accelerate—our Cloud Accelerate Factory. Cloud Accelerate Factory provides Microsoft experts at no additional cost to deliver hands-on deployment assistance and get projects up and running on Azure faster.

Unlock Microsoft investments: Tap into funding designed to maximize value while minimizing risk. Azure Accelerate helps reduce the costs of engagements with Microsoft investment via partner funding and Azure credits. Microsoft also invests in your long-term success by supporting the skilling of your internal teams. Empower them with free resources available on Microsoft Learn, or develop training programs tailored to your needs with a qualified partner. Azure Accelerate supports projects of all sizes, from the migration of a few servers or virtual machines to the largest initiatives, with no cap on investments.

Succeed with comprehensive coverage: Navigate every stage of your cloud and AI journey with confidence through the robust, end-to-end support of Azure Accelerate. Start your journey with an in-depth assessment using AI-enhanced tools like Azure Migrate to gain critical insights. Design and validate new ideas in sandbox environments and then test solutions through funded pilots or proof-of-value projects before scaling. When you’re ready, start your Azure implementation by having experts build an Azure landing zone. Then, move workloads into Azure at scale following best practices for migrating or building new solutions.

The Cloud Accelerate Factory is a new benefit within Azure Accelerate, designed to help you jumpstart your Azure projects with zero-cost deployment assistance from Microsoft experts. Through a joint delivery model with Azure partners, these experts can provide hands-on deployment of over 30 Azure services using proven strategies developed across thousands of customer engagements. This benefit empowers customers to maximize their investments by offloading predictable technical tasks to our Factory team, enabling internal teams or partners to focus on the more custom and highly detailed elements of a project.

For organizations that seek guidance and technical best practices, Azure Accelerate is backed by Azure Essentials, which brings together curated, solution-aligned guidance from proven methodologies and tools such as the Microsoft Cloud Adoption Framework, Azure Well-Architected Framework, reference architectures, and more in a single location.

Realizing business value with Azure offerings

Here are just a few examples of how organizations are turning ambition into action:

Modernizing for agility: Global financial leader UBS is using Azure to modernize its infrastructure, enhancing agility and resilience while laying the foundation for future innovation. This modernization has enabled UBS to respond more quickly to market and regulatory changes, while reducing operational complexity.

Unifying data for impact: Humanitarian nonprofit Médecins Sans Frontières UK centralized its data platform using Azure SQL, Dynamics 365, and Power BI. This has resulted in streamlined reporting, faster emergency response, and improved donor engagement—all powered by timely, self-service insights.

Scaling AI for global reach: Bayer Crop Science, in partnership with EY and Microsoft, built a generative AI assistant using Azure OpenAI and Azure AI Search. This natural language tool delivers real-time agronomy insights to farmers worldwide, helping unlock food productivity and accessibility at scale.

Enhancing insights with AI: OneDigital partnered with Microsoft and Data Science Dojo through Azure Innovate to co-develop custom AI agents using Azure OpenAI and Ejento AI. This solution streamlined research, saving 1,000 person-hours annually, and enabled consultants to deliver faster, more personalized client insights, improving retention through high-impact interactions.

Get started with Azure Accelerate

Azure Accelerate is designed to fuel your cloud and AI transformation. It’s how you move faster, innovate smarter, and lead in a cloud-first, AI-powered world.

We’re excited to partner with you on this journey and can’t wait to see what you’ll build next with Azure. To get started, visit Azure Accelerate to learn more or connect with your Microsoft account team or a specialized Azure partner to plan your next steps.
The post Introducing Azure Accelerate: Fueling transformation with experts and investments across your cloud and AI journey appeared first on Microsoft Azure Blog.
Source: Azure

Introducing Deep Research in Azure AI Foundry Agent Service

Unlock enterprise-scale web research automation

Today we’re excited to announce the public preview of Deep Research in Azure AI Foundry—an API and software development kit (SDK)-based offering of OpenAI’s advanced agentic research capability, fully integrated with Azure’s enterprise-grade agentic platform.

With Deep Research, developers can build agents that deeply plan, analyze, and synthesize information from across the web—automate complex research tasks, generate transparent, auditable outputs, and seamlessly compose multi-step workflows with other tools and agents in Azure AI Foundry.


AI agents and knowledge work: Meeting the next frontier of research automation

Generative AI and large language models have made research and analysis faster than ever, powering solutions like ChatGPT Deep Research and Researcher in Microsoft 365 Copilot for individuals and teams. These tools are transforming everyday productivity and document workflows for millions of users.

As organizations look to take the next step—integrating deep research directly into their business apps, automating multi-step processes, and governing knowledge at enterprise scale—the need for programmable, composable, and auditable research automation becomes clear.

This is where Azure AI Foundry and Deep Research come in: offering the flexibility to embed, extend, and orchestrate world-class research as a service across your entire enterprise ecosystem—and connect it with your data and your systems.

Deep Research capabilities in Azure AI Foundry Agent Service

Deep Research in Foundry Agent Service is built for developers who want to move beyond the chat window. By offering Deep Research as a composable agent tool via API and SDK, Azure AI Foundry enables customers to:

Automate web-scale research using a best-in-class research model grounded with Bing Search, with every insight traceable and source-backed.

Programmatically build agents that can be invoked by apps, workflows, or other agents—turning deep research into a reusable, production-ready service.

Orchestrate complex workflows: Compose Deep Research agents with Logic Apps, Azure Functions, and other Foundry Agent Service connectors to automate reporting, notifications, and more.

Ensure enterprise governance: With Azure AI Foundry’s security, compliance, and observability, customers get full control and transparency over how research is run and used.

Unlike packaged chat assistants, Deep Research in Foundry Agent Service can evolve with your needs—ready for automation, extensibility, and integration with future internal data sources as we expand support.

How it works: Architecture and agent flow

Deep Research in Foundry Agent Service is architected for flexibility, transparency, and composability—so you can automate research that’s as robust as your business demands.

At its core, the Deep Research model, o3-deep-research, orchestrates a multi-step research pipeline that’s tightly integrated with Grounding with Bing Search and leverages the latest OpenAI models:

Clarifying intent and scoping the task: When a user or downstream app submits a research query, the agent uses GPT-series models, including GPT-4o and GPT-4.1, to clarify the question, gather additional context if needed, and precisely scope the research task. This ensures the agent’s output is both relevant and actionable, and that every search is optimized for your business scenario.

Web grounding with Bing Search: Once the task is scoped, the agent securely invokes the Grounding with Bing Search tool to gather a curated set of high-quality, recent web data. This ensures the research model is working from a foundation of authoritative, up-to-date sources—no hallucinations from stale or irrelevant content.

Deep Research task execution: The o3-deep-research model starts the research task execution. This involves thinking, analyzing, and synthesizing information across all discovered sources. Unlike simple summarization, it reasons step-by-step, pivots as it encounters new insights, and composes a comprehensive answer that’s sensitive to nuance, ambiguity, and emerging patterns in the data.

Transparency, safety, and compliance: The final output is a structured report that documents not only the answer, but also the model’s reasoning path, source citations, and any clarifications requested during the session. This makes every answer fully auditable—a must-have for regulated industries and high-stakes use cases.

Programmatic integration and composition: By exposing Deep Research as an API, Azure AI Foundry empowers you to invoke research from anywhere—custom business apps, internal portals, workflow automation tools, or as part of a larger agent ecosystem. For example, you can trigger a research agent as part of a multi-agent chain: one agent performs deep web analysis, another generates a slide deck with Azure Functions, while a third emails the result to decision makers with Azure Logic Apps. This composability is the real game-changer: research is no longer a manual, one-off task, but a building block for digital transformation and continuous intelligence.
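As a hedged sketch of what invoking Deep Research programmatically can look like: the class and parameter names below follow the preview azure-ai-projects and azure-ai-agents Python SDKs, which may change while the feature is in preview; the endpoint, connection ID, agent name, and instructions are placeholders, so check the current documentation before relying on any of them.

```python
# Sketch: creating and running a Deep Research agent (preview SDKs; names may change).
# Requires an Azure AI Foundry project and a Grounding with Bing Search connection.
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import DeepResearchTool

project = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],        # placeholder
    credential=DefaultAzureCredential(),
)

# The tool pairs the o3-deep-research model with Bing grounding.
deep_research = DeepResearchTool(
    bing_grounding_connection_id=os.environ["BING_CONNECTION_ID"],  # placeholder
    deep_research_model="o3-deep-research",
)

agent = project.agents.create_agent(
    model="gpt-4o",  # base GPT model that clarifies and scopes the query
    name="research-agent",
    instructions="Research the user's question and cite every source.",
    tools=deep_research.definitions,
)

thread = project.agents.threads.create()
project.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize recent developments in solid-state battery research.",
)
# Blocks until the multi-step research run completes, then the thread
# holds the structured, citation-backed report.
run = project.agents.runs.create_and_process(thread_id=thread.id, agent_id=agent.id)
```

Because the agent is just an API-addressable resource, the same pattern can be triggered from a Logic App, an Azure Function, or another agent in a chain.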

This flexible architecture means Deep Research can be seamlessly embedded into a wide range of enterprise workflows and applications. Already, organizations across industries are evaluating how these programmable research agents can streamline high-value scenarios—from market analysis and competitive intelligence, to large-scale analytics and regulatory reporting.

Pricing for Deep Research (model: o3-deep-research) is as follows: 

Input: $10.00 per 1M tokens.

Cached Input: $2.50 per 1M tokens.

Output: $40.00 per 1M tokens.

Search context tokens are charged at the input token price for the model being used. You’ll separately incur charges for Grounding with Bing Search and for the base GPT model used for clarifying questions.
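To make these rates concrete, here is a small cost estimate for a single research run. The token counts are hypothetical, and the Bing grounding and clarifier-model charges billed separately are deliberately left out.

```python
# Rough model-cost estimate for one o3-deep-research run.
# Token counts are hypothetical; Grounding with Bing Search and the
# clarifying GPT model are billed separately and not included here.

PRICE_PER_MILLION = {
    "input": 10.00,
    "cached_input": 2.50,
    "output": 40.00,
}

def run_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Return the o3-deep-research model cost in dollars for one run."""
    return (
        input_tokens / 1_000_000 * PRICE_PER_MILLION["input"]
        + cached_input_tokens / 1_000_000 * PRICE_PER_MILLION["cached_input"]
        + output_tokens / 1_000_000 * PRICE_PER_MILLION["output"]
    )

# Example: 200k fresh input, 50k cached input, 30k output tokens.
cost = run_cost(200_000, 50_000, 30_000)
print(f"Estimated model cost: ${cost:.2f}")
```

Because output tokens cost four times as much as input tokens, long synthesized reports dominate the bill for most runs.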

Get started with Deep Research

Deep Research is available now in limited public preview for Azure AI Foundry Agent Service customers. To get started:

Sign up for the limited public preview to gain early access.

Visit our documentation to learn more about the feature.

Visit our learning modules to build your first agent with Azure AI Foundry Agent Service.

Start building your agents today in Azure AI Foundry.

We can’t wait to see the innovative solutions you’ll build. Stay tuned for customer stories, new features, and future enhancements that will continue to unlock the next generation of enterprise AI agents.


The post Introducing Deep Research in Azure AI Foundry Agent Service appeared first on Microsoft Azure Blog.
Source: Azure

From Dev to Deploy: Compose as the Spine of the Application Lifecycle

Nobody wants a spineless application development process. What do I mean by this? The spine is the backbone that supports and provides nerve channels for the human body. Without it, we would be floppy, weaker, and would struggle to understand how our extremities were behaving. A slightly tortured analogy, but consider the application lifecycle of the average software project. The traditional challenge has been, how do we give it a spine? How can we provide a backbone to support developers at every stage and a nerve channel to pass information back and forth, thereby cementing architectural constructs and automating or simplifying all the other processes required for modern applications?

We built Docker Compose specifically to be that spine, providing the foundation for an application from its inception in local development through testing and on to final deployment and maintenance as the application runs in the wild and interacts with real users. With Docker Compose Bridge, Docker Compose filled out the last gaps in full application lifecycle management. Using Compose Bridge, teams can now, with a single Compose file, take a multi-container, multi-tiered application from initial code and development setup all the way to production deployment in Kubernetes or other container orchestration systems.

Before and After: How Docker Compose Adds the Spine and Simplifies AppDev

So what does this mean in practice? Let’s take a “Before” and “After” view of how the spine of Docker Compose changes application lifecycle processes for the better. Imagine you’re building a customer-facing SaaS application—a classic three-tier setup:

Go API handling user accounts, payments, and anti-fraud check

PostgreSQL + Redis for persistence and caching

TypeScript/React UI that customers log into and interact with

You are deploying to Kubernetes because you want resilience, portability, and flexibility. You’ll deploy it across multiple regions in the cloud for low latency and high availability. Let’s walk through what that lifecycle looks like before and after adopting Docker Compose + Compose Bridge.

Before: The Local Development “Works on My Machine” Status Quo

Without Compose, you set up six separate containers in a messy sprawl that might look something like this:

docker network create saas-net
docker run -d –name postgres –network saas-net
-e POSTGRES_PASSWORD=secret postgres:16
docker run -d –name redis –network saas-net redis:7
docker run -d –name go-api –network saas-net
-e DB_URL=postgres://postgres:secret@postgres/saasdb
-p 8080:8080 go-saas-api:latest
docker run -d –name payments –network saas-net payments-stub:1.2
docker run -d –name fraud –network saas-net anti-fraud:latest
docker run -d –name saas-ui –network saas-net
-p 3000:3000 saas-ui:latest

You can certainly automate the setup process with a script. But that would mean everyone you are working with would need the same script to replicate your setup, and you would need to ensure they all have the same updated version. And that’s not the end of it. Before Compose, setting up even a basic multi-service stack meant manually crafting networks and links—typically running docker network create and then launching each container with --network to stitch them together (see Docker run network options). Onboarding new developers only made matters worse: your README would balloon with dozens of flags and environment-variable examples, and inevitably, someone would mistype a port or misspell a variable name. Meanwhile, security and compliance tended to be afterthoughts.

There would be no standard WAF or API gateway in front of your services. In many instances, secrets were scattered in plain .env files, and you would have no consistent audit logging to prove who accessed what and who made what changes. Then, for debugging, you manually spin up phpPgAdmin; for observability, you install Prometheus and Jaeger on an ad-hoc basis. For vulnerability scanning, you would pull down Docker Scout each time. Both debugging and scanning would drag you outside your core workflow and break your vibe.

After: One Line for Universal Local Environment

Remember those six containers you had to set up individually? Now, your Docker Compose “spine” carries the message and structure to automatically set them all up for you with a single command and a single file (compose.yaml).

The compose.yaml file declares your entire setup (database, cache, API, UI) in a readable format, all living on a shared network with security, observability, and any other necessary services already in place. Not only does this save time and ensure consistency, it also greatly boosts security (manual configuration error remains one of the leading sources of security breaches, according to the Verizon 2025 DBIR Report). It also standardizes all mounts and ports and ensures secrets are treated uniformly. For compliance and artifact provenance, all audit logs are automatically mounted for local compliance checks.
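For illustration, a compose.yaml for this three-tier stack might look like the following; the image names and environment values mirror the hypothetical commands from the “Before” section, so treat them as placeholders rather than published images:

```yaml
# compose.yaml — illustrative sketch of the SaaS stack from the "Before" section.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data   # persist data across restarts

  redis:
    image: redis:7

  go-api:
    image: go-saas-api:latest
    environment:
      DB_URL: postgres://postgres:secret@postgres/saasdb
    ports:
      - "8080:8080"
    depends_on:
      - postgres
      - redis

  payments:
    image: payments-stub:1.2

  fraud:
    image: anti-fraud:latest

  saas-ui:
    image: saas-ui:latest
    ports:
      - "3000:3000"
    depends_on:
      - go-api

volumes:
  pgdata:
```

Everyone on the team runs `docker compose up -d` against this same file; Compose creates the shared network automatically and starts the services in dependency order.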

Compose also makes debugging and hardening apps locally easier for developers who don’t want to think about setting up debug services. With Compose, the developer or platform team can add a debug profile that invokes a host of debug services (Prometheus for metrics, OpenTelemetry for distributed tracing, Grafana for dashboards, ModSecurity for firewall rules). That said, you don’t want to add debug services to production apps in Kubernetes.
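A sketch of how such a debug profile can be declared in the same compose.yaml; the images and port mappings are illustrative:

```yaml
# Debug-only services, activated via a Compose profile (illustrative images/ports).
services:
  prometheus:
    image: prom/prometheus:latest
    profiles: ["debug"]          # started only when the debug profile is enabled
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    profiles: ["debug"]
    ports:
      - "3001:3000"
```

A plain `docker compose up -d` skips these services entirely, while `docker compose --profile debug up -d` brings them up alongside the rest of the stack.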

Enter Compose Bridge. This new addition to Docker Compose incorporates environmental awareness into all services, removing those that should not be deployed in production, and provides a clean Helm Chart or YAML manifest for production teams. So application developers don’t need to worry about stripping service calls before throwing code over the fence. More broadly, Compose Bridge enforces:

Clean separation – production YAML stays lean, with no leftover debug containers or extra resource definitions.

Conditional inclusion – Bridge reads profiles: settings and injects the right labels, annotations, and sidecars only when you ask for them.

Consistent templating – Bridge handles the profile logic at generation time, so all downstream manifests conform to stage and environment-specific policies and naming conventions

The result? Platform Operations teams can maintain different Docker Compose templates for various application development teams, keeping everyone on the established paths while providing customization where needed. Application Security teams can easily review or scan standardized YAML files to simplify policy adherence across configuration verification, secret handling, and services accessed.

Before: CI & Testing Lead to Script Sprawl and Complexity

Application developers pass their code off to the DevOps team (or have the joy of running the CI/CD gauntlet themselves). Teams typically wire up their CI tool (Jenkins, GitLab CI, GitHub Actions, etc.) to run shell-based workflows. Any changes to the application, like renaming a service, adding a dependency, adjusting a port, or adding a new service, mean editing those scripts or editing every CI step that invokes them. In theory, GitOps means automating much of this. In practice, the complexity is only thinly buried, and the system still lacks a nervous system along the spine. The result? Builds break, tests fail, and the time to launch a new version and incorporate new code lengthens. Developers are inherently discouraged from shipping code faster because they know there’s a decent chance that even when everything shows green in their local environment tests, something will break in CI/CD. This dooms them to unpleasant troubleshooting ordeals. Without a nervous system along the spine to share information and easily propagate necessary changes, application lifecycles are more chaotic, less secure, and less efficient.

After: CI & Testing Run Fast, Smooth and Secure

After adopting Docker Compose as your application development spine, your CI/CD pipeline becomes a concise, reliable sequence that mirrors exactly what you run locally. A single compose.yaml declares every component so your CI job simply brings up the entire stack with docker compose up -d, orchestrating startup order and health checks without custom scripts or manual delays. You invoke your tests in the context of a real multi-container network via docker compose exec, replacing brittle mocks with true integration and end-to-end validation. When testing is complete, docker compose down tears down containers, networks, and volumes in one step, guaranteeing a clean slate for every build. Because CI consumes exactly the same manifest developers use on their workstations, feedback loops shrink to minutes, and promotions to staging or production require fewer (and often no) manual configuration tweaks.
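As one possible shape for this pipeline, a GitHub Actions job can wrap the same three Compose commands; the job name and test command are placeholders for whatever your project actually runs:

```yaml
# Illustrative CI job: the same compose.yaml used locally drives the pipeline.
jobs:
  integration-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Start the full stack
        run: docker compose up -d --wait   # --wait blocks until health checks pass

      - name: Run integration tests inside the API container
        run: docker compose exec -T go-api go test ./...

      - name: Tear down
        if: always()                        # clean slate even when tests fail
        run: docker compose down -v
```

Because CI consumes the identical manifest developers run on their workstations, a green pipeline is strong evidence the stack will behave the same way everywhere else.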

Compose Bridge further elevates this efficiency and hardens security. After running tests, Bridge automatically converts your Docker Compose YAML file into Kubernetes manifests or a Helm chart, injecting network policies, security contexts, runtime protection sidecars, and audit log mounts based on your profiles and overlays. There’s no need for separate scripts or manual edits to bake in contract tests, policy validations, or vulnerability scanners. Your CI job can commit the generated artifacts directly to a GitOps repository, triggering an automated, policy-enforced rollout across all environments. This unified flow eliminates redundant configuration, prevents drift, and removes human error, turning CI/CD from a fragile sequence into a single, consistent pipeline.

Before: Production and Rollbacks are Floppy and Floundering 

When your application leaves CI and enters production, the absence of a solid spine becomes painfully clear. Platform teams must shoulder the weight of multiple files — Helm charts, raw manifests for network segmentation, pod security, autoscaling, ingress rules, API gateway configuration, logging agents, and policy enforcers. Each change ripples through, requiring manual edits in three or more places before nerves can carry the signal to a given cluster. There is no central backbone to keep everything aligned. A simple update to your service image or environment variable creates a cascade of copy-and-paste updates in values.yaml, template patches, and documentation. If something fails, your deployment collapses and you start manual reviews to find the source of the fault. Rolling back demands matching chart revisions to commits and manually issuing helm rollback. Without a nervous system to transmit clear rollback signals, each regional cluster becomes its own isolated segment. Canary and blue-green releases require separate, bespoke hooks or additional Argo CD applications, each one a new wrinkle in coordination. This floppy and floundering approach leaves your production lifecycle weak, communication slow, and the risk of human error high. The processes meant to support and stabilize your application instead become sources of friction and uncertainty, undermining the confidence of both engineering and operations teams.

After: Production and Rollbacks are Rock Solid

With Docker Compose Bridge acting as your application’s spinal cord, production and rollbacks gain the support and streamlined communication they’ve been missing. Your single compose.yaml file becomes the vertebral column that holds every service definition, environment variable, volume mount, and compliance rule in alignment. When you invoke docker compose bridge generate, the Bridge transforms that backbone into clean Kubernetes manifests or a Helm chart, automatically weaving in network policies, pod security contexts, runtime protection sidecars, scaling rules, and audit-log mounts. There is no need for separate template edits. Changes made to the Compose file propagate in real-time through all generated artifacts. Deployment can be as simple as committing the updated Compose file to your GitOps repository. Argo CD or Flux then serves as the extended nervous system, transmitting the rollout signal across every regional cluster in a consistent, policy-enforced manner. If you need to reverse course, reverting the Compose file acts like a reflex arc: Bridge regenerates the previous manifests and GitOps reverts each cluster to its prior state without manual intervention. Canary and blue-green strategies fit naturally into this framework through Compose profiles and Bridge overlays, eliminating the need for ad-hoc hooks. Your production pipeline is no longer a loose bundle of scripts and templates but a unified, resilient spine that supports growth, delivers rapid feedback, and ensures secure, reliable releases across all environments.

A Fully Composed Spine for the Full Lifecycle

To summarize, Docker Compose and Compose Bridge give your application a continuous spine running from local development through CI/CD, security validation, and multi-region Kubernetes rollout. You define every service, policy, and profile once in a Compose file, and Bridge generates production-ready manifests with network policies, security contexts, telemetry, database, API, and audit-log mounts already included. Automated GitOps rollouts and single-commit rollbacks make deployments reliable, auditable, and fast. This helps application developers focus on features instead of plumbing, gives AppSec consistent policy enforcement, allows SecOps to maintain standardized audit trails, helps PlatformOps simplify operations, and delivers faster time to market with reduced risk for the business.

Ready to streamline your pipeline and enforce security? Give it a try in your next project by defining your stack in Compose, then adding Bridge to automate manifest generation and GitOps rollouts.
Source: https://blog.docker.com/feed/