Google monetizes AI: First ads in Google's AI Mode
Google is starting to monetize its AI-powered search results, betting on a head start over competitors such as ChatGPT. (Google, AI)
Source: Golem
On November 12-14, the Docker team was out in force at JFrog SwampUP Berlin 2025. We joined technical sessions, hosted a fireside chat, and had conversations with attendees. We’d like to thank the folks at JFrog for having us and putting on such a great show!
Here are our takeaways from the event on software supply chain security trends:
Software supply chain attacks reach unprecedented scale leveraging open source packages
An analysis of recent software supply chain attacks by JFrog’s CTO Asaf Karas shed light on how malicious actors leverage AI and software supply chains in their exploits. Recent attacks combine existing techniques, such as phishing, with AI prompts that recursively write and execute code in order to compromise hundreds of thousands of systems running popular open source packages. A few examples include Shai Hulud, Red Donkey, and the recent NPM package phishing attack. So far, despite these attacks’ scale, damage has been limited because the exploits are still rudimentary. Expect more software supply chain attacks, and more sophisticated ones, in the coming year.
New Roles of Governance as a Security Layer
The best way to avoid software supply chain attacks is to keep malicious code from entering software supply chains in the first place. That’s where governance comes into play. Taking control of gate points during the software development lifecycle, for example during dependency scanning, build pipelines, and deployments, is not enough. Malicious or risky code must be blocked before it enters the software supply chain. Beyond that, tools need increased interoperability to detect all potential attack vectors.
Addressing MCP Challenges in AI Development
MCP’s ability to leverage both deterministic and non-deterministic outcomes by connecting an LLM client to many different servers seems to be the main reason companies are betting on the technology to build applications that deliver value to customers. Moreover, because each server can run independently from the others, it becomes possible to add governance layers on MCP servers, reducing the risk of hallucination or unexpected results. Overall, we agree with JFrog’s assessment and look forward to opportunities where Docker and JFrog MCP technologies can work together for a safer and smoother enterprise AI developer experience.
Building on Strong Open Source Foundations Is Core in the AI Era
The fireside chat between Gal Marder, JFrog’s Chief Strategy Officer, and Michael Donovan, Docker’s VP of Product, explored how organizations can protect themselves from risks in unverified open source dependencies. They emphasized the importance of starting with strong foundations: using hardened images, maintaining them throughout their lifecycle, including those that have reached end of life, and ensuring visibility and governance across every stage. Strong third-party integrations are essential to manage this complexity effectively and extend security and trust from development to delivery.
Conclusion: Build strong foundations, keep it consistent, stay ahead
Software development is changing fast as AI becomes part of every workflow, for developers and attackers alike. The best way to stay ahead is to build protection early, starting with strong foundations, and to keep it consistent across every stage with governance, visibility, and strong partnerships. Only then can teams innovate with confidence and speed as the landscape evolves. Exciting times!
Learn more
Subscribe to the Docker Navigator Newsletter
Explore the MCP Catalog: Discover containerized, security-hardened MCP servers
Explore the DHI Catalog: Discover secure, minimal, production-ready container images
Docker Partner Programs: Discover trusted partners, tools, and integrations
New to Docker? Create an account
Have questions? The Docker community is here to help
Source: https://blog.docker.com/feed/
Amazon EC2 Image Builder now supports automatic versioning for recipes and automatic build version incrementing for components, reducing the overhead of managing versions manually. You can increment versions automatically and dynamically reference the latest compatible versions in your pipelines without manual updates.

With automatic versioning, you no longer need to manually track and increment version numbers when creating new versions of your recipes. Simply place a single ‘x’ placeholder in any position of the version number, and Image Builder detects the latest existing version and automatically increments that position. For components, Image Builder automatically increments the build version when you create a component with the same name and semantic version. When referencing resources in your configurations, wildcard patterns automatically resolve to the highest available version matching the specified pattern, ensuring your pipelines always use the latest versions.

Auto-versioning is available in all AWS Regions, including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions. You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK. Refer to the documentation to learn more about recipes, components, and semantic versioning.
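The placeholder resolution described above can be sketched in a few lines of plain Python (this is an illustration of the semantics, not the Image Builder implementation; the starting value for a fresh version line is an assumption):

```python
def resolve_version(pattern: str, existing: list[str]) -> str:
    """Resolve an Image Builder-style version pattern such as '1.0.x'.

    Finds the highest existing version matching the fixed positions,
    then increments the 'x' position; starts at 0 if nothing matches
    (the starting value is this sketch's convention).
    """
    parts = pattern.split(".")
    idx = parts.index("x")  # position of the wildcard
    matches = []
    for version in existing:
        vp = version.split(".")
        if len(vp) == len(parts) and all(
            vp[i] == parts[i] for i in range(len(parts)) if i != idx
        ):
            matches.append(int(vp[idx]))
    parts[idx] = str(max(matches) + 1 if matches else 0)
    return ".".join(parts)

# Hypothetical existing recipe versions
existing = ["1.0.0", "1.0.1", "1.1.0"]
print(resolve_version("1.0.x", existing))  # -> 1.0.2
print(resolve_version("2.0.x", existing))  # -> 2.0.0
```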
Source: aws.amazon.com
AWS Device Farm enables mobile and web developers to test their apps using real mobile devices and desktop browsers. Starting today, you can connect to a fully managed Appium endpoint using only a few lines of code and run interactive tests on multiple physical devices directly from your IDE or local machine. This feature also seamlessly works with third-party tools such as Appium Inspector — both hosted and local versions — for all actions including element inspection.
Support for live video and log streaming enables you to get faster test feedback within your local workflow. It complements our existing server-side execution which gives you the scale and control to run secure enterprise-grade workloads. Taken together, Device Farm now offers you the ability to author, inspect, debug, test, and release mobile apps faster, whether from your IDE, AWS Console, or other environments.
To learn more, see Appium Testing in AWS Device Farm Developer Guide.
Source: aws.amazon.com
Today, AWS Payment Cryptography announces support for hybrid post-quantum (PQ) TLS to secure API calls. With this launch, customers can future-proof transmissions of sensitive data and commands using ML-KEM post-quantum cryptography. Enterprises operating highly regulated workloads want to reduce the post-quantum risk of “harvest now, decrypt later”: long-lived data in transit can be recorded today, then decrypted in the future when a sufficiently capable quantum computer becomes available. With today’s launch, AWS Payment Cryptography joins data protection services such as AWS Key Management Service (KMS) in addressing this concern by supporting PQ-TLS.

To get started, simply ensure that your application depends on a version of the AWS SDK or a browser that supports PQ-TLS. For detailed guidance by language and platform, visit the PQ-TLS enablement documentation. Customers can also validate that ML-KEM was used to secure the TLS session for an API call by reviewing tlsDetails for the corresponding CloudTrail event in the console or a configured CloudTrail trail.

These capabilities are generally available in all AWS Regions at no added cost. To get started with PQ-TLS and Payment Cryptography, see our post-quantum TLS guide. For more information about PQC at AWS, please see PQC shared responsibility.
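Checking tlsDetails programmatically might look like the following sketch (plain Python over a hypothetical, abbreviated CloudTrail record; the keyExchange field and the exact string carrying the ML-KEM group name are assumptions, so consult the CloudTrail event reference for the authoritative field names):

```python
import json

# Hypothetical, abbreviated CloudTrail event record; field names beyond
# tlsVersion/cipherSuite are assumptions for illustration only
event_json = """
{
  "eventSource": "payment-cryptography.amazonaws.com",
  "tlsDetails": {
    "tlsVersion": "TLSv1.3",
    "cipherSuite": "TLS_AES_256_GCM_SHA384",
    "keyExchange": "X25519MLKEM768"
  }
}
"""

def used_ml_kem(event: dict) -> bool:
    """Return True if the recorded TLS session mentions an ML-KEM hybrid group."""
    details = event.get("tlsDetails", {})
    # Hybrid PQ groups embed 'MLKEM' in the negotiated group name
    return "MLKEM" in json.dumps(details).upper()

event = json.loads(event_json)
print(used_ml_kem(event))  # True for this sample record
```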
Source: aws.amazon.com
Amazon SageMaker now supports Amazon Athena for Apache Spark, bringing a new notebook experience and fast, serverless Spark together within a unified workspace. Data engineers, analysts, and data scientists can easily query data, run Python code, develop jobs, train models, visualize data, and work with AI from one place, with no infrastructure to manage and per-second billing. Athena for Apache Spark scales in seconds to support any workload, from interactive queries to petabyte-scale jobs. It now runs on Spark 3.5.6, the same high-performance Spark engine available across AWS, optimized for open table formats including Apache Iceberg and Delta Lake. It brings new debugging features, real-time monitoring in the Spark UI, and secure interactive cluster communication through Spark Connect. As you use these capabilities to work with your data, Athena for Spark now enforces table-level access controls defined in AWS Lake Formation.
Athena for Apache Spark is now available with Amazon SageMaker notebooks in US East (Ohio), US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, visit Apache Spark engine version 3.5, read the AWS News Blog or visit Amazon SageMaker documentation. Visit the Getting Started guide to try it from Amazon SageMaker notebooks.
Source: aws.amazon.com
Amazon EMR Serverless now supports Apache Spark 4.0.1 (preview). With Spark 4.0.1, you can build and maintain data pipelines more easily with ANSI SQL and VARIANT data types, strengthen compliance and governance frameworks with the Apache Iceberg v3 table format, and deploy new real-time applications faster with enhanced streaming capabilities. This enables your teams to reduce technical debt and iterate more quickly, while ensuring data accuracy and consistency.

With Spark 4.0.1, you can build data pipelines with standard ANSI SQL, making them accessible to a larger set of users who don’t know programming languages like Python or Scala. Spark 4.0.1 natively supports JSON and semi-structured data through VARIANT data types, providing flexibility for handling diverse data formats. You can strengthen compliance and governance through the Apache Iceberg v3 table format, which provides transaction guarantees and tracks how your data changes over time, creating the audit trails you need for regulatory requirements. You can deploy real-time applications faster with improved streaming controls that let you manage complex stateful operations and monitor streaming jobs more easily, supporting use cases like fraud detection and real-time personalization.

Apache Spark 4.0.1 is available in preview in all Regions where EMR Serverless is available, excluding the China and AWS GovCloud (US) Regions. To learn more about Apache Spark 4.0.1 on Amazon EMR, visit the Amazon EMR Serverless release notes, or get started by creating an EMR application with Spark 4.0.1 from the AWS Management Console.
Source: aws.amazon.com
The development of Windows 1.0 was plagued by shouting matches and Bill Gates’ ego. No wonder it was delayed by years. By Oliver Nickel (Windows, Microsoft)
Source: Golem
Innovation in AI is about empowering every developer and organization with the freedom to choose the right intelligence for every challenge. In today’s landscape, where business needs span from real-time chatbots to deep research agents, model choice is an essential engine of progress.
Microsoft Foundry already offers the widest selection of models of any cloud and with today’s partnership announcement with Anthropic, we’re proud that Azure is now the only cloud providing access to both Claude and GPT frontier models to customers on one platform. This milestone expands Foundry further into what it was built to be: a single place to use any model, any framework, and every enterprise control you need to build and run AI apps and agents at scale.
“We’re excited to use Anthropic Claude models from Microsoft Foundry. Having Claude’s advanced reasoning alongside GPT models in one platform gives us flexibility to build scalable, enterprise-grade workflows that move far beyond prototypes.” — Michele Catasta, President, Replit
Start building with Claude in Microsoft Foundry today
Meet the Claude models: AI that delivers real results
According to Anthropic, Claude models are engineered for the realities of enterprise development, from tight integration with productivity tools to deep, multi-document research and agentic software development across large repositories.
Claude Haiku 4.5: the fastest, most cost-efficient model. Ideal for powering free-tier user experiences, real-time experiences, coding sub-agents, financial sub-agents, research sub-agents, and business tasks.
Claude Sonnet 4.5: the smartest model for complex agents and coding. Ideal for long-running agents, coding, cybersecurity, financial analysis, computer use, and research.
Claude Opus 4.1: an exceptional model for specialized reasoning tasks. Ideal for advanced coding, long-horizon tasks and complex problem solving, AI agents, agentic search and research, and content creation.
All Claude models are built on Constitutional AI for safety and can now be deployed through Foundry with governance, observability, and rapid integration. This enables secure use cases like customer support agents, coding agents, and research copilots, making Claude an ideal choice for scalable, trustworthy AI.
Evolving from monolithic apps to intelligent agents
Across the tech landscape, organizations are embracing agentic AI systems. Early studies show AI agents can help boost efficiency by up to 30% for teams and stakeholders. But the challenge for most enterprises isn’t building powerful apps; it’s operationalizing them and weaving them into real workflows. Industry surveys point to a clear pattern: 78% of executives say the primary barrier to scaling AI impact is integrating it into core business processes.
Microsoft is uniquely positioned to address this integration gap. With Foundry, we’re bringing together leading-edge reasoning models, an open platform for innovation, and Responsible AI all within a unified environment. This empowers organizations to experiment, iterate, deploy, and scale AI with confidence, all backed by robust governance and security. This means building AI solutions that are not only powerful, but practical and ready to deliver impact at scale.
“Manus deeply utilizes Anthropic’s Claude models because of their strong capabilities in coding and long-horizon task planning, together with their prowess to handle agentic tasks. We are very excited to be using them now on Azure AI Foundry!” — Tao Zhang, Co-founder & Chief Product Officer, Manus AI.
Claude in Foundry Agent Service: From reasoning to results
Inside Foundry Agent Service, Claude models serve as the reasoning core behind intelligent, goal-driven agents. Developers can:
Plan multi-step workflows: Leverage Claude in Foundry Agent Service to orchestrate complex, multi-stage tasks with structured reasoning and long-context understanding.
Streamline AI integration with your everyday productivity tools: Use the Model Context Protocol (MCP) to seamlessly connect Claude to data fetchers, pipelines, and external APIs, enabling dynamic actions across your stack.
Automate data operations: Upload files for Claude to summarize, classify, or extract insights to accelerate document-driven processes with robust AI.
Real-time model selection: Using the model router, customers can soon automatically route requests to Claude Opus 4.1, Sonnet 4.5, and Haiku 4.5, lowering latency and delivering cost savings in production.
Govern and operate your fleet: Foundry offers unified controls and oversight, allowing developers to operate their entire agent fleet with clear insight into cost, performance, and behavior in one connected view.
Developers can also use Claude models in Microsoft Foundry with Claude Code, Anthropic’s AI coding agent.
These capabilities create a framework for AI agents to safely execute complex workflows with minimal human involvement. For example, if a deployment fails, Claude can query Azure DevOps logs, diagnose the root cause, recommend a fix, and trigger a patch deployment, all automatically, using registered tools and operating within governed Azure workflows.
Claude Skills: Modular intelligence you can compose
With the Claude API, developers can define Skills, modular building blocks that combine:
Natural-language instructions,
Optional Python or Bash code, and
Linked data files (templates, visual assets, tabular data, etc.) or APIs.
Each skill is dynamically discovered, maximizing your agent’s context. Skills automate workflows like generating reports, cleaning datasets, or assembling PowerPoint summaries, and can be reused or chained with others to form larger automations. Within Microsoft Foundry, every Skill is governed, traceable, and version-controlled, ensuring reliability across teams and projects.
These capabilities allow developers to create Skills that become reusable building blocks for intelligent automation. For example, instead of embedding complex logic in prompts, a Skill can teach Claude how to interact with a system, execute code, analyze data, or transform content; through the Model Context Protocol (MCP), those Skills can be invoked by any agent as part of a larger workflow. This makes it easier to standardize expertise, ensure consistency, and scale automation across teams and applications.
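The idea of a Skill as instructions plus optional code and linked data can be sketched without any framework (this is a conceptual stand-in, not the Claude API; all names here are illustrative):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Skill:
    """Illustrative stand-in for a Claude Skill: instructions plus optional code/data."""
    name: str
    instructions: str                            # natural-language guidance for the model
    run: Optional[Callable[[str], str]] = None   # optional executable step
    data_files: List[str] = field(default_factory=list)  # linked templates/assets

# A hypothetical dataset-cleaning skill that could be chained into a workflow
clean = Skill(
    name="clean-dataset",
    instructions="Normalize column names and drop empty rows before analysis.",
    run=lambda text: "\n".join(line for line in text.splitlines() if line.strip()),
)

raw = "id,name\n\n1,Ada\n"
print(clean.run(raw))  # drops the blank row: "id,name\n1,Ada"
```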
Custom Deep Research: Context that connects beyond a single prompt
Claude’s Deep Research capability extends model reasoning beyond static queries. It allows agents to gather information from live sources, compare it with internal enterprise data, and produce well-reasoned, source-grounded insights. This transforms agents from simple responders into analytical systems capable of synthesizing trends, evidence, and context at scale.
Pricing
Marketplace model pricing (Global Standard deployment; Azure resource endpoints: East US 2, West US; prices per 1M tokens):

Claude Haiku 4.5: $1.00 input / $5.00 output
Claude Sonnet 4.5: $3.00 input / $15.00 output
Claude Opus 4.1: $15.00 input / $75.00 output
Looking ahead
Our partnership with Anthropic is about more than just bringing new models to Foundry. It’s about empowering every person and organization to achieve more with AI. We look forward to seeing how developers and enterprises leverage these new capabilities to build the next generation of intelligent systems.
Ready to explore Claude in Foundry? Start building today and join us in shaping the next generation of intelligent agents. Tune in to Ignite for more exciting Microsoft Foundry announcements: register today.
The post Introducing Anthropic’s Claude models in Microsoft Foundry: Bringing Frontier intelligence to Azure appeared first on Microsoft Azure Blog.
Source: Azure
One year ago, at Microsoft Ignite, we set out to redefine enterprise intelligence with Foundry. Our conviction was clear: software would evolve beyond rigid workflows, becoming systems that reason, adapt, and act with purpose. We envisioned developers moving from prescriptive logic to shaping intelligent behavior.
Today, that transformation is accelerating across industries and organizations of every size. The shift is tangible: agents are no longer just assistants, they are dynamic collaborators, seamlessly integrated into the tools we use every day. For builders, agents are reshaping software, and we are delivering a platform that empowers every developer and every business to embrace this moment with confidence and control.
Microsoft Foundry helps builders everywhere turn vision into reality with a modular, interoperable, and secure agent stack. From code to cloud, today’s announcements demonstrate our focus on empowering developers with a powerful, simple—and trusted—path to production AI apps and agents. Here is the TL;DR:
Foundry Models added new models from Anthropic, Cohere, NVIDIA, and more. Model router is now generally available in Microsoft Foundry and in public preview in Foundry Agent Service.
Foundry IQ, now in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process, simplifying orchestration and improving response quality.
Foundry Agent Service now offers Hosted Agents, multi-agent workflows, built-in memory, and the ability to deploy agents directly to Microsoft 365 and Agent 365 in public preview.
Foundry Tools empowers developers to create agents with secure, real-time access to business systems, business logic, and multimodal capabilities.
Foundry Control Plane, now in public preview, centralizes identity, policy, observability, and security signals and capabilities for AI developers in one portal. GitHub Advanced Security and Microsoft Defender integration, now in public preview, helps improve collaboration between security and development teams across the full app lifecycle.
Foundry Local is now in private preview on Android, the world’s most widely used mobile platform.
Managed Instance on Azure App Service, now in public preview, helps organizations move their web applications to the cloud with just a few configuration changes.
Next-level productivity: AI-powered tools for builders
It all starts with developers, and GitHub is the world’s largest developer community, now serving over 180 million developers. AI-powered tools and agents in GitHub are helping developers move faster, build increasingly innovative apps, and modernize legacy systems more efficiently. More than 500 million pull requests were merged using AI coding agents this year, and with AgentHQ, coding agents like Codex, Claude Code, and Jules will soon be available directly in GitHub and Visual Studio Code so developers can go from idea to implementation faster. GitHub Copilot, the world’s most popular AI pair programmer, now serves over 26 million users, helping organizations like Pantone, Ahold Delhaize USA, and Commerzbank streamline processes and save time.
Over the last year, developers have moved from experimentation to production. They need tools that let them design, test, monitor, and improve intelligent systems with the same confidence they have in traditional software. That’s why we built a new generation of AI-powered tools: GitHub Agent HQ for unified agent management, Custom Agents to encode domain expertise, and “bring your own models” to empower teams to adapt and innovate. With Copilot Metrics, teams evolve with data, not guesswork.
We’re committed to giving every developer the tools to design, test, and improve intelligent systems, so they can turn ideas into impact, faster than ever. Managed Instance on Azure App Service, now in public preview, lets organizations move existing .NET applications to the cloud with only a few configuration changes.
Enter Microsoft Foundry: The AI app and agent factory
Enterprises need a consistent foundation to build intelligence at scale. With Microsoft Foundry, we’re unifying models, tools, and knowledge into one open system, empowering organizations to run high-performing agent fleets and intelligent workflows across their business.
Today, teams can choose from over 11,000 frontier models in Foundry, including optimized solutions for scale and specialized models for scientific and industrial breakthroughs. I’m proud to announce Rosetta Fold 3, a next-generation biomolecular structure prediction model developed with the Institute for Protein Design and Microsoft’s AI for Good Lab. Models like these enable researchers and enterprises to tackle the world’s hardest problems with state-of-the-art technology.
Build AI agents with Microsoft Foundry
Here is our top Ignite news for Foundry:
1. Use the right model for every task with Foundry Models
Innovation thrives on adaptability and choice. With more than 11,000 models, Microsoft Foundry offers the broadest model selection on any cloud. Foundry Models empowers developers to benchmark, compare, and dynamically route models to optimize performance for every task.
Today’s announcements include:
Starting today, Anthropic’s Claude Sonnet 4.5, Opus 4.1, and Haiku 4.5 models are available in Foundry, advancing our mission to give customers choice across the industry’s leading frontier models, and making Azure the only cloud offering both OpenAI and Anthropic models. Also this week, Cohere’s leading models join Foundry’s first-party model lineup, providing ultimate model choice and flexibility.
Model router (generally available) enables AI apps and agents to dynamically select the best-fit model for each prompt—balancing cost, performance, and quality. Model router in Foundry Agent Service (public preview) enables developers to build more adaptable and efficient agents, particularly helpful for multi-agent systems.
A new Developer Tier (public preview) makes model fine-tuning more accessible by leveraging idle GPU capacity.
Optimize AI performance with Foundry Models
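The cost/quality trade-off that model router automates can be illustrated with a toy heuristic (this is not Foundry’s routing logic; the thresholds and keyword hints are placeholders, with only the Claude tier names taken from the announcement):

```python
def route(prompt: str) -> str:
    """Toy router: pick a Claude tier from a crude prompt-complexity signal.

    A production router scores prompts with learned models; this sketch
    just uses word count and keyword hints as a stand-in.
    """
    # Naive substring check; a real system would classify intent properly
    hard_markers = ("prove", "refactor", "multi-step", "analyze")
    words = len(prompt.split())
    if words > 200 or any(m in prompt.lower() for m in hard_markers):
        return "claude-opus-4.1"    # deepest reasoning, highest cost
    if words > 40:
        return "claude-sonnet-4.5"  # balanced default
    return "claude-haiku-4.5"       # fast and cheap for short requests

print(route("What time is it?"))                      # claude-haiku-4.5
print(route("Please analyze this quarterly report"))  # claude-opus-4.1
```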
2. Empower agents with knowledge using Foundry IQ
The more context an agent has, the more grounded, productive, and reliable it’s likely to be. Foundry IQ, now available in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process rather than a one-time lookup. Powered by Azure AI Search, it centralizes RAG workflows into a single grounding API, simplifying orchestration and improving response quality while respecting user permissions and data classifications.
Key features include:
Simplified cross-source grounding with no upfront indexing.
Multi-source selection, iterative retrieval, and reflection to dynamically improve the quality of agent interactions.
Foundry Agent Service integration to enrich agent context in a single, observable runtime.
Foundry already powers more than 3 billion search queries per day. By combining Foundry IQ with Microsoft Fabric IQ and Work IQ from Microsoft 365 Copilot, Microsoft provides an unparalleled context layer for agents, helping them connect users with the right information at the right time to make informed decisions.
Start building reliable agents with Foundry IQ
3. Build context-aware, action-oriented agents with Foundry Agent Service
To be force multipliers, agents need access to the same tools and knowledge as the people they support. Foundry Agent Service empowers developers to create sophisticated single and multi-agent systems, connecting models, knowledge, and tools into a single, observable runtime.
Today’s announcements include:
Hosted Agents (public preview) enable developers to run agents built with Microsoft frameworks or third-party frameworks in a fully managed environment, so they can focus on agent logic rather than operational overhead.
Multi-agent workflows (public preview) coordinate specialized agents to execute multi-step business processes using either a visual designer or a code-first API. Workflows enable long-running, stateful collaboration with recovery and debugging built-in.
Memory (public preview) enables agents to securely retain context across sessions, reducing external data-store complexity and enabling more personalized interactions out-of-the-box.
Microsoft 365 and Agent 365 integration (public preview) enables developers to instantly deploy agents from Foundry to Microsoft productivity apps, making it easier to reach users directly within the M365 ecosystem while leveraging Agent 365 for secure orchestration, governance, and enterprise-grade deployment.
Create multi-agent systems with Foundry Agent Service
4. Enable agents to take action using Foundry Tools
The right tools can transform agents from simple responders into intelligent problem-solvers. With Foundry Tools, now in public preview, developers can provide agents with secure, real-time access to business systems, business logic, and multimodal capabilities to deliver business value.
Now, developers can:
Find, connect, and manage public or private MCP tools for agents from a single, secure interface.
Enable agents to act on real-time business data and events using more than 1,400 connectors with business systems such as SAP, Salesforce, and UiPath.
Enrich workflows with out-of-the-box tools such as transcription, translation, and document processing.
Expose any API or function as an MCP tool via API Management, reusing existing business logic to accelerate time-to-value.
Enable AI agents with MCP tools with Foundry Tools
5. Advancing security and trust with Foundry Control Plane
Scaling intelligence requires trust. As organizations rely on agents and AI powered systems for more of their workflows, teams need clearer visibility, stronger guardrails, and faster ways to identify and address risk. This year we’re expanding security and governance with two key announcements: Foundry Control Plane, now in public preview in Microsoft Foundry, and a new integration between Microsoft Defender for Cloud and GitHub Advanced Security, also in public preview. Together they give developers and security teams a more connected way to monitor behavior, guide access, and keep AI systems safe across the full lifecycle.
Foundry Control Plane brings identity, controls, observability, and security together in one place so teams can build, operate, and govern agents with confidence. Key capabilities include:
Controls that apply unified guardrails across inputs, outputs, and tool interactions to keep agents focused, accurate, and within defined boundaries.
Observability with built-in evaluations, OTel-based tracing, continuous red teaming, and dashboards that surface insights on quality, performance, safety, and cost.
Security anchored in Entra Agent ID, Defender, and Purview to provide durable identity, policy-driven access, integrated data protection, and real-time risk detection across the agent lifecycle.
Fleet-wide operations that unify health, cost, performance, risk, and policy coverage for every agent, no matter where it was built or runs, with alerts that surface issues the moment they appear, empowering developers to take action.
Defender for Cloud + GitHub Advanced Security integration
Developers and security teams often work in separate tools and lack shared signals to prioritize risks. The new Defender for Cloud and GitHub Advanced Security integration closes this gap. Developers receive AI suggested fixes directly inside GitHub, while security teams track progress in Defender for Cloud in real time. This gives both sides a faster, more connected way to identify issues, remediate them, and keep AI systems secure throughout the app lifecycle.
Secure your code with GitHub and Microsoft Defender
6. Foundry Local Comes to Android: Powering Cloud to Edge
Six months ago, we launched Foundry Local on Windows and Mac. In that short time, it’s reached 560 million devices, making it one of the fastest-growing runtimes in enterprise history. Leading organizations like NimbleEdge, Morgan Stanley, Dell, and Pieces are already using Local to bring intelligence directly into the environments where work happens, from financial services to healthcare and edge computing.
Today, we’re taking the next step. Foundry Local is now in private preview on Android, the world’s most widely used mobile platform. This means agents can run natively on billions of phones, unlocking real-time inference, privacy-aware computation, and resilience, even where connectivity is unpredictable.
We’re also announcing a new partnership with PhonePe, one of India’s fastest-growing platforms. Together, we’ll bring agentic experiences into everyday consumer applications, showing how Local can transform not just enterprise workflows, but daily life at massive scale.
7. Modernize your web apps for the era of AI in weeks, not months
We see customers building net new AI applications and integrating AI into existing applications. Both require a modern foundation. Managed Instance on Azure App Service, available in public preview, lets organizations move their .NET web applications to the cloud with just a few configuration changes, saving the time and effort of rewriting code. The result is faster migrations with lower overhead, and access to cloud-native scalability, built-in security and AI capabilities in Microsoft Foundry.
Migrate your web apps with Managed Instance on Azure App Service
Learn more and get started with Foundry
We hope you join us at Microsoft Ignite 2025, in-person or virtually, to see these new capabilities in action and learn how they can support your biggest ambitions for your business.
Explore Microsoft Foundry.
Watch our Innovation Session: Your AI Apps and Agent Factory.
Watch all recorded sessions at Ignite.
Chat with us on Discord.
Provide feedback on GitHub.
The post Microsoft Foundry: Scale innovation on a modular, interoperable, and secure agent stack appeared first on Microsoft Azure Blog.
Source: Azure