Microsoft named a Leader in the IDC MarketScape: Worldwide API Management 2026 Vendor Assessment

In this article

Built on a proven foundation extending into AI
One platform to scale APIs and AI
Governance by design for AI at scale
Turning AI innovation into business impact
Expanding the platform for what’s next

As AI moves into production, how systems interact is fundamentally changing. Organizations must now manage not just APIs, but how AI systems operate across the enterprise.

We’re proud to share that Microsoft has been named a Leader in the IDC MarketScape: Worldwide API Management 2026 Vendor Assessment (#US52034025, March 2026). We believe this recognition reflects our focus on helping organizations securely scale APIs and AI together with the control, visibility, and reliability required for production.

Read why Microsoft was named a Leader in the IDC MarketScape: Worldwide API Management 2026 Vendor Assessment

Built on a proven foundation extending into AI

For more than a decade, Azure API Management has served as a trusted control plane for API governance, security, and observability at a global scale, supporting over 38,000 customers, nearly 3 million APIs, and more than 3 trillion API requests each month. That foundation is now extending to a new class of workloads.

As organizations bring AI into production, they must govern a growing mix of API traffic and AI-driven interactions, each with new governance needs, cost dynamics, and reliability requirements. What was once about connecting systems and exposing APIs is now an operational challenge at scale. Organizations must continuously manage how models, tools, and agents behave in production, controlling cost, enforcing policies, and ensuring reliability across multi-provider AI traffic.

AI gateway capabilities in API Management build on this foundation, extending API Management’s proven API governance to AI workloads. Today, more than 2,000 enterprise customers are already using these capabilities to safely operationalize AI.

One platform to scale APIs and AI

To meet this shift, organizations need a simpler model: one platform that brings consistency across both APIs and AI.

Azure API Management provides a single, Azure-native platform to govern everything from traditional APIs to AI models, tools, and agents, built on a foundation proven at enterprise scale. This allows organizations to move faster with AI without losing control, visibility, or consistency as they scale. By standardizing how systems connect and interact, teams can reduce fragmentation, simplify operations, and create a trusted foundation for innovation across the business.

Learn more about Azure API Management

This approach is already delivering results on a global scale. Heineken uses Azure API Management as the backbone of its global API platform, enabling teams to build and scale digital experiences faster while maintaining a consistent, centrally governed foundation. In just five months, Heineken built and deployed a worldwide API platform now handling 50 million API calls per month, achieving 100% uptime since go-live, and reducing cost per API call by up to 75% through standardized governance and security at scale.

Governance by design for AI at scale

As AI adoption grows, the challenge shifts from building models to operating them reliably in production. Organizations need a consistent way to control how those AI systems behave once deployed.

Azure API Management provides that governance layer, allowing organizations to define how AI systems access models, tools, and agents, while enforcing security policies, monitoring usage, and maintaining control over cost and behavior across environments. This ensures every interaction is secure, observable, and aligned with business and compliance requirements.
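To make the kind of control described above concrete, here is a minimal Python sketch of per-consumer token budgeting, one of the controls an AI gateway applies in front of model endpoints. The class, method names, and thresholds are hypothetical, purely for illustration; Azure API Management expresses such rules declaratively through its policy configuration rather than in application code.

```python
# Hypothetical sketch of per-consumer token budgeting -- the kind of
# cost control an AI gateway enforces in front of model endpoints.
# Illustrative only; not Azure API Management's actual policy engine.
from dataclasses import dataclass, field

@dataclass
class TokenBudget:
    limit_per_window: int                       # max tokens per consumer per window
    used: dict = field(default_factory=dict)    # consumer id -> tokens spent

    def try_consume(self, consumer_id: str, tokens: int) -> bool:
        """Admit the request only if the consumer stays within its budget."""
        spent = self.used.get(consumer_id, 0)
        if spent + tokens > self.limit_per_window:
            return False                        # a gateway would answer 429 here
        self.used[consumer_id] = spent + tokens
        return True

budget = TokenBudget(limit_per_window=1000)
print(budget.try_consume("team-a", 800))   # True: within budget
print(budget.try_consume("team-a", 400))   # False: would exceed 1000
print(budget.try_consume("team-b", 400))   # True: each consumer has its own budget
```

Keeping this enforcement in a central gateway, rather than in each application, is what makes the behavior consistent across teams and environments.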

This approach is already proving essential in real-world deployments. Banco Bradesco uses Azure API Management to securely manage AI services and APIs across channels, applying centralized governance and end-to-end visibility. By standardizing how APIs and AI services are exposed and consumed, the bank ensures consistent security policies, improves monitoring across interactions, and supports high-scale digital banking experiences with strong data protection.

With Microsoft Azure API Management, we securely manage AI services and APIs across all channels. It’s the backbone of our architecture, scaling with demand while maintaining strict governance and data protection.
 —Phelipi Dal’Olio, Bridge Manager, Banco Bradesco

Turning AI innovation into business impact

With governance in place, organizations can move beyond experimentation and focus on delivering real business value with AI.

Telefónica Brasil is using Azure OpenAI to enhance customer interactions across digital channels. This improves service experiences, accelerates response times, and enables more personalized engagement at scale.

At the same time, Access Group embedded AI directly into its product portfolio. Using Azure API Management as the foundation of its AI gateway, Access launched over 50 AI-powered products in a single year and scaled to 2.2 million users. The company also achieved ISO 42001 certification for responsible AI, demonstrating how governance can accelerate innovation.

Air India deployed a generative AI assistant at scale. It now handles up to 40,000 customer queries per day, has resolved over 13 million conversations, and operates with a 97% success rate. This allows the airline to scale customer support without increasing agent volume while saving millions annually.

Azure API Management supports this shift by providing a consistent way to expose, secure, and manage the APIs that power these AI-driven experiences, helping organizations move from isolated innovation to production-ready, enterprise-scale impact.

Expanding the platform for what’s next

As organizations adopt new interaction patterns across APIs and AI systems, the platform continues to evolve. Azure API Management is expanding to support emerging scenarios, including governed agent interactions, exposing APIs as reusable tools for AI systems, and enabling centralized discovery and policy enforcement across environments. This ensures organizations can adopt new capabilities without introducing fragmentation or losing control.

As organizations continue to invest in AI, the ability to govern how systems and AI interact at scale will become a defining capability. API management is evolving from connecting systems to enabling controlled, trusted interaction across the enterprise.

We’re honored to be named a Leader in the IDC MarketScape: Worldwide API Management 2026 Vendor Assessment, and we remain committed to helping organizations scale APIs and AI with confidence.

Explore Azure API Management


The post Microsoft named a Leader in the IDC MarketScape: Worldwide API Management 2026 Vendor Assessment appeared first on Microsoft Azure Blog.
Source: Azure

Amazon Bedrock now offers OpenAI models, Codex, and Managed Agents (Limited Preview)

AWS and OpenAI are expanding their partnership to bring frontier intelligence to the infrastructure millions of organizations already trust. Enterprises want the most capable AI models and agents, with the security, operational maturity, and data governance that production workloads demand. Today, we’re bringing those together with three new offerings on Amazon Bedrock, all in limited preview: the latest OpenAI models, Codex, and Managed Agents powered by OpenAI.
First, the latest OpenAI models are available on Amazon Bedrock. For the first time, AWS customers can access OpenAI frontier models through the same Bedrock services they already use for model access, fine-tuning, and orchestration. OpenAI models on Bedrock inherit the enterprise controls customers depend on, including IAM, AWS PrivateLink, guardrails, encryption, and CloudTrail logging.

Second, Codex on Amazon Bedrock brings the OpenAI coding agent into the AWS environments where enterprise teams already build. Customers authenticate with AWS credentials and run inference through Bedrock. Codex will be available through Bedrock via the Codex CLI, desktop app, and VS Code extension. Usage of both OpenAI models and Codex can be applied toward existing AWS cloud commitments.

Lastly, Amazon Bedrock Managed Agents, powered by OpenAI, makes it fast to deploy production-ready OpenAI-powered agents on AWS. At the core are the latest OpenAI frontier models and the OpenAI agent harness, engineered for faster execution, sharper reasoning, and reliable steering of long-running tasks. Every agent has its own identity, logs each action, and runs in your environment with all inference on Amazon Bedrock. Managed Agents works with Amazon Bedrock AgentCore, which provides the default compute environment.
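"The same Bedrock services they already use" refers to the standard runtime APIs. As a hedged sketch, here is what shaping a request for Bedrock's Converse API looks like with boto3; the model ID below is a placeholder, since identifiers for the limited-preview OpenAI models are not given here, and the code only builds the request rather than sending it.

```python
# Sketch: shaping a Bedrock Converse request for an OpenAI model.
# The modelId is a placeholder -- actual identifiers for the
# limited-preview OpenAI models are not specified in the announcement.
def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Return keyword arguments for the bedrock-runtime Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_converse_request("openai.example-model-v1",
                                 "Summarize our Q3 API traffic.")
# With AWS credentials configured, this would be sent as:
#   boto3.client("bedrock-runtime").converse(**request)
# so the call flows through IAM, CloudTrail, and the other controls above.
print(request["modelId"])
```

Because the invocation goes through the standard bedrock-runtime client, the enterprise controls mentioned above (IAM policies, PrivateLink, CloudTrail logging) apply without any model-specific setup.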
Read the blog to learn more. To follow our progress and be among the first to hear about the latest updates, register here.
Source: aws.amazon.com

Amazon Connect Talent for AI-powered hiring (now available in Preview)

Amazon Connect Talent is now available in Preview, giving talent acquisition leaders an AI-powered hiring solution that accelerates candidate selection at scale. Informed by decades of Amazon’s hiring science, Amazon Connect Talent uses AI agents to conduct structured voice interviews, administer science-backed assessments, and score candidates consistently — freeing recruiters to focus on strategic decisions. Candidates interview 24/7 from any device. Recruiters review scores, transcripts, and detailed candidate evaluations generated by their AI teammate — empowering them to make faster hiring decisions with consistent objectivity. Preview capabilities include AI-driven skills assessments, AI-led voice interviews with adaptive questioning, a brand-customizable mobile-first candidate portal, a comprehensive recruiter dashboard, system admin onboarding tools, and Applicant Tracking System (ATS) integrations for quick deployment. Amazon Connect Talent scales to handle hiring surges, evaluating hundreds of candidates simultaneously. Amazon Connect Talent is available in AWS US East (N. Virginia) and US West (Oregon) regions. To learn more and request access, visit the Amazon Connect Talent page.
Source: aws.amazon.com

Amazon EC2 C8gn instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the AWS Europe (Milan) and Asia Pacific (Hong Kong) regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. 
Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, CPU-based artificial intelligence and machine learning (AI/ML) inference. 
For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 120 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. 
C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm, Ireland, London, Spain, Zurich, Milan), Asia Pacific (Singapore, Malaysia, Sydney, Thailand, Mumbai, Seoul, Melbourne, Jakarta, Hyderabad, Tokyo, Hong Kong), Middle East (UAE), Africa (Cape Town), Canada West (Calgary, Central), South America (Sao Paulo), and AWS GovCloud (US-East, US-West). To learn more, see Amazon C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
Source: aws.amazon.com

AWS Cost Optimization Hub now supports CSV download

AWS Cost Optimization Hub now supports direct CSV download in the console, enabling you to export your cost optimization recommendations to your local machine with a single click. This complements the existing Data Export feature for automated exports to Amazon S3.
With CSV download, you can instantly export recommendations that use your current console filters, sorting preferences, and grouping settings. The download begins immediately, making it easy to analyze recommendations in spreadsheet applications, share with stakeholders who don’t have AWS console access, or work with recommendations offline in your preferred tools.
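Once downloaded, the CSV is plain tabular data, so offline analysis needs nothing beyond standard tooling. A small sketch of summarizing estimated savings by resource type follows; the column names and values here are illustrative stand-ins, not the export's documented schema, so check the headers of your actual file, which reflect your current console view.

```python
# Sketch: summarizing a downloaded Cost Optimization Hub CSV offline.
# Column names and rows below are illustrative, not the real export schema.
import csv
import io
from collections import defaultdict

sample_export = """resource_type,action,estimated_monthly_savings
EC2 Instance,Rightsize,120.50
EBS Volume,Delete,15.00
EC2 Instance,Stop,80.25
"""

savings_by_type = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample_export)):
    savings_by_type[row["resource_type"]] += float(row["estimated_monthly_savings"])

for rtype, total in sorted(savings_by_type.items()):
    print(f"{rtype}: ${total:.2f}/month")
```

In practice you would replace the in-memory sample with `open("recommendations.csv")` and adapt the field names to the exported headers.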
This feature is available now in all regions where AWS Cost Optimization Hub is offered. To learn more, visit the Cost Optimization Hub page.
Source: aws.amazon.com

AWS Glue 5.1 is now available in all AWS Commercial and AWS GovCloud (US) Regions

AWS Glue 5.1 is now available in the Asia Pacific (New Zealand), AWS GovCloud (US-West), and AWS GovCloud (US-East) Regions. AWS Glue is a serverless, scalable data integration service that simplifies discovering, preparing, moving, and integrating data from multiple sources. With this expansion, AWS Glue 5.1 is now available in all AWS Commercial and AWS GovCloud (US) Regions.

AWS Glue 5.1 upgrades core engines to Apache Spark 3.5.6, Python 3.11, and Scala 2.12.18, bringing performance and security enhancements. This release also updates support for open table format libraries, including Apache Hudi 1.0.2, Apache Iceberg 1.10.0, and Delta Lake 3.3.2. Additionally, AWS Glue 5.1 introduces support for Apache Iceberg format version 3.0, adding default column values, deletion vectors for merge-on-read tables, multi-argument transforms, and row lineage tracking.

This release extends AWS Lake Formation fine-grained access control to write operations (both DML and DDL) for Spark DataFrames and Spark SQL; previously, this capability was limited to read operations. AWS Glue 5.1 also adds full-table access control in Apache Spark for Apache Hudi and Delta Lake tables, providing more comprehensive security options for your data.

You can get started with AWS Glue 5.1 using the AWS APIs, AWS CLI, AWS SDKs, or AWS Glue Studio. To learn more, visit the AWS Glue product page and our documentation.
Source: aws.amazon.com