OpenAI’s GPT-5.5 in Microsoft Foundry: Frontier intelligence on an enterprise ready platform

OpenAI’s GPT-5.5 will be generally available tomorrow in Microsoft Foundry, bringing OpenAI’s latest frontier model to Azure and the enterprise teams building agents for real production work.

GPT-5.5 continues a clear progression in the GPT-5 series. GPT-5 brought unified reasoning and speed into a single system. GPT-5.4 brought stronger multi-step reasoning and early agentic capabilities for enterprise use. GPT-5.5 advances this arc with deeper long-context reasoning, more reliable agentic execution, improved computer-use accuracy, and greater token efficiency—designed for sustained, high-stakes professional workflows.

Powerful models alone aren’t enough to operationalize agentic AI at scale. Microsoft Foundry provides the platform layer that turns frontier models into usable, governable systems that enable enterprises to apply security policy and management at the platform level. Foundry is a unified, interoperable environment to build, optimize, and deploy AI applications and agents with confidence. Customers benefit from broad model choice, open and flexible agent frameworks, native integration with enterprise systems and productivity tools, and enterprise-grade security, compliance, and governance. When new models like GPT-5.5 become available, Foundry makes it easy to evaluate, productionize, and scale them without friction.

Explore models in Microsoft Foundry

What’s new in GPT-5.5

GPT-5.5 is built for professional scenarios where precision, reliability, and persistence matter. GPT-5.5 Pro, a premium variant, extends reasoning depth and task complexity for the most demanding enterprise workloads.

Improved agentic coding and computer-use: Executes multi-step engineering tasks end-to-end—holding context across large systems, diagnosing the root cause of ambiguous failures at the architectural level, and reasoning through what else in the codebase a fix will affect before making a move. It anticipates downstream testing and review requirements without needing to be told, and navigates software interfaces with improved precision and more reliable recovery when execution takes an unexpected turn.

Autonomous execution and research depth: Goes beyond code to handle the full span of professional work—producing polished deliverables like documents, spreadsheets, and presentations. For research-intensive workflows, GPT-5.5 operates as an active collaborator across the entire arc from question to output: refining drafts across multiple passes, stress-testing analytical reasoning, proposing approaches, and synthesizing across documents, data, and code to drive work forward rather than just answering it.

Complex reasoning and long-context analysis: Handles extensive documents, codebases, and multi-session histories without losing the thread.

Token efficiency built for scale: GPT-5.5 reaches higher-quality outputs with fewer tokens and fewer retries—lowering cost and latency for production deployments at scale.

GPT-5.5 is particularly well suited for domains where the cost of imprecision is high—such as software engineering, DevOps, legal, health sciences, and professional services. With GPT-5.5 in Microsoft Foundry, customers can pair OpenAI’s latest frontier model with enterprise-grade infrastructure to put agentic AI into production.

Microsoft Foundry: The operating system for GPT-5.5 agents at scale

Access to a frontier model is just the starting point. What we see from customers is that the hard part isn’t building an agent: it’s running thousands of them in production, with real isolation, identity, and governance. That’s where Foundry Agent Service comes in.

A shift is underway in the market. A developer can now work through a business problem with a coding agent, interacting with a model that does the heavy thinking, researches, and asks clarifying questions. The output is a production agent: a declarative workflow tailored to a specific task and connected to your business systems.

These declarative agents can be defined in YAML or written in a harness like Microsoft Agent Framework, GitHub Copilot SDK, or virtually any library. With hosted agents in Foundry Agent Service, LangGraph, Claude Agent SDK, and OpenAI Agents SDKs all work the same way. Engineers can run a single command to land agents in an isolated sandbox with a persistent filesystem, a distinct Microsoft Entra identity, and scale-to-zero pricing. Enterprise-ready agents, at scale, powered by GPT-5.5.
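To make the idea of a declarative agent concrete, here is a minimal sketch of what such a definition might contain. The field names, the agent name, and the tool connectors below are hypothetical illustrations, not the actual Foundry Agent Service schema; consult the Foundry documentation for the real format.

```python
# Hypothetical shape of a declarative agent definition: a workflow
# bound to a model, instructions, and business-system connectors.
# Field names and tool names are illustrative only.
AGENT_SPEC = {
    "name": "invoice-triage-agent",             # illustrative agent name
    "model": "gpt-5.5",
    "instructions": "Classify incoming invoices and route exceptions.",
    "tools": ["erp_lookup", "ticket_create"],   # hypothetical connectors
}

def validate(spec):
    """Minimal sanity check: required keys are present and non-empty."""
    required = ("name", "model", "instructions")
    return all(spec.get(k) for k in required)

print(validate(AGENT_SPEC))  # → True
```

The point of the declarative style is that the specification, not imperative code, is the deployable artifact: the platform reads it, provisions the sandbox and identity, and wires up the tools.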

Learn more about Foundry Agent Service and Microsoft Agent Framework.

Pricing

| Model | Input ($/M tokens) | Cached Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- | --- |
| GPT-5.5 | $5.00 | $0.50 | $30.00 |
| GPT-5.5 Pro | $30.00 | $3.00 | $180.00 |
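As a quick sanity check on budgeting, the per-million-token rates in the table translate into request costs like this (a simple sketch; the token counts below are arbitrary examples):

```python
# Per-million-token prices from the pricing table above.
PRICES = {
    "gpt-5.5":     {"input": 5.00,  "cached": 0.50, "output": 30.00},
    "gpt-5.5-pro": {"input": 30.00, "cached": 3.00, "output": 180.00},
}

def estimate_cost(model, input_tokens, cached_tokens, output_tokens):
    """Estimate a request's cost in USD from the published rates."""
    p = PRICES[model]
    return (input_tokens * p["input"]
            + cached_tokens * p["cached"]
            + output_tokens * p["output"]) / 1_000_000

# 100k fresh input, 400k cached input, 20k output on GPT-5.5:
# 0.1*$5 + 0.4*$0.50 + 0.02*$30 = $0.50 + $0.20 + $0.60
print(estimate_cost("gpt-5.5", 100_000, 400_000, 20_000))  # → 1.3
```

Note how heavily cached input is discounted (10x cheaper than fresh input), which rewards prompt designs that keep a stable, reusable prefix.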

Get started with Microsoft Foundry

Stop experimenting and start building your next production AI workload with GPT-5.5 in Microsoft Foundry.
The post OpenAI’s GPT-5.5 in Microsoft Foundry: Frontier intelligence on an enterprise ready platform appeared first on Microsoft Azure Blog.
Quelle: Azure

Amazon Connect now provides eight new metrics to measure and improve AI agent performance

Amazon Connect now provides eight new metrics to measure and improve AI agent performance, including goal success rate, faithfulness score, and tool selection accuracy. These metrics offer visibility into the quality of AI-driven customer interactions, enabling measurement and continuous improvement of AI agent outcomes. With this launch, you can monitor whether AI agents successfully resolved customer requests, assess faithfulness, and detect contextual hallucinations. You can also evaluate tool selection and utilization accuracy, and capture customer feedback through thumbs up/down ratings when enabled. You can access these new metrics through Amazon Connect’s AI Agent Performance dashboard, or through the GetMetricDataV2 API and zero-ETL data lake for custom reporting or integration with your existing analytics workflows.
This feature is available in all AWS Regions where Amazon Connect AI Agents is supported. For more information, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, an AI-native solution that turns every customer interaction into a moment worth remembering, visit the Amazon Connect website.
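Programmatic access goes through the `GetMetricDataV2` API mentioned above. A minimal sketch of building such a request follows; the metric name strings are placeholders (the exact identifiers for the new AI agent metrics are listed in the Amazon Connect Administrator Guide), and the ARN is a dummy value:

```python
from datetime import datetime, timedelta, timezone

def build_ai_agent_metrics_request(instance_arn, metric_names, hours=24):
    """Build the parameter dict for connect.get_metric_data_v2.

    Metric names are placeholders; consult the Amazon Connect
    Administrator Guide for the exact identifiers of the new AI
    agent metrics (goal success rate, faithfulness score, etc.).
    """
    end = datetime.now(timezone.utc)
    return {
        "ResourceArn": instance_arn,
        "StartTime": end - timedelta(hours=hours),
        "EndTime": end,
        "Filters": [],  # add queue/channel filters as needed
        "Metrics": [{"Name": name} for name in metric_names],
    }

# Usage (requires AWS credentials; call shown for context):
# import boto3
# connect = boto3.client("connect")
# resp = connect.get_metric_data_v2(**build_ai_agent_metrics_request(
#     "arn:aws:connect:us-east-1:123456789012:instance/example",
#     ["GOAL_SUCCESS_RATE"]))
```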
Quelle: aws.amazon.com

Amazon Bedrock AgentCore Gateway and Identity support VPC egress

Amazon Bedrock AgentCore Gateway and Identity now provide secure and controlled egress traffic management for your applications, enabling seamless communication with resources in your Virtual Private Cloud (VPC). VPC egress for AgentCore Gateway targets and Identity credential providers is offered in both managed and self-managed configurations.
With VPC egress support, customers can now invoke private resources (e.g., EKS-hosted MCP servers) directly from their AgentCore Gateway. Managed VPC egress covers most customer use cases. For more complex networking setups, customers can configure their own VPC Lattice resources. AgentCore Identity VPC egress supports connectivity to Identity Providers (IdPs) running inside a customer’s VPC. This enables two key capabilities: validating inbound access tokens issued by your private IdP and fetching tokens from your IdP for outbound request authentication. Finally, this launch supports private DNS resolution for managed VPC egress resources across Gateway and Identity.
AgentCore Gateway and Identity are available in fourteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm).
Learn more about VPC egress capabilities through AgentCore Gateway documentation, and AgentCore Identity documentation. Get started with the AgentCore CLI.
Quelle: aws.amazon.com

Amazon Quick now integrates with Visier’s Vee agent for workforce intelligence

Amazon Quick now integrates with Vee, the AI assistant from Visier’s people analytics platform, through the model context protocol (MCP). HR business partners, finance managers, and operations leaders can now get governed access to live workforce intelligence from Visier directly within their Amazon Quick workspace without switching tools.
After setting up the connection in Quick using Visier’s remote MCP server, you can ask questions in natural language about headcount, attrition, tenure, and open requisitions and receive answers grounded in Visier’s governed workforce data model. Vee can also be invoked from automated Quick Flows to run recurring workforce reviews or draft documents. Quick intelligently routes relevant prompts to Vee and returns contextualized answers alongside enterprise knowledge – such as budgets, policies, and plans stored in Quick Spaces – so every answer reflects the full organizational picture.
The Visier integration with Amazon Quick is available in all AWS Regions where Amazon Quick is available.
To get started with Amazon Quick, visit the website. To learn more about the Visier integration, read the Visier integration guide, see the blog, and explore more integrations on the integrations page.
Quelle: aws.amazon.com

AWS Lambda Provisioned Mode for Kafka event source mappings (ESMs) now available in AWS Asia Pacific (Taipei) and AWS GovCloud (US) Regions

AWS Lambda now supports Provisioned Mode for event source mappings (ESMs) that subscribe to Apache Kafka event sources in the Asia Pacific (Taipei), AWS GovCloud (US-East), and AWS GovCloud (US-West) Regions. Provisioned Mode allows you to optimize the throughput of your Kafka ESM by provisioning event polling resources that remain ready to handle sudden spikes in traffic, helping you build highly responsive and scalable event-driven Kafka applications with stringent performance requirements.
Customers building streaming data applications often use Kafka as an event source for Lambda functions, relying on Lambda’s fully managed ESM to automatically scale polling resources in response to events. However, for event-driven Kafka applications that need to handle unpredictable bursts of traffic, a lack of control over ESM throughput can lead to delays in your users’ experience. Provisioned Mode for Kafka ESM enables customers to fine-tune the throughput of their Amazon Managed Streaming for Apache Kafka (MSK) ESM or self-managed Kafka ESM by provisioning and auto-scaling between a minimum and maximum number of polling resources called event pollers. With this launch, this feature is now available in three additional Regions.
You can activate Provisioned Mode for MSK ESM or self-managed Kafka ESM by configuring a minimum and maximum number of event pollers through the ESM API, AWS Console, AWS CLI, AWS SDK, or AWS CloudFormation. You pay for the usage of event pollers, billed in units called Event Poller Units (EPUs). To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
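In the AWS SDK, the minimum/maximum poller settings described above map to the `ProvisionedPollerConfig` field of an event source mapping. A minimal sketch (the ESM UUID below is a placeholder, and the actual boto3 call requires AWS credentials):

```python
def provisioned_poller_config(minimum, maximum):
    """Build the ProvisionedPollerConfig block for a Kafka event
    source mapping: Lambda keeps at least `minimum` event pollers
    warm and auto-scales up to `maximum` (billed in EPUs)."""
    if not (1 <= minimum <= maximum):
        raise ValueError("need 1 <= minimum <= maximum")
    return {"MinimumPollers": minimum, "MaximumPollers": maximum}

# Usage (requires AWS credentials; call shown for context):
# import boto3
# boto3.client("lambda").update_event_source_mapping(
#     UUID="<esm-uuid>",  # placeholder for your ESM's UUID
#     ProvisionedPollerConfig=provisioned_poller_config(2, 10),
# )
```

Setting a floor of pollers is what absorbs sudden traffic spikes: capacity is already provisioned rather than scaled up reactively.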
Quelle: aws.amazon.com