Amazon EBS introduces additional performance monitoring metrics for EBS volumes

Amazon EBS now provides additional visibility into the average IOPS and average throughput of your Amazon EBS volumes with two new CloudWatch metrics: VolumeAvgIOPS and VolumeAvgThroughput. You can use these metrics to monitor the I/O driven on your EBS volumes, track performance trends, troubleshoot performance bottlenecks, and optimize your volume's provisioned performance to meet your application's needs. The metrics provide per-minute visibility into the average IOPS and average throughput driven on each EBS volume. With Amazon CloudWatch, you can use the new metrics to create customized dashboards and set alarms that notify you or automatically perform actions. The VolumeAvgIOPS and VolumeAvgThroughput metrics are available by default at a 1-minute frequency at no additional charge and are supported for all EBS volumes attached to EC2 Nitro instances in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the AWS China Regions. To learn more about these new metrics, visit the EBS CloudWatch metrics documentation.
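As a minimal sketch of how you might alarm on one of the new metrics with boto3 (the alarm name, volume ID, and threshold below are illustrative; the metric name and 1-minute period come from the announcement, and AWS/EBS is the standard namespace for per-volume EBS metrics):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the volume's average IOPS stays near its provisioned limit.
cloudwatch.put_metric_alarm(
    AlarmName="ebs-volume-avg-iops-high",  # illustrative name
    Namespace="AWS/EBS",                   # standard namespace for EBS volume metrics
    MetricName="VolumeAvgIOPS",            # new metric from this launch
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,                             # the metric is emitted at 1-minute frequency
    EvaluationPeriods=5,                   # 5 consecutive minutes above threshold
    Threshold=14000,                       # e.g. ~90% of 16,000 provisioned IOPS
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[],                       # add an SNS topic ARN to be notified
)
```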
Source: aws.amazon.com

4 new image editing tools added to Stability AI Image Services in Amazon Bedrock

Amazon Bedrock announces the availability of four new image editing tools in Stability AI Image Services: Outpaint, Fast Upscale, Conservative Upscale, and Creative Upscale. These tools give creators precise control over their workflows, enabling them to transform concepts into finished products efficiently. The expanded suite now offers enhanced flexibility for professional creative projects. Stability AI Image Services offers three categories of image editing capabilities:

Edit tools: Remove Background, Erase Object, Search and Replace, Search and Recolor, Inpaint, and Outpaint (NEW) let you make targeted modifications to specific parts of your images.

Upscale tools: Fast Upscale (NEW), Conservative Upscale (NEW), and Creative Upscale (NEW) enable you to enhance resolution while preserving quality.

Control tools: Structure, Sketch, Style Guide, and Style Transfer give you powerful ways to generate variations based on existing images or sketches.

Stability AI Image Services is available in Amazon Bedrock through the API and is supported in US West (Oregon), US East (N. Virginia), and US East (Ohio). For more information on supported Regions, visit the Amazon Bedrock Model Support by Regions guide. For more details about Stability AI Image Services and its capabilities, visit the launch blog, the Stability AI product page, and the Stability AI documentation page.
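A rough sketch of what calling one of these tools through the Bedrock Runtime API could look like with boto3 follows; the model ID, request fields, and response shape are assumptions rather than the confirmed contract, so consult the Stability AI documentation page for the exact schema:

```python
import base64
import json

import boto3

# Hypothetical model ID for the Outpaint tool -- check the Bedrock model
# catalog for the real identifier before using this.
MODEL_ID = "stability.stable-image-outpaint-v1:0"

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

with open("product.png", "rb") as f:
    source_image = base64.b64encode(f.read()).decode("utf-8")

# Extend the canvas 256 px to the right; these field names follow
# Stability's REST API conventions and may differ in Bedrock.
body = {"image": source_image, "right": 256, "output_format": "png"}

response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
result = json.loads(response["body"].read())

# Assumed response shape: a list of base64-encoded images.
with open("outpainted.png", "wb") as f:
    f.write(base64.b64decode(result["images"][0]))
```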
Source: aws.amazon.com

TwelveLabs’ Marengo Embed 3.0 for advanced video understanding now in Amazon Bedrock

TwelveLabs' Marengo Embed 3.0 is now available on Amazon Bedrock, bringing advanced video-native multimodal embedding capabilities to developers and organizations working with video content. Marengo embedding models unify videos, images, audio, and text into a single representation space, enabling you to build sophisticated video search and content analysis applications for any-to-any search, recommendation systems, and other multimodal tasks with industry-leading performance. Marengo 3.0 delivers several key enhancements:

Extended video processing capacity: process up to 4 hours of video and audio content and files up to 6 GB, double the capacity of previous versions, making it ideal for analyzing full sporting events, extended training videos, and complete film productions.

Enhanced sports analysis: the model delivers significant improvements, with better understanding of gameplay dynamics, player movements, and event detection.

Global multilingual support: expanded language capabilities from 12 to 36 languages, enabling global organizations to build unified search and retrieval systems that work seamlessly across diverse regions and markets.

Multimodal search precision: combine images and descriptive text in a single embedding request, merging visual similarity with semantic understanding to deliver more accurate and contextually relevant search results.

AWS is the first cloud provider to offer TwelveLabs' Marengo 3.0 model, now available in US East (N. Virginia), Europe (Ireland), and Asia Pacific (Seoul). The model supports synchronous inference for low-latency text and image embeddings, and asynchronous inference for processing video, audio, and large-scale image files. To get started, visit the Amazon Bedrock console. To learn more, read the product page and documentation.
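Since long videos go through the asynchronous path, a minimal video-embedding request with boto3 might look like the sketch below; the model ID follows the naming pattern of earlier Marengo releases and the modelInput field names are assumptions, so check the Bedrock documentation for the exact schema:

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed model ID, extrapolated from earlier Marengo release names.
MODEL_ID = "twelvelabs.marengo-embed-3-0-v1:0"

# Asynchronous invocation writes results to S3, which suits multi-hour
# videos that exceed synchronous payload and latency limits.
response = bedrock.start_async_invoke(
    modelId=MODEL_ID,
    modelInput={  # field names are assumptions; verify against the docs
        "inputType": "video",
        "mediaSource": {
            "s3Location": {"uri": "s3://my-bucket/match-recording.mp4"}
        },
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://my-bucket/embeddings/"}
    },
)

# Poll the job with get_async_invoke until its status is "Completed".
print(response["invocationArn"])
```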
Source: aws.amazon.com

Introducing Agent HQ: Any agent, any way you work

The current AI landscape presents a challenge we’re all too familiar with: incredible power fragmented across different tools and interfaces. At GitHub, we’ve always worked to solve these kinds of systemic challenges—by making Git accessible, code review systematic with pull requests, and automating deployment with Actions.

With 180 million developers, GitHub is growing at its fastest rate ever—a new developer joining every second. What’s more, 80% of new developers are using Copilot in their first week. AI isn’t just a tool anymore; it’s an integral part of the development experience. Our responsibility is to ensure this new era of collaboration is powerful, secure, and seamlessly integrated into the workflow you already trust.

At GitHub Universe, we’re announcing Agent HQ, GitHub’s vision for the next evolution of our platform. Agents shouldn’t be bolted on. They should work the way you already work. That’s why we’re making agents native to the GitHub flow.

Agent HQ transforms GitHub into an open ecosystem that unites every agent on a single platform. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, xAI, and more will become available directly within GitHub as part of your paid GitHub Copilot subscription.

To bring this vision to life, we're shipping a suite of new capabilities built on the primitives you trust. This starts with mission control, a single command center to assign, steer, and track the work of multiple agents from anywhere. It extends to VS Code with new ways to plan and customize agent behavior. And it is backed by enterprise-grade functionality: a new generation of agentic code review, a dedicated control plane to govern AI access and agent behavior, and a metrics dashboard to understand the impact of AI on your work.

We are also deeply committed to investing in our platform and strengthening the primitives you rely on every day. This new world of development is powered by that foundational work, and we look forward to sharing more updates.

Let’s dive in.

GitHub is your Agent HQ: An open ecosystem for all agents

The future is about giving you the power to orchestrate a fleet of specialized agents to perform complex tasks in parallel, not juggling a patchwork of disconnected tools or relying on a single agent. As the pioneer of asynchronous collaboration, we believe it's our responsibility to make sure these next-generation async tools just work.

With Agent HQ, what's not changing is just as important as what is. You're still working with the primitives you know (Git, pull requests, issues) and using your preferred compute, whether that's GitHub Actions or self-hosted runners. You're accessing agents through your existing paid Copilot subscription.

On top of that foundation, we’re opening the doors to a new world of capability. Over the coming months, coding agents from Anthropic, OpenAI, Google, Cognition, and xAI will be available on GitHub as part of your paid GitHub Copilot subscription.

Don’t want to wait? Starting this week, Copilot Pro+ users can begin working with OpenAI Codex in VS Code Insiders, the first of our partner agents to extend beyond its native surfaces and directly into the editor.

‘Our collaboration with GitHub has always pushed the frontier of how developers build software. The first Codex model helped power Copilot and inspired a new generation of AI-assisted coding. We share GitHub’s vision of meeting developers wherever they work, and we’re excited to bring Codex to millions more developers who use GitHub and VS Code, extending the power of Codex everywhere code gets written.’

Alexander Embiricos, Codex Product Lead, OpenAI

‘We’re partnering with GitHub to bring Claude even closer to how teams build software. With Agent HQ, Claude can pick up issues, create branches, commit code, and respond to pull requests, working alongside your team like any other collaborator. This is how we think the future of development works: agents and developers building together, on the infrastructure you already trust.’

Mike Krieger, Chief Product Officer, Anthropic

‘The best developer tools fit seamlessly into your workflow, helping you stay focused and move faster. With Agent HQ, Jules becomes a native assignee, streamlining manual steps and reducing friction in everyday development. This deeper integration with GitHub brings agents closer to where developers already work, making collaboration more natural and efficient.’

Kathy Korevec, Director of Product at Google Labs

Mission control: Your command center, wherever you build

The power of Agent HQ comes from mission control, a unified command center that follows you wherever you work. It's not a single destination; it's a consistent interface across GitHub, VS Code, mobile, and the CLI that lets you direct, monitor, and manage every AI-driven task. With mission control, you can choose from a fleet of agents, assign them work in parallel, and track their progress from any device.

We’re also providing:

New branch controls that give you granular oversight over when to run CI and other checks for agent-created code.

Identity features to control which agent is building the task, managing access and policies just like you would with any other developer on your team.

One-click merge conflict resolution, improved file navigation, and better code commenting capabilities.

New integrations for Slack and Linear, on top of our recently announced connections for Atlassian Jira, Microsoft Teams and Azure Boards, and Raycast.

Try mission control today.

New in VS Code: Plan, customize, and connect

Mission control is in VS Code, too, so you've got a single view of all your agents running in VS Code, in the Copilot CLI, or on GitHub.

Today's brand-new release in VS Code is all about working alongside agents on projects, and it's no surprise that great results start with a great plan. Getting the context right before a project is critical, but that same context needs to carry through into the work. Copilot already adapts to the way your team works by learning from your files and your project's culture, but sometimes you need more pointed context.

So today, we're introducing Plan Mode, which works with Copilot and asks you clarifying questions along the way to help you build a step-by-step approach for your task. Providing the context upfront improves what Copilot can do and helps you find gaps, missing decisions, or project deficiencies early in the process, before any code is written. Once you approve, your plan goes to Copilot to start implementing, whether that's locally in VS Code or using an agent in the cloud.

For even finer control, you can now create custom agents in VS Code with AGENTS.md files, source-controlled documents that let you set clear rules and guardrails such as “prefer this logger” or “use table-driven tests for all handlers.” This shapes Copilot’s behavior without you re-prompting it every time.
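As an illustration, a minimal AGENTS.md in that spirit might look like the following; the specific rules are hypothetical, echoing the examples above:

```markdown
# AGENTS.md -- source-controlled guardrails for coding agents (illustrative)

## Coding conventions
- Prefer the project's existing logger; do not introduce new logging libraries.
- Use table-driven tests for all handlers.

## Boundaries
- Never commit directly to main; open a pull request instead.
- Ask a clarifying question before changing any public API signature.
```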

You can also now rely on the new GitHub MCP Registry, available directly in VS Code, the only editor that supports the full MCP specification. Discover, install, and enable MCP servers like Stripe, Figma, Sentry, and others with a single click. And when your task calls for a specialist, create custom agents in GitHub Copilot with their own system prompt and tools to define the ways you want Copilot to work.

Increased confidence and control for your team

Agent HQ doesn't just give you more power; it gives you confidence. Ensuring code quality, understanding AI's influence on your workflow, and maintaining control over how AI interacts with your codebase and organization are essential for your team's success, and we're tackling these challenges head-on.

When it comes to code quality, the core problem is that "LGTM" doesn't always mean "the code is healthy." A review can pass while the change still degrades the codebase and quickly becomes long-term technical debt. With GitHub Code Quality, in public preview today, you've got org-wide visibility, governance, and reporting to systematically improve code maintainability, reliability, and test coverage across every repository. Enabling it extends Copilot's security checks to also assess the maintainability and reliability impact of the code that's been changed.

And we’ve added a code review step into the Copilot coding agent’s workflow, too, so Copilot gets an initial first-line review and addresses problems (before you even see the code).

[Screenshot: GitHub Code Quality, showing the results of Copilot's review.]

As an organization, you need to know how Copilot is being used. So today, we're announcing the public preview of the Copilot metrics dashboard, showing Copilot's impact and critical usage metrics across your entire organization.

For enterprise administrators managing AI access, including AI agents and MCP, we're focused on providing consistent AI controls for teams with the control plane, your agent governance layer. Set security policies, configure audit logging, and manage access all in one place. Enterprise admins can also control which agents are allowed, define access to models, and obtain metrics about Copilot usage in your organization.

For developers, by developers

We built Agent HQ because we're developers, too. We know what it's like when your tools feel like they're fighting you instead of helping you. When "AI-powered" ends up meaning more context-switching, more babysitting, more subscriptions, and more time explaining what you need to get the value you were promised.

That ends today.

Agent HQ isn’t about the hype of AI. It’s about the reality of shipping code. It’s about bringing order and governance to this new era without compromising choice. It’s about giving you the power to build faster, with more confidence, and on your terms.

Welcome home. Let’s build.
Source: Azure

Building the future together: Microsoft and NVIDIA announce AI advancements at GTC DC

Microsoft and NVIDIA are deepening our partnership to power the next wave of AI industrial innovation. For years, our companies have helped fuel the AI revolution, bringing the world’s most advanced supercomputing to the cloud, enabling breakthrough frontier models, and making AI more accessible to organizations everywhere. Today, we’re building on that foundation with new advancements that deliver greater performance, capability, and flexibility.

With added support for NVIDIA RTX PRO 6000 Blackwell Server Edition on Azure Local, customers can deploy AI and visual computing workloads in distributed and edge environments with the same seamless orchestration and management they use in the cloud. New NVIDIA Nemotron and NVIDIA Cosmos models in Azure AI Foundry give businesses an enterprise-grade platform to build, deploy, and scale AI applications and agents. With NVIDIA Run:ai on Azure, enterprises can get more from every GPU to streamline operations and accelerate AI. Finally, Microsoft is redefining AI infrastructure with the world's first deployment of NVIDIA GB300 NVL72.

Explore our partnership on Azure Local

Today’s announcements mark the next chapter in our full-stack AI collaboration with NVIDIA, empowering customers to build the future faster.

Expanding GPU support to Azure Local

Microsoft and NVIDIA continue to drive advancements in artificial intelligence, offering innovative solutions that span the public and private cloud, the edge, and sovereign environments.

As highlighted in the March blog post for NVIDIA GTC, Microsoft will offer NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure. Now, with expanded availability of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs on Azure Local, organizations can optimize their AI workloads, regardless of location, to provide customers with greater flexibility and more options than ever. Azure Local leverages Azure Arc to empower organizations to run advanced AI workloads on-premises while retaining the management simplicity of the cloud or operating in fully disconnected environments. 

NVIDIA RTX PRO 6000 Blackwell GPUs provide the performance and flexibility needed to accelerate a broad range of use cases, from agentic AI, physical AI, and scientific computing to rendering, 3D graphics, digital twins, simulation, and visual computing. This expanded GPU support unlocks a range of edge use cases that fulfill the stringent requirements of critical infrastructure for our healthcare, retail, manufacturing, government, defense, and intelligence customers. This may include real-time video analytics for public safety, predictive maintenance in industrial settings, rapid medical diagnostics, and secure, low-latency inferencing for essential services such as energy production and critical infrastructure. The NVIDIA RTX PRO 6000 Blackwell enables improved virtual desktop support by leveraging NVIDIA vGPU technology and Multi-Instance GPU (MIG) capabilities. This can not only accommodate a higher user density, but also power AI-enhanced graphics and visual compute capabilities, offering an efficient solution for demanding virtual environments.

Earlier this year, Microsoft announced a multitude of AI capabilities at the edge, all enriched with NVIDIA accelerated computing:

Edge Retrieval Augmented Generation (RAG): Empower sovereign AI deployments with fast, secure, and scalable inferencing on local data—supporting mission-critical use cases across government, healthcare, and industrial automation.

Azure AI Video Indexer enabled by Azure Arc: Enables real-time and recorded video analytics in disconnected environments—ideal for public safety and critical infrastructure monitoring or post-event analysis.

With Azure Local, customers can meet strict regulatory, data residency, and privacy requirements while harnessing the latest AI innovations powered by NVIDIA.

Whether you need ultra-low latency for business continuity, robust local inferencing, or compliance with industry regulations, we're dedicated to delivering cutting-edge AI performance wherever your data resides. Customers can now access the breakthrough performance of NVIDIA RTX PRO 6000 Blackwell GPUs in new Azure Local solutions, including the Dell AX-770, HPE ProLiant DL380 Gen12, and Lenovo ThinkAgile MX650a V4.

To find out more about upcoming availability and sign up for early ordering, visit: 

Dell for Azure Local

HPE for Azure Local

Lenovo for Azure Local

Powering the future of AI with new models on Azure AI Foundry

At Microsoft, we’re committed to bringing the most advanced AI capabilities to our customers, wherever they need them. Through our partnership with NVIDIA, Azure AI Foundry now brings world-class multimodal reasoning models directly to enterprises, deployable anywhere as secure, scalable NVIDIA NIM™ microservices. The portfolio spans a range of different use cases:

NVIDIA Nemotron Family: High-accuracy open models and datasets for agentic AI

Llama Nemotron Nano VL 8B is available now and is tailored for multimodal vision-language tasks, document intelligence and understanding, and mobile and edge AI agents. 

NVIDIA Nemotron Nano 9B is available now and supports enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling. 

NVIDIA Llama 3.3 Nemotron Super 49B 1.5 is coming soon and is designed for enterprise agents, scientific reasoning, advanced math, and coding for software engineering and tool calling.

NVIDIA Cosmos Family: Open world foundation models for physical AI

Cosmos Reason-1 7B is available now and supports robotics planning and decision making, training data curation and annotation for autonomous vehicles, and video analytics AI agents extracting insights and performing root-cause analysis from video data.

NVIDIA Cosmos Predict 2.5 is coming soon and is a generalist model for world state generation and prediction. 

NVIDIA Cosmos Transfer 2.5 is coming soon and is designed for structural conditioning and physical AI.

Microsoft TRELLIS by Microsoft Research: High-quality 3D asset generation 

Microsoft TRELLIS by Microsoft Research is available now and enables digital twins by generating accurate 3D assets from simple prompts, immersive retail experiences with photorealistic product models for AR and virtual try-ons, and game and simulation development by turning creative ideas into production-ready 3D content.

Together, these open models reflect the depth of the Azure and NVIDIA partnership: combining Microsoft’s adaptive cloud with NVIDIA’s leadership in accelerated computing to power the next generation of agentic AI for every industry. Learn more about the models here.
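Because NIM microservices expose an OpenAI-compatible API, calling a model you've deployed from the Foundry catalog can look like the sketch below; the endpoint URL is a placeholder for your own deployment, and the model name is an assumption based on NVIDIA's published naming:

```python
from openai import OpenAI

# Placeholder endpoint for a NIM microservice deployed from Azure AI
# Foundry; NIM serves an OpenAI-compatible /v1 API.
client = OpenAI(
    base_url="https://my-nim-endpoint.example.com/v1",  # hypothetical URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="nvidia/llama-3.3-nemotron-super-49b-v1.5",  # assumed model name
    messages=[
        {"role": "user", "content": "Outline a test plan for a payments API."}
    ],
    temperature=0.6,
)
print(response.choices[0].message.content)
```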

Maximizing GPU utilization for enterprise AI with NVIDIA Run:ai on Azure

As an AI workload and GPU orchestration platform, NVIDIA Run:ai helps organizations make the most of their compute investments, accelerating AI development cycles and driving faster time-to-market for new insights and capabilities. By bringing NVIDIA Run:ai to Azure, we’re giving enterprises the ability to dynamically allocate, share, and manage GPU resources across teams and workloads, helping them get more from every GPU.

NVIDIA Run:ai on Azure integrates seamlessly with core Azure services, including Azure NC and ND series instances, Azure Kubernetes Service (AKS), and Azure Identity Management, and offers compatibility with Azure Machine Learning and Azure AI Foundry for unified, enterprise-ready AI orchestration. We’re bringing hybrid scale to life to help customers transform static infrastructure into a flexible, shared resource for AI innovation.

With smarter orchestration and cloud-ready GPU pooling, teams can drive faster innovation, reduce costs, and unleash the power of AI across their organizations with confidence. NVIDIA Run:ai on Azure enhances AKS with GPU-aware scheduling, helping teams allocate, share, and prioritize GPU resources more efficiently. Operations are streamlined with one-click job submission, automated queueing, and built-in governance. This ensures teams spend less time managing infrastructure and more time focused on building what's next.
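To make the GPU-sharing idea concrete, Run:ai supports fractional GPU allocation at submission time; the CLI sketch below follows the Run:ai quickstart examples, and the exact flags and commands may vary by version:

```bash
# Submit a training job that requests half a GPU, so two such jobs can
# share one physical GPU on the cluster (syntax per Run:ai quickstarts;
# verify against your installed CLI version).
runai submit train-demo \
  --image gcr.io/run-ai-demo/quickstart \
  --gpu 0.5 \
  --project team-ml   # Run:ai projects map GPU quotas to teams

# List workloads to see queueing status and allocation.
runai list jobs
```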

This impact spans industries, supporting the infrastructure and orchestration behind transformative AI workloads at every stage of enterprise growth: 

Healthcare organizations can use NVIDIA Run:ai on Azure to advance medical imaging analysis and drug discovery workloads across hybrid environments. 

Financial services organizations can orchestrate and scale GPU clusters for complex risk simulations and fraud detection models. 

Manufacturers can accelerate computer vision training models for improved quality control and predictive maintenance in their factories. 

Retail companies can power real-time recommendation systems for more personalized experiences through efficient GPU allocation and scaling, ultimately better serving their customers.

Powered by Microsoft Azure and NVIDIA, Run:ai is purpose-built for scale, helping enterprises move from isolated AI experimentation to production-grade innovation.

[Video: NVIDIA Run:ai on Azure]

Reimagining AI at scale: First to deploy NVIDIA GB300 NVL72 supercomputing cluster

Microsoft is redefining AI infrastructure with the new NDv6 GB300 VM series, delivering the first at-scale production cluster of NVIDIA GB300 NVL72 systems, featuring over 4600 NVIDIA Blackwell Ultra GPUs connected via NVIDIA Quantum-X800 InfiniBand networking. Each NVIDIA GB300 NVL72 rack integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace™ CPUs, delivering over 130 TB/s of NVLink bandwidth and up to 136 kW of compute power in a single cabinet. Designed for the most demanding workloads—reasoning models, agentic systems, and multimodal AI—GB300 NVL72 combines ultra-dense compute, direct liquid cooling, and smart rack-scale management to deliver breakthrough efficiency and performance within a standard datacenter footprint. 

Azure’s co-engineered infrastructure enhances GB300 NVL72 with technologies like Azure Boost for accelerated I/O and integrated hardware security modules (HSM) for enterprise-grade protection. Each rack arrives pre-integrated and self-managed, enabling rapid, repeatable deployment across Azure’s global fleet. As the first cloud provider to deploy NVIDIA GB300 NVL72 at scale, Microsoft is setting a new standard for AI supercomputing—empowering organizations to train and deploy frontier models faster, more efficiently, and more securely than ever before. Together, Azure and NVIDIA are powering the future of AI. 

Learn more about Microsoft’s systems approach in delivering GB300 NVL72 on Azure.

Unleashing the performance of ND GB200-v6 VMs with NVIDIA Dynamo 

Our collaboration with NVIDIA focuses on optimizing every layer of the computing stack to help customers maximize the value of their existing AI infrastructure investments. 

To deliver high-performance inference for compute-intensive reasoning models at scale, we're bringing together a solution that combines the open-source NVIDIA Dynamo framework, our ND GB200-v6 VMs with NVIDIA GB200 NVL72, and Azure Kubernetes Service (AKS). We've demonstrated the performance this combined solution delivers at scale, with the gpt-oss 120b model processing 1.2 million tokens per second in a production-ready, managed AKS cluster, and we've published a deployment guide so developers can get started today.

Dynamo is an open-source, distributed inference framework designed for multi-node environments and rack-scale accelerated compute architectures. By enabling disaggregated serving, LLM-aware routing, and KV caching, Dynamo significantly boosts performance for reasoning models on Blackwell, unlocking up to 15x more throughput than the prior Hopper generation and opening new revenue opportunities for AI service providers.

These efforts enable AKS production customers to take full advantage of NVIDIA Dynamo's inference optimizations when deploying frontier reasoning models at scale. We're dedicated to bringing the latest open-source software innovations to our customers, helping them fully realize the potential of the NVIDIA Blackwell platform on Azure.

Learn more about Dynamo on AKS.

Get more AI resources

Join us in San Francisco at Microsoft Ignite in November to hear about the latest in enterprise solutions and innovation.

Explore Azure AI Foundry and Azure Local.

Source: Azure