Amazon EC2 R8gd instances are now available in additional AWS Regions

Amazon Elastic Compute Cloud (Amazon EC2) R8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in the Europe (Ireland), Asia Pacific (Sydney), Asia Pacific (Malaysia), South America (São Paulo), and Canada (Central) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance than Graviton3-based instances, including up to 40% higher performance for I/O-intensive database workloads and up to 20% faster query results for I/O-intensive real-time data analytics.

Built on the AWS Nitro System, R8gd instances are a great fit for applications that need high-speed, low-latency local storage. They are available in 12 sizes and provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Customers can also shift network and Amazon EBS bandwidth by up to 25% using the EC2 instance bandwidth weighting configuration, for greater flexibility in allocating bandwidth to their workloads. Elastic Fabric Adapter (EFA) networking is offered on the 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.

To learn more, see Amazon EC2 R8gd Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and the Porting Advisor for Graviton. To get started, visit the AWS Management Console.
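For illustration, here is a minimal boto3 sketch that launches an R8gd instance with bandwidth weighted toward Amazon EBS. The AMI ID is a placeholder, and the NetworkPerformanceOptions parameter is assumed to match the documented EC2 bandwidth weighting API:

import boto3

# Assumes AWS credentials are already configured; region and AMI are placeholders.
ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="r8gd.4xlarge",
    MinCount=1,
    MaxCount=1,
    # Weight bandwidth toward Amazon EBS; "vpc-1" would weight toward the network instead.
    NetworkPerformanceOptions={"BandwidthWeighting": "ebs-1"},
)
print(response["Instances"][0]["InstanceId"])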
Source: aws.amazon.com

Amazon EC2 M8gd instances are now available in additional AWS Regions

Amazon Elastic Compute Cloud (Amazon EC2) M8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in the Europe (London), Asia Pacific (Sydney), Asia Pacific (Malaysia), and Canada (Central) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance than Graviton3-based instances, including up to 40% higher performance for I/O-intensive database workloads and up to 20% faster query results for I/O-intensive real-time data analytics.

Built on the AWS Nitro System, M8gd instances are a great fit for applications that need high-speed, low-latency local storage. They are available in 12 sizes and provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Customers can also shift network and Amazon EBS bandwidth by up to 25% using the EC2 instance bandwidth weighting configuration, for greater flexibility in allocating bandwidth to their workloads. Elastic Fabric Adapter (EFA) networking is offered on the 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.

To learn more, see Amazon EC2 M8gd Instances. To explore how to migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program and the Porting Advisor for Graviton. To get started, visit the AWS Management Console.
Source: aws.amazon.com

Unleash your creativity at scale: Azure AI Foundry’s multimodal revolution

Imagine a platform where every developer—whether you’re building for a startup or a global enterprise—can unlock the full spectrum of AI: text, images, audio, and video. This OpenAI DevDay, Azure AI Foundry is making that vision real. With today’s launch of OpenAI GPT-image-1-mini, GPT-realtime-mini, and GPT-audio-mini, plus major safety upgrades to GPT-5, you now have the ultimate toolkit to create, experiment, and scale multimodal solutions—faster and more affordably than ever before. The models OpenAI announced today are rolling out in Azure AI Foundry now, and most customers will be able to get started on October 7, 2025.

Try Azure AI Foundry today

Today’s announcement joins the major innovations we announced last week: the launch of the Microsoft Agent Framework (now in preview), multi-agent workflows in Foundry Agent Service in private preview, unified observability, Voice Live API general availability, and new Responsible AI capabilities. Microsoft Agent Framework (GitHub) is a commercial-grade, open-source SDK and runtime designed to simplify the orchestration of multi-agent systems. It unifies the business-ready foundations of Semantic Kernel with the multi-agent capabilities of AutoGen, giving developers the tools to build intelligent, scalable agentic solutions with speed and confidence.

By expanding Azure AI Foundry with the latest OpenAI models and advancing our agentic AI framework, we empower customers with unparalleled choice, flexibility, and business capabilities, enabling developers to build intelligent agent systems that address complex business needs and drive innovation at scale.

Meet the new models: Built for developers, ready for anything

GPT-image-1-mini: Compact power for visual creativity

GPT-image-1-mini is purpose-built for organizations and developers who need rapid, resource-efficient image generation at scale. Its compact architecture enables high-quality text-to-image and image-to-image creation while consuming fewer computational resources, allowing teams to deploy multimodal AI even in constrained settings. Built on the GPT-image-1 model, it offers consistency and ease of adoption for organizations already leveraging multimodal AI in Azure AI Foundry.

What makes it special?

Flexible image generation: Deploy high-quality text-to-image and image-to-image features without breaking your budget.

Lightning-fast inference: Generate images in real time, seamlessly integrated with existing Azure AI Foundry workflows.

Use cases:

Generating educational materials for classrooms and online learning.

Designing storybooks and visual narratives.

Producing game assets for rapid prototyping and development.

Accelerating UI design workflows for apps and websites.
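As a concrete starting point, here is a minimal sketch that calls a GPT-image-1-mini deployment through the Azure OpenAI images API; the endpoint, key, API version, and deployment name are placeholders to replace with your own:

import base64
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version; use your Foundry resource's values.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2025-04-01-preview",
)

result = client.images.generate(
    model="gpt-image-1-mini",  # your deployment name
    prompt="A watercolor diagram of the water cycle for a classroom poster",
    size="1024x1024",
)

# gpt-image-1 family models return base64-encoded image data.
with open("water_cycle.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))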

Table 1: GPT-image-1-mini pricing and deployment in Azure AI Foundry (per 1M tokens)*

GPT-realtime-mini and GPT-audio-mini: Efficient and affordable voice solutions

The two new mini models are designed for organizations and developers who need fast, cost-effective multimodal AI without sacrificing quality. These models are lightweight and highly optimized, delivering real-time voice interaction and audio generation with minimal resource requirements. Their streamlined architecture enables rapid inference and low latency, making them ideal for scenarios where speed and responsiveness are critical—such as voice-based chatbots, real-time translation, and dynamic audio content creation. By consuming fewer computational resources, these models help businesses and developer teams reduce operational costs while scaling multimodal capabilities across a wide range of applications.

What makes them special?

Real-time responsiveness: Power chatbots, assistants, and translation tools with near-zero latency.

Resource-light: Run advanced voice and audio models on minimal infrastructure.

Affordable scaling: Lower your operational costs while expanding multimodal capabilities.

Use cases:

Voice-based chatbots for customer service and support.

Real-time translation for global communication.

Dynamic audio content creation for media and entertainment.

Interactive voice assistants for enterprise and consumer applications.

GPT‑realtime‑mini in Azure AI Foundry enables our customers to build voice solutions with lower latency, better instruction adherence, and cost efficiency—capabilities our customers value, driving shorter handle times, smoother dialogues, and faster time‑to‑value.
Andy O’Dower, VP of Product, Twilio
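To make this concrete, here is a minimal sketch that generates spoken audio from a GPT-audio-mini deployment via the chat completions API, assuming the model follows the same audio-output pattern as earlier GPT audio models; the endpoint, key, API version, and deployment name are placeholders:

import base64
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",  # placeholder
    api_version="2025-01-01-preview",  # placeholder
)

completion = client.chat.completions.create(
    model="gpt-audio-mini",  # your deployment name
    modalities=["text", "audio"],  # request an audio response
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Give a one-sentence welcome greeting."}],
)

# The audio arrives base64-encoded on the assistant message.
with open("greeting.wav", "wb") as f:
    f.write(base64.b64decode(completion.choices[0].message.audio.data))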

Table 2: GPT-realtime-mini and GPT-audio-mini pricing and deployment in Azure AI Foundry (per 1M tokens)*

GPT-5-chat-latest: Raising the bar for safety and wellbeing

The latest GPT-5-chat-latest update in Azure AI Foundry introduces a more robust set of safety guardrails, designed to better protect users during sensitive conversations. With enhanced detection and response capabilities, GPT-5-chat-latest is now equipped to more effectively recognize and manage dialogue that could lead to mental or emotional distress. These improvements reflect our ongoing commitment to responsible AI, ensuring that every interaction is not only intelligent and helpful, but also safe and supportive for users in challenging moments.

Table 3: GPT-5-chat-latest pricing and deployment in Azure AI Foundry (per 1M tokens)*

GPT-5-pro: The pinnacle of reasoning and analytics

GPT-5-pro represents the pinnacle of advanced reasoning and analytics within the Azure AI Foundry ecosystem, delivering research-grade intelligence. When deployed through Foundry, GPT-5-pro’s tournament-style architecture leverages multiple reasoning pathways to ensure maximum accuracy and reliability, making it ideal for complex analytics, code generation, and decision-making workflows. With Azure AI Foundry, organizations unlock the full potential of GPT-5-pro, driving smarter decisions and accelerating innovation across their most critical business processes, securely and reliably.
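As a sketch of what a call might look like, the example below uses the Responses API against a GPT-5-pro deployment; the endpoint, key, API version, and deployment name are placeholders:

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",  # placeholder
    api_version="2025-04-01-preview",  # placeholder
)

response = client.responses.create(
    model="gpt-5-pro",  # your deployment name
    input="Review this pricing model for logical inconsistencies: ...",
)
print(response.output_text)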

Table 4: GPT-5-pro pricing and deployment in Azure AI Foundry (per 1M tokens)*

The developer’s edge: Build, experiment, and ship—faster

With these new models, Azure AI Foundry isn’t just keeping up—it’s setting the pace. Developers can now move beyond text, tapping into image and audio generation, editing, and understanding. The result? Richer, smarter workflows that drive innovation in every industry—from education and gaming to enterprise automation.

Sneak peek: Sora 2—Next-level video and audio generation

And there’s more on the horizon. Sora 2 in Azure AI Foundry is coming soon, bringing advanced video and audio generation in a single API. Imagine physics-driven animation, synchronized dialogue, and cameo features—all available to developers through Azure AI Foundry. Stay tuned for the next wave of immersive, generative experiences.

Are you ready to create the next wave of immersive, multimodal experiences? Azure AI Foundry is your platform for every possibility.

*Pricing is accurate as of October 2025.
Source: Microsoft Azure Blog

Unlocking Local AI on Any GPU: Docker Model Runner Now with Vulkan Support

Running large language models (LLMs) on your local machine is one of the most exciting frontiers in AI development. At Docker, our goal is to make this process as simple and accessible as possible. That’s why we built Docker Model Runner, a tool to help you download and run LLMs with a single command.

Until now, accelerated inferencing with Model Runner was limited to NVIDIA GPUs (via CUDA) and Apple Silicon (via Metal), with CPU execution as the fallback. Today, we’re thrilled to announce a major step forward in democratizing local AI: Docker Model Runner now supports Vulkan!

This means you can now leverage hardware acceleration for LLM inferencing on a much wider range of GPUs, including integrated GPUs and those from AMD, Intel, and other vendors that support the Vulkan API.

Why Vulkan Matters: AI for Everyone’s GPU

So, what’s the big deal about Vulkan?

Vulkan is a modern, cross-platform graphics and compute API. Unlike CUDA, which is specific to NVIDIA GPUs, or Metal, which is for Apple hardware, Vulkan is an open standard that works across a huge range of graphics cards. This means if you have a modern GPU from AMD, Intel, or even an integrated GPU on your laptop, you can now get a massive performance boost for your local AI workloads.

By integrating Vulkan (thanks to our underlying llama.cpp engine), we’re unlocking GPU-accelerated inferencing for a much broader community of developers and enthusiasts. More hardware, more speed, more fun!

Getting Started: It Just Works

The best part? You don’t need to do anything special to enable it. We believe in convention over configuration. Docker Model Runner automatically detects compatible Vulkan hardware and uses it for inferencing. If a Vulkan-compatible GPU isn’t found, it seamlessly falls back to CPU.

Ready to give it a try? Just run the following command in your terminal:

docker model run ai/gemma3

This command will:

Pull the Gemma 3 model.

Detect whether you have a Vulkan-compatible GPU with the necessary drivers installed.

Run the model, using your GPU to accelerate the process.

It’s that simple. You can now chat with a powerful LLM running directly on your own machine, faster than ever.
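Because Model Runner exposes an OpenAI-compatible API, you can also drive the same model from code. Here is a minimal sketch, assuming host-side TCP access is enabled on Model Runner’s default port 12434:

from openai import OpenAI

# Model Runner's OpenAI-compatible endpoint; no real API key is required,
# but the client insists on a non-empty value.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")

reply = client.chat.completions.create(
    model="ai/gemma3",
    messages=[{"role": "user", "content": "In one sentence, what is Vulkan?"}],
)
print(reply.choices[0].message.content)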

Join Us and Help Shape the Future of Local AI!

Docker Model Runner is an open-source project, and we’re building it in the open with our community. Your contributions are vital as we expand hardware support and add new features. Head over to our GitHub repository to get involved: https://github.com/docker/model-runner. Please star the repo to show your support, fork it to experiment, and consider contributing back with your own improvements.

Learn more

Check out the Docker Model Runner General Availability announcement

Visit our Model Runner GitHub repo! Docker Model Runner is open-source, and we welcome collaboration and contributions from the community!

Get started with Model Runner with a simple hello GenAI application

Source: https://blog.docker.com/feed/