AWS IAM Identity Center now supports IPv6

AWS IAM Identity Center now supports Internet Protocol version 6 (IPv6) through new dual-stack endpoints. Customers can now connect to AWS IAM Identity Center using IPv6, IPv4, or dual-stack clients, and the existing IPv4 endpoints remain available for backward compatibility. IAM Identity Center lets customers enable workforce access to AWS managed applications and AWS accounts. When your client, such as a browser or an application, makes a request to a dual-stack endpoint, the endpoint resolves to an IPv4 or IPv6 address depending on the protocol used by your network and client. This launch helps you meet IPv6 compliance requirements and minimizes the need for complex NAT infrastructure. IPv6 support is available in all AWS Regions where IAM Identity Center is available, except the AWS GovCloud (US) Regions and the Taipei Region. To learn more, visit the IAM Identity Center User Guide.
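The client-side behavior described above can be sketched with the Python standard library: for a dual-stack endpoint the resolver returns both A (IPv4) and AAAA (IPv6) records, and the client picks whichever family its network supports. The function names below are our own and the hostname handling is generic; this is an illustration of dual-stack resolution, not an IAM Identity Center API.

```python
import socket

def resolve_addresses(host, port=443):
    """Resolve a hostname and bucket the results by address family.

    A dual-stack endpoint returns both A (IPv4) and AAAA (IPv6)
    records; an IPv4-only endpoint returns only A records.
    """
    out = {}
    for family, _, _, _, sockaddr in socket.getaddrinfo(
        host, port, proto=socket.IPPROTO_TCP
    ):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        out.setdefault(label, sockaddr[0])
    return out

def select_family(addresses, prefer_ipv6=True):
    """Pick one (family, address) pair, preferring IPv6 on capable networks."""
    order = ["IPv6", "IPv4"] if prefer_ipv6 else ["IPv4", "IPv6"]
    for label in order:
        if label in addresses:
            return label, addresses[label]
    raise OSError("no usable address")
```

An IPv4-only client would simply call `select_family(..., prefer_ipv6=False)` or never see an IPv6 entry; either way the same dual-stack endpoint serves both.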
Source: aws.amazon.com

Amazon Lightsail expands blueprint selection with updated support for Node.js, LAMP, and Ruby on Rails blueprints

Amazon Lightsail now offers new Node.js, LAMP, and Ruby on Rails blueprints. These new blueprints have Instance Metadata Service Version 2 (IMDSv2) enforced by default and support IPv6-only instances. With just a few clicks, you can create a Lightsail virtual private server (VPS) of your preferred size with Node.js, LAMP, or Ruby on Rails preinstalled. With Lightsail, you can easily get started on the cloud by choosing a blueprint and an instance bundle to build your web application. Lightsail instance bundles combine an instance preinstalled with your preferred operating system, storage, and a monthly data transfer allowance, giving you everything you need to get up and running quickly. These new blueprints are now available in all AWS Regions where Lightsail is available. For more information on blueprints supported on Lightsail, see Lightsail documentation. For more information on pricing, or to get started with your free trial, click here.
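IMDSv2 enforcement means a tokenless metadata request is rejected: a client must first obtain a session token with a PUT, then present that token on every metadata GET. The headers and endpoints below are the documented IMDSv2 ones; the helper itself is a hypothetical sketch that only builds the two requests rather than executing them.

```python
IMDS_BASE = "http://169.254.169.254"

def imdsv2_requests(path, token_ttl=21600):
    """Describe the two-step IMDSv2 flow as (method, url, headers) tuples.

    Step 1 obtains a session token via PUT; step 2 presents that token
    on the metadata GET. Instances with IMDSv2 enforced reject GETs
    that carry no token.
    """
    token_request = (
        "PUT",
        f"{IMDS_BASE}/latest/api/token",
        {"X-aws-ec2-metadata-token-ttl-seconds": str(token_ttl)},
    )

    def metadata_request(token):
        # The token returned by step 1 authorizes subsequent reads.
        return (
            "GET",
            f"{IMDS_BASE}/latest/meta-data/{path.lstrip('/')}",
            {"X-aws-ec2-metadata-token": token},
        )

    return token_request, metadata_request
```

On an instance you would issue these two requests with any HTTP client (for example, `curl -X PUT` followed by `curl -H "X-aws-ec2-metadata-token: …"`).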
Source: aws.amazon.com

Amazon WorkSpaces announces advanced printer redirection

AWS announces advanced printer redirection for Amazon WorkSpaces Personal, enabling Windows users to access the full feature set of their printers from their virtual desktop environments. With this feature, customers can now use printer-specific capabilities such as double-sided printing, paper tray selection, finishing options (stapling, hole-punching), and color management directly from their Windows WorkSpaces.
Advanced printer redirection addresses the need for specialized printing features that require printer-specific drivers rather than generic drivers. This capability is ideal for organizations with users who need advanced printing features for professional documents, labels, or specialized output. The feature includes configurable driver validation modes (exact match, partial match, or name-only matching) to balance compatibility with feature support, allowing administrators to optimize for their specific environment. When matching drivers are not found, WorkSpaces automatically falls back to basic printing mode, ensuring users can always print.
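The compatibility trade-off behind the three validation modes can be illustrated with a toy matcher. Everything here is invented for illustration, including the (name, version) driver representation and the version semantics; it is not the WorkSpaces implementation, only the matching behavior the paragraph above describes: stricter modes guarantee feature fidelity, looser modes match more often, and a failed match signals the basic-printing fallback.

```python
def match_driver(client_driver, host_drivers, mode="exact"):
    """Illustrative driver matching (hypothetical, not WorkSpaces code).

    mode="exact":   name and version must both match.
    mode="partial": name must match and major versions must agree.
    mode="name":    only the driver name must match.
    Returns the matched host driver, or None so the caller can fall
    back to basic printing mode.
    """
    name, version = client_driver
    for cand_name, cand_version in host_drivers:
        if mode == "exact" and (cand_name, cand_version) == (name, version):
            return (cand_name, cand_version)
        if (mode == "partial" and cand_name == name
                and cand_version.split(".")[0] == version.split(".")[0]):
            return (cand_name, cand_version)
        if mode == "name" and cand_name == name:
            return (cand_name, cand_version)
    return None  # caller falls back to basic printing
```

An administrator tuning for feature support would run "exact"; one tuning for coverage across mixed driver versions would accept "partial" or "name" matches.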
This feature is available in all AWS Regions where Amazon WorkSpaces Personal is offered. Advanced printer redirection is supported on Windows WorkSpaces with Windows clients only, and requires WorkSpaces Agent version 2.2.0.2116 or later and Windows client version 5.31 or later. Matching printer drivers must be installed on both the WorkSpace and the client device.
For more information about advanced printer redirection in Amazon WorkSpaces, see Configure Printer Support for DCV in the Amazon WorkSpaces Administration Guide, or visit the Amazon WorkSpaces page to learn more about virtual desktop solutions from AWS.
Source: aws.amazon.com

AWS Marketplace expands AMI self-service listing experience to FPGA products

AWS Marketplace now offers a self-service listing experience for sellers listing Amazon Machine Image (AMI) products with FPGA (Field Programmable Gate Array) images. This new capability eliminates the previous dependency on manual Product Load Forms and accelerates time to market for AWS partners that offer specialized hardware accelerators using FPGA technology on supported Amazon F2 instance types. With this launch, sellers can now create and manage AMIs with Amazon FPGA images using a new UI experience or programmatically through the AWS Marketplace Catalog API. During listing creation, sellers are guided through a step-by-step workflow to fill in required information about their listings, including up to 15 Amazon FPGA images. The self-service experience includes comprehensive inline validation and error messages to help sellers identify and resolve configuration issues before submission, streamlining the publishing process and improving speed to market. To learn more, see the AWS Marketplace Seller Guide and the AWS Marketplace Catalog API guide. To get started, visit the server product page in the AWS Marketplace Management Portal.
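Programmatic listing goes through the Catalog API's change-set mechanism. The sketch below builds a change-set payload and enforces the 15-image limit mentioned above; the `AWSMarketplace` catalog, `AmiProduct@1.0` entity type, and `AddDeliveryOptions` change type are real Catalog API values, but the DetailsDocument fields for FPGA images and the identifiers are illustrative placeholders, so consult the Seller Guide for the actual schema.

```python
def build_fpga_ami_changeset(entity_id, version_title, fpga_image_ids):
    """Build a Catalog API change-set payload (DetailsDocument fields
    for FPGA images are illustrative, not the official schema)."""
    if len(fpga_image_ids) > 15:
        # A listing supports at most 15 Amazon FPGA images.
        raise ValueError("a listing supports at most 15 FPGA images")
    return {
        "Catalog": "AWSMarketplace",
        "ChangeSet": [
            {
                "ChangeType": "AddDeliveryOptions",
                "Entity": {"Type": "AmiProduct@1.0", "Identifier": entity_id},
                "DetailsDocument": {
                    "Version": {"VersionTitle": version_title},
                    # Hypothetical field name for the FPGA image list:
                    "DeliveryOptions": [{"Details": {"FpgaImages": fpga_image_ids}}],
                },
            }
        ],
    }
```

The resulting payload would be submitted with `boto3.client("marketplace-catalog").start_change_set(**payload)`; the entity identifier and image IDs in any real call come from your own listing.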
Source: aws.amazon.com

Beyond boundaries: The future of Azure Storage in 2026

2025 was a pivotal year in Azure Storage, and we’re heading into 2026 with a clear focus on helping customers turn AI into real impact. As outlined in last December’s Azure Storage innovations: Unlocking the future of data, Azure Storage is evolving as a unified intelligent platform that supports the full AI lifecycle at enterprise scale with the performance modern workloads demand.

Looking ahead to 2026, our investments span the full breadth of that lifecycle as AI becomes foundational across every industry. We are advancing storage performance for frontier model training, delivering purpose‑built solutions for large‑scale AI inferencing and emerging agentic applications, and empowering cloud‑native applications to operate at agentic scale. In parallel, we are simplifying adoption for mission‑critical workloads, lowering TCO, and deepening partnerships to co‑engineer AI‑optimized solutions with our customers.

We’re grateful to our customers and partners for their trust and collaboration, and excited to shape the next chapter of Azure Storage together in the year ahead.

Extending from training to inference

AI workloads extend from large, centralized model training to inference at scale, where models are applied continuously across products, workflows, and real-world decision making. LLM training continues to run on Azure, and we’re investing to stay ahead by expanding scale, improving throughput, and optimizing how model files, checkpoints, and training datasets flow through storage.

Innovations that helped OpenAI operate at unprecedented scale are now available to all enterprises. Blob scaled accounts allow storage to scale across hundreds of scale units within a region, handling the millions of objects required to use enterprise data as training and tuning datasets for applied AI. Our partnership with NVIDIA DGX on Azure shows that this scale translates into real-world inference. DGX Cloud was co-engineered to run on Azure, pairing accelerated compute with high-performance storage, Azure Managed Lustre (AMLFS), to support LLM research, automotive, and robotics applications. AMLFS provides the best price-performance for keeping GPU fleets continuously fed. We recently released preview support for 25 PiB namespaces and up to 512 GBps of throughput, making AMLFS a best-in-class managed Lustre deployment in the cloud.

As we look ahead, we’re deepening integration across popular first- and third-party AI frameworks such as Microsoft Foundry, Ray, Anyscale, and LangChain, enabling seamless connections to Azure Storage out of the box. Our native Azure Blob Storage integration within Foundry enables enterprise data consolidation into Foundry IQ, making blob storage the foundational layer for grounding enterprise knowledge, fine-tuning models, and serving low-latency context to inference, all under the tenant’s security and governance controls.

From training through full-scale inferencing, Azure Storage supports the entire agent lifecycle: distributing large model files efficiently, storing and retrieving long-lived context, and serving data from RAG vector stores. By optimizing for each pattern end to end, Azure Storage has performant solutions for every stage of AI inference.
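The RAG-serving stage reduces, at its core, to nearest-neighbor lookup over embeddings. A minimal in-memory sketch of that lookup is below; a real deployment would use a vector index over blob-backed data rather than a Python dict, and the embeddings here are toy two-dimensional vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the ids of the k documents most similar to the query.

    `store` maps document id -> embedding vector; this stands in for
    a vector index serving a RAG pipeline.
    """
    ranked = sorted(
        store.items(),
        key=lambda item: cosine(query_vec, item[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in ranked[:k]]
```

Retrieved ids would then be used to fetch the underlying documents and assemble the low-latency context served to the model at inference time.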

Evolving cloud native applications for agentic scale

As inference becomes the dominant AI workload, autonomous agents are reshaping how cloud native applications interact with data. Unlike human-driven systems with predictable query patterns, agents operate continuously, issuing an order of magnitude more queries than traditional users ever did. This surge in concurrency stresses databases and storage layers, pushing enterprises to rethink how they architect new cloud native applications.

Azure Storage is working with SaaS leaders like ServiceNow, Databricks, and Elastic to optimize for agentic scale, leveraging our block storage portfolio. Looking forward, Elastic SAN becomes a core building block for these cloud native workloads, starting with transforming Microsoft’s own database solutions. It offers fully managed block storage pools that let different workloads share provisioned resources, with guardrails for hosting multi-tenant data. We’re pushing the boundaries on maximum scale units to enable denser packing and giving SaaS providers capabilities to manage agentic traffic patterns.

As cloud native workloads adopt Kubernetes to scale rapidly, we are simplifying the development of stateful applications through our Kubernetes-native storage orchestrator, Azure Container Storage (ACStor), alongside CSI drivers. Our recent ACStor release signals two directional changes that will guide upcoming investments: adopting the Kubernetes operator model to perform more complex orchestration, and open-sourcing the code base to collaborate and innovate with the broader Kubernetes community.

Together, these investments establish a strong foundation for the next generation of cloud native applications where storage must scale seamlessly and deliver high efficiency to serve as the data platform for agentic scale systems.

Breaking price performance barriers for mission critical workloads

In addition to evolving AI workloads, enterprises continue to grow their mission critical workloads on Azure.

SAP and Microsoft are partnering to expand core SAP performance while introducing AI-driven agents like Joule that enrich Microsoft 365 Copilot with enterprise context. Azure’s latest M-series advancements add substantial scale-up headroom for SAP HANA, pushing disk storage performance to ~780k IOPS and 16 GB/s throughput. For shared storage, Azure NetApp Files (ANF) and Azure Premium Files deliver the high-throughput NFS/SMB foundations SAP landscapes rely on, while optimizing TCO with ANF Flexible Service Level and Azure Files Provisioned v2. Coming soon, we will introduce the Elastic ZRS storage service level in ANF, bringing zone-redundant high availability and consistent performance through synchronous replication across availability zones, leveraging Azure’s ZRS architecture, without added operational complexity.

Similarly, Ultra Disks have become foundational to platforms like BlackRock’s Aladdin, which must react instantly to market shifts and sustain high performance under heavy load. With average latency well under 500 microseconds, support for 400K IOPS, and 10 GB/s throughput, Ultra Disks enable faster risk calculation, more agile portfolio management, and resilient performance on BlackRock’s highest-volume trading days. When paired with Ebsv6 VMs, Ultra Disks can reach 800K IOPS and 14 GB/s for the most demanding mission critical workloads. And with flexible provisioning, customers can tune performance precisely to their needs while optimizing TCO.

These combined investments give enterprises a more resilient, scalable, and cost-efficient platform for their most critical workloads.

Designing for new realities of power and supply

The global AI surge is straining power grids and hardware supply chains. Rising energy costs, tight datacenter budgets, and industry-wide HDD/SSD shortages mean organizations can’t scale infrastructure simply by adding more hardware. Storage must become more efficient and intelligent by design.

We’re streamlining the entire stack to maximize hardware performance with minimal overhead. Combined with intelligent load balancing and cost-effective tiering, we are uniquely positioned to help customers scale storage sustainably even as power and hardware availability become strategic constraints. With continued innovations on Azure Boost Data Processing Units (DPUs), we expect step-function gains in storage speeds and feeds at even lower per-unit energy consumption.

AI pipelines can span on-premises estates, neocloud GPU clusters, and cloud, yet many of these environments are limited by power capacity or storage supply. When these limits become a bottleneck, we make it easy to shift workloads to Azure. We’re investing in integrations that make external datasets first-class citizens in Azure, enabling seamless access to training, fine-tuning, and inference data wherever it lives. As cloud storage evolves into AI-ready datasets, Azure Storage is introducing curated, pipeline-optimized experiences to simplify how customers feed data into downstream AI services.

Accelerating innovations through the storage partner ecosystem

We can’t do this alone. Azure Storage works closely with strategic partners to push inference performance to the next level. In addition to the self-publishing capabilities available in Azure Marketplace, we go a step further, dedicating engineering expertise to co-engineer highly optimized, deeply integrated solutions with partners.

In 2026, you will see more co-engineered solutions like Commvault Cloud for Azure, Dell PowerScale, Azure Native Qumulo, Pure Storage Cloud, Rubrik Cloud Vault, and Veeam Data Cloud. We will focus on hybrid solutions with partners like VAST Data and Komprise to enable data movement that unlocks the power of Azure AI services and infrastructure—fueling impactful customer AI Agent and Application initiatives.

To an exciting new year with Azure Storage

As we move into 2026, our vision remains simple: help every customer unlock more value from their data with storage that is faster, smarter, and built for the future. Whether powering AI, scaling cloud native applications, or supporting mission critical workloads, Azure Storage is here to help you innovate with confidence in the year ahead.

The post Beyond boundaries: The future of Azure Storage in 2026 appeared first on Microsoft Azure Blog.
Source: Azure