Unlimited access to Docker Hardened Images: Because security should be affordable, always

Every organization we speak with shares the same goal: to deliver software that is secure and free of CVEs. Near-zero CVEs is the ideal state. But achieving that ideal is harder than it sounds, because paradoxes exist at every step. Developers patch quickly, yet new CVEs appear faster than fixes can ship. Organizations standardize on open source, but every dependency introduces fresh exposure. Teams are asked to move at startup speed, while still delivering the assurances expected in enterprise environments.

The industry has tried to close this gap and chase the seemingly impossible goal of near-zero CVEs. Scanners only add to the challenge, flooding teams with alerts that are more noise than signal. Dashboards spotlight problems but rarely deliver solutions. Hardened images hold real promise, giving teams a secure starting point with container images free of known vulnerabilities. But too often, they’re locked behind a premium price point. Even when organizations can pay, the costs don’t scale, leaving uneven protection and persistent risk.

That changes today. We’re introducing unlimited access to the Docker Hardened Images catalog, making near-zero CVEs a practical reality for every team at an affordable price. With a single Hardened Images subscription, every team can access the full catalog: unlimited, secured, and always up to date. Logged-in users will be able to access a one-click free trial, so teams can see the impact right away.

This launch builds on something we’ve done before. With Docker Hub, we made containers accessible to every developer, everywhere. What was once complex, niche, and difficult to adopt became simple and universal. Now, Docker can play that same role in securing the ecosystem. Every developer’s journey, whether they realize it or not, often begins with Docker Hub, and the first step in that journey should be secure by default, with hardened, trusted images accessible to everyone, without a premium price tag.

What makes Docker Hardened Images different

Unlimited access to the Docker Hardened Images catalog isn’t just another secure image library; it’s a comprehensive foundation for modern development. The catalog covers the full spectrum of today’s needs: ML and AI images like Kubeflow, languages and runtimes such as Python, databases like PostgreSQL, application frameworks like NGINX, and core infrastructure services including Kafka. It even includes FedRAMP-ready variants, engineered to align out of the box with U.S. federal security requirements.

What truly sets Docker Hardened Images apart is our hardening approach. Every image is built directly from source, patched continuously from upstream, and hardened by stripping away unnecessary components. This minimal approach not only reduces the attack surface but also delivers some of the smallest images available, up to 95% smaller than alternatives. Each image also includes VEX (Vulnerability Exploitability eXchange) support, helping teams cut through noise and focus only on vulnerabilities that truly matter.

Docker Hardened Images are compatible with widely adopted distros like Alpine and Debian. Developers already know and trust these, so the experience feels familiar from day one. Developers especially appreciate how flexible the solution is: migrating is as simple as changing a single line in a Dockerfile. And with customization, teams can extend hardened images even further, adding system packages, certifications, scripts, and tools without losing the hardened baseline.
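To make the one-line migration concrete, here is a minimal sketch of what that Dockerfile change could look like. The hardened repository path shown is illustrative: actual image names depend on the namespace your organization mirrors the catalog into.

```dockerfile
# Before: a standard upstream base image
# FROM python:3.13-slim

# After: the hardened equivalent (illustrative path — the actual
# repository name depends on your organization's catalog namespace)
FROM <your-namespace>/dhi-python:3.13

COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

The rest of the Dockerfile stays the same, which is what keeps migration low-risk: only the base layer changes.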

And this isn’t just our claim. The quality and rigor of Docker Hardened Images were independently validated by SRLabs, a cybersecurity consultancy, which confirmed that the images are signed, rootless by default, and ship with SBOM + VEX. Their assessment found no root escapes or high-severity breakouts, validated Docker’s 95% reduction in attack surface, and highlighted the 7-day patch SLA and build-to-sign pipeline as clear strengths over typical community images.

Making security universal

By making hardened, trusted images accessible to everyone, we ensure every developer’s journey begins secure by default, and every organization, from startups to enterprises, can pursue near-zero CVEs without compromise.

Talk to us to learn more

Explore how Docker Hardened Images can be a good fit for every team

Start a one-click, free 30-day trial (requires Hub login) to see the difference for yourself

Source: https://blog.docker.com/feed/

IBM Granite 4.0 Models Now Available on Docker Hub

Developers can now discover and run IBM’s latest open-source Granite 4.0 language models from the Docker Hub model catalog, and start building in minutes with Docker Model Runner. Granite 4.0 pairs strong, enterprise-ready performance with a lightweight footprint, so you can prototype locally and scale confidently.

The Granite 4.0 family is designed for speed, flexibility, and cost-effectiveness, making it easier than ever to build and deploy generative AI applications.

About Docker Hub

Docker Hub is the world’s largest registry for containers, trusted by millions of developers to find and share high-quality container images at scale. Building on this legacy, it is now also becoming a go-to place for developers to discover, manage, and run local AI models. Docker Hub hosts our curated local AI model collection, packaged as OCI Artifacts and ready to run. You can easily download, share, and upload models on Docker Hub, making it a central hub for both containerized applications and the next wave of generative AI.

Why Granite 4.0 on Docker Hub matters

Granite 4.0 isn’t just another set of language models. It introduces a next-generation hybrid architecture that delivers incredible performance and efficiency, even when compared to larger models.

Hybrid architecture. Granite 4.0 cleverly combines the linear-scaling efficiency of Mamba-2 with the precision of transformers. Select models also leverage a Mixture of Experts (MoE) strategy: instead of using the entire model for every task, they activate only the necessary “experts”, or subsets of parameters. This results in faster processing and memory usage reductions of more than 70% compared to similarly sized traditional models.

“Theoretically Unconstrained” Context. By removing positional encoding, Granite 4.0 can process incredibly long documents, with context lengths tested up to 128,000 tokens. Context length is limited only by your hardware, opening up powerful use cases for document analysis and Retrieval-Augmented Generation (RAG).

Fit-for-Purpose Sizes. The family includes several sizes, from the 3B parameter Micro models to the 32B parameter Small model, allowing you to pick the perfect balance of performance and resource usage for your specific needs.

What’s in the Granite 4.0 family

Sizes and targets (8-bit, batch=1, 128K context):

H-Small (32B total, ~9B active): Workhorse for RAG and agents; runs on L4-class GPUs.

H-Tiny (7B total, ~1B active): Latency-friendly for edge/local; consumer-grade GPUs like RTX 3060.

H-Micro (3B, dense): Ultra-light for on-device and concurrent agents; extremely low RAM footprint.

Micro (3B, dense): Traditional dense option when Mamba-2 support isn’t available.

In practice, these footprints mean you can run capable models on accessible hardware – a big win for local development and iterative agent design.

Run in seconds with Docker Model Runner

Docker Model Runner gives you a portable, reproducible way to run local models with an OpenAI-compatible API from laptop dev to CI and cloud.

# Example: start a chat with Granite 4.0 Micro
docker model run ai/granite-4.0-micro

Prefer a different size? Pick your Granite 4.0 variant in the Model Catalog and run it with the same command style. See the Model Runner guide for enabling the runner, chat mode, and API usage.
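Since Model Runner exposes an OpenAI-compatible API, you can also call the model over HTTP. A sketch using `curl`, assuming host-side TCP access is enabled; the port and endpoint path shown here may differ in your setup, so check the Model Runner guide:

```shell
# Chat with Granite 4.0 Micro via Model Runner's OpenAI-compatible API.
# Assumes TCP host access is enabled (port 12434 is illustrative).
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/granite-4.0-micro",
        "messages": [
          {"role": "user", "content": "Summarize RAG in one sentence."}
        ]
      }'
```

Because the API shape matches OpenAI's, existing SDKs and tools can usually be pointed at the local endpoint by changing only the base URL.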

What you can build (fast)

Granite’s lightweight and versatile nature makes it perfect for a wide range of applications. Combined with Docker Model Runner, you can easily build and scale projects like:

Document Summarization and Analysis: Process and summarize long legal contracts, technical manuals, or research papers with ease.

Smarter RAG Systems: Build powerful chatbots and assistants that pull information from external knowledge bases, CRMs, or document repositories.

Complex Agentic Workflows: Leverage the compact models to run multiple AI agents concurrently for sophisticated, multi-step reasoning tasks.

Edge AI Applications: Deploy Granite 4.0 H-Tiny in resource-constrained environments for on-device chatbots or smart assistants that don’t rely on the cloud.

Join the Open-Source AI Community

This partnership is all about empowering developers to build the next generation of AI applications. The Granite 4.0 models are available under a permissive Apache 2.0 license, giving you the freedom to customize and use them commercially.

We invite you to explore the models on Docker Hub and start building today.

To help us improve the developer experience for running local models, head over to our Docker Model Runner repository on GitHub and get involved:

Star the repo to show your support

Fork it to experiment

Consider contributing back with your own improvements

Granite 4.0 is here. Run it, build with it, and see what’s possible with Docker Model Runner.
Source: https://blog.docker.com/feed/

New Compute Optimized Amazon EC2 C8i and C8i-flex instances

AWS is announcing the general availability of new compute optimized Amazon EC2 C8i and C8i-flex instances. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. C8i and C8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than C7i and C7i-flex instances, with even higher gains for some workloads: up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores.

C8i-flex instances are the easiest way to get price-performance benefits for a majority of compute intensive workloads like web and application servers, databases, caches, Apache Kafka, Elasticsearch, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources. C8i instances are a great choice for all compute intensive workloads, especially those that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications.

C8i and C8i-flex instances are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Spain). To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new C8i and C8i-flex instances, visit the AWS News blog.
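Launching one of the new sizes from the AWS CLI follows the usual `run-instances` pattern. A sketch with placeholder values — the AMI ID, key pair name, and size are illustrative and must be replaced with your own:

```shell
# Launch a single C8i instance in a region where the family is available.
# ami-xxxxxxxxxxxxxxxxx and my-key are placeholders — substitute your own.
aws ec2 run-instances \
  --region us-east-1 \
  --instance-type c8i.xlarge \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --key-name my-key \
  --count 1
```

The same command works for C8i-flex by swapping the instance type, e.g. `c8i-flex.large`.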
Source: aws.amazon.com

Amazon Connect now enables you to customize service level calculations

Amazon Connect now enables you to customize service level calculations to your specific needs. Supervisors and managers can define time thresholds for when a contact is considered to meet service level standards and select which contact outcomes to include in the calculation. For example, managers can choose to count callback contacts, exclude contacts transferred out while waiting in queue, and exclude short abandons using a configurable time threshold. Customization of service level calculation is available from the metric configuration section on the analytics dashboards.

With this feature, supervisors and managers can now create a service level metric calculation that better aligns with their business operations. With a customized view of service level performance, operations managers can assess how effectively they have met their service standards. This new feature is available in all AWS regions where Amazon Connect is offered.

To learn more about customizing your service level calculation, visit the Admin Guide. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.
Source: aws.amazon.com

Amazon Connect launches new case APIs to link related cases, add custom related items, and search across them

Amazon Connect now allows you to programmatically enrich case data by linking related cases, attaching custom related items, and searching across them, so agents have the full context they need to resolve issues faster. For example, an airline can link all customer cases tied to a single flight cancellation to coordinate rebookings and send proactive updates, while a retailer can attach order and shipment details to a refund request to deliver faster resolutions and keep customers informed.

Amazon Connect Cases is available in the following AWS regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Africa (Cape Town). To learn more and get started, visit the Amazon Connect Cases webpage and documentation.
Source: aws.amazon.com

Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.34

Kubernetes version 1.34 introduced several new features and bug fixes, and AWS is excited to announce that you can now use Amazon Elastic Kubernetes Service (EKS) and Amazon EKS Distro to run Kubernetes version 1.34. Starting today, you can create new EKS clusters using version 1.34 and upgrade existing clusters to version 1.34 using the EKS console, the eksctl command line interface, or through an infrastructure-as-code tool.

Kubernetes version 1.34 introduces several key improvements, including projected service account tokens for kubelet image credential providers, helping improve security for container image pulls, and Pod-level resource requests and limits for simplified multi-container resource management. The release also introduces Dynamic Resource Allocation (DRA) prioritized alternatives, enabling workloads to define prioritized device requirements for improved resource scheduling. To learn more about the changes in Kubernetes version 1.34, see our documentation and the Kubernetes project release notes.

EKS now supports Kubernetes version 1.34 in all the AWS Regions where EKS is available, including the AWS GovCloud (US) Regions. You can learn more about the Kubernetes versions available on EKS and instructions to update your cluster to version 1.34 by visiting the EKS documentation. You can use EKS cluster insights to check if there are any issues that can impact your Kubernetes cluster upgrades. EKS Distro builds of Kubernetes version 1.34 are available through ECR Public Gallery and GitHub. Learn more about the EKS version lifecycle policies in the documentation.
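For the eksctl path mentioned above, a control-plane upgrade is a single command. A sketch with an illustrative cluster name; note that managed node groups and add-ons are upgraded in separate steps:

```shell
# Upgrade an existing EKS cluster's control plane to Kubernetes 1.34.
# "my-cluster" is a placeholder — use your actual cluster name.
# Without --approve, eksctl performs a dry run and only prints the plan.
eksctl upgrade cluster --name my-cluster --version 1.34 --approve
```

After the control plane is on 1.34, upgrade node groups (e.g. with `eksctl upgrade nodegroup`) so kubelet versions stay within the supported skew.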
Source: aws.amazon.com