AWS Deadline Cloud now supports configurable job scheduling modes for queues

Today, AWS Deadline Cloud announces support for configurable job scheduling modes, giving you control over how workers are distributed across jobs in a queue. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design. Previously, all available workers were assigned to the highest-priority, earliest-submitted job first, which could delay feedback on other submitted jobs. You can now choose from three scheduling modes when creating or updating a queue: priority FIFO (the existing default behavior), priority balanced (workers are distributed evenly across all jobs at the highest priority level), and weighted balanced (jobs are weighted based on configurable parameters including priority, error count, submission time, and rendering task count). Priority balanced and weighted balanced scheduling modes enable artists to get immediate feedback on their submissions without waiting for earlier jobs to complete. Configurable job scheduling modes are available in all AWS Regions where AWS Deadline Cloud is supported. To get started, visit the Deadline Cloud developer guide.
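
In practice, the mode is set on the queue itself. As a minimal sketch only: the aws deadline update-queue command exists, but the scheduling-mode parameter and its values below are assumptions based on this announcement, so confirm the actual field name in the developer guide.

# Hypothetical sketch: --scheduling-mode and its values are assumed from
# this announcement; check the Deadline Cloud developer guide for the
# released parameter name. Farm and queue IDs are placeholders.
aws deadline update-queue \
  --farm-id farm-1234567890abcdef \
  --queue-id queue-1234567890abcdef \
  --scheduling-mode PRIORITY_BALANCED
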
Source: aws.amazon.com

Amazon CloudWatch launches OTel Container Insights for Amazon EKS (Preview)

Amazon CloudWatch introduces Container Insights with OpenTelemetry metrics for Amazon EKS, available in public preview. Building on the existing Container Insights experience, this capability provides deeper visibility into EKS clusters by collecting more metrics from widely adopted open source and AWS collectors and sending them to CloudWatch using the OpenTelemetry Protocol (OTLP). Each metric is automatically enriched with up to 150 descriptive labels, including Kubernetes metadata and customer-defined labels such as team, application, or business unit. Curated dashboards in the Container Insights console present cluster, node, and pod health with the ability to aggregate and filter metrics by instance type, availability zone, node group, or any custom label. For deeper analysis, customers can write queries using the Prometheus Query Language (PromQL) in CloudWatch Query Studio. The CloudWatch Observability EKS add-on provides one-click installation through the Amazon EKS console, or can be deployed through CloudFormation, CDK, or Terraform. The add-on automatically detects accelerated compute hardware including NVIDIA GPUs, Elastic Fabric Adapters, and AWS Trainium and Inferentia accelerators. For existing customers of the add-on, CloudWatch supports publishing both OpenTelemetry and existing Container Insights metrics at the same time. Container Insights with OpenTelemetry metrics is available in public preview in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Singapore), and Europe (Ireland). There is no charge for OpenTelemetry metrics from Container Insights during preview. To get started, see Container Insights with OpenTelemetry metrics for Amazon EKS in the Amazon CloudWatch documentation.
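
For clusters managed from the command line, a minimal sketch of enabling the add-on looks like the following; the cluster name is a placeholder, and the add-on still needs IAM permissions for CloudWatch as described in the documentation.

# Install the CloudWatch Observability EKS add-on on an existing cluster
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name amazon-cloudwatch-observability
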
Source: aws.amazon.com

Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity

Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity, expanding beyond the IPv4 connectivity that was previously available. This gives you greater flexibility in how your applications connect to your serverless caches.
When creating an ElastiCache Serverless cache, you can now choose from three network type options — IPv4, IPv6, or dual stack. With dual stack connectivity, your cache accepts connections over both IPv4 and IPv6 simultaneously, making it ideal for migrating to IPv6 gradually while maintaining backward compatibility with applications connecting over IPv4. IPv6 connectivity enables you to use IPv6-only subnets with your Serverless caches, eliminating the need for IPv4 addresses and helping you meet compliance requirements for IPv6 adoption.
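
As a rough sketch of cache creation with an explicit network type: create-serverless-cache is the existing CLI command, but the --network-type flag below is an assumption carried over from the equivalent node-based ElastiCache parameter, so verify it against the current CLI reference.

# Sketch: --network-type (ipv4 | ipv6 | dual_stack) is assumed from the
# node-based ElastiCache APIs; confirm in the CLI reference.
aws elasticache create-serverless-cache \
  --serverless-cache-name my-cache \
  --engine valkey \
  --network-type dual_stack
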
IPv6 and dual stack connectivity for ElastiCache Serverless is available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions, at no additional charge. To learn more, visit the Amazon ElastiCache product page and Choosing a network type for serverless caches in the Amazon ElastiCache documentation.
Source: aws.amazon.com

Docker Offload now Generally Available: The Full Power of Docker, for Every Developer, Everywhere.

Docker Desktop is one of the most widely used developer tools in the world, yet for millions of enterprise developers, running it simply hasn’t been an option. The environments they rely on, such as virtual desktop infrastructure (VDI) platforms and managed desktops, often lack the resources or capabilities needed to run Docker Desktop.

As enterprises scaled to support remote and contractor teams, these environments became the default, effectively blocking many developers from using Docker Desktop altogether. This slowed teams down and cut developers off from faster builds, the latest Docker features, and meaningful productivity gains. As a result, teams were forced into expensive workarounds that are difficult to secure and painful to maintain. 

Today, that changes.

Docker Offload is a fully managed cloud service that moves the container engine into Docker’s secure cloud, allowing developers to run Docker from any environment without changing their existing workflows. As of today, Docker Offload is generally available.

What this means in practice is simple. Developers keep using the same terminal, the same docker run commands, and the same Docker Desktop UI they are already familiar with. The only thing that has changed is where the engine runs, and by moving it to the cloud, Docker Desktop now works in every environment that once blocked it.

How It Works

When you run Docker Offload, it automatically routes the container engine to Docker's secure cloud. The developer opens Docker Desktop exactly as they always have. No configuration. No retraining or reconfiguring applications for new tools. Containers run in Docker's cloud infrastructure, and everything, including bind mounts, port forwarding, and Docker Compose, works exactly as it does locally.
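
A minimal session sketch, based on the start/stop commands in the Docker Offload documentation (verify against your installed version):

# Start an Offload session; subsequent docker commands run in the cloud
docker offload start
docker run --rm -p 8080:80 nginx
# End the session when done
docker offload stop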

Every connection runs over an encrypted tunnel on SOC 2 Certified infrastructure, and session activity is logged centrally, giving security teams the audit trail they already require without any changes to existing tooling, firewall rules, or endpoint policies. Every session runs in a temporary, isolated environment without data persistence, and closes cleanly.

What Can You Do With Docker Offload?

Run full Docker in any environment

Every Docker CLI command and every Docker Desktop feature works in VDI, locked-down laptops, remote workstations, and policy-restricted networks. Developers are productive from day one, using the exact CLI commands, workflows, and muscle memory they already have.

Same Infrastructure. New Capabilities. 

Offload deploys alongside your existing VDI infrastructure without touching a single piece of it. Infrastructure and platform teams get a clean drop-in: existing network segmentation, IAM boundaries, and access control policies all stay exactly in place. Centralized admin controls, SSO, and per-user access management are built in from day one. 

Keep security non-negotiable

Dedicated cloud sessions are destroyed at every session end, data stays clean, developer devices stay completely unaffected, and your security perimeter stays intact. Offload operates within your existing security architecture, not around it. It is SOC 2 Certified, with deployment options that scale from multi-tenant VM-level isolation up to a dedicated single-tenant VPC with private network connectivity for regulated environments.

Unblock developers in minutes

Offload detects constrained environments automatically and activates without developer configuration. Teams go from blocked to building without tickets, setup queues, or IT escalations. When nothing changes for the developer, adoption actually happens.

Current Deployment Options

Docker Offload is currently available in two deployment options.

Multi-Tenant provides VM-level isolation on Docker-managed infrastructure. It’s the fastest path for most enterprise teams: no ops overhead, no infrastructure to maintain, productive from the moment it’s enabled.

Single-Tenant provides a dedicated VPC with private network access, which is important for organizations in Finance, Healthcare, Government, and other regulated industries. Traffic never traverses the public internet, meeting the network isolation requirements most regulated enterprises enforce as a baseline. For security architects evaluating data residency and compliance requirements, this is the deployment model built for you.

Docker Offload is an add-on to Docker Business, available now through Docker's Sales Team.

Coming Soon

Today’s launch addresses the environment problem. Developers in managed and constrained environments can finally run Docker, without workarounds and without compromise. But we’re not stopping there. Also shipping this year:

Single-Tenant Bring-Your-Own-Cloud (BYOC): Compute runs in your cloud account, your data never leaves your environment, and SOC 2 Certified security stays intact. 

CI/CD Pipeline Integration: Bring Offload to GitHub Actions, GitLab CI, and Jenkins to give every developer the same Docker experience in CI as locally, with cloud-based pipeline compute.

GPU-backed instances: Unlocking AI/ML workloads in managed environments for the first time.

The Road Ahead

Development has outgrown the local machine. Docker Offload closes that gap. Infrastructure teams keep their architecture intact. Security teams get the compliance they require. Developers keep the workflows they know. The full power of Docker, for every developer, everywhere. 

This is just the beginning. Learn more about the power of Docker Offload, explore our Docker Offload Docs, and reach out to the Docker Sales Team to start your journey with Offload.

Source: https://blog.docker.com/feed/

Gemma 4 is Here: Now Available on Docker Hub

Docker Hub is quickly becoming the home for AI models, serving millions of developers and bringing together a curated lineup that spans lightweight edge models to high-performance LLMs, all packaged as OCI artifacts.

Today, we’re excited to welcome Gemma 4, the latest generation of lightweight, state-of-the-art open models. Built on the same technology behind Gemini, Gemma 4 introduces three architectures that scale from low-power efficiency to high-end server performance.

By packaging models as OCI artifacts, models behave just like containers. They become versioned, shareable, and instantly deployable, with no custom toolchains required. You can pull ready-to-run models from Docker Hub, push your own, integrate with any OCI registry, and plug everything directly into your existing CI/CD pipelines using familiar tooling for security, access control, and automation.

And this is just the start. Over the next few weeks, Gemma 4 support is coming to Docker Model Runner, so you will not just discover models on Hub; you will be able to run, manage, and deploy them directly from Docker Desktop with the same simplicity you expect from Docker.

Docker Hub’s growing GenAI catalog already includes popular models like IBM Granite, Llama, Mistral, Phi, and SolarLLM, alongside apps like JupyterHub and H2O.ai, plus essential tools for inference, optimization, and orchestration.

What Docker Brings to Gemma 4

Gemma 4 expands what efficient, high-performance models can do. Docker makes them simple to run, share, and scale anywhere.

Run efficiently at the edge: Smaller Gemma 4 variants are optimized for on-device performance. Docker enables consistent deployment across laptops, edge devices, and local environments.

Scale performance with ease: From sparse to dense architectures, you can run any model like a container, making it easy to scale across cloud or on-prem infrastructure. 

One command to get started: Gemma 4 is just one command away:

docker model pull gemma4

No proprietary download tools. No custom authentication flows. Just the same pull, tag, push, and deploy workflow you already use.
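
Once Model Runner support lands, a session could look like the sketch below; docker model pull and docker model run are existing Model Runner commands, while the gemma4 model name is taken from the pull example above.

# Pull the model, then run a one-off prompt against it
docker model pull gemma4
docker model run gemma4 "Explain mixture-of-experts in two sentences."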

By bringing Gemma 4 to Docker Hub, you get powerful models with a familiar, production-ready workflow.

What’s New in Gemma 4?

Gemma 4 redefines what “small” models can do, with architectures optimized across multiple sizes and use cases:

Small & Efficient (E2B, E4B): Built for on-device performance with high throughput and low memory use.

Sparsely Activated (26B A4B): Mixture-of-Experts design delivers large-model quality with smaller-model speed.

Flagship Dense (31B): High-performance model with a 256K context window for long-context reasoning.

Key capabilities include multimodal support (text, image, audio), advanced reasoning with “thinking” tokens, and strong coding plus function-calling abilities.

Technical Specifications

| Model Name      | Type          | Total Params        | Input Modalities    | Context Window |
| --------------- | ------------- | ------------------- | ------------------- | -------------- |
| Gemma 4 E2B     | Dense (Small) | 5.1B                | Text, Vision, Audio | 128K           |
| Gemma 4 E4B     | Dense (Small) | 8.0B                | Text, Vision, Audio | 128K           |
| Gemma 4 26B A4B | MoE           | 26.8B (3.8B active) | Text, Vision        | 256K – 512K    |
| Gemma 4 31B     | Dense         | 31.3B               | Text, Vision        | 256K – 512K    |

Build the Future of AI with Docker Hub

The arrival of Gemma 4 on Docker Hub reinforces our commitment to making Docker Hub the best place to discover, share, and run AI models. Whether you are building a voice-activated mobile assistant or a large-scale document retrieval system, Docker Hub makes it simple to find the right model, pull it instantly, and run it anywhere.

Ready? Head over to Docker Hub to pull the models. Want to join the Docker Model Runner community? Please star, fork, and contribute to our GitHub repo.

Source: https://blog.docker.com/feed/

Defending Your Software Supply Chain: What Every Engineering Team Should Do Now

The software supply chain is under sustained attack. Not from a single threat actor or a single incident, but from an ecosystem-wide campaign that has been escalating for months and shows no signs of slowing down.

This week, axios, the HTTP client library downloaded 83 million times per week and present in roughly 80% of cloud environments, was compromised via a hijacked maintainer account. Two backdoored versions deployed platform-specific RATs attributed with high confidence to North Korea’s Lazarus Group. The malicious versions were live for approximately three hours. That was enough.

This follows the TeamPCP campaign in March, which weaponized Aqua Security’s Trivy vulnerability scanner, a security tool trusted by thousands of organizations, and cascaded the compromise into Checkmarx KICS, LiteLLM, Telnyx, and 141 npm packages via a self-propagating worm. Before that, the Shai-Hulud worm tore through the npm ecosystem in late 2025, and GlassWorm infected 400+ VS Code extensions, GitHub repos, and npm packages using invisible Unicode payloads.

The pattern is consistent across all of these incidents: attackers steal developer credentials, use them to poison trusted packages, and the compromised packages steal more credentials. It is self-reinforcing, it is accelerating, and it now has ransomware monetization pipelines behind it.

The common thread is implicit trust

If you look at what actually failed in each of these compromises, the answer is the same every time: trust was assumed where it should have been verified. Organizations trusted a container tag because it had a familiar name. They trusted a GitHub Action because it had a version number. They trusted a CI/CD secret because the workflow was authored by someone on the team. In every case, the attacker exploited the gap between assumed trust and verified trust.

The organizations that came through these incidents with minimal damage had already begun replacing implicit trust with explicit verification at every layer of their stack: verified base images instead of community pulls, pinned references instead of mutable tags, scoped and short-lived credentials instead of long-lived tokens, and sandboxed execution environments instead of wide-open CI runners. None of these are new ideas, and none of them are difficult to implement. What they require is a shift in default posture, from “trust unless there’s a reason not to” to “verify before you trust, and limit the blast radius when verification fails.”

Here is what we recommend every engineering organization should do, and what we practice ourselves at Docker:

Secure your foundations

Start with trusted base images

Don’t build on artifacts you can’t verify. Docker Hardened Images (DHI) are rebuilt from source by Docker with SLSA Build Level 3 attestations, signed SBOMs, and VEX metadata; they are free and open source under Apache 2.0. DHI was not affected by TeamPCP because its controlled build pipeline and built-in cooldown periods mean short-lived supply chain exploits (typically live for 1 to 6 hours) are screened out before they ever enter an image. There is no reason not to use these today.

Pin everything by digest or commit SHA

Mutable tags are not a security boundary; this is exactly how TeamPCP hijacked 75 of 76 trivy-action version tags. Pin GitHub Actions to full 40-character commit SHAs. Pin container images by sha256 digest. Pin package dependencies to exact versions and remove ^ and ~ ranges. If a reference can be overwritten without changing its name, it will be. Inventory every third-party GitHub Action in use across your org and enforce an allowlist policy; you cannot pin what you haven’t cataloged. Enable two-factor authentication on every package registry account in your organization (npm, PyPI, RubyGems, Docker Hub); account takeover of a single maintainer is how most of these attacks begin. Commit your lock files and use npm ci (or the equivalent in your package manager) in all CI pipelines; this prevents builds from silently pulling new versions that aren’t in your lock file.
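
A minimal sketch of what pinning looks like at each layer; every digest, SHA, and version below is a placeholder, not a real release:

# Dockerfile: pin the base image by sha256 digest, not a mutable tag
#   FROM node:22@sha256:<digest>
# GitHub Actions workflow: pin to a full 40-character commit SHA
#   uses: docker/build-push-action@<40-char-commit-sha>
# package.json: exact version, no ^ or ~ range
#   "axios": "<exact-version>"
# CI install: fail instead of silently resolving versions outside the lock file
npm ci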

Use cooldown periods for dependency updates

Both npm and Renovate support minimum release age settings that delay adoption of new versions. Most supply chain attacks have a shelf life of hours, and a 3-day cooldown eliminates the vast majority of them. We maintain a collection of safe default configurations for common package managers and tooling. Use it. Contribute to it.
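
For Renovate, the delay is a single setting. A minimal renovate.json sketch; the three-day window is an example, not a recommendation tuned to your risk profile:

{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["npm"],
      "minimumReleaseAge": "3 days"
    }
  ]
}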

Generate SBOMs at build time

When an incident hits, the first question is always: “are we affected?” If you use docker buildx to build your images, you can generate and attach SBOMs and provenance attestations during the build. Sign them. Store them alongside your images. When the next axios or Trivy happens, you check the build metadata rather than having to exec into live Kubernetes pods to figure out what’s running. Docker Scout can then continuously monitor those SBOMs against known vulnerabilities and policy violations.
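
A minimal buildx sketch; the image name is a placeholder, and attestations are stored in the registry, which is why the image is pushed rather than loaded locally:

# Build with an SBOM and max-detail provenance attached as attestations
docker buildx build \
  --sbom=true \
  --provenance=mode=max \
  -t registry.example.com/app:1.0 \
  --push .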

Secure your CI/CD

Treat every CI runner as a potential breach point

TeamPCP’s credential stealer ran inside CI/CD pipelines, dumping process memory and sweeping 50+ filesystem paths for secrets. Anything accessible to a workflow step is accessible to an attacker who compromises a dependency in that step. Avoid pull_request_target triggers in GitHub Actions unless absolutely necessary, and only with explicit security checks; this is the exact mechanism TeamPCP used to execute code in the context of the base repository with access to its secrets. Audit what secrets each workflow step can reach. If a scanning step has access to your deployment credentials, that is a blast radius problem, not a scanning problem.
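
A minimal GitHub Actions sketch of least-privilege defaults; the pinned SHA and script name are placeholders:

permissions:
  contents: read   # workflow-wide default: read-only token, no write access

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@<full-commit-sha>   # pinned by SHA (placeholder)
      - run: ./run-scanner.sh                      # hypothetical scan step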

Use short-lived, narrowly scoped credentials

The root cause of the Trivy breach was a single Personal Access Token with broad scope used across 33+ workflows. No single token should grant cross-repository or organization-wide access. Use a secrets manager, not environment variables scattered across workflow files. This is an area where the ecosystem, including Docker Hub, needs to continue improving, and we are actively working on it.
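
One common pattern is exchanging the CI provider's OIDC token for short-lived cloud credentials so no long-lived secret is stored at all. A sketch using GitHub's OIDC integration with AWS; the role ARN and commit SHA are placeholders:

permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read
steps:
  - uses: aws-actions/configure-aws-credentials@<full-commit-sha>
    with:
      role-to-assume: arn:aws:iam::123456789012:role/ci-deploy
      aws-region: us-east-1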

Use an internal mirror or artifact proxy

Place Artifactory, CodeArtifact, or Nexus between your build systems and public registries. Scan and approve versions before they reach your pipelines. Docker Business customers can also use Registry Access Management and Image Access Management to restrict which registries and images developers can pull, providing a lighter-weight policy layer for teams that don’t run a full artifact proxy.

Test dependency updates where production secrets don’t exist

Evaluate updates in dev/staging environments that have no access to production credentials. If a malicious package runs in staging, it steals nothing of value.

Secure your endpoints

This is where most of these attacks actually start. TeamPCP, Shai-Hulud, and now axios all deploy infostealers that sweep developer machines for credentials stored in dotfiles, environment variables, SSH keys, browser sessions, and cloud configs. Protecting CI/CD pipelines matters, but if the developer machine that authors those pipelines is compromised, the attacker inherits whatever that developer can reach.

Deploy canary tokens

Place fake credentials across your fleet (AWS keys, API tokens, SSH keys) that serve no purpose other than to alert you when they’re exfiltrated. If an infostealer sweeps a machine, canary tokens fire before the real credentials are used. Tools like Tracebit and Canarytokens make this trivial. If you have an MDM solution (Jamf, Intune, JumpCloud), push canaries to every managed device. We deployed this across our fleet in under a day.

Clean up credential sprawl

Audit ~/.ssh/, ~/.aws/credentials, ~/.docker/config.json, .env files, and shell histories for hardcoded secrets. Move everything to a password manager or secrets vault (1Password, HashiCorp Vault). Passphrase-protect all SSH keys. An infostealer that lands on a machine with no cleartext credentials gets nothing useful. Also audit the extensions and plugins installed across your developer tools (IDE extensions, browser extensions, and coding agent extensions such as skills, plugins, and MCP servers); these tend to run with developer-level permissions, and most marketplaces do not re-review updates after initial publication.
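
A quick, deliberately incomplete sweep of the locations named above can surface the worst offenders; the patterns are illustrative only:

# Illustrative audit: look for likely cleartext secrets in common spots
grep -nEi 'key|token|secret|password' \
  ~/.aws/credentials ~/.docker/config.json ~/.bash_history 2>/dev/null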

Deploy EDR with behavioral detection

Endpoint detection and response tools should cover developer machines and CI runners, with detections tuned for credential sweeping, persistence mechanisms, and unusual process behavior rather than just known malware signatures.

Secure your AI development

AI coding agents are compounding supply chain risk in ways the industry is only beginning to appreciate. Agents install packages, modify configs, make API calls, and spin up containers with developer-level access. A compromised dependency pulled by an agent has the same blast radius as a compromised developer machine, and the people using these agents now include non-developers who may not recognize suspicious behavior.

Run agents in sandboxed environments

Docker Sandboxes (sbx) run AI coding agents like Claude Code, Gemini CLI, Codex, and others inside isolated microVMs. Each sandbox gets its own kernel, filesystem, Docker Engine, and network, completely separated from your host. Credentials are injected into HTTP headers by the host proxy and never enter the VM directly. Network access is deny-by-default, with explicit allowlists. If a compromised dependency runs inside a sandbox, it cannot reach your host filesystem, your Docker daemon, your other containers, or any domain you haven’t explicitly approved.
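
A sketch of what launching an agent in a sandbox looks like; the exact subcommand shape is an assumption based on the Docker Sandboxes documentation, so verify it against your installed version:

# Assumed CLI shape; confirm with `docker sandbox --help`.
# Runs the agent in an isolated microVM scoped to the current directory.
docker sandbox run claude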

Govern your MCP servers

Model Context Protocol servers are the new unvetted dependency. They run with broad permissions, connect AI agents to internal systems, and 43% of analyzed MCP servers have command injection flaws. Use signed, hardened images for MCP servers. Docker maintains 300+ verified MCP server images with the same SLSA/SBOM standards as DHI. Docker’s MCP Gateway provides centralized proxy, policy enforcement, secret blocking, and audit logging for all agent-to-tool traffic.
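
A sketch using the Docker MCP CLI plugin; the server name is a placeholder, and the subcommand shapes should be verified with docker mcp --help on your install:

# Enable a catalog server, then route agent-to-tool traffic through the gateway
docker mcp server enable <server-name>
docker mcp gateway run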

Standardize on fewer tools, governed centrally

It’s tempting to run every AI tool and model. Don’t. Consolidate on a trusted stack, push managed configurations via MDM, and use Docker Desktop’s administrative features (registry access management, proxy configuration, image access management) to control what agents can pull and where they can push.

Build muscle for incident response

Maintain SBOMs for everything in production

When the next compromise drops, you need to answer “are we affected?” in minutes, not days. Build-time SBOMs via docker buildx, combined with Docker Scout’s continuous monitoring, give you that capability. If you have to exec into running containers to determine exposure, you’re already behind.
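
The payoff is a one-liner during an incident. A sketch, with a placeholder image name, using Docker Scout to read the stored SBOM instead of inspecting live workloads:

# Is a given package anywhere in this image's SBOM?
docker scout sbom --format list registry.example.com/app:1.0 | grep -i axios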

Have playbooks ready

Know how to freeze your GitHub org, pause CI/CD without breaking everything, revoke credentials in bulk, and communicate to customers before you need to do it under pressure. The time to figure out your incident response workflow is not during the incident. If you haven’t already, audit your npm, PyPI, and Docker Hub accounts for unauthorized publishes, review recent CI logs for unexpected network calls or secret access, and rotate any long-lived tokens that were accessible to CI in the past 90 days.

Verify before you trust, slow down where it counts

Most supply chain attacks burn out within hours. A small delay in adopting new versions, whether via cooldown periods, manual review gates, or simply waiting 72 hours, eliminates the majority of the risk. Speed of adoption is not worth the cost of compromise.

The landscape has changed, your defaults should too

The supply chain attack wave is not a single incident to respond to. It is a permanent shift in the threat landscape. The attackers range from nation-state operators like Lazarus Group to opportunistic teenagers like TeamPCP and LAPSUS$ who are building the plane as it takes off, using AI to accelerate, and monetizing through ransomware partnerships. The ecosystem they are exploiting (npm, PyPI, GitHub Actions, container registries) has not fundamentally changed in its trust model.

What has changed is that defenders now have the tools to establish explicit trust boundaries where implicit trust used to be the only option. Hardened base images, build-time attestations, sandboxed execution, and canary-based detection did not exist at this maturity level two years ago. The gap between organizations that adopt these layers and those that don’t is going to widen fast.

Everything we’ve recommended here, we practice at Docker. We pull from public registries, we run CI/CD pipelines, we use AI agents, and we face the same threat actors you do. This is how we’re protecting ourselves.

Further reading:

Docker Hardened Images: free, signed, SLSA-compliant base images

Docker Scout: SBOM generation, vulnerability detection, and policy enforcement

Docker Sandboxes: isolated microVMs for AI coding agents

Safe Defaults: secure configurations for package managers and tooling

Building SBOMs with Docker Buildx: attach provenance and SBOMs at build time

Source: https://blog.docker.com/feed/