Condensation problems: iPhone Air with moisture inside the camera lens
Apple's thinnest smartphone is already struggling with fogged-up camera lenses on day one. Users are reporting moisture problems with the iPhone Air. (iPhone, Apple)
Source: Golem
Baldur’s Gate 3, Expedition 33 and other titles show that the best games are now being made outside the billion-dollar games industry. An analysis by Rainer Sigl (Indiegames, Steam)
Source: Golem
Electromagnetic pulses are a real threat in the digital age. Whether a natural phenomenon or a high-tech weapon, their effects can be dramatic. A guide by Fabian Deitelhoff (Knowledge, Television)
Source: Golem
Microsoft is raising Xbox console prices in the US, citing changes in the macroeconomic environment as the reason. (Xbox, Microsoft)
Source: Golem
Barb Wire hit theaters 29 years ago. The film flopped, but that isn't stopping Pamela Anderson from producing a Barb Wire TV series. (Science fiction, Film)
Source: Golem
The world of local AI is moving at an incredible pace, and at the heart of this revolution is llama.cpp—the powerhouse C++ inference engine that brings Large Language Models (LLMs) to everyday hardware (and it’s also the inference engine that powers Docker Model Runner). Developers love llama.cpp for its performance and simplicity. And we at Docker are obsessed with making developer workflows simpler.
That’s why we’re thrilled to announce a game-changing new feature in llama.cpp: native support for pulling and running GGUF models directly from Docker Hub.
This isn’t about running llama.cpp in a Docker container. This is about using Docker Hub as a powerful, versioned, and centralized repository for your AI models, just like you do for your container images.
Why Docker Hub for AI Models?
Managing AI models can be cumbersome. You’re often dealing with direct download links, manual version tracking, and scattered files. By integrating with Docker Hub, llama.cpp leverages a mature and robust ecosystem to solve these problems.
Rock-Solid Versioning: The familiar repository:tag syntax you use for images now applies to models. Easily switch between gemma3 and smollm2:135M-Q4_0 with complete confidence.
Centralized & Discoverable: Docker Hub can become the canonical source for your team’s models. No more hunting for the “latest” version on a shared drive or in a chat history.
Simplified Workflow: Forget curl, wget or manually downloading from web UIs. A single command-line flag now handles discovery, download, and caching.
Reproducibility: By referencing a model with its immutable digest or tag, you ensure that your development, testing, and production environments are all using the exact same artifact, leading to more consistent and reproducible results.
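To make the digest point concrete, here is a minimal sketch (in Python, with a hypothetical helper name; this is not llama.cpp code) of verifying a downloaded model file against a pinned, OCI-style sha256 digest:

```python
import hashlib


def verify_digest(path: str, expected: str) -> bool:
    """Check a downloaded model file against a pinned digest.

    `expected` uses the OCI form "sha256:<hex>". Hypothetical helper
    for illustration; llama.cpp handles this internally.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large GGUF files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return f"sha256:{h.hexdigest()}" == expected
```

Pinning the digest in your pipeline configuration means development, CI, and production can all assert they are running the exact same bytes.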
How It Works Under the Hood
This new feature cleverly uses the Open Container Initiative (OCI) specification, which is the foundation of Docker images. The GGUF model file is treated as a layer within an OCI manifest, identified by a special media type like application/vnd.docker.ai.gguf.v3. For more details on why the OCI standard matters for models, check out our blog.
When you use the new --docker-repo flag, llama.cpp performs the following steps:
Authentication: It first requests an authentication token from the Docker registry to authorize the download.
Manifest Fetch: It then fetches the manifest for the specified model and tag (e.g., ai/gemma3:latest).
Layer Discovery: It parses the manifest to find the specific layer that contains the GGUF model file by looking for the correct media type.
Blob Download: Using the layer’s unique digest (a sha256 hash), it downloads the model file directly from the registry’s blob storage.
Caching: The model is saved to a local cache, so subsequent runs are instantaneous.
This entire process is seamless and happens automatically in the background.
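Sketched in Python, the first three steps might look like the following. The token endpoint and scope format are Docker Hub's standard registry authentication; the manifest shape follows the OCI image manifest specification; the helper names and the trimmed example manifest are ours for illustration, not llama.cpp's actual code:

```python
# Media type mentioned above that marks the GGUF layer in the manifest.
GGUF_MEDIA_TYPE = "application/vnd.docker.ai.gguf.v3"


def auth_token_url(repo: str) -> str:
    """Build Docker Hub's token-endpoint URL for a pull-scoped token."""
    return ("https://auth.docker.io/token"
            f"?service=registry.docker.io&scope=repository:{repo}:pull")


def find_gguf_layer_digest(manifest: dict) -> str:
    """Scan an OCI manifest's layers for the GGUF media type and
    return that layer's blob digest (used for the blob download)."""
    for layer in manifest.get("layers", []):
        if layer.get("mediaType") == GGUF_MEDIA_TYPE:
            return layer["digest"]
    raise ValueError("no GGUF layer found in manifest")


# A trimmed example manifest of the kind a registry returns:
example_manifest = {
    "schemaVersion": 2,
    "layers": [
        {"mediaType": "application/vnd.oci.image.config.v1+json",
         "digest": "sha256:aaa", "size": 123},
        {"mediaType": GGUF_MEDIA_TYPE,
         "digest": "sha256:bbb", "size": 4_000_000_000},
    ],
}
```

The digest returned by the layer scan is what the blob-download step then fetches from the registry's blob storage.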
Get Started in Seconds
Ready to try it? If you have a recent build of llama.cpp, you can serve a model from Docker Hub with one simple command. The new flag is --docker-repo (or -dr).
Let’s run gemma3, a model available from Docker Hub.
# Now, serve a model from Docker Hub!
llama-server -dr gemma3
The first time you execute this, you'll see llama.cpp log the download progress. After that, it uses the cached version. It's that easy! The default organization is ai/, so gemma3 resolves to ai/gemma3. The default tag is :latest, but a tag can be specified explicitly, e.g. :1B-Q4_K_M.
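The resolution rule just described (default organization ai/, default tag :latest) can be sketched as a small function. This is an illustrative Python sketch of the behavior, not llama.cpp's actual implementation, and it ignores edge cases such as digest references:

```python
def resolve_model_ref(ref: str) -> str:
    """Expand a shorthand model reference the way the post describes:
    no organization -> prefix "ai/", no tag -> append ":latest"."""
    repo, _, tag = ref.partition(":")
    if "/" not in repo:
        repo = "ai/" + repo
    return f"{repo}:{tag or 'latest'}"
```

So `-dr gemma3` pulls ai/gemma3:latest, while `-dr gemma3:1B-Q4_K_M` pins a specific quantization.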
For a fully Docker-integrated experience with OCI push and pull support, try Docker Model Runner. The Docker Model Runner equivalent for chatting is:
# Pull, serve and chat with a model from Docker Hub!
docker model run ai/gemma3
The Future of AI Model Distribution
This integration represents a powerful shift in how we think about distributing and managing AI artifacts. By using OCI-compliant registries like Docker Hub, the AI community can build more robust, reproducible, and scalable MLOps pipelines.
This is just the beginning. We envision a future where models, datasets, and the code that runs them are all managed through the same streamlined, developer-friendly workflow that has made Docker an essential tool for millions.
Check out the latest llama.cpp to try it out, and explore the growing collection of models on Docker Hub today!
Learn more
Read our quickstart guide to Docker Model Runner.
Visit our Model Runner GitHub repo! Docker Model Runner is open-source, and we welcome collaboration and contributions from the community!
Discover curated models on Docker Hub
Source: https://blog.docker.com/feed/
Following on from our previous initiative to improve how Docker Desktop delivers updates, we are excited to announce another major improvement to how Docker Desktop keeps your development tools up to date. Starting with Docker Desktop 4.46, we’re introducing automatic component updates and a completely redesigned update experience that puts your productivity first.
Why We’re Making This Change
Your development workflow shouldn’t be interrupted by update notifications and restart requirements. With our new approach, you get:
Zero workflow interruption – components update automatically in the background when a Docker Desktop restart is not required
Always-current tools – Scout, Compose, Ask Gordon, and Model Runner stay up-to-date without manual intervention
Better security posture – automatic updates mean you’re always running the latest, most secure versions
Enterprise control – an Admin Console setting to control update behavior
What’s New in Docker Desktop 4.46
Silent Component Updates
Independent tools now update automatically in the background without any user interaction required and without impact on running containers:
Docker Scout – Latest vulnerability scanning capabilities
Docker Compose – New features and bug fixes
Ask Gordon – Enhanced AI assistance improvements
Model Runner – Updated model support and performance optimizations
Note that the component list above may change in the future as we add or remove features.
Redesigned Update Experience
We have completely re-imagined how Docker Desktop communicates updates to you:
Streamlined update flow with clearer messaging
In-app release highlights showcasing key improvements you actually care about
Reduced notification fatigue through more thoughtful update communications
[Coming soon] Smart timing – GUI-only updates happen automatically when you close and reopen Docker
Full Control When You Need It
Individual User Control
Want to manage updates yourself? You have complete control:
Go to Docker Desktop Settings
Navigate to Software Updates
Toggle “Automatically update components” on or off
Software Updates: a new setting lets you opt in to or out of automatic component updates.
Enterprise Management
For Docker Business subscribers, administrators maintain full governance through the admin console:
Access Admin Console > Desktop Settings Management
Edit your global policy
Configure “Automatically update components” to enable, disable, lock, or set defaults for your entire organization
This ensures enterprises can maintain their preferred update policies while giving individual developers the productivity benefits of seamless updates.
Admin Console: the Desktop Settings Management policy contains a new silent-update setting for enterprise control.
We Want Your Feedback
The redesigned update workflow is rolling out to the majority of our users as we gather feedback and refine the experience. We’re committed to getting this right, so please share your thoughts:
In-app feedback popup – we do read those!
Docker Slack community – join the conversation with other developers
GitHub issues – report specific bugs or feature requests
Getting Started
Docker Desktop 4.46 with silent component updates is available now. The new update experience will gradually roll out to all users over the coming weeks.
Already using Docker Desktop? Update in-app to get the latest features.
New to Docker? Download Docker Desktop here to experience the most seamless development environment we’ve ever built.
Source: https://blog.docker.com/feed/
Lenovo L24i-4A: a 24-inch office monitor with IPS, Full HD and 100 Hz, now available at Amazon for under 80 euros. (TV & Monitors, Display)
Source: Golem
Lego rival Bluebrixx has released its second Stargate advent calendar. It offers printed parts, mini figures and over 1,000 pieces. (Films & Series, Technology/Hardware)
Source: Golem
One record-low price follows the next: the MacBook Air with the M4 chip has gotten cheaper again at Amazon, hitting a new all-time low. (MacBook Air, Apple)
Source: Golem