Gemma: Ukraine develops its own AI model
Ukraine is working on a language model based on Google's Gemma. The project is intended to enable both military and civilian AI applications. (AI, Software)
Source: Golem
Microsoft's latest dark mode optimizations in Windows 11 are backfiring: a bug causes Explorer to repeatedly flash white. (Windows 11, Microsoft)
Source: Golem
Cars are getting ever larger screens, mostly showing moving content, which is why Mercedes-Benz uses the Unity game engine in the CLA and GLC. A report by Dirk Kunde (Mercedes Benz, Games)
Source: Golem
Michael Shanks played Daniel Jackson in Stargate SG-1. He has been through highs and lows and has advice for the stars of the upcoming series. (Science fiction, Amazon)
Source: Golem
For most developers, getting started with AI is still too complicated. Different models, tools, and platforms don’t always play nicely together. But with Docker, that’s changing fast.
Docker is emerging as essential infrastructure for standardized, portable, and scalable AI environments. By bringing composability, simplicity, and GPU accessibility to the agentic era, Docker is helping developers and the enterprises they support move faster, safer, and with far less friction.
Real results: Faster AI delivery with Docker
The platform is accelerating innovation: According to the latest report from theCUBE Research, 88% of respondents reported that Docker reduced the time-to-market for new features or products, with nearly 40% achieving efficiency gains of more than 25%. Docker is playing an increasingly vital role in AI development as well. 52% of respondents cut AI project setup time by over 50%, while 97% report increased speed for new AI product development.
Reduced AI project failures and delays
Reliability remains a key performance indicator for AI initiatives, and Docker is proving instrumental in minimizing risk. 90% of respondents indicated that Docker helped prevent at least 10% of project failures or delays, while 16% reported prevention rates exceeding 50%. Additionally, 78% significantly improved testing and validation of AI models. These results highlight how Docker’s consistency, isolation, and repeatability not only speed development but also reduce costly rework and downtime, strengthening confidence in AI project delivery.
Build, share, and run agents with Docker, easily and securely
Docker’s mission for AI is simple: make building and running AI and agentic applications as easy, secure, and shareable as any other kind of software.
Instead of wrestling with fragmented tools, developers can now rely on Docker’s trusted, container-based foundation with curated catalogs of verified models and tools, and a clean, modular way to wire them together. Whether you’re connecting an LLM to a database or linking services into a full agentic workflow, Docker makes it plug-and-play.
With Docker Model Runner, you can pull and run large language models locally with GPU acceleration. The Docker MCP Catalog and Toolkit connect agents to over 270 MCP servers from partners like Stripe, Elastic, and GitHub. And with Docker Compose, you can define the whole AI stack of models, tools, and services in a single YAML file that runs the same way locally or in the cloud. Cagent, our open-source agent builder, lets you easily build, run, and share AI agents, with behavior, tools, and persona all defined in a single YAML file. And with Docker Sandboxes, you can run coding agents like Claude Code in a secure, local environment, keeping your workflows isolated and your data protected.
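To make that concrete, here is a minimal sketch of what such a Compose file can look like. It assumes a recent Docker Desktop with Model Runner enabled and Compose support for the top-level `models` element; the application image and port are placeholders, and `ai/smollm2` is one of the small models in Docker's model catalog.

```yaml
# compose.yaml -- illustrative sketch, not an excerpt from Docker's docs.
# Assumes Docker Model Runner is enabled and Compose supports the
# top-level `models` element; the app image below is a placeholder.
services:
  chat-app:
    image: example/chat-app:latest   # hypothetical application image
    ports:
      - "8080:8080"
    models:
      - llm    # Compose injects the model's endpoint and name as env vars

models:
  llm:
    model: ai/smollm2                # small model pulled via Model Runner
```

A single `docker compose up` then brings up the service wired to the locally served model, and the same file can target a cloud environment unchanged.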
Even hardware limits aren’t a blocker anymore when building agents. Docker Offload lets developers run heavy compute tasks on cloud GPUs with one click.
Conclusion
Docker’s vision is clear: to make AI development as simple and powerful as the workflows developers already know and love. And it’s working: theCUBE reports 52% of users cut AI project setup time by more than half, while 87% say they’ve accelerated time-to-market by at least 26%.
Learn more
Read more about the ROI of working with Docker in our latest blog
Download theCUBE Research Report and eBook – economic validation of Docker
Explore the MCP Catalog: Discover containerized, security-hardened MCP servers
Open Docker Desktop and get started with the MCP Toolkit (Requires version 4.48 or newer to launch the MCP Toolkit automatically)
Head over to the cagent GitHub repository, give it a star, try it out, and let us know what amazing agents you build!
Source: https://blog.docker.com/feed/
AWS announces Amazon EC2 X8aedz, next-generation memory-optimized instances powered by 5th Gen AMD EPYC processors (formerly code-named Turin). These instances offer the highest maximum CPU frequency in the cloud at 5 GHz and deliver up to 2x higher compute performance and 31% better price-performance than previous generation X2iezn instances.
X8aedz instances are built using the latest sixth-generation AWS Nitro Cards and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, as well as relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis.
X8aedz instances feature a 32:1 ratio of memory to vCPU and come in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare-metal variants, and up to 8 TB of local NVMe SSD storage. They are now available in the US West (Oregon) and Asia Pacific (Tokyo) Regions and can be purchased via Savings Plans, On-Demand Instances, and Spot Instances. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 X8aedz instance page or the AWS News Blog.
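As a quick sketch of how to verify that sizing matrix from the API (assuming boto3 credentials, EC2's usual `x8aedz.*` type naming, and a Region where the family has launched):

```python
# Sketch: list X8aedz sizes with vCPU and memory to check the
# 32:1 GiB-per-vCPU ratio. The x8aedz.* wildcard is an assumption
# based on EC2's standard naming; run in a supported Region.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate(
    Filters=[{"Name": "instance-type", "Values": ["x8aedz.*"]}]
):
    for it in page["InstanceTypes"]:
        vcpus = it["VCpuInfo"]["DefaultVCpus"]
        mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
        print(f"{it['InstanceType']}: {vcpus} vCPUs, {mem_gib:.0f} GiB "
              f"({mem_gib / vcpus:.0f}:1 memory-to-vCPU)")
```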
Source: aws.amazon.com
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P6e-GB300 UltraServers. Accelerated by NVIDIA GB300 NVL72, P6e-GB300 UltraServers provide 1.5x the GPU memory and 1.5x the FP4 compute (without sparsity) of P6e-GB200.
With P6e-GB300, customers can optimize performance for the most powerful models in production, serve applications that require longer context, and implement emerging inference techniques such as reasoning and agentic AI.
To get started with P6e-GB300 UltraServers, please contact your AWS sales representative.
To learn more about P6e UltraServers and instances, visit Amazon EC2 P6 instances.
Source: aws.amazon.com
Today, Amazon announces the availability of Amazon Nova 2 Sonic, our speech-to-speech model for natural, real-time, voice-based conversational AI, delivering industry-leading quality and price. It offers best-in-class streaming speech understanding that is robust to background noise and varied speaking styles, efficient dialog handling, and speech generation with expressive voices that speak natively in multiple languages (polyglot voices). It also delivers superior reasoning, instruction following, and tool-invocation accuracy compared with the previous model.
Nova 2 Sonic builds on the capabilities introduced in the original Nova Sonic model with new features including expanded language support (Portuguese and Hindi), polyglot voices that enable the model to speak different languages with native expressivity using the same voice, and turn-taking controllability to allow developers to set low, medium, or high pause sensitivity. The model also adds cross-modal interaction, allowing users to seamlessly switch between voice and text in the same session, asynchronous tool calling to support multi-step tasks without interrupting conversation flow, and a one-million token context window for sustained interactions.
Developers can integrate Nova 2 Sonic directly into real-time voice systems using Amazon Bedrock's bidirectional streaming API. Nova 2 Sonic also integrates seamlessly with Amazon Connect and other leading telephony providers, including Vonage, Twilio, and AudioCodes, as well as open source frameworks such as LiveKit and Pipecat.
Amazon Nova 2 Sonic is available in Amazon Bedrock in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Stockholm). To learn more, read the AWS News Blog and the Amazon Nova Sonic User Guide. To get started with Nova 2 Sonic in Amazon Bedrock, visit the Amazon Bedrock console.
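Before wiring up a streaming session, a quick boto3 sketch can confirm the model is visible in your account and Region; the speech session itself runs over Bedrock's bidirectional streaming operation (InvokeModelWithBidirectionalStream), which requires an SDK with bidirectional HTTP/2 support rather than plain boto3. The substring match below is a heuristic, since the exact model ID may vary by Region:

```python
# Sketch: discover the Nova Sonic model ID available to this account.
# Matching on "sonic" is an assumption; confirm the exact ID in the
# Bedrock console or the Nova user guide.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")
resp = bedrock.list_foundation_models(byProvider="Amazon")
for model in resp["modelSummaries"]:
    if "sonic" in model["modelId"].lower():
        print(model["modelId"], model.get("responseStreamingSupported"))
```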
Source: aws.amazon.com
AWS announces the Apache Spark upgrade agent, a new capability that accelerates Apache Spark version upgrades for Amazon EMR on EC2 and EMR Serverless. Through automated code analysis and transformation, the agent turns upgrade processes that typically take months into projects spanning weeks. Organizations invest substantial engineering resources analyzing API changes, resolving conflicts, and validating applications during Spark upgrades; the agent instead offers conversational interfaces where engineers express upgrade requirements in natural language while retaining full control over code modifications.
The Apache Spark upgrade agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers can initiate upgrades directly from SageMaker Unified Studio, the Kiro CLI, or an IDE of their choice via MCP (Model Context Protocol) compatibility. During the upgrade, the agent analyzes existing code and suggests specific changes, which engineers can review and approve before implementation. It validates functional correctness through data quality checks, currently supports upgrades from Spark 2.4 to 3.5, and maintains data processing accuracy throughout.
The Apache Spark upgrade agent is now available in all AWS Regions where SageMaker Unified Studio is available. To start using the agent, visit SageMaker Unified Studio and select IDE Spaces, or install the Kiro CLI. For detailed implementation guidance, reference documentation, and migration examples, visit the documentation.
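For a concrete sense of the "behavioral modifications" involved, here is an illustrative example (ours, not output from the agent) of a well-known 2.4-to-3.x change: Spark 3 moved datetime handling to java.time and the Proleptic Gregorian calendar, so some legacy patterns parse differently until code is rewritten.

```python
# Illustrative only -- not generated by the upgrade agent.
# Spark 3.x replaced SimpleDateFormat-based datetime parsing with
# java.time, a classic 2.4 -> 3.x behavior change an upgrade must
# account for.
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp

spark = SparkSession.builder.appName("upgrade-demo").getOrCreate()

# Stopgap during migration: restore Spark 2.4 parsing semantics,
# then drop the flag once patterns are rewritten for the new parser.
spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")

df = spark.createDataFrame([("2024-07-01 12:30:00",)], ["raw"])
df.select(to_timestamp("raw", "yyyy-MM-dd HH:mm:ss").alias("ts")).show()
```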
Source: aws.amazon.com
Starting today, new general purpose, high-frequency, high-network Amazon Elastic Compute Cloud (Amazon EC2) M8azn instances are available in preview. These instances are powered by fifth-generation AMD EPYC processors (formerly code-named Turin), offering the highest maximum CPU frequency in the cloud at 5 GHz. M8azn instances deliver up to 2x the compute performance of previous generation M5zn instances and 24% higher performance than M8a instances.
M8azn instances are built on the AWS Nitro System, a collection of hardware and software innovations designed by AWS that enables efficient, flexible, and secure cloud services with isolated multitenancy, private networking, and fast local storage. These instances are ideal for applications such as gaming, high-performance computing, high-frequency trading (HFT), CI/CD, and simulation modeling for the automotive, aerospace, energy, and telecommunications industries.
To learn more or request access to the M8azn instances preview, visit the Amazon EC2 M8a page.
Source: aws.amazon.com