<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Cloud Computing Köln</title>
	<atom:link href="https://www.cloud-computing-koeln.de/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.cloud-computing-koeln.de</link>
	<description>Neues zu Cloud Computing, Internet of Things und Technologien</description>
	<lastBuildDate>Tue, 14 Apr 2026 02:41:28 +0000</lastBuildDate>
	<language>de-DE</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>How to Analyze Hugging Face for Arm64 Readiness</title>
		<link>https://www.cloud-computing-koeln.de/how-to-analyze-hugging-face-for-arm64-readiness/</link>
		<comments>https://www.cloud-computing-koeln.de/how-to-analyze-hugging-face-for-arm64-readiness/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:28 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/how-to-analyze-hugging-face-for-arm64-readiness/</guid>
		<description><![CDATA[<p>This post is a collaboration between Docker and Arm, demonstrating how Docker MCP Toolkit and the Arm MCP Server work together to scan Hugging Face Spaces for Arm64 Readiness. In our previous post, we walked through migrating a legacy C++ application with AVX2 intrinsics to Arm64 using Docker MCP Toolkit and the Arm MCP Server&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/how-to-analyze-hugging-face-for-arm64-readiness/">Continue reading &#8594;</a></p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/how-to-analyze-hugging-face-for-arm64-readiness/">How to Analyze Hugging Face for Arm64 Readiness</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>This post is a collaboration between Docker and Arm, demonstrating how Docker MCP Toolkit and the Arm MCP Server work together to scan Hugging Face Spaces for Arm64 Readiness.</p>
<p>In our previous post, we walked through migrating a legacy C++ application with AVX2 intrinsics to Arm64 using Docker MCP Toolkit and the Arm MCP Server &#8211; code conversion, SIMD intrinsic rewrites, compiler flag changes, the full stack. This post is about a different and far more common failure mode.</p>
<p>When we tried to run ACE-Step v1.5, a 3.5B parameter music generation model from Hugging Face, on an Arm64 MacBook, the installation failed not with a cryptic kernel error but with a pip error. The flash-attn wheel in requirements.txt was hardcoded to a linux_x86_64 URL, no Arm64 wheel existed at that address, and the container would not build. It&#8217;s a deceptively simple problem that turns out to affect roughly 80% of Hugging Face Docker Spaces: not the code, not the Dockerfile, but a single hardcoded dependency URL that nobody noticed because nobody had tested on Arm.</p>
<p>To diagnose this systematically, we built a 7-tool MCP chain that can analyse any Hugging Face Space for Arm64 readiness in about 15 minutes. By the end of this guide you&#8217;ll understand exactly why ACE-Step v1.5 fails on Arm64, what the two specific blockers are, and how the chain surfaces them automatically.</p>
<p>Why Hugging Face Spaces Matter for Arm</p>
<p>Hugging Face hosts over one million Spaces, a significant portion of which use the Docker SDK, meaning developers write a Dockerfile and Hugging Face builds and serves the container directly. The problem is that nearly all of those containers were built and tested exclusively on linux/amd64, which creates a deployment wall for three fast-growing Arm64 targets that are increasingly relevant for AI workloads.</p>
<table>
<thead><tr><th>Target</th><th>Hardware</th><th>Why it matters</th></tr></thead>
<tbody>
<tr><td>Cloud</td><td>AWS Graviton, Azure Cobalt, Google Axion</td><td>20-40% cost reduction vs. x86</td></tr>
<tr><td>Edge/Robotics</td><td>NVIDIA Jetson Thor, DGX Spark</td><td>GR00T, LeRobot, Isaac all target Arm64</td></tr>
<tr><td>Local dev</td><td>Apple Silicon M1-M4</td><td>Most popular developer machine, zero cloud cost</td></tr>
</tbody>
</table>
<p>The failure mode isn&#8217;t always obvious, and it tends to show up in one of two distinct patterns. The first is a missing container manifest &#8211; the image has no arm64 layer and Docker refuses to pull it, which is at least straightforward to diagnose. The second is harder to catch: the Dockerfile and base image are perfectly fine, but a dependency in requirements.txt points to a platform-specific wheel URL. The build starts, reaches pip install, and fails with a platform mismatch error that gives no clear indication of where to look. ACE-Step v1.5 is a textbook example of the second pattern, and the MCP chain catches both in minutes.</p>
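<p>The triage logic behind those two patterns can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the MCP tooling: the function name and the example manifest/requirements data are invented, and the inputs stand in for what skopeo and a repository read would return.</p>

```python
import re

def classify_arm64_blocker(architectures, requirements_text):
    """Classify a Space against the two failure patterns described above.

    `architectures` is the list of platforms in the image manifest;
    `requirements_text` is the raw contents of requirements.txt.
    Returns human-readable findings (empty list = no blocker found).
    """
    findings = []
    # Pattern 1: the published image has no arm64 layer at all.
    if "arm64" not in architectures:
        findings.append("manifest: no linux/arm64 image published")
    # Pattern 2: a dependency pins a platform-specific wheel URL.
    for line in requirements_text.splitlines():
        if re.search(r"https?://\S+linux_x86_64\.whl", line):
            findings.append("requirements: hardcoded x86_64 wheel in: "
                            + line.split("@")[0].strip())
    return findings

# The ACE-Step shape of the problem: amd64-only manifest plus a pinned wheel.
reqs = """\
triton>=3.0.0; sys_platform != 'win32'
flash-attn @ https://example.com/flash_attn-2.8.3-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux'
"""
print(classify_arm64_blocker(["amd64"], reqs))
```

A Space that passes both checks may still fail for other reasons, but a hit on either one is a guaranteed Arm64 build failure.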
<p>The 7-Tool MCP Chain</p>
<p>Docker MCP Toolkit orchestrates the analysis through a secure MCP Gateway. Each tool runs in an isolated Docker container. The seven tools in the chain are:</p>
<p>Caption: The 7-tool MCP chain architecture diagram</p>
<p>The tools:</p>
<p>Hugging Face MCP &#8211; Discovers the Space, identifies SDK type (Docker vs. Gradio)</p>
<p>Skopeo (via Arm MCP Server) &#8211; Inspects the container registry, reports supported architectures</p>
<p>migrate-ease (via Arm MCP Server) &#8211; Scans source code for x86-specific intrinsics, hardcoded paths, arch-locked libraries</p>
<p>GitHub MCP &#8211; Reads Dockerfile, pyproject.toml, requirements.txt from the repository</p>
<p>Arm Knowledge Base (via Arm MCP Server) &#8211; Searches learn.arm.com for build strategies and optimization guides</p>
<p>Sequential Thinking &#8211; Combines findings into a structured migration verdict</p>
<p>Docker MCP Gateway &#8211; Routes requests, manages container lifecycle</p>
<p>The natural question at this point is whether you could simply rebuild your Docker image for Arm64 and be done with it, and for many applications you could. But knowing in advance whether the rebuild will actually succeed is a different problem. Your Dockerfile might depend on a base image that doesn&#8217;t publish Arm64 builds. Your Python dependencies might not have aarch64 wheels. Your code might use x86-specific system calls. The MCP chain checks all of this automatically before you invest time in a build that may not work.</p>
<p>Setting Up Visual Studio Code with Docker MCP Toolkit</p>
<p>Prerequisites</p>
<p>Before you begin, make sure you have:</p>
<p>A machine with 8 GB RAM minimum (16 GB recommended)</p>
<p>The latest Docker Desktop release</p>
<p>VS Code with GitHub Copilot extension</p>
<p>GitHub account with personal access token</p>
<p>Step 1. Enable Docker MCP Toolkit</p>
<p>Open Docker Desktop and enable the MCP Toolkit from Settings.</p>
<p>To enable:</p>
<p>Open Docker Desktop</p>
<p>Go to Settings &gt; Beta Features</p>
<p>Caption: Enabling Docker MCP Toolkit under Docker Desktop</p>
<p>Toggle Docker MCP Toolkit ON</p>
<p>Click Apply</p>
<p>Step 2. Add Required MCP Servers from Catalog</p>
<p>Add the following four MCP Servers from the Catalog. You can find them by selecting &#8220;Catalog&#8221; in the Docker Desktop MCP Toolkit, or by following these links:</p>
<p>Arm MCP Server &#8211; Architecture analysis, migrate-ease scanning, skopeo inspection, and Arm knowledge base</p>
<p>GitHub MCP Server &#8211; Repository analysis, code reading, and pull request creation</p>
<p>Sequential Thinking MCP Server &#8211; Complex problem decomposition and planning</p>
<p>Hugging Face MCP Server &#8211; Space discovery and metadata retrieval</p>
<p>Caption: Searching for Arm MCP Server in the Docker MCP Catalog</p>
<p>Step 3. Configure the Servers</p>
<p>Configure the Arm MCP Server</p>
<p>For the migrate-ease scan and MCA tools to access your local code, the Arm MCP Server needs a directory path configured to point at it.</p>
<p>Caption: Arm MCP Server configuration</p>
<p>Once you click &#8216;Save&#8217;, the Arm MCP Server will know where to look for your code. If you want to give a different directory access in the future, you&#8217;ll need to change this path.</p>
<p>Available Arm Migration Tools</p>
<p>Click Tools to view all six MCP tools available under Arm MCP Server:</p>
<p>Caption: List of MCP tools provided by the Arm MCP Server</p>
<p>knowledge_base_search &#8211; Semantic search of Arm learning resources, intrinsics documentation, and software compatibility</p>
<p>migrate_ease_scan &#8211; Code scanner supporting C++, Python, Go, JavaScript, and Java for Arm compatibility analysis</p>
<p>check_image &#8211; Docker image architecture verification (checks if images support Arm64)</p>
<p>skopeo &#8211; Remote container image inspection without downloading</p>
<p>mca &#8211; Machine Code Analyzer for assembly performance analysis and IPC predictions</p>
<p>sysreport_instructions &#8211; System architecture information gathering</p>
<p>Configure the GitHub MCP Server</p>
<p>The GitHub MCP Server lets GitHub Copilot read repositories, create pull requests, manage issues, and commit changes.</p>
<p>Caption: Steps to configure GitHub Official MCP Server</p>
<p>Configure Authentication:</p>
<p>Select GitHub official</p>
<p>Choose your preferred authentication method</p>
<p>For Personal Access Token, get the token from GitHub &gt; Settings &gt; Developer Settings</p>
<p>Caption: Setting up Personal Access Token in GitHub MCP Server</p>
<p>Configure the Sequential Thinking MCP Server</p>
<p>Click &#8220;Sequential Thinking&#8221;</p>
<p>No configuration needed</p>
<p>Caption: Sequential MCP Server requires zero configuration</p>
<p>This server helps GitHub Copilot break down complex migration decisions into logical steps.</p>
<p>Configure the Hugging Face MCP Server</p>
<p>The Hugging Face MCP Server provides access to Space metadata, model information, and repository contents directly from the Hugging Face Hub.</p>
<p>Click &#8220;Hugging Face&#8221;</p>
<p>No additional configuration needed for public Spaces</p>
<p>For private Spaces, add your HuggingFace API token</p>
<p>Step 4. Add the Servers to VS Code</p>
<p>The Docker MCP Toolkit makes it incredibly easy to configure MCP servers for clients like VS Code.</p>
<p>To configure, click &#8220;Clients&#8221; and scroll down to Visual Studio Code. Click the &#8220;Connect&#8221; button:</p>
<p>Caption: Setting up Visual Studio Code as MCP Client</p>
<p>Now open VS Code and click on the &#8216;Extensions&#8217; icon in the left toolbar:</p>
<p>Caption: Configuring MCP_DOCKER under VS Code Extensions</p>
<p>Click the MCP_DOCKER gear, and click &#8216;Start Server&#8217;:</p>
<p>Caption: Starting MCP Server under VS Code</p>
<p>Step 5. Verify Connection</p>
<p>Open GitHub Copilot Chat in VS Code and ask:</p>
<p>What Arm migration and Hugging Face tools do you have access to?</p>
<p>You should see tools from all four servers listed. If you see them, your connection works. Let&#8217;s scan a Hugging Face Space.</p>
<p>Caption: Playing around with GitHub Copilot</p>
<p>Real-World Demo: Scanning ACE-Step v1.5</p>
<p>Now that you&#8217;ve connected GitHub Copilot to Docker MCP Toolkit, let&#8217;s scan a real Hugging Face Space for Arm64 readiness and uncover the exact Arm64 blocker we hit when trying to run it locally.</p>
<p>Target: ACE-Step v1.5 &#8211; a 3.5B parameter music generation model </p>
<p>Time to scan: 15 minutes </p>
<p>Infrastructure cost: $0 (all tools run locally in Docker containers) </p>
<p>The Workflow</p>
<p>Docker MCP Toolkit orchestrates the scan through a secure MCP Gateway that routes requests to specialized tools: the Arm MCP Server inspects images and scans code, Hugging Face MCP discovers the Space, GitHub MCP reads the repository, and Sequential Thinking synthesizes the verdict. </p>
<p>Step 1. Give GitHub Copilot Scan Instructions</p>
<p>Open your project in VS Code. In GitHub Copilot Chat, paste this prompt:</p>
<p>Your goal is to analyze the Hugging Face Space &quot;ACE-Step/ACE-Step-v1.5&quot; for Arm64 migration readiness. Use the MCP tools to help with this analysis.</p>
<p>Steps to follow:<br />
1. Use Hugging Face MCP to discover the Space and identify its SDK type (Docker or Gradio)<br />
2. Use skopeo to inspect the container image &#8211; check what architectures are currently supported<br />
3. Use GitHub MCP to read the repository &#8211; examine pyproject.toml, Dockerfile, and requirements<br />
4. Run migrate_ease_scan on the source code to find any x86-specific dependencies or intrinsics<br />
5. Use knowledge_base_search to find Arm64 build strategies for any issues discovered<br />
6. Use sequential thinking to synthesize all findings into a migration verdict</p>
<p>At the end, provide a clear GO / NO-GO verdict with a summary of required changes.</p>
<p>Step 2. Watch Docker MCP Toolkit Execute</p>
<p>GitHub Copilot orchestrates the scan using Docker MCP Toolkit. Here&#8217;s what happens:</p>
<p>Phase 1: Space Discovery</p>
<p>GitHub Copilot starts by querying the Hugging Face MCP server to retrieve Space metadata.</p>
<p>Caption: GitHub Copilot uses Hugging Face MCP to discover the Space and identify its SDK type.</p>
<p>The tool returns that ACE-Step v1.5 uses the Docker SDK &#8211; meaning Hugging Face serves it as a pre-built container image, not a Gradio app. This is critical: Docker SDK Spaces have Dockerfiles we can analyze and rebuild, while Gradio SDK Spaces are built by Hugging Face&#8217;s infrastructure, which we can&#8217;t control.</p>
<p>Phase 2: Container Image Inspection</p>
<p>Next, Copilot uses the Arm MCP Server&#8217;s skopeo tool to inspect the container image without downloading it.</p>
<p>Caption: The skopeo tool reports that the container image has no Arm64 build available. The container won&#8217;t start on Arm hardware.</p>
<p>Result: the manifest includes only linux/amd64. No Arm64 build exists. This is the first concrete data point: the container will fail on any Arm hardware. But this is not the full story.</p>
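<p>What skopeo is checking here can be reproduced by hand. The sketch below parses a manifest list of the shape <code>skopeo inspect --raw</code> returns for a multi-arch-capable tag (an OCI image index / Docker manifest list); the payload and digest are made up for illustration.</p>

```python
import json

# Illustrative manifest-list payload; a real one comes from the registry.
raw = json.dumps({
    "schemaVersion": 2,
    "manifests": [
        {"digest": "sha256:0000", "platform": {"architecture": "amd64", "os": "linux"}},
    ],
})

def supported_architectures(raw_manifest):
    """Extract the architecture of every platform entry in a manifest list."""
    index = json.loads(raw_manifest)
    # A single-arch image manifest has no "manifests" list at all.
    return [m["platform"]["architecture"] for m in index.get("manifests", [])]

archs = supported_architectures(raw)
print(archs)             # ['amd64']
print("arm64" in archs)  # False: the image cannot start on Arm hardware
```

If the list contained an entry with `"architecture": "arm64"`, Docker on an Arm host would pull that layer automatically; its absence is the first failure pattern.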
<p>Phase 3: Source Code Analysis</p>
<p>Copilot uses GitHub MCP to read the repository&#8217;s key files. Here is the actual Dockerfile from the Space:</p>
<pre><code>FROM python:3.11-slim

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    DEBIAN_FRONTEND=noninteractive \
    TORCHAUDIO_USE_TORCHCODEC=0

RUN apt-get update &amp;&amp; \
    apt-get install -y --no-install-recommends git libsndfile1 build-essential &amp;&amp; \
    apt-get install -y ffmpeg libavcodec-dev libavformat-dev libavutil-dev libswresample-dev &amp;&amp; \
    rm -rf /var/lib/apt/lists/*

RUN useradd -m -u 1000 user
RUN mkdir -p /data &amp;&amp; chown user:user /data &amp;&amp; chmod 755 /data

ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH \
    GRADIO_SERVER_NAME=0.0.0.0 \
    GRADIO_SERVER_PORT=7860

WORKDIR $HOME/app
COPY --chown=user:user requirements.txt .
COPY --chown=user:user acestep/third_parts/nano-vllm ./acestep/third_parts/nano-vllm
USER user

RUN pip install --no-cache-dir --user -r requirements.txt
RUN pip install --no-deps ./acestep/third_parts/nano-vllm

COPY --chown=user:user . .
EXPOSE 7860
CMD ["python", "app.py"]</code></pre>
<p>The Dockerfile itself looks clean:</p>
<p>python:3.11-slim already publishes multi-arch builds including arm64</p>
<p>No -mavx2, no -march=x86-64 compiler flags</p>
<p>build-essential, ffmpeg, libsndfile1 are all available in Debian&#8217;s arm64 repositories</p>
<p>But the real problem is in requirements.txt. This is what I hit when I tried to install ACE-Step locally:</p>
<pre><code># nano-vllm dependencies
triton&gt;=3.0.0; sys_platform != 'win32'

flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux' and python_version == '3.11'</code></pre>
<p>Two immediate blockers:</p>
<p>flash-attn is pinned to a hardcoded linux_x86_64 wheel URL. On an aarch64 system, pip downloads this wheel and immediately rejects it: &#8220;not a supported wheel on this platform.&#8221; This is the exact error I hit.</p>
<p>triton&gt;=3.0.0 looks like a second blocker: older triton releases published no aarch64 wheel on PyPI for Linux, so the pinned range could not resolve on Arm hardware. As the verdict below notes, triton 3.5.0 and later do ship aarch64 wheels, so this one now resolves automatically.</p>
<p>Neither of these is a code problem. The Python source code is architecture-neutral. The fix is in the dependency declarations.</p>
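<p>The &#8220;not a supported wheel on this platform&#8221; error comes from pip comparing the wheel filename&#8217;s platform tag against the tags the running interpreter supports. A grossly simplified sketch of that check (pip&#8217;s real logic also handles manylinux/musllinux tags and version ranges; the helper names here are hypothetical):</p>

```python
def wheel_platform_tag(wheel_filename):
    """The final '-'-separated component of a wheel name (minus .whl) is its platform tag."""
    return wheel_filename[:-len(".whl")].split("-")[-1]

wheel = "flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl"
tag = wheel_platform_tag(wheel)

def compatible(tag, machine):
    # Simplified stand-in for pip's tag check: an aarch64 Linux host accepts
    # linux_aarch64 / manylinux*_aarch64 tags, never linux_x86_64.
    return tag == "any" or tag.endswith(machine)

print(tag)                         # linux_x86_64
print(compatible(tag, "aarch64"))  # False: rejected on Arm hosts
print(compatible(tag, "x86_64"))   # True
```

Because the URL pins one specific file, pip never gets the chance to pick a different, compatible wheel; it downloads the x86_64 artifact, checks the tag, and stops.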
<p>Phase 4: Architecture Compatibility Scan</p>
<p>Copilot runs the migrate_ease_scan tool with the Python scanner on the codebase.</p>
<p>Caption: The migrate_ease_scan tool analyzes the Python source code and finds zero x86-specific dependencies. No intrinsics, no hardcoded paths, no architecture-locked libraries.</p>
<p>The application source code itself returns 0 architecture issues — no x86 intrinsics, no platform-specific system calls. But the scan also flags the dependency manifest. Two blockers in requirements.txt:</p>
<table>
<thead><tr><th>Dependency</th><th>Issue</th><th>Arm64 Fix</th></tr></thead>
<tbody>
<tr><td>flash-attn (linux wheel)</td><td>Hardcoded linux_x86_64 URL</td><td>Use flash-attn 2.7+ via PyPI, which publishes aarch64 wheels natively</td></tr>
<tr><td>triton&gt;=3.0.0</td><td>No aarch64 PyPI wheel for Linux</td><td>Exclude on aarch64 or use a triton-nightly aarch64 build</td></tr>
</tbody>
</table>
<p>Phase 5: Arm Knowledge Base Lookup</p>
<p>Copilot queries the Arm MCP Server&#8217;s knowledge base for solutions to the discovered issues.</p>
<p>Caption: GitHub Copilot uses the knowledge_base_search tool to find Docker buildx multi-arch strategies from learn.arm.com.</p>
<p>The knowledge base returns documentation on:</p>
<p>flash-attn aarch64 wheel availability from version 2.7+</p>
<p>PyTorch Arm64 optimization guides for Graviton and Apple Silicon</p>
<p>Best practices for CUDA 13.0 on aarch64 (Jetson Thor / DGX Spark)</p>
<p>triton alternatives for CPU inference paths on Arm</p>
<p>Phase 6: Synthesis and Verdict</p>
<p>Sequential Thinking combines all findings into a structured verdict:</p>
<table>
<thead><tr><th>Check</th><th>Result</th><th>Blocks?</th></tr></thead>
<tbody>
<tr><td>Container manifest</td><td>amd64 only</td><td>Yes, needs rebuild</td></tr>
<tr><td>Base image python:3.11-slim</td><td>Multi-arch (arm64 available)</td><td>No</td></tr>
<tr><td>System packages (ffmpeg, libsndfile1)</td><td>Available in Debian arm64</td><td>No</td></tr>
<tr><td>torch==2.9.1</td><td>aarch64 wheels published</td><td>No</td></tr>
<tr><td>flash-attn linux wheel</td><td>Hardcoded linux_x86_64 URL</td><td>Yes, add arm64 URL alongside</td></tr>
<tr><td>triton&gt;=3.0.0</td><td>aarch64 wheels available from 3.5.0+</td><td>No, resolves automatically</td></tr>
<tr><td>Source code (migrate-ease)</td><td>0 architecture issues</td><td>No</td></tr>
<tr><td>Compiler flags in Dockerfile</td><td>None x86-specific</td><td>No</td></tr>
</tbody>
</table>
<p>Verdict: CONDITIONAL GO. Zero code changes. Zero Dockerfile changes. One dependency fix is required.</p>
<p>Here are the exact changes needed in requirements.txt:</p>
<pre><code># BEFORE: only x86_64
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux' and python_version == '3.11'

# AFTER: add an arm64 line alongside, disambiguated by platform_machine
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_aarch64.whl ; sys_platform == 'linux' and python_version == '3.11' and platform_machine == 'aarch64'
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux' and python_version == '3.11' and platform_machine != 'aarch64'

# triton: no change needed, 3.5.0+ has aarch64 wheels, resolves automatically
triton&gt;=3.0.0; sys_platform != 'win32'</code></pre>
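<p>The platform_machine markers are what make the two flash-attn lines coexist safely: on any given machine, exactly one of them applies. The toy evaluator below illustrates the selection; it handles only the platform_machine clause and is nowhere near a full PEP 508 marker parser.</p>

```python
import re

def machine_clause_matches(marker, machine):
    """Evaluate only the platform_machine ==/!= clause of an environment marker."""
    m = re.search(r"platform_machine\s*(==|!=)\s*'([^']+)'", marker)
    if m is None:
        return True  # no platform_machine clause: the line applies everywhere
    op, value = m.groups()
    return (machine == value) if op == "==" else (machine != value)

requirements = [
    ("linux_aarch64 wheel", "sys_platform == 'linux' and platform_machine == 'aarch64'"),
    ("linux_x86_64 wheel",  "sys_platform == 'linux' and platform_machine != 'aarch64'"),
]

for machine in ("aarch64", "x86_64"):
    chosen = [name for name, marker in requirements
              if machine_clause_matches(marker, machine)]
    print(machine, "->", chosen)
# aarch64 -> ['linux_aarch64 wheel']
# x86_64 -> ['linux_x86_64 wheel']
```

Without the `!=` guard on the x86_64 line, both direct-URL requirements would apply on aarch64 and pip would reject the conflicting pins, which is why both lines carry a platform_machine condition.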
<p>After the flash-attn fix, the build command is:</p>
<pre><code>docker buildx build --platform linux/arm64 -t ace-step:arm64 .</code></pre>
<p>That single command unlocks three deployment paths:</p>
<p>NVIDIA Arm64 — Jetson Thor, DGX Spark (aarch64 + CUDA 13.0)</p>
<p>Cloud Arm64 — AWS Graviton, Azure Cobalt, Google Axion (20-40% cost savings)</p>
<p>Apple Silicon — M1-M4 Macs with MPS acceleration (local inference, $0 cloud cost)</p>
<p>Phase 7: Create the Pull Request</p>
<p>After completing the scan, Copilot uses GitHub MCP to propose the fix. Since the only blocker is the hardcoded linux_x86_64 wheel URL on line 32 of requirements.txt, the change is surgical: one line added and a platform_machine guard appended to the existing entry, nothing removed.</p>
<p>The fix adds the equivalent linux_aarch64 wheel from the same release alongside the existing x86_64 entry, conditioned on platform_machine == 'aarch64', with the x86_64 entry excluded on aarch64 so the two pins never conflict:</p>
<pre><code># BEFORE: only x86_64, fails on Arm with "not a supported wheel on this platform"
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux' and python_version == '3.11'

# AFTER: arm64 line added alongside, both conditioned by platform_machine
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_x86_64.whl ; sys_platform == 'linux' and python_version == '3.11' and platform_machine != 'aarch64'
flash-attn @ https://github.com/mjun0812/flash-attention-prebuild-wheels/releases/download/v0.7.12/flash_attn-2.8.3+cu128torch2.10-cp311-cp311-linux_aarch64.whl ; sys_platform == 'linux' and python_version == '3.11' and platform_machine == 'aarch64'</code></pre>
<p>Caption: PR #14 on Hugging Face &#8211; Ready to merge</p>
<p>The key insight: the upstream maintainer already published the arm64 wheel in the same release. The fix wasn&#8217;t a rebuild or a code change &#8211; it was adding one line that references an artifact that already existed. The MCP chain found it in 15 minutes. Without it, a developer hitting this pip error would spend hours tracking it down.</p>
<p>PR: https://huggingface.co/spaces/ACE-Step/Ace-Step-v1.5/discussions/14</p>
<p>Without Arm MCP vs. With Arm MCP</p>
<p>Let&#8217;s be clear about what changes when you add the Arm MCP Server to Docker MCP Toolkit.</p>
<p>Without Arm MCP: You ask GitHub Copilot to check your Hugging Face Space for Arm64 compatibility. Copilot responds with general advice: &#8220;Check if your base image supports arm64&#8221;, &#8220;Look for x86-specific code&#8221;, &#8220;Try rebuilding with buildx&#8221;. You manually inspect Docker Hub, grep through the codebase, check each dependency on PyPI, and hit a pip install failure you cannot easily diagnose. The flash-attn URL issue alone can take an hour to track down.</p>
<p>With Arm MCP + Docker MCP Toolkit: You ask the same question. Within minutes, it uses skopeo to verify the base image, runs migrate_ease_scan on your actual codebase, flags the hardcoded linux_x86_64 wheel URLs in requirements.txt, queries knowledge_base_search for the correct fix, and synthesizes a structured CONDITIONAL GO verdict with every check documented.</p>
<p>Real images get inspected. Real code gets scanned. Real dependency files get analyzed. The difference is Docker MCP Toolkit gives GitHub Copilot access to actual Arm migration tooling, not just general knowledge.</p>
<p>Manual Process vs. MCP Chain</p>
<p>Manual process:</p>
<p>Clone the Hugging Face Space repository (10 minutes)</p>
<p>Inspect the container manifest for architecture support (5 minutes)</p>
<p>Read through pyproject.toml and requirements.txt (20 minutes)</p>
<p>Check PyPI for Arm64 wheel availability across all dependencies (30 minutes)</p>
<p>Analyze the Dockerfile for hardcoded architecture assumptions (10 minutes)</p>
<p>Research CUDA/cuDNN Arm64 support for the required versions (20 minutes)</p>
<p>Write up findings and recommended changes (15 minutes)</p>
<p>Total: 2-3 hours per Space</p>
<p>With Docker MCP Toolkit:</p>
<p>Give GitHub Copilot the scan instructions (5 minutes)</p>
<p>Review the migration report (5 minutes)</p>
<p>Submit a PR with changes (5 minutes)</p>
<p>Total: 15 minutes per Space</p>
<p>What This Suggests at Scale</p>
<p>ACE-Step is a standard Python AI application: PyTorch, Gradio, pip dependencies, a slim Dockerfile. This pattern covers the majority of Docker SDK Spaces on Hugging Face.</p>
<p>The Arm64 wall for these apps is not always visible. The Dockerfile looks clean. The base image supports arm64. The Python code has no intrinsics. But buried in requirements.txt is a hardcoded wheel URL pointing at a linux_x86_64 binary, and nobody finds it until they actually try to run the container on Arm hardware.</p>
<p>That is the 80% problem: 80% of Hugging Face Docker Spaces have never been tested on Arm, not because the code will not work, but because nobody checked. The MCP chain is a systematic check that takes 15 minutes instead of an afternoon of debugging pip errors.</p>
<p>That has real cost implications:</p>
<p>Graviton inference runs 20-40% cheaper for the same workloads. Every amd64-only Space leaves that savings untouched.</p>
<p>NVIDIA Physical AI (GR00T, LeRobot, Isaac) deploys on Jetson Thor. Developers find models on Hugging Face, but the containers fail to build on target hardware.</p>
<p>Apple Silicon is the most common developer laptop. Local inference means faster iteration and no cloud bill.</p>
<p>How Docker MCP Toolkit Changes Development</p>
<p>Docker MCP Toolkit changes how developers interact with specialized knowledge and capabilities. Rather than learning new tools, installing dependencies, or managing credentials, developers connect their AI assistant once and immediately access containerized expertise.</p>
<p>The benefits extend beyond Hugging Face scanning:</p>
<p>Consistency — Same 7-tool chain produces the same structured analysis for any container</p>
<p>Security — Each tool runs in an isolated Docker container, preventing tool interference</p>
<p>Reproducibility — Scans behave identically across environments</p>
<p>Composability — Add or swap tools as the ecosystem evolves</p>
<p>Discoverability — Docker MCP Catalog makes finding the right server straightforward</p>
<p>Most importantly, developers remain in their existing workflow. VS Code. GitHub Copilot. Git. No context switching to external tools or dashboards.</p>
<p>Wrapping Up</p>
<p>You have just scanned a real Hugging Face Space for Arm64 readiness using Docker MCP Toolkit, the Arm MCP Server, and GitHub Copilot. What we found with ACE-Step v1.5 is representative of what you will find across Hugging Face: code that is architecture-neutral, a Dockerfile that is already clean, but a requirements.txt with hardcoded x86_64 wheel URLs that silently break Arm64 builds.</p>
<p>The MCP chain surfaces this in 15 minutes. Without it, you are staring at a pip error with no clear path to the cause.</p>
<p>Ready to try it? Open Docker Desktop and explore the MCP Catalog. Start with the Arm MCP Server, then add the GitHub, Sequential Thinking, and Hugging Face MCP servers. Point the chain at any Hugging Face Space you&#8217;re working with and see what comes back.</p>
<p>Learn More</p>
<p>New to Docker? Download Docker Desktop</p>
<p>Explore the MCP Catalog: Discover containerized, security-hardened MCP servers</p>
<p>Get Started with MCP Toolkit: Official Documentation</p>
<p>Arm MCP Server: Developer Documentation</p>
<p>Hugging Face MCP Server: Hub Documentation</p>
<p>ACE-Step v1.5: Hugging Face Space</p>
<p>Migration PR: GitHub Pull Request</p>
<p>Quelle: https://blog.docker.com/feed/</p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/how-to-analyze-hugging-face-for-arm64-readiness/">How to Analyze Hugging Face for Arm64 Readiness</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/how-to-analyze-hugging-face-for-arm64-readiness/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AWS Elastic Disaster Recovery now supports IPv6</title>
		<link>https://www.cloud-computing-koeln.de/aws-elastic-disaster-recovery-now-supports-ipv6/</link>
		<comments>https://www.cloud-computing-koeln.de/aws-elastic-disaster-recovery-now-supports-ipv6/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:26 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/aws-elastic-disaster-recovery-now-supports-ipv6/</guid>
		<description><![CDATA[<p>AWS Elastic Disaster Recovery (AWS DRS) now supports IPv6 for both data replication and control plane connections. Customers operating in IPv6-only or dual-stack network environments can now configure AWS DRS to replicate using IPv6, eliminating the need for IPv4 addresses in their disaster recovery setup. AWS DRS minimizes downtime and data loss with fast, reliable&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/aws-elastic-disaster-recovery-now-supports-ipv6/">Continue reading &#8594;</a></p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/aws-elastic-disaster-recovery-now-supports-ipv6/">AWS Elastic Disaster Recovery now supports IPv6</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>AWS Elastic Disaster Recovery (AWS DRS) now supports IPv6 for both data replication and control plane connections. Customers operating in IPv6-only or dual-stack network environments can now configure AWS DRS to replicate using IPv6, eliminating the need for IPv4 addresses in their disaster recovery setup.</p>
<p>AWS DRS minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. Previously, AWS DRS required IPv4 connectivity for all replication and service communication. Now, customers can set the internet protocol to IPv6 in their replication configuration to use dual-stack endpoints for agent-to-service communication and data replication. This helps customers meet network modernization requirements and enables disaster recovery in environments where IPv4 addresses are unavailable or restricted. Existing replication configurations are not affected and continue to use IPv4 by default.</p>
<p>This capability is available in all AWS Regions where AWS DRS is available and where Amazon EC2 supports IPv6. See the AWS Regional Services List for the latest availability information.</p>
<p>To learn more about AWS DRS, visit our product page or documentation. To get started, sign in to the AWS Elastic Disaster Recovery Console.<br />
Quelle: aws.amazon.com</p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/aws-elastic-disaster-recovery-now-supports-ipv6/">AWS Elastic Disaster Recovery now supports IPv6</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/aws-elastic-disaster-recovery-now-supports-ipv6/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AWS IoT is now available in Israel (Tel Aviv) and Europe (Milan) AWS Regions</title>
		<link>https://www.cloud-computing-koeln.de/aws-iot-is-now-available-in-israel-tel-aviv-and-europe-milan-aws-regions/</link>
		<comments>https://www.cloud-computing-koeln.de/aws-iot-is-now-available-in-israel-tel-aviv-and-europe-milan-aws-regions/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:25 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/aws-iot-is-now-available-in-israel-tel-aviv-and-europe-milan-aws-regions/</guid>
		<description><![CDATA[<p>AWS IoT Core and AWS IoT Device Management services are now available in the Israel (Tel Aviv) and Europe (Milan) AWS Regions. With this expansion, organizations operating in these regions can better serve their local customers and unlock multiple benefits, including faster response times, stronger data residency controls, and reduced data transfer expenses. AWS IoT&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/aws-iot-is-now-available-in-israel-tel-aviv-and-europe-milan-aws-regions/">Continue reading &#8594;</a></p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/aws-iot-is-now-available-in-israel-tel-aviv-and-europe-milan-aws-regions/">AWS IoT is now available in Israel (Tel Aviv) and Europe (Milan) AWS Regions</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>AWS IoT Core and AWS IoT Device Management services are now available in the Israel (Tel Aviv) and Europe (Milan) AWS Regions. With this expansion, organizations operating in these regions can better serve their local customers and unlock multiple benefits, including faster response times, stronger data residency controls, and reduced data transfer expenses. AWS IoT Core is a managed cloud service that lets you securely connect billions of Internet of Things (IoT) devices to the cloud and manage them at scale. It routes trillions of messages to IoT devices and AWS endpoints through bi-directional, industry-standard protocols such as MQTT, HTTPS, and LoRaWAN (in select Regions). AWS IoT Device Management allows customers to search, organize, monitor, and remotely manage connected devices at scale. With the expansion to these regions, AWS IoT is now available in 27 AWS Regions worldwide. To get started and to learn more, refer to the technical documentation for AWS IoT Core and AWS IoT Device Management.<br />
Source: aws.amazon.com</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/aws-iot-is-now-available-in-israel-tel-aviv-and-europe-milan-aws-regions/">AWS IoT is now available in Israel (Tel Aviv) and Europe (Milan) AWS Regions</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/aws-iot-is-now-available-in-israel-tel-aviv-and-europe-milan-aws-regions/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Amazon EC2 M8i and M8i-flex instances are now available in AWS GovCloud (US-West) Region</title>
		<link>https://www.cloud-computing-koeln.de/amazon-ec2-m8i-and-m8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/</link>
		<comments>https://www.cloud-computing-koeln.de/amazon-ec2-m8i-and-m8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:23 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/amazon-ec2-m8i-and-m8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/</guid>
		<description><![CDATA[<p>Starting today, Amazon EC2 M8i and M8i-flex instances are now available in AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/amazon-ec2-m8i-and-m8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/">Continue reading &#8594;</a></p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-ec2-m8i-and-m8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/">Amazon EC2 M8i and M8i-flex instances are now available in AWS GovCloud (US-West) Region</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Starting today, Amazon EC2 M8i and M8i-flex instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models. M8i-flex instances are the easiest way to get price-performance benefits for the majority of general-purpose workloads, such as web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don&#8217;t fully utilize all compute resources. M8i instances are a great choice for all general-purpose workloads, especially those that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes, including 2 bare-metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i and M8i-flex instance page or the AWS News blog.<br />
Source: aws.amazon.com</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-ec2-m8i-and-m8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/">Amazon EC2 M8i and M8i-flex instances are now available in AWS GovCloud (US-West) Region</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/amazon-ec2-m8i-and-m8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Amazon EC2 R8i and R8i-flex instances are now available in AWS GovCloud (US-West) Region</title>
		<link>https://www.cloud-computing-koeln.de/amazon-ec2-r8i-and-r8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/</link>
		<comments>https://www.cloud-computing-koeln.de/amazon-ec2-r8i-and-r8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:22 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/amazon-ec2-r8i-and-r8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/</guid>
		<description><![CDATA[<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/amazon-ec2-r8i-and-r8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/">Continue reading &#8594;</a></p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-ec2-r8i-and-r8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/">Amazon EC2 R8i and R8i-flex instances are now available in AWS GovCloud (US-West) Region</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for the majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don&#8217;t fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially those that need the largest instance sizes or continuous high CPU usage. They offer 13 sizes, including 2 bare-metal sizes and the new 96xlarge size for the largest applications, and are SAP-certified, delivering 142,100 aSAPS for exceptional performance on mission-critical SAP workloads. To get started, sign in to the AWS Management Console. For more information about the R8i and R8i-flex instances, visit the AWS News blog.<br />
Source: aws.amazon.com</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-ec2-r8i-and-r8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/">Amazon EC2 R8i and R8i-flex instances are now available in AWS GovCloud (US-West) Region</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/amazon-ec2-r8i-and-r8i-flex-instances-are-now-available-in-aws-govcloud-us-west-region/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ad: Power bank with 165 W and 20,000 mAh on sale at Amazon</title>
		<link>https://www.cloud-computing-koeln.de/anzeige-powerbank-mit-165-w-und-20-000-mah-im-angebot-bei-amazon/</link>
		<comments>https://www.cloud-computing-koeln.de/anzeige-powerbank-mit-165-w-und-20-000-mah-im-angebot-bei-amazon/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:14 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/anzeige-powerbank-mit-165-w-und-20-000-mah-im-angebot-bei-amazon/</guid>
		<description><![CDATA[<p>A power bank from Ugreen with 165 W, 20,000 mAh, and three ports, including a retractable cable, is available at Amazon at a promotional price. (Tech/Hardware) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-powerbank-mit-165-w-und-20-000-mah-im-angebot-bei-amazon/">Ad: Power bank with 165 W and 20,000 mAh on sale at Amazon</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>A power bank from Ugreen with 165 W, 20,000 mAh, and three ports, including a retractable cable, is available at Amazon at a promotional price. (Tech/Hardware)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-powerbank-mit-165-w-und-20-000-mah-im-angebot-bei-amazon/">Ad: Power bank with 165 W and 20,000 mAh on sale at Amazon</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/anzeige-powerbank-mit-165-w-und-20-000-mah-im-angebot-bei-amazon/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Assassin&#039;s Creed Shadows: Where Ubisoft&#039;s open world hit its limits</title>
		<link>https://www.cloud-computing-koeln.de/assassins-creed-shadows-wo-ubisofts-offene-welt-an-grenzen-gestossen-ist/</link>
		<comments>https://www.cloud-computing-koeln.de/assassins-creed-shadows-wo-ubisofts-offene-welt-an-grenzen-gestossen-ist/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:13 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/assassins-creed-shadows-wo-ubisofts-offene-welt-an-grenzen-gestossen-ist/</guid>
		<description><![CDATA[<p>Ubisoft explains how the technology behind Assassin&#8217;s Creed Shadows was meant to solve the problems of open game worlds &#8211; and where the limits lay. By Peter Steinlechner (Assassin&#039;s Creed Shadows, Ubisoft) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/assassins-creed-shadows-wo-ubisofts-offene-welt-an-grenzen-gestossen-ist/">Assassin&#039;s Creed Shadows: Where Ubisoft&#039;s open world hit its limits</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Ubisoft explains how the technology behind Assassin&#8217;s Creed Shadows was meant to solve the problems of open game worlds &#8211; and where the limits lay. By Peter Steinlechner (Assassin&#039;s Creed Shadows, Ubisoft)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/assassins-creed-shadows-wo-ubisofts-offene-welt-an-grenzen-gestossen-ist/">Assassin&#039;s Creed Shadows: Where Ubisoft&#039;s open world hit its limits</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/assassins-creed-shadows-wo-ubisofts-offene-welt-an-grenzen-gestossen-ist/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ad: Gaming TV at Amazon at its current best price</title>
		<link>https://www.cloud-computing-koeln.de/anzeige-gaming-tv-bei-amazon-zum-aktuellen-bestpreis/</link>
		<comments>https://www.cloud-computing-koeln.de/anzeige-gaming-tv-bei-amazon-zum-aktuellen-bestpreis/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:11 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/anzeige-gaming-tv-bei-amazon-zum-aktuellen-bestpreis/</guid>
		<description><![CDATA[<p>Amazon currently has the TCL 55T8C on offer. It is reduced by 33 percent, making it available at its current best price. (Television, Home Cinema) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-gaming-tv-bei-amazon-zum-aktuellen-bestpreis/">Ad: Gaming TV at Amazon at its current best price</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Amazon currently has the TCL 55T8C on offer. It is reduced by 33 percent, making it available at its current best price. (Television, Home Cinema)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-gaming-tv-bei-amazon-zum-aktuellen-bestpreis/">Ad: Gaming TV at Amazon at its current best price</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/anzeige-gaming-tv-bei-amazon-zum-aktuellen-bestpreis/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ad: 2 TB M.2 SSD cheaper than it has been in months</title>
		<link>https://www.cloud-computing-koeln.de/anzeige-m-2-ssd-mit-2-tbyte-guenstig-wie-seit-monaten-nicht/</link>
		<comments>https://www.cloud-computing-koeln.de/anzeige-m-2-ssd-mit-2-tbyte-guenstig-wie-seit-monaten-nicht/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:09 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/anzeige-m-2-ssd-mit-2-tbyte-guenstig-wie-seit-monaten-nicht/</guid>
		<description><![CDATA[<p>The Lexar NM790 M.2 SSD with 2 TB, compatible with both PC and PS5, is reduced by 40 euros at Amazon. It has not been cheaper since January. (Solid State Drive, Storage Media) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-m-2-ssd-mit-2-tbyte-guenstig-wie-seit-monaten-nicht/">Ad: 2 TB M.2 SSD cheaper than it has been in months</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The Lexar NM790 M.2 SSD with 2 TB, compatible with both PC and PS5, is reduced by 40 euros at Amazon. It has not been cheaper since January. (Solid State Drive, Storage Media)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-m-2-ssd-mit-2-tbyte-guenstig-wie-seit-monaten-nicht/">Ad: 2 TB M.2 SSD cheaper than it has been in months</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/anzeige-m-2-ssd-mit-2-tbyte-guenstig-wie-seit-monaten-nicht/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ad: Einhell circular saw costs only 44 euros at Amazon</title>
		<link>https://www.cloud-computing-koeln.de/anzeige-einhell-handkreissaege-kostet-nur-44-euro-bei-amazon/</link>
		<comments>https://www.cloud-computing-koeln.de/anzeige-einhell-handkreissaege-kostet-nur-44-euro-bei-amazon/#comments</comments>
		<pubDate>Tue, 14 Apr 2026 02:41:08 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/anzeige-einhell-handkreissaege-kostet-nur-44-euro-bei-amazon/</guid>
		<description><![CDATA[<p>At Amazon, a hand-held circular saw from Einhell costs 44 euros, making it cheaper than in other shops. (Tech/Hardware) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-einhell-handkreissaege-kostet-nur-44-euro-bei-amazon/">Ad: Einhell circular saw costs only 44 euros at Amazon</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>At Amazon, a hand-held circular saw from Einhell costs 44 euros, making it cheaper than in other shops. (Tech/Hardware)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-einhell-handkreissaege-kostet-nur-44-euro-bei-amazon/">Ad: Einhell circular saw costs only 44 euros at Amazon</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/anzeige-einhell-handkreissaege-kostet-nur-44-euro-bei-amazon/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
