<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Cloud Computing Köln &#187; cloud computing</title>
	<atom:link href="https://www.cloud-computing-koeln.de/tag/cloud-computing/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.cloud-computing-koeln.de</link>
	<description>Neues zu Cloud Computing, Internet of Things und Technologien</description>
	<lastBuildDate>Sat, 02 May 2026 02:41:34 +0000</lastBuildDate>
	<language>de-DE</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1.1</generator>
	<item>
		<title>A Virtual Agent team at Docker: How the Coding Agent Sandboxes team uses a fleet of agents to ship faster</title>
		<link>https://www.cloud-computing-koeln.de/a-virtual-agent-team-at-docker-how-the-coding-agent-sandboxes-team-uses-a-fleet-of-agents-to-ship-faster/</link>
		<comments>https://www.cloud-computing-koeln.de/a-virtual-agent-team-at-docker-how-the-coding-agent-sandboxes-team-uses-a-fleet-of-agents-to-ship-faster/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:34 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/a-virtual-agent-team-at-docker-how-the-coding-agent-sandboxes-team-uses-a-fleet-of-agents-to-ship-faster/</guid>
		<description><![CDATA[<p>I work on Coding Agent Sandboxes, aka “sbx” at Docker. The project provides secure, microVM-based isolation for running AI coding agents like Claude Code, Gemini, Codex, Docker Agent and Kiro. Agents get full autonomy inside a sandbox (their own Docker daemon, network, filesystem) without touching your host system. Over the past couple of weeks, we&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/a-virtual-agent-team-at-docker-how-the-coding-agent-sandboxes-team-uses-a-fleet-of-agents-to-ship-faster/">Continue reading &#8594;</a></p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/a-virtual-agent-team-at-docker-how-the-coding-agent-sandboxes-team-uses-a-fleet-of-agents-to-ship-faster/">A Virtual Agent team at Docker: How the Coding Agent Sandboxes team uses a fleet of agents to ship faster</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>I work on Coding Agent Sandboxes, aka “sbx” at Docker. The project provides secure, microVM-based isolation for running AI coding agents like Claude Code, Gemini, Codex, Docker Agent and Kiro. Agents get full autonomy inside a sandbox (their own Docker daemon, network, filesystem) without touching your host system. Over the past couple of weeks, we built something on top of it: a virtual team of seven AI agent roles that test the product, triage issues, post release notes, and even fix bugs, all running autonomously in CI. We call it the Fleet.</p>
<p>The Fleet is built on Claude Code skills: markdown files that give an agent a persona, a set of responsibilities, and the tools it’s allowed to use. Think of a skill not as a script that says “run these steps,” but as a role description that says “you are the build engineer, here’s what you know and how you make decisions.” That distinction matters because agents need judgment, not just instructions. When a test fails unexpectedly, a script stops. A role investigates.</p>
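<p>A skill file of that shape might look like the following sketch (illustrative only; the team’s actual SKILL.md layout is not shown in this post):</p>

```markdown
# SKILL.md — build-engineer (illustrative sketch, not the real file)

## Persona
You are the build engineer. You know the Taskfile.yml, the docker-bake.hcl,
and the platform-specific build flags.

## Responsibilities
- Build the sbx binaries for macOS, Linux, and Windows.
- When a build fails unexpectedly, investigate the cause before reporting.

## Tools
- task, docker buildx bake, go build
```

<p>The point is the register: a role description with knowledge and responsibilities, not a numbered procedure.</p>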
<p>The same skill file, the same behavior, whether it runs on a developer’s laptop or in CI.</p>
<h2>Local First, CI Second</h2>
<p>Coding Agent Sandboxes is a CLI tool (sbx) that manages sandbox lifecycles: create, start, stop, remove, configure networking, mount workspaces, and more. It runs on macOS, Linux, and Windows. Every release needs testing across all three platforms, across upgrade paths between versions, and under sustained load to catch resource leaks. The team also needs daily visibility into what shipped, and a way to triage the growing issue backlog without it becoming a full-time job.</p>
<p>We could have written traditional test scripts and reporting tools. Instead, we built agent roles that handle these tasks autonomously, both on our laptops and in CI.</p>
<p>The design principle behind the Fleet is simple: every skill runs on your machine first.</p>
<p>When we built the /cli-tester skill (the Fleet’s exploratory tester, more on that below), we didn’t start by writing a GitHub workflow. We started by invoking it locally. We watched it build the binaries, exercise the CLI commands, find issues, and report them. We tweaked the skill until it did the right thing in our terminal. Only then did we wire it into a workflow.</p>
<p>This matters because the alternative is painful. If you build CI-only agents, you debug them through commit-push-wait-read-logs cycles. Every iteration takes minutes. When the skill runs locally first, the iteration takes seconds. You see the agent think. You see where it gets confused. You fix the skill file, re-invoke, and try again.</p>
<p>CI is just another runtime for the same skill. The /cli-tester that runs nightly on macOS, Linux, and Windows runners is the exact same skill we invoke from our terminals. The workflow sets up the environment, checks out the code, and calls the skill. That’s it. No separate “CI version.” No translation layer. One skill, two runtimes.</p>
<p>This is what makes the Fleet practical. You’re not maintaining two systems. You’re maintaining one set of skills and a set of workflows that invoke them.</p>
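<p>As a sketch of what “CI is just another runtime” can look like, a nightly workflow of roughly this shape sets up the environment and calls the same skill on all three platforms (the file, step names, and skill invocation are assumptions, not the team’s actual configuration):</p>

```yaml
name: nightly-cli-tester
on:
  schedule:
    - cron: "0 3 * * *"          # nightly run
jobs:
  cli-tester:
    strategy:
      matrix:
        os: [macos-latest, ubuntu-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      # The workflow only prepares the environment and invokes the skill;
      # the skill file is the same one used from a developer's terminal.
      - name: Invoke the /cli-tester skill
        run: claude -p "/cli-tester"   # hypothetical invocation
```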
<h2>The Roster</h2>
<p>The skills directory has 20 skills in total. Most are foundational knowledge (architecture, code style, Go conventions, security, testing patterns). Seven of them are the Fleet: the roles that run autonomously on CI. Each one is a SKILL.md file that describes a persona, not a procedure.</p>
<p>/build-engineer is the foundation that other skills stand on. It references topic files for building binaries, container templates, and local installs. It knows the Taskfile.yml, the docker-bake.hcl, and the platform-specific build flags. It doesn’t run on CI by itself. Other skills load it when they need to compile anything.</p>
<p>/project-manager is the team’s memory. It deduplicates findings against existing issues and PRs before creating new ones, manages the GitHub Projects board (setting status, priority, and labels), and handles interactive triage when running locally. On CI, it switches to fully automatic mode: no questions asked, just deduplicate and create. It uses GraphQL pagination to scan the entire project board, not just the first page. Every other skill that discovers something calls the project-manager before opening an issue.</p>
<p>/product-owner translates commit-speak into human language. It collects merged PRs from a date range, categorizes them (New Features, Bug Fixes, Improvements, Documentation, Maintenance), and rewrites each one in plain English. “feat(cli): add TZ env passthrough” becomes “Docker Sandboxes now automatically use your local timezone.” On CI, it outputs Slack Block Kit JSON. Locally, it renders a markdown table. It filters out noise from bots (Dependabot bumps, workflow-only changes) and skips posting when there’s nothing meaningful to report.</p>
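<p>The categorization step can be sketched as a simple mapping from conventional-commit prefixes in PR titles to the buckets above (the bucket names come from the text; which prefix lands in which bucket is an assumption):</p>

```shell
#!/bin/sh
# Map a PR title's conventional-commit prefix to a release-note category.
# Bucket names follow the product-owner description above; the prefix
# mapping itself is illustrative.
categorize_pr() {
    case "$1" in
        feat*)             echo "New Features" ;;
        fix*)              echo "Bug Fixes" ;;
        docs*)             echo "Documentation" ;;
        chore*|build*|ci*) echo "Maintenance" ;;
        *)                 echo "Improvements" ;;
    esac
}

categorize_pr "feat(cli): add TZ env passthrough"   # -> New Features
```

<p>Rewriting the title into plain English (“Docker Sandboxes now automatically use your local timezone”) is where the agent’s judgment comes in; the mapping only decides the section.</p>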
<p>/cli-tester is the exploratory tester of the Fleet, and it’s the largest skill by far. Unlike traditional test scripts that assert expected output and fail on any deviation, the cli-tester investigates what it finds. When output doesn’t match expectations, it asks why before filing a bug.</p>
<p>It defines 52+ test scenarios organized into 14 tiers: Core Lifecycle, Agent Smoke, Workspace, Network Policy, Sandbox Features, Blueprint, CLI UX, Environment, Code Tasks, Agent Network, Reliability, Collaboration, Error Recovery, and Human-Only (skipped in CI). It builds the binaries through the build-engineer, triages findings through the project-manager, and loads product scenarios defined by the actual Product Manager on the team. It monitors disk space during testing, posts an executive summary to Slack when it finishes, and runs nightly on CI across macOS, Linux, and Windows.</p>
<p>It also powers a slash command on GitHub. When someone comments /cli-tester-review on a pull request, CI spins up three runners (macOS, Linux, and Windows), each loading the skill to exercise the PR’s changes on that platform. The agents explore the code, run the scenarios, and post their findings as comments directly on the pull request.</p>
<p>/performance-tester runs in two modes. Lifecycle Endurance repeatedly cycles create/stop/rm to detect reliability issues and resource leaks, producing xUnit JSON output. Code Exploration Benchmark clones a real Git repository and compares host-vs-sandbox I/O performance and Claude Code session behavior. Both modes measure disk usage over time and flag regressions. The goal is catching the slow degradation that no single test run would notice.</p>
<p>/upgrade-tester runs a four-phase test plan. Phase A creates pre-upgrade state (sandboxes, configurations). Phase B installs the new version. Phase C verifies everything still works after the upgrade. Phase D optionally downgrades and verifies again. It takes two version tags as input, builds the binaries for each, creates VMs, and produces an executive summary with pass/fail per phase. Upgrade regressions are the kind of bug that’s invisible in a single-version test suite.</p>
<p>/software-engineer operates in two modes. Reactive: when someone adds the agent-fix label to a GitHub issue, a macOS runner picks it up and runs a ralph-loop to work the issue, contributing a PR with minimal, focused changes. Proactive: weekly, it runs in architect mode, scanning the codebase for quality issues, producing up to five findings, triaging them through the project-manager, then spawning three macOS runners in parallel to fix three of them. Each runner delivers a PR targeting a specific simplification or tech-debt reduction.</p>
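<p>The reactive mode’s trigger can be sketched as a label-driven workflow of roughly this shape (the trigger syntax is standard GitHub Actions; the step wiring and arguments are assumptions):</p>

```yaml
name: agent-fix
on:
  issues:
    types: [labeled]
jobs:
  fix:
    if: github.event.label.name == 'agent-fix'
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      # Hand the issue to the ralph-loop, which works it and opens a PR.
      # In CI the real loop reads environment variables; the flag form
      # mirrors the local invocation.
      - run: ./ralph-loop.sh --issue-number ${{ github.event.issue.number }}
```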
<h2>Skills That Compose</h2>
<p>Individual skills are useful. Skills that load other skills are a team.</p>
<p>The seven Fleet roles sit on top of thirteen foundational skills: architecture, code style, Go conventions, software design, security, testing patterns, development workflow, git worktrees, and others. The foundational skills encode project knowledge. The Fleet roles encode behavior. A Fleet role loads the foundational skills it needs, the same way a new team member reads the project’s contributing guide before writing code.</p>
<p>The /cli-tester doesn’t know how to build binaries. It loads the /build-engineer for that. It doesn’t know whether the bug it found is a duplicate. It loads the /project-manager to check. The tester focuses on testing. The builder focuses on building. The manager focuses on triaging. Each role stays in its lane, and the composition creates something none of them could do alone.</p>
<p>The /software-engineer follows the same pattern. It loads the /build-engineer so it can compile the project, and it loads coding best practices and software design conventions so its output meets the team’s standards. The skill doesn’t try to encode everything. It delegates to the foundational skills.</p>
<p>The /performance-tester loads the /cli-tester, extending it with duration and metrics. Instead of duplicating the testing logic, it reuses it and adds a measurement layer on top.</p>
<p>This is the skills-as-roles principle in practice. When you design skills as personas with clear responsibilities (instead of step-by-step commands), they compose naturally. A tester that loads a builder and a manager is doing the same thing a human tester does: asking a colleague to compile the project and checking with the PM before filing a bug. The difference is that the “asking” happens through skill composition instead of a Slack message.</p>
<h2>The Ralph-Loop Is the Engine</h2>
<p>The Ralph Wiggum loop is a pattern popularized by Geoffrey Huntley in 2025: a Bash loop that keeps feeding an AI coding agent the same task until the work is done. At its simplest, it’s <code>while :; do cat PROMPT.md | claude-code; done</code>. Each iteration spawns a fresh agent with a clean context window. The agent reads the task, implements one piece, runs the tests, commits if they pass, and exits. The loop restarts, and the next iteration picks up where the previous one left off. Instead of hoping for first-try perfection, you design for iteration.</p>
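<p>A slightly more structured version of the same loop, with the agent command factored out so the pattern itself is visible (the completion marker and iteration cap are assumptions; in the real pattern the agent command pipes PROMPT.md into the agent):</p>

```shell
#!/bin/sh
# The Ralph Wiggum pattern: keep spawning a fresh agent on the same task
# until a completion marker appears or an iteration cap is hit. The agent
# command is a parameter; in the real pattern it would be something like
#   run_agent() { cat PROMPT.md | claude-code; }
ralph_loop() {
    agent_cmd="$1"      # invoked once per iteration, fresh context each time
    done_marker="$2"    # file the agent creates when the work is done (assumption)
    max_iter="${3:-5}"  # safety cap

    i=0
    while [ "$i" -lt "$max_iter" ] && [ ! -f "$done_marker" ]; do
        i=$((i + 1))
        "$agent_cmd" "$i"
    done
    echo "$i"            # number of iterations actually run
}
```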
<p>Our implementation of this pattern is called a Ralph-loop. The Fleet skills define what each agent role knows. The Ralph-loop defines how the iteration runs.</p>
<p>Our Ralph-loop is a composite GitHub Action backed by a shell script that adds a layer on top of the basic pattern: a separate worker and reviewer. It fetches the issue context, creates a working branch, and iterates: the worker implements changes and writes a summary, the reviewer evaluates the diff and decides SHIP or REVISE. If REVISE, the feedback goes back to the worker for another pass. Up to five iterations by default. If the reviewer says SHIP, the loop pushes the branch, creates a PR, and comments on the original issue.</p>
<p>The worker and reviewer run as separate Claude invocations with different models. The worker uses Opus for implementation. The reviewer uses Opus with 1M context to evaluate the full diff against the task requirements. Each one loads the /software-engineer skill (which in turn loads the build-engineer and coding best practices), so they share the same project knowledge but apply it from different perspectives.</p>
<p>Separating generation from evaluation is deliberate. The same agent that wrote the code shouldn’t evaluate whether the code is good. It’s the oldest principle in quality assurance: the person who built the thing shouldn’t be the only person who tests it. The worker’s job is to solve the problem. The reviewer’s job is to decide whether the problem is actually solved.</p>
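<p>The worker/reviewer split can be sketched as follows; the two roles are parameters here so the control flow itself is visible, whereas in the real loop each is a separate Claude invocation (the SHIP/REVISE verdicts and five-pass default come from the text, the rest is illustrative):</p>

```shell
#!/bin/sh
# Worker implements and summarizes; reviewer evaluates and answers SHIP or
# REVISE. REVISE feedback is fed back to the worker on the next pass, up to
# five iterations by default.
worker_reviewer_loop() {
    worker="$1"; reviewer="$2"; max_iter="${3:-5}"
    feedback=""
    i=0
    while [ "$i" -lt "$max_iter" ]; do
        i=$((i + 1))
        summary=$("$worker" "$feedback")    # implement, write a summary
        verdict=$("$reviewer" "$summary")   # evaluate the result
        case "$verdict" in
            SHIP)
                echo "SHIP after $i iteration(s)"
                return 0 ;;
            *)
                feedback="$verdict" ;;      # REVISE feedback for the next pass
        esac
    done
    echo "gave up after $max_iter iterations"
    return 1
}
```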
<p>The Ralph-loop works locally too. The same ralph-loop.sh script that CI calls can be invoked from your terminal with <code>--issue-number 42</code>. Locally, it parses CLI arguments instead of reading environment variables, and outputs plain text instead of streaming JSON. Same loop, same prompts, same iteration pattern. We debugged the worker and reviewer prompts on our laptops before they ever ran in CI.</p>
<p>The workflows handle scheduling and triggering: nightly cron for the testers, label events for the software-engineer, weekly cron for the architect mode. The Ralph-loop handles the iteration pattern. The skills handle the domain knowledge. Three layers, each with a clear job.</p>
<p>This separation is what made the Fleet possible to build in a couple of weeks. We didn’t have to reinvent the automation loop for every role. The Ralph-loop already knew how to iterate. We just needed to give each role its own skill file and wire the triggers.</p>
<h2>What the Fleet Ships</h2>
<p>The Fleet has been running for a couple of weeks. Here’s what it delivers.</p>
<p>Automated issue resolution. A team member labels an issue with agent-fix. CI grabs a macOS runner, reads the issue, and starts working. The result is a pull request that addresses the issue. Not every PR lands without changes, but the first draft is there for review, often within the hour.</p>
<p>Daily release notes. The product-owner traverses the git log every day and posts a Slack summary for stakeholders. No one has to manually compile “what shipped this week.” The stakeholders see progress in real time, at the speed the team actually moves.</p>
<p>Nightly exploratory testing. The cli-tester runs every night on macOS, Linux, and Windows. It loads the product scenarios that the Product Manager has defined, exercises the CLI, and opens issues for anything it finds. Before opening an issue, it checks for duplicates through the project-manager. When it finishes, it posts a Slack message with the results.</p>
<p>Performance and upgrade testing. The performance-tester and upgrade-tester run on CI across all supported platforms. Disk usage regressions, behavioral differences between sandbox and non-sandbox modes, and version compatibility issues get caught before they reach a human reporter.</p>
<p>Weekly tech-debt reduction. Every week, the software-engineer runs in architect mode. It reviews the codebase, identifies three spots where code can be simplified or legacy patterns can be cleaned up, spawns three parallel runners, and delivers three PRs. Each one is a small, focused improvement. Over time, they compound.</p>
<h2>What We Don’t Automate</h2>
<p>The Fleet creates pull requests. It does not merge them.</p>
<p>That’s the trust boundary, and it’s deliberate. Merge decisions stay with humans. So do architectural choices, scope decisions, and prioritization. The agents do the work. The team decides what work matters and whether the output meets the bar.</p>
<p>The supervision model scales the same way it works on a developer’s laptop. When we run multiple agents locally in parallel worktrees, we review their output before merging. With the Fleet, the team supervises seven agent roles running on CI. The shape of the oversight is the same: review the output, approve or adjust, move on. The difference is that the agents don’t need anyone’s laptop to start working.</p>
<p>The Fleet is not replacing the team. It’s extending it. Seven roles that handle repetitive, well-defined work so humans can focus on work that requires judgment, context, and taste. The Fleet has many arms, but the team still steers the ship.</p>
<h2>What We Learnt Building the Fleet</h2>
<p>Start with the foundation, not the flashiest skill. We started with the /cli-tester because testing the CLI felt like the highest-value target. But it needed to build binaries, triage issues, and load product scenarios, all things that depended on other skills we hadn&#8217;t written yet. We should have started with the /build-engineer, the skill everything else stands on. The second skill was better because of what we learned from the first. Don&#8217;t design the full fleet upfront.</p>
<p>Build locally first, deploy to CI second. The commit-push-wait-read-logs cycle is where velocity goes to die. If you can&#8217;t debug a skill in your terminal, it&#8217;s not ready for a workflow. Some behaviors only surface on CI runners (different OS, permissions, network constraints), and those iterations cost hours of wall-clock time. Minimize what can only be tested in CI.</p>
<p>Write skills as roles, not scripts. Ask yourself: &#8220;If a new team member joined tomorrow with this exact role, what would I tell them?&#8221; What do they need to know? What tools can they use? How should they handle ambiguity? That conversation is your SKILL.md. &#8220;You are the build engineer, here&#8217;s what you know&#8221; produces better judgment than &#8220;run these five steps.&#8221; When something unexpected happens, a role investigates. A script stops.</p>
<p>Compose skills like you compose teams. The /cli-tester doesn&#8217;t know how to build binaries or triage bugs. It loads the /build-engineer and /project-manager for that. Each role stays in its lane. The composition creates what none of them could do alone.</p>
<p>Separate generation from evaluation. The agent that wrote the code shouldn&#8217;t be the only one that reviews it. Our Ralph-loop uses a worker and a reviewer for a reason: the oldest principle in quality assurance applies to agents too.</p>
<p>Triage matters more than detection. The /cli-tester initially filed issues for every unexpected output. Transient failures, timing-dependent behavior, environment quirks: everything became an issue. The signal-to-noise ratio got bad enough that the team started ignoring findings. Getting the triage right (deduplication, confirming before filing) took longer than building the tester itself.</p>
<p>And one more thing. All Fleet agents, even on ephemeral CI runners, run inside Coding Agent Sandboxes. We test with what our users use.<br />
Quelle: https://blog.docker.com/feed/</p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/a-virtual-agent-team-at-docker-how-the-coding-agent-sandboxes-team-uses-a-fleet-of-agents-to-ship-faster/">A Virtual Agent team at Docker: How the Coding Agent Sandboxes team uses a fleet of agents to ship faster</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/a-virtual-agent-team-at-docker-how-the-coding-agent-sandboxes-team-uses-a-fleet-of-agents-to-ship-faster/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Amazon RDS for SQL Server supports read replica with additional storage volumes</title>
		<link>https://www.cloud-computing-koeln.de/amazon-rds-for-sql-server-supports-read-replica-with-additional-storage-volumes/</link>
		<comments>https://www.cloud-computing-koeln.de/amazon-rds-for-sql-server-supports-read-replica-with-additional-storage-volumes/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:32 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/amazon-rds-for-sql-server-supports-read-replica-with-additional-storage-volumes/</guid>
		<description><![CDATA[<p>Amazon Relational Database Service (Amazon RDS) for SQL Server now supports read replicas for database instances with additional storage volumes. Additional storage volumes allow customers to scale database storage up to 256 TiB by adding up to three storage volumes, each with up to 64 TiB, in addition to the primary storage volume. With this&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/amazon-rds-for-sql-server-supports-read-replica-with-additional-storage-volumes/">Continue reading &#8594;</a></p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-rds-for-sql-server-supports-read-replica-with-additional-storage-volumes/">Amazon RDS for SQL Server supports read replica with additional storage volumes</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>Amazon Relational Database Service (Amazon RDS) for SQL Server now supports read replicas for database instances with additional storage volumes. Additional storage volumes allow customers to scale database storage up to 256 TiB by adding up to three storage volumes, each with up to 64 TiB, in addition to the primary storage volume. With this launch, for database instances configured with additional storage volumes, customers can create same-region and cross-region read replica database instances.</p>
<p>When a read replica is created for a database instance with additional storage volumes, the replica preserves the storage layout of the source instance, including the configuration of any additional storage volumes. After the initial creation, you can independently manage additional storage volume configurations on the source and read replica instances.</p>
<p>Read replicas with additional storage volumes are available in all AWS commercial Regions and the AWS GovCloud (US) Regions. Customers can start using this feature today through the AWS Management Console, AWS CLI, or AWS SDKs. To learn more, see Working with read replicas for Amazon RDS for SQL Server and Working with storage in RDS for SQL Server in the Amazon RDS User Guide.<br />
Quelle: aws.amazon.com</p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-rds-for-sql-server-supports-read-replica-with-additional-storage-volumes/">Amazon RDS for SQL Server supports read replica with additional storage volumes</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/amazon-rds-for-sql-server-supports-read-replica-with-additional-storage-volumes/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Announcing Kubernetes Dynamic Resource Allocation for Elastic Fabric Adapter</title>
		<link>https://www.cloud-computing-koeln.de/announcing-kubernetes-dynamic-resource-allocation-for-elastic-fabric-adapter/</link>
		<comments>https://www.cloud-computing-koeln.de/announcing-kubernetes-dynamic-resource-allocation-for-elastic-fabric-adapter/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:30 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/announcing-kubernetes-dynamic-resource-allocation-for-elastic-fabric-adapter/</guid>
		<description><![CDATA[<p>Amazon Elastic Kubernetes Service (Amazon EKS) now supports Dynamic Resource Allocation (DRA) for Elastic Fabric Adapter (EFA), simplifying high-performance inter-node communication and RDMA (Remote Direct Memory Access) for artificial intelligence, machine learning, and High Performance Computing (HPC) workloads. The EFA DRA driver, built on the upstream DRANET project, brings EFA interface sharing and topology-aware allocation&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/announcing-kubernetes-dynamic-resource-allocation-for-elastic-fabric-adapter/">Continue reading &#8594;</a></p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/announcing-kubernetes-dynamic-resource-allocation-for-elastic-fabric-adapter/">Announcing Kubernetes Dynamic Resource Allocation for Elastic Fabric Adapter</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>Amazon Elastic Kubernetes Service (Amazon EKS) now supports Dynamic Resource Allocation (DRA) for Elastic Fabric Adapter (EFA), simplifying high-performance inter-node communication and RDMA (Remote Direct Memory Access) for artificial intelligence, machine learning, and High Performance Computing (HPC) workloads. The EFA DRA driver, built on the upstream DRANET project, brings EFA interface sharing and topology-aware allocation for workloads running on Kubernetes.</p>
<p>With the EFA DRA driver, you can allocate EFA interfaces and accelerator devices that share the same PCIe root or device group, ensuring inter-node traffic flows through the closest network interface to each NVIDIA GPU, AWS Trainium, or AWS Inferentia device on the node. The EFA DRA driver also supports EFA interface sharing across workloads on the same node to maximize EFA interface utilization.</p>
<p>The EFA DRA driver is recommended for new deployments on Amazon EKS clusters running Kubernetes version 1.34 or later with EKS managed node groups or self-managed nodes. The EFA DRA driver is available in all AWS Regions where Amazon EKS is available. The EFA device plugin remains supported and is recommended for use with Karpenter and Amazon EKS Auto Mode.</p>
<p>To learn more, see Manage EFA devices on Amazon EKS in the Amazon EKS User Guide.<br />
Quelle: aws.amazon.com</p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/announcing-kubernetes-dynamic-resource-allocation-for-elastic-fabric-adapter/">Announcing Kubernetes Dynamic Resource Allocation for Elastic Fabric Adapter</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/announcing-kubernetes-dynamic-resource-allocation-for-elastic-fabric-adapter/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Amazon Redshift Introduces Concurrency Scaling Support for auto-copy and zero-ETL</title>
		<link>https://www.cloud-computing-koeln.de/amazon-redshift-introduces-concurrency-scaling-support-for-auto-copy-and-zero-etl/</link>
		<comments>https://www.cloud-computing-koeln.de/amazon-redshift-introduces-concurrency-scaling-support-for-auto-copy-and-zero-etl/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:28 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/amazon-redshift-introduces-concurrency-scaling-support-for-auto-copy-and-zero-etl/</guid>
		<description><![CDATA[<p>Amazon Redshift announces the general availability of Amazon Redshift concurrency scaling support for Amazon Redshift auto-copy and zero-ETL, enhancing the performance of data ingestion. This new feature combines the power of auto-copy&#8217;s seamless data ingestion from Amazon S3 and zero-ETL&#8217;s near real-time data replication from operational database, transactional database, and applications with the elasticity of&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/amazon-redshift-introduces-concurrency-scaling-support-for-auto-copy-and-zero-etl/">Continue reading &#8594;</a></p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-redshift-introduces-concurrency-scaling-support-for-auto-copy-and-zero-etl/">Amazon Redshift Introduces Concurrency Scaling Support for auto-copy and zero-ETL</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
<content:encoded><![CDATA[<p>Amazon Redshift announces the general availability of Amazon Redshift concurrency scaling support for Amazon Redshift auto-copy and zero-ETL, enhancing the performance of data ingestion. This new feature combines the power of auto-copy&#8217;s seamless data ingestion from Amazon S3 and zero-ETL&#8217;s near real-time data replication from operational databases, transactional databases, and applications with the elasticity of concurrency scaling.</p>
<p>The enhancement delivers benefits for high-volume, time-sensitive data operations. Auto-copy monitors S3 buckets and loads new data files automatically, while zero-ETL replicates data from operational and transactional databases in near real-time. When enabled, concurrency scaling adds compute capacity automatically to handle increased read and write queries, ensuring faster data ingestion without compromising performance during peak periods.</p>
<p>This new enhancement is available in all AWS commercial regions and AWS GovCloud (US) regions where Amazon Redshift is available for Amazon Redshift Serverless and RA3 Provisioned data warehouses. You can implement this feature immediately to optimize your data ingestion workflows.<br />
Quelle: aws.amazon.com</p>
<p>Der Beitrag <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-redshift-introduces-concurrency-scaling-support-for-auto-copy-and-zero-etl/">Amazon Redshift Introduces Concurrency Scaling Support for auto-copy and zero-ETL</a> erschien zuerst auf <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/amazon-redshift-introduces-concurrency-scaling-support-for-auto-copy-and-zero-etl/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>IAM Roles Anywhere now enforces VPC endpoint policies for the CreateSession API</title>
		<link>https://www.cloud-computing-koeln.de/iam-roles-anywhere-now-enforces-vpc-endpoint-policies-for-the-createsession-api/</link>
		<comments>https://www.cloud-computing-koeln.de/iam-roles-anywhere-now-enforces-vpc-endpoint-policies-for-the-createsession-api/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:26 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/iam-roles-anywhere-now-enforces-vpc-endpoint-policies-for-the-createsession-api/</guid>
		<description><![CDATA[<p>AWS Identity and Access Management (IAM) Roles Anywhere now provides the capability to configure Virtual Private Cloud (VPC) endpoint policies for the IAM Roles Anywhere CreateSession API. You can update your VPC endpoint policies to allow or deny the CreateSession operation. If CreateSession is not explicitly included in the Allow statement of your VPC endpoint&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/iam-roles-anywhere-now-enforces-vpc-endpoint-policies-for-the-createsession-api/">Continue reading &#8594;</a></p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/iam-roles-anywhere-now-enforces-vpc-endpoint-policies-for-the-createsession-api/">IAM Roles Anywhere now enforces VPC endpoint policies for the CreateSession API</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>AWS Identity and Access Management (IAM) Roles Anywhere now provides the capability to configure Virtual Private Cloud (VPC) endpoint policies for the IAM Roles Anywhere CreateSession API. You can update your VPC endpoint policies to allow or deny the CreateSession operation. If CreateSession is not explicitly included in the Allow statement of your VPC endpoint policy, or if you don’t allow all operations (for example, by specifying “rolesanywhere:*” as the action), IAM Roles Anywhere will not return temporary AWS credentials for requests made through your VPC endpoint. The CreateSession API enables workloads running outside of AWS to obtain temporary AWS credentials using X.509 certificates to access AWS resources. Previously, VPC endpoint policies applied to all IAM Roles Anywhere API operations except CreateSession. This launch closes that gap, giving you consistent, fine-grained access control across all IAM Roles Anywhere API operations. This feature is available in all AWS Regions where IAM Roles Anywhere is available, including the AWS GovCloud (US) Regions, the AWS European Sovereign Cloud (Germany) Region, and the China Regions. To learn more, see the IAM Roles Anywhere User Guide.<br />
Source: aws.amazon.com</p>
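<p>For illustration, a minimal VPC endpoint policy that explicitly allows CreateSession could look like the following sketch; the principal and resource values are placeholders to adapt to your environment:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRolesAnywhereCreateSession",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "rolesanywhere:CreateSession",
      "Resource": "*"
    }
  ]
}
```

<p>If no such statement (nor a broader one such as &#8220;rolesanywhere:*&#8221;) is present in the endpoint policy, CreateSession requests made through the endpoint will not receive temporary credentials.</p>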
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/iam-roles-anywhere-now-enforces-vpc-endpoint-policies-for-the-createsession-api/">IAM Roles Anywhere now enforces VPC endpoint policies for the CreateSession API</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/iam-roles-anywhere-now-enforces-vpc-endpoint-policies-for-the-createsession-api/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Amazon CloudFront Announces WebSocket Support for VPC Origins</title>
		<link>https://www.cloud-computing-koeln.de/amazon-cloudfront-announces-websocket-support-for-vpc-origins/</link>
		<comments>https://www.cloud-computing-koeln.de/amazon-cloudfront-announces-websocket-support-for-vpc-origins/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:24 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Amazon AWS]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/amazon-cloudfront-announces-websocket-support-for-vpc-origins/</guid>
		<description><![CDATA[<p>Amazon CloudFront now supports WebSockets traffic through Virtual Private Cloud (VPC) origins, enabling you to use CloudFront as the single entry point for real-time applications hosted entirely in private subnets. WebSockets support extends VPC origins to applications that require persistent, bidirectional connections between clients and servers, such as chat platforms, collaborative editing tools, live dashboards,&#8230; <a class="more-link" href="https://www.cloud-computing-koeln.de/amazon-cloudfront-announces-websocket-support-for-vpc-origins/">Continue reading &#8594;</a></p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-cloudfront-announces-websocket-support-for-vpc-origins/">Amazon CloudFront Announces WebSocket Support for VPC Origins</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Amazon CloudFront now supports WebSockets traffic through Virtual Private Cloud (VPC) origins, enabling you to use CloudFront as the single entry point for real-time applications hosted entirely in private subnets. WebSockets support extends VPC origins to applications that require persistent, bidirectional connections between clients and servers, such as chat platforms, collaborative editing tools, live dashboards, and IoT device management systems.  Previously, customers running real-time applications over WebSockets had to keep their origins in public subnets and use Access Control Lists and other mechanisms to restrict access to their WebSockets-enabled servers. Customers had to spend ongoing effort to implement and maintain these solutions. Now, customers can place their Application Load Balancers (ALB), Network Load Balancers (NLB), and EC2 instances serving WebSockets traffic in private subnets accessible only through their CloudFront distributions. CloudFront serves as the single front door for both traditional HTTP traffic and real-time WebSockets connections, reducing attack surface, simplifying security management, and providing built-in DDoS protection.  WebSockets support for VPC origins is available in all AWS Commercial Regions where VPC origins is supported. There is no additional cost for WebSockets traffic through VPC origins. To learn more, visit&nbsp;CloudFront VPC origins.<br />
Source: aws.amazon.com</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/amazon-cloudfront-announces-websocket-support-for-vpc-origins/">Amazon CloudFront Announces WebSocket Support for VPC Origins</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/amazon-cloudfront-announces-websocket-support-for-vpc-origins/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Wire CEO Schilz: US investors &quot;have no influence on Wire&quot;</title>
		<link>https://www.cloud-computing-koeln.de/wire-chef-schilz-us-investoren-haben-keinerlei-einfluss-auf-wire/</link>
		<comments>https://www.cloud-computing-koeln.de/wire-chef-schilz-us-investoren-haben-keinerlei-einfluss-auf-wire/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:15 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/wire-chef-schilz-us-investoren-haben-keinerlei-einfluss-auf-wire/</guid>
		<description><![CDATA[<p>Following the phishing attacks on Signal users, the Bundestag is planning a switch to Wire. Company CEO Schilz explains the differences between the two messengers. An interview by Friedhelm Greis (phishing, encryption) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/wire-chef-schilz-us-investoren-haben-keinerlei-einfluss-auf-wire/">Wire CEO Schilz: US investors &quot;have no influence on Wire&quot;</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>Following the phishing attacks on Signal users, the Bundestag is planning a switch to Wire. Company CEO Schilz explains the differences between the two messengers. An interview by Friedhelm Greis (phishing, encryption)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/wire-chef-schilz-us-investoren-haben-keinerlei-einfluss-auf-wire/">Wire CEO Schilz: US investors &quot;have no influence on Wire&quot;</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/wire-chef-schilz-us-investoren-haben-keinerlei-einfluss-auf-wire/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Cost brake for tenants: Not even the CDU can get past the heat pump</title>
		<link>https://www.cloud-computing-koeln.de/kostenbremse-fuer-mieter-an-der-waermepumpe-kommt-auch-die-cdu-nicht-vorbei/</link>
		<comments>https://www.cloud-computing-koeln.de/kostenbremse-fuer-mieter-an-der-waermepumpe-kommt-auch-die-cdu-nicht-vorbei/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:13 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/kostenbremse-fuer-mieter-an-der-waermepumpe-kommt-auch-die-cdu-nicht-vorbei/</guid>
		<description><![CDATA[<p>In the future, the rising costs of oil and gas heating are to be shared between tenants and landlords. This spreads the risk of exploding heating costs. (heat pump, environmental protection) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/kostenbremse-fuer-mieter-an-der-waermepumpe-kommt-auch-die-cdu-nicht-vorbei/">Cost brake for tenants: Not even the CDU can get past the heat pump</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>In the future, the rising costs of oil and gas heating are to be shared between tenants and landlords. This spreads the risk of exploding heating costs. (heat pump, environmental protection)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/kostenbremse-fuer-mieter-an-der-waermepumpe-kommt-auch-die-cdu-nicht-vorbei/">Cost brake for tenants: Not even the CDU can get past the heat pump</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/kostenbremse-fuer-mieter-an-der-waermepumpe-kommt-auch-die-cdu-nicht-vorbei/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Quarterly figures: Apple announces profit of 25 billion euros</title>
		<link>https://www.cloud-computing-koeln.de/quartalszahlen-apple-verkuendet-gewinn-von-25-milliarden-euro/</link>
		<comments>https://www.cloud-computing-koeln.de/quartalszahlen-apple-verkuendet-gewinn-von-25-milliarden-euro/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:11 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/quartalszahlen-apple-verkuendet-gewinn-von-25-milliarden-euro/</guid>
		<description><![CDATA[<p>The current iPhone and a surprisingly strong foreign market are driving up revenue and profit. (Apple, AI) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/quartalszahlen-apple-verkuendet-gewinn-von-25-milliarden-euro/">Quarterly figures: Apple announces profit of 25 billion euros</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The current iPhone and a surprisingly strong foreign market are driving up revenue and profit. (Apple, AI)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/quartalszahlen-apple-verkuendet-gewinn-von-25-milliarden-euro/">Quarterly figures: Apple announces profit of 25 billion euros</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/quartalszahlen-apple-verkuendet-gewinn-von-25-milliarden-euro/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Ad: TP-Link indoor camera 2-pack for under 40 euros at Amazon</title>
		<link>https://www.cloud-computing-koeln.de/anzeige-tp-link-innenraumkamera-im-2er-pack-unter-40-euro-bei-amazon/</link>
		<comments>https://www.cloud-computing-koeln.de/anzeige-tp-link-innenraumkamera-im-2er-pack-unter-40-euro-bei-amazon/#comments</comments>
		<pubDate>Sat, 02 May 2026 02:41:08 +0000</pubDate>
		<dc:creator><![CDATA[da Agency]]></dc:creator>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[cloud computing]]></category>

		<guid isPermaLink="false">https://www.cloud-computing-koeln.de/anzeige-tp-link-innenraumkamera-im-2er-pack-unter-40-euro-bei-amazon/</guid>
		<description><![CDATA[<p>The Tapo indoor camera offers a 360° view and night vision. The smart Wi-Fi camera is currently available again as a double pack for under 40 euros. (Smart Home, Amazon Alexa) Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-tp-link-innenraumkamera-im-2er-pack-unter-40-euro-bei-amazon/">Ad: TP-Link indoor camera 2-pack for under 40 euros at Amazon</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></description>
				<content:encoded><![CDATA[<p>The Tapo indoor camera offers a 360° view and night vision. The smart Wi-Fi camera is currently available again as a double pack for under 40 euros. (Smart Home, Amazon Alexa)<br />
Source: Golem</p>
<p>The post <a rel="nofollow" href="https://www.cloud-computing-koeln.de/anzeige-tp-link-innenraumkamera-im-2er-pack-unter-40-euro-bei-amazon/">Ad: TP-Link indoor camera 2-pack for under 40 euros at Amazon</a> appeared first on <a rel="nofollow" href="https://www.cloud-computing-koeln.de">Cloud Computing Köln</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.cloud-computing-koeln.de/anzeige-tp-link-innenraumkamera-im-2er-pack-unter-40-euro-bei-amazon/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
