Transport: ADAC warns of the consequences of Hamburg's "Zukunftsentscheid" referendum
The ADAC criticizes Hamburg's plans for a combustion-engine ban and a 30 km/h speed limit, calling the Zukunftsentscheid referendum a "catastrophe for traffic". (Verkehr, GreenIT)
Source: Golem
The Apple M5 improves above all on graphics performance and AI rendering, but the CPU portion is also said to be faster. (Apple Silicon, Apple)
Source: Golem
The privacy-focused Android ROM GrapheneOS has so far been available exclusively for Pixel smartphones – that is set to change. (GrapheneOS, Smartphone)
Source: Golem
In the rapidly evolving landscape of AI-assisted development, most developers continue to struggle with clunky web interfaces, resource-intensive IDEs, and fragmented toolchains. But what if we told you there’s a combination that pairs Google’s Gemini CLI (76.3K GitHub stars in just five months) with Docker’s innovative MCP Toolkit, quietly revolutionizing how modern AI developers work?
Enter the powerhouse duo: Gemini CLI and Docker MCP Toolkit. This isn’t just another tool combination — it’s a paradigm shift that delivers developer AI assistance with zero complexity overhead. A one-time setup of just 5 minutes can save you 20 minutes per test scenario — that’s a 97% time reduction that compounds with every test you run.
Let’s look at a concrete example: the browser testing and performance analysis that developers do regularly can be simplified and completely automated. What used to require opening browsers, clicking through flows, analyzing DevTools, taking screenshots, and manually documenting bugs can now happen in a single 30-second conversation.
In this guide, you’ll learn how to:
Set up Gemini CLI and connect it to Docker MCP Toolkit
Configure the Playwright MCP server for browser automation
Configure the GitHub MCP server for issue creation
Configure the Filesystem MCP server for saving test artifacts
Automate browser testing that discovers real bugs and creates documented GitHub issues
See how Gemini CLI can analyze performance, capture screenshots, and report findings—all without leaving your terminal
With 220+ pre-built MCP servers, browser automation becomes as simple as having a conversation. No Selenium WebDriver configuration, no CI/CD pipeline complexity, no manual screenshot management — just natural language instructions that execute real browser tests.
Why Gemini CLI and Docker MCP Toolkit Work Better Together
Manual browser testing and performance analysis are broken. You open Chrome DevTools, click through pages, check network requests, analyze performance metrics, take screenshots, write bug reports, and then manually create GitHub issues. This context-switching nightmare wastes hours every sprint.
Traditional automation tools don’t solve the real problem. Selenium requires brittle selectors and a complex setup. Playwright needs JavaScript knowledge and test frameworks. Both require maintaining test scripts that break with every UI change. The “solution” often takes more time than manual testing.
While Gemini provides powerful AI capabilities and MCP provides the protocol, Docker MCP Toolkit makes browser automation practical. Without containerization, setting up browser testing means managing Chrome/Firefox installations, dealing with WebDriver versions, configuring Node.js dependencies, handling screenshot directories manually, and different configurations for every developer’s machine. The setup that should take 2 minutes takes 2-6 hours per developer.
Docker MCP Toolkit eliminates this friction:
220+ pre-built MCP servers in the catalog
One-click deployment through Docker Desktop
Playwright MCP with browsers pre-installed (Chrome, Firefox, WebKit)
GitHub MCP for automated issue creation
Filesystem MCP for artifact storage
Secure credential management via OAuth or encrypted storage
Consistent configuration across Mac, Windows, and Linux
Automatic updates when new server versions are released
We built Docker MCP Toolkit to meet developers where they are. If you’re using Gemini CLI, you should be able to automate browser testing without wrestling with infrastructure.
Your tests run securely on your machine. Everything executes in isolated Docker containers on your local system. Test data, screenshots, and application access never leave your computer. No cloud uploads, no third-party services, no compliance concerns. You get enterprise-grade browser automation with complete privacy.
Setting up Gemini CLI in Docker MCP Toolkit
Prerequisites
Install Docker Desktop 4.40 or later
Enable MCP Toolkit
Step 1. Install Gemini CLI
Install via npm:
npm install -g @google/gemini-cli
Step 2. Launch and authenticate
Once installed, just type gemini in your terminal window.
gemini
Step 3. Log in via Google
Follow the setup wizard:
Select a preferred theme style from the options.
Choose a login method. I recommend “Login with Google”, which allows up to 60 requests/minute and 1,000 requests/day for free
If you need higher rate limits or enterprise access, use an API key from Google AI Studio instead. You can easily set it as an environment variable:
export GEMINI_API_KEY="YOUR_API_KEY"
After selecting your sign-in method, a browser window will open. Simply log in with your Google account
Step 4. Start chatting with Gemini
Just type “gemini” in your terminal window to start chatting with Gemini and enter your prompt.
Connect Gemini CLI to Docker MCP Toolkit
Option 1: One-Click Connection (Recommended)
Open Docker Desktop
Navigate to MCP Toolkit in the sidebar
Click the Clients tab
Find “Gemini” in the list.
Click Connect
Docker Desktop automatically configures the MCP Gateway connection – the underlying infrastructure that routes requests between Gemini CLI and your MCP servers, handling authentication, containerization, and secure communication seamlessly.
Option 2: Manual Command Line Setup
If you prefer a command-line setup or need to configure a specific project:
Navigate to your project folder in the terminal
Run this command:
docker mcp client connect gemini --global
You’ll see output like this:
=== System-wide MCP Configurations ===
● gemini: connected
MCP_DOCKER: Docker MCP Catalog (gateway server) (stdio)
● gordon: connected
MCP_DOCKER: Docker MCP Catalog (gateway server) (stdio)
You might have to restart 'gemini'.
The connected status confirms Gemini CLI is linked to the Docker MCP Gateway.
What’s happening under the hood?
The Gemini CLI uses the mcpServers configuration in your settings.json file to locate and connect to MCP servers. This configuration supports multiple servers with different transport mechanisms. The mcpServers object is where you define each MCP server you want the CLI to connect to.
Whenever you hit the “Connect” button under the Docker MCP Client for Gemini CLI, it adds the following Docker MCP Gateway configuration to the ~/.gemini/settings.json file.
{
"theme": "Default",
"selectedAuthType": "oauth-personal",
"mcpServers": {
"MCP_DOCKER": {
"command": "docker",
"args": ["mcp", "gateway", "run"],
"env": {}
}
}
}
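To confirm the entry landed, you can query the file with jq, which this guide also uses later for seed-data checks. The sketch below runs against an inline copy of the sample config above; on your machine, point jq at `~/.gemini/settings.json` instead.

```shell
# Sketch: verify the Docker MCP Gateway entry in a Gemini settings file.
# Uses an inline copy of the sample config; replace /tmp/settings-sample.json
# with ~/.gemini/settings.json in real use.
cat <<'EOF' > /tmp/settings-sample.json
{
  "theme": "Default",
  "selectedAuthType": "oauth-personal",
  "mcpServers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"],
      "env": {}
    }
  }
}
EOF

# List configured MCP servers, then show the command used to launch the gateway
jq -r '.mcpServers | keys[]' /tmp/settings-sample.json          # prints: MCP_DOCKER
jq -r '.mcpServers.MCP_DOCKER.command' /tmp/settings-sample.json # prints: docker
```

If `MCP_DOCKER` is missing from the output, the Connect step did not complete; re-run it from Docker Desktop or the CLI.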
Learn more about how MCP servers interact with Gemini CLI in the Gemini CLI documentation.
Step 5. Restart Gemini CLI
# Exit Gemini CLI if running, then restart
gemini
Step 6. Verify the Connection
Inside Gemini CLI, type /mcp to see available MCP servers.
You should see the Docker MCP Gateway listed, which provides access to all enabled MCP servers. The /MCP_DOCKER tools indicate a successful connection. As you enable more MCP servers in Docker Desktop, they’ll appear here automatically.
First Run: What to Expect
When you start Gemini CLI for the first time after connecting to Docker MCP Toolkit, you’ll see a prompt about the new MCP server:
Choose Option 1 (recommended). This configures your project to automatically use Docker MCP Toolkit and any MCP servers you enable in Docker Desktop. You won’t need to approve MCP servers individually each time.
You’re now ready to use Gemini with MCP servers from Docker Desktop.
Real-World Demo: Automated Browser Testing and Performance Analysis
Now that you’ve connected Gemini CLI to Docker MCP Toolkit, let’s see it in action with a practical example. We’ll automatically discover real bugs through browser testing and identify performance bottlenecks through detailed analysis — the kind that would take 20 minutes of manual testing, DevTools monitoring, and performance profiling.
What Makes This Realistic?
This isn’t a trivial “Hello World” demo. We’re performing comprehensive browser testing and performance analysis on a real e-commerce application with the kinds of issues you encounter in production:
Uses an actual application running on localhost
Executes functional browser tests (navigation, element inspection, console monitoring)
Discovers genuine performance bottlenecks through browser DevTools analysis
Identifies accessibility violations that affect real users
Captures evidence with screenshots and console logs
Measures real performance metrics: page load times, network requests, resource usage
Creates properly formatted GitHub issues with actionable recommendations
Time investment:
Manual process: ~20 minutes (opening browsers, clicking through flows, DevTools analysis, performance profiling, documentation, issue creation)
Automated with Gemini CLI + MCP: ~30 seconds total
That’s a 97% time reduction, but more importantly, it’s consistent, thorough, and documented every time.
What We’re Testing
The catalog-service-node application is a realistic e-commerce catalog with intentional issues that mirror common production problems:
Performance Issues:
No pagination – loads all 15 products at once (will degrade with scale)
Duplicate API calls – requests /api/products twice unnecessarily
Missing optimization – unoptimized loading patterns
Accessibility Issues:
Missing product images – placeholder buttons instead of actual images
Vague button labels – “Fetch” and “Upload” aren’t descriptive for screen readers
Missing ARIA labels – table structure not properly announced
Browser Issues:
Missing favicon – generates 404 errors in console
Console warnings – duplicate request warnings
Let’s see if Gemini CLI can discover all of these automatically through intelligent browser testing and performance analysis, then create a comprehensive GitHub issue.
Step 1: Set Up a Real-World e-Commerce Catalog application
For this demo, we’ll use a real e-commerce catalog application. This gives us realistic performance and accessibility issues to discover.
Clone the repository:
git clone https://github.com/ajeetraina/catalog-service-node
cd catalog-service-node
Start all services:
# Start Docker services (database, S3, Kafka)
docker compose up -d
# Install dependencies
npm install --omit=optional
# Start the application
npm run dev
Verify it’s running:
Frontend: http://localhost:5173
API: http://localhost:3000
Step 2: Seed Test Data
To make testing realistic, create sample products:
# Create seed script
cat > seed-data.sh << 'EOF'
#!/bin/bash
API_URL="http://localhost:3000/api"
echo "Seeding test products…"
curl -s -X POST "$API_URL/products" \
  -H "Content-Type: application/json" \
  -d '{"name":"Vintage Camera","description":"Classic 35mm film camera","price":299.99,"upc":"CAM001"}' \
  > /dev/null && echo "✅ Vintage Camera"
curl -s -X POST "$API_URL/products" \
  -H "Content-Type: application/json" \
  -d '{"name":"Rare Vinyl Record – LAST ONE!","description":"Limited edition. Only 1 left!","price":149.99,"upc":"VINYL001"}' \
  > /dev/null && echo "✅ Rare Vinyl Record"
curl -s -X POST "$API_URL/products" \
  -H "Content-Type: application/json" \
  -d '{"name":"Professional DSLR Camera","description":"50MP camera with 8K video","price":2499.99,"upc":"CAMPRO001"}' \
  > /dev/null && echo "✅ Professional DSLR"
# Add bulk test products
for i in {4..15}; do
  curl -s -X POST "$API_URL/products" \
    -H "Content-Type: application/json" \
    -d "{\"name\":\"Test Product $i\",\"description\":\"Bulk test product $i\",\"price\":$((50 + RANDOM % 450)).99,\"upc\":\"BULK$(printf '%03d' $i)\"}" \
    > /dev/null && echo "✅ Test Product $i"
done
echo ""
TOTAL=$(curl -s "$API_URL/products" | jq '. | length')
echo "Total products: $TOTAL"
echo "Ready! Visit http://localhost:5173"
EOF
chmod +x seed-data.sh
./seed-data.sh
Expected output:
Seeding test products…
✅ Vintage Camera
✅ Rare Vinyl Record
✅ Professional DSLR
✅ Test Product 4
✅ Test Product 5
…
✅ Test Product 15
Total products: 15
Ready! Visit http://localhost:5173
Now you have a realistic environment with 15 products to analyze.
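The final count check in the seed script is worth understanding on its own, since it is a handy way to spot-check the API at any time. The sketch below runs the same jq expression against a small inline sample; against the live API you would pipe `curl -s http://localhost:3000/api/products` into jq instead.

```shell
# Sketch: the product-count check from seed-data.sh, demonstrated on an
# inline sample instead of the live API.
cat <<'EOF' > /tmp/products-sample.json
[
  {"name": "Vintage Camera", "price": 299.99},
  {"name": "Rare Vinyl Record", "price": 149.99},
  {"name": "Professional DSLR Camera", "price": 2499.99}
]
EOF

# jq's length filter counts the elements of the top-level array
jq '. | length' /tmp/products-sample.json   # prints: 3
```

Against the seeded application, the same expression should report 15.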
Configure MCP Servers
For browser testing and performance analysis automation, you’ll orchestrate three MCP servers:
Playwright MCP – Controls browsers, takes screenshots, captures console logs
GitHub MCP – Creates issues automatically with full context
Filesystem MCP – Saves screenshots and test artifacts
Let’s configure each one.
Configure Playwright MCP (Browser Automation)
The Playwright MCP server gives Gemini the ability to control real browsers (Chrome, Firefox, and WebKit) just like a human would.
In Docker Desktop:
Open Docker Desktop → MCP Toolkit → Catalog
Search for “Playwright” or “Browser”
Find Playwright (Browser Automation) in the results
Click + Add
The server will be added with default configuration (no additional setup needed)
Click Start Server
What you get:
21+ browser automation tools including:
browser_navigate – Navigate to URLs
browser_snapshot – Capture page state for analysis
browser_take_screenshot – Save visual evidence
browser_click, browser_type – Interact with elements
browser_console_messages – Get console errors
browser_network_requests – Analyze HTTP requests
The Playwright MCP runs in a secure Docker container with browsers pre-installed. No manual ChromeDriver setup, no WebDriver conflicts, no OS-specific browser installations.
Configure GitHub MCP (Issue Creation)
The GitHub MCP enables Gemini to create issues, PRs, and manage repositories on your behalf.
Option 1: OAuth Authentication (Recommended – Easiest)
In MCP Toolkit → Catalog, search “GitHub Official”
Click + Add
Go to the OAuth tab in Docker Desktop
Find the GitHub entry
Click “Authorize”
Your browser opens GitHub’s authorization page
Click “Authorize Docker” on GitHub
You’re redirected back to Docker Desktop
Return to Catalog tab, find GitHub Official
Click Start Server
Advantage: No manual token creation. Authorization happens through GitHub’s secure OAuth flow with automatic token refresh.
Option 2: Personal Access Token (For Granular Control)
If you prefer manual control or need specific scopes:
Step 1: Create GitHub Personal Access Token
Go to https://github.com and sign in
Click your profile picture → Settings
Scroll to “Developer settings” in the left sidebar
Click “Personal access tokens” → “Tokens (classic)”
Click “Generate new token” → “Generate new token (classic)”
Name it: “Docker MCP Browser Testing”
Select scopes:
repo (Full control of repositories)
workflow (Update GitHub Actions workflows)
Click “Generate token”
Copy the token immediately (you won’t see it again!)
Step 2: Configure in Docker Desktop
In MCP Toolkit → Catalog, find GitHub Official
Click + Add (if not already added)
Go to Configuration tab
Select “Personal Access Token” as the authentication method
Paste your token
Click Start Server
Or via CLI:
docker mcp secret set GITHUB.PERSONAL_ACCESS_TOKEN=github_pat_YOUR_TOKEN_HERE
Configure Filesystem MCP (Screenshot Storage)
The Filesystem MCP allows Gemini to save screenshots and test artifacts to your local machine.
In Docker Desktop:
Go to MCP Toolkit → Catalog
Search for “Filesystem”
Find Filesystem (Reference) and click + Add
Go to the Configuration tab
Under filesystem.paths, add your project directory:
Example: /Users/yourname/catalog-service-node
Or wherever you cloned the repository
You can add multiple paths by clicking the + button
Click Save
Click Start Server
Important Security Note: Only grant access to directories you’re comfortable with Gemini reading and writing to. The Filesystem MCP is scoped to these specific paths for your protection.
Run the Automation
Change directory to the root of the catalog-service-node project repository and open Gemini CLI:
gemini
Paste this prompt:
Navigate to http://host.docker.internal:5173 and perform a detailed
performance and accessibility analysis:
1. Take a screenshot of the full page
2. Count how many products are displayed
3. Open browser DevTools and check:
   - Console for any JavaScript errors
   - Network tab: how many HTTP requests are made?
   - Performance: how long does the page take to load?
4. Identify performance issues:
   - Are all products loading at once with no pagination?
   - Are images optimized?
   - Any unnecessary network requests?
5. Check for accessibility issues:
   - Missing alt text on images
   - Color contrast problems
   - Vague button labels
Create a GitHub issue titled "Product catalog performance and
accessibility issues" with:
- Screenshots attached
- List of specific problems found
- Severity: Medium
- Labels: performance, accessibility, enhancement
Watch Gemini Work
Here’s what happens behind the scenes as Gemini orchestrates multiple MCP servers:
Step 1: Browser Navigation
The web server is running. I'll navigate to http://host.docker.internal:5173, take a screenshot, and then proceed with the rest of the performance analysis.
Gemini launches a real Chromium browser in the Docker container and navigates to your application.
Step 2: Console Analysis
✦ I can see 15 products listed on the page: * Vintage Camera * Rare Vinyl Record – LAST ONE! * Professional DSLR Camera * Test Product 4-15
Step 3: Performance Check
✦ Checking browser console and network requests… Opening DevTools to analyze performance metrics…
Step 4: Issue Creation
✦ Creating GitHub issue with findings…
create_issue (MCP_DOCKER MCP Server) {"labels":["performance","accessibility","enhancement"],"body":"### Summary…"}
✦ I have successfully created the issue. You can find it at: https://github.com/ajeetraina/catalog-service-node/issues/1
The Complete GitHub Issue
Gemini automatically created this comprehensive issue with structured information:
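The create_issue call shown in the transcript above carries a structured JSON payload. The sketch below is a hypothetical reconstruction of what that payload might look like; the field names follow the GitHub MCP server's create_issue tool, and the body text is illustrative, not the exact issue Gemini produced.

```json
{
  "owner": "ajeetraina",
  "repo": "catalog-service-node",
  "title": "Product catalog performance and accessibility issues",
  "body": "### Summary\nAll 15 products load with no pagination; /api/products is requested twice; Fetch/Upload buttons lack descriptive labels; missing favicon causes a console 404.\n\n**Severity:** Medium",
  "labels": ["performance", "accessibility", "enhancement"]
}
```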
Workflow Comparison: Before vs After MCP

| Step | Before MCP (Manual Testing) | Time | After MCP (Gemini CLI + MCP) | Time |
|------|-----------------------------|------|------------------------------|------|
| 1 | Open browser manually | 1 min | Paste prompt in Gemini CLI | 5 sec |
| 2 | Navigate to application | 30 sec | Gemini analyzes automatically | 25 sec |
| 3 | Click through pages | 3 min | GitHub issue created | auto |
| 4 | Open DevTools manually | 30 sec | Screenshots attached | auto |
| 5 | Take screenshots | 2 min | ✓ Complete | – |
| 6 | Check browser console | 1 min | – | – |
| 7 | Analyze network requests | 2 min | – | – |
| 8 | Document findings | 3 min | – | – |
| 9 | Write detailed bug report | 5 min | – | – |
| **Total** | | **~20 minutes per test** | | **30 sec per test** |
| 10 | Create GitHub issue | 2 min | – | – |
Time saved per test: 19.5 minutes (97% faster!)
Impact over time:
Per day (5 tests): 97 minutes saved → 1.6 hours
Per week (25 tests): 8 hours saved → 1 full workday
Per sprint (50 tests): 16 hours saved → 2 full workdays
Per year (1,000 tests): 325 hours saved → 40 workdays
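The arithmetic behind these figures is easy to verify. A quick sketch, using the 20-minute manual and roughly 30-second automated times from the comparison above (the article's "97%" is this result rounded down):

```shell
# Verify the time-savings arithmetic from the workflow comparison (minutes).
awk 'BEGIN {
  manual    = 20          # manual process per test
  automated = 0.5         # automated process per test (~30 seconds)
  per_test  = manual - automated        # minutes saved per test
  pct       = per_test / manual * 100   # percent reduction
  yearly    = 1000 * per_test / 60      # hours saved over 1,000 tests
  printf "per test: %.1f min (%.1f%% faster), per 1,000 tests: %.0f h\n",
         per_test, pct, yearly
}'
# prints: per test: 19.5 min (97.5% faster), per 1,000 tests: 325 h
```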
Wrapping Up
You’ve just witnessed how Docker MCP Toolkit transforms Gemini CLI from a chat assistant into a complete browser testing and performance analysis platform. What used to require opening browsers, clicking through flows, analyzing DevTools, documenting bugs, and creating issues manually now happens in one 30-second conversation.
The combination of Gemini CLI and Docker MCP Toolkit represents a paradigm shift in AI-assisted development. By leveraging terminal-native tools and containerized services, you get:
Unmatched flexibility in tool selection
Superior performance with minimal overhead
Future-proof architecture that scales with your needs
This setup isn’t just about convenience — it’s about building a development environment that adapts to your workflow rather than forcing you to adapt to it. The developer productivity revolution is here. The question isn’t whether you’ll adopt AI-assisted development — it’s whether you’ll lead with the best tools available or play catch-up later.
Ready to try it? Enable Docker MCP Toolkit in Docker Desktop and start building your own Gemini-powered development workflow today.
Learn more
Explore the MCP Catalog: Discover containerized, security-hardened MCP servers
Open Docker Desktop and get started with the MCP Toolkit (Requires version 4.48 or newer to launch the MCP Toolkit automatically)
Explore our guide on adding MCP Servers to Claude Code with the Docker MCP Toolkit
Check out our MCP Horror Stories series to see common MCP security pitfalls and how you can avoid them.
Source: https://blog.docker.com/feed/
At Docker, we are building our hardened images to exacting standards. That means carefully crafting by hand, because humans are still the best security architects. That said, we understand the value of AI and deploy it as an extra set of eyes at critical junctures in our Docker Hardened Image (DHI) build process. With this dual approach, our users get better coverage and more secure products. Humans aren’t perfect, but AI can help them become better. Here’s a recent example of how the AI tools we use for DHI caught a bug, made a product more secure, and contributed back to the community.
How the AI guardrail stepped in
Our upstream release automation opened a routine version bump for nginx-exporter. As part of our release process, the DHI AI guardrail fetched the upstream diffs that the bump would pull in and scanned them with language-aware checks. It spotted a classic logic inversion in the exporter’s new proxy-protocol path and immediately blocked our PR from auto-merging. I reproduced the issue, confirmed the diagnosis, and submitted a small fix upstream. Because the guardrail held the line, customers never saw the bug.
That sequence matters. A normal dependency update would have sailed through and shipped a regression. Instead, Docker’s AI guardrail acted like a release bouncer at the door of a bar. The AI guardrail looked at what was about to enter, recognized a risk pattern, and stopped it from coming in and causing trouble. A human Docker engineer then read the flagged issue, proved the failure and sent the fix. The fix was then accepted by the project, improving their code base. So, this fix not only improved DHI but also improved the project for everyone.
After the fix pull request was reviewed and approved by the upstream maintainers, our DHI build pipeline applied the patch and shipped the new, patched version to our customers, making sure they never saw the regression in their environments.
The AI assisted DHI process
Ironically, standard AI coding assistants didn’t spot the flaw. This is why having our own internal AI guardrails is so critical. They provide the extra layer of support and the specialization that is hard to get from general-purpose coding assistants.
This is how we want AI to show up in our pipeline. It is not a replacement for engineering judgment, but a force multiplier for safety. The guardrail focuses on a narrow set of high-leverage issues that can cause big problems, such as inverted error checks, ignored failures, resource mishandling, and suspicious contributor activity.
The example also shows the value of layered safeguards. We don’t just accept an upstream version and hope for the best. The AI guardrail scrutinizes what changed. Our policy treats high-confidence findings as a hard stop. Humans then verify, reproduce the behavior, and apply the smallest correct patch. Only then does the release move forward. The best security is proactive, not reactive.
As I alluded to above, there’s an important broader open source benefit to our AI guardrails. DHI depends on hundreds of community projects, some of which are downloaded millions or even billions of times per month. When the DHI AI guardrail surfaces an issue, our default is to fix it upstream rather than carry a private patch. That keeps our images clean, reduces long-term maintenance, and gives every downstream user a better baseline. It also helps the upstream projects, which benefit from our AI anomaly detection, our human judgment, and our subsequent code fixes.
So back to the title. DHI is crafted by engineers, but it is protected by AI. Our guardrail is an active and continuously improving AI that reads upstream diffs with context, recognizes risky patterns, scores confidence, and blocks merges that don’t pass muster. Well-applied AI helps the human work better with faster reviews, tighter patches, and fewer regressions.
This is the partnership we’re optimizing for. Humans set intent, exercise design judgment, and ship features. AI enforces discipline at critical gates. Best of all, every catch feeds the model signals for future scans, so protections improve as our ecosystem evolves. Collectively, this adds to the security of the entire open source ecosystem.
It’s a win for Docker, for our customers, and the community.
Source: https://blog.docker.com/feed/
Second-generation AWS Outposts racks are now supported in the AWS Europe (Ireland) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience.
Organizations from startups to enterprises and the public sector, in and outside of Europe, can now order Outposts racks connected to this newly supported Region, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low-latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that must remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to.
To learn more about second-generation Outposts racks, read this blog post and user guide. For the most up-to-date list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.
Source: aws.amazon.com
AWS Backup now provides schedule preview for backup plans, helping you validate when your backups are scheduled to run. Schedule preview shows the next ten scheduled backup runs, including when continuous backup, indexing, or copy settings take effect.
Backup plan schedule preview consolidates all backup rules into a single timeline, showing how they work together. You can see when each backup occurs across all backup rules, along with settings like lifecycle to cold storage, point-in-time recovery, and indexing. This unified view helps you quickly identify and resolve conflicts or gaps between your backup strategy and actual configuration.
Backup plan schedule preview is available in all AWS Regions where AWS Backup is available. You can start using this feature automatically from the AWS Backup console, API, or CLI without any additional settings. For more information, visit our documentation.
Source: aws.amazon.com
AWS announces AI-powered troubleshooting capabilities with Amazon Q integration in the AWS Step Functions console. AWS Step Functions is a visual workflow service that enables customers to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services.
This integration brings Amazon Q’s intelligent error analysis directly into the AWS Step Functions console, helping you quickly identify and resolve workflow issues. When errors occur in your AWS Step Functions workflows, you can now click the “Diagnose with Amazon Q” button that appears in error alerts and the console error notification area to receive AI-assisted troubleshooting guidance. This feature helps you resolve common types of issues including state machine execution failures as well as Amazon States Language (ASL) syntax errors and warnings. The troubleshooting recommendations appear in a dedicated window with remediation steps tailored to your error context, enabling faster resolution and improved operational efficiency.
Diagnose with Amazon Q for AWS Step Functions is available in all commercial AWS Regions where Amazon Q is available. The feature is automatically enabled for customers who have access to Amazon Q in their Region. To learn more about Diagnose with Amazon Q, see Diagnosing and troubleshooting console errors with Amazon Q or get started by visiting the AWS Step Functions console.
Source: aws.amazon.com
Amazon Bedrock is bringing DeepSeek-V3.1, OpenAI open-weight models, and Qwen3 models to more AWS Regions worldwide, expanding access to cutting-edge AI for customers across the globe. This regional expansion enables organizations in more countries and territories to deploy these powerful foundation models locally, ensuring compliance with data residency requirements, reducing network latency, and delivering faster AI-powered experiences to their users.
DeepSeek-V3.1 and Qwen3 Coder-480B are now available in the US East (Ohio) and Asia Pacific (Jakarta) AWS Regions. OpenAI open-weight models (20B, 120B) and Qwen3 models (32B, 235B, Coder-30B) are now available in the US East (Ohio), Europe (Frankfurt), and Asia Pacific (Jakarta) AWS Regions. Check out the full Region list for future updates.
To learn more about these models, visit the Amazon Bedrock product page. To get started, access the Amazon Bedrock console and view the documentation.
Source: aws.amazon.com
Amazon Bedrock now provides immediate access to all serverless foundation models by default for users in all commercial AWS Regions. This update eliminates the need for manually activating model access, allowing you to instantly start using these models through the Amazon Bedrock console playground, AWS SDK, and Amazon Bedrock features including Agents, Flows, Guardrails, Knowledge Bases, Prompt Management, and Evaluations.
While you can quickly begin using serverless foundation models from most providers, Anthropic models, although enabled by default, still require you to submit a one-time usage form before first use. You can complete this form either through the API or through the Amazon Bedrock console by selecting an Anthropic model from the playground. When completed through the AWS organization management account, the form submission automatically enables Anthropic models across all member accounts in the organization.
This simplified access is available across all commercial AWS Regions where Amazon Bedrock is supported. Account administrators retain full control over model access through IAM policies and Service Control Policies (SCPs) to restrict access as needed. For implementation guidance and examples on access controls, please refer to our blog.
Source: aws.amazon.com