How to Add MCP Servers to ChatGPT with Docker MCP Toolkit

ChatGPT is great at answering questions and generating code. But here’s what it can’t do: execute that code, query your actual database, create a GitHub repo with your project, or scrape live data from websites. It’s like having a brilliant advisor who can only talk, never act.

Docker MCP Toolkit changes this completely. 

Here’s what that looks like in practice: You ask ChatGPT to check MacBook Air prices across Amazon, Walmart, and Best Buy. If competitor prices are lower than yours, it doesn’t just tell you, it acts: automatically adjusting your Stripe product price to stay competitive, logging the repricing decision to SQLite, and pushing the audit trail to GitHub. All through natural conversation. No manual coding. No copy-pasting scripts. Real execution.

“But wait,” you might say, “ChatGPT already has a shopping research feature.” True. But ChatGPT’s native shopping can only look up prices. Only MCP can execute: creating payment links, generating invoices, storing data in your database, and pushing to your GitHub. That’s the difference between an advisor and an actor.

By the end of this guide, you’ll build exactly this: a Competitive Repricing Agent that checks competitor prices on demand, compares them to yours, and automatically adjusts your Stripe product prices when competitors are undercutting you.

Here’s how the pieces fit together:

ChatGPT provides the intelligence: understanding your requests and determining what needs to happen

Docker MCP Gateway acts as the secure bridge: routing requests to the right tools

MCP Servers are the hands: executing actual tasks in isolated Docker containers

The result? ChatGPT can query your SQL database, manage GitHub repositories, scrape websites, process payments, run tests, and more—all while Docker’s security model keeps everything contained and safe.

In this guide, you’ll learn how to add seven MCP servers to ChatGPT by connecting to Docker MCP Toolkit. We’ll use a handful of must-have MCP servers: Firecrawl for web scraping, SQLite for data persistence, GitHub for version control, Stripe for payment processing, Node.js Sandbox for calculations, Sequential Thinking for complex reasoning, and Context7 for documentation. Then, you’ll build the Competitive Repricing Agent shown above, all through conversation.

What is Model Context Protocol (MCP)?

Before we dive into the setup, let’s clarify what MCP actually is.

Model Context Protocol (MCP) is the standardized way AI agents like ChatGPT and Claude connect to tools, APIs, and services. It’s what lets ChatGPT go beyond conversation and perform real-world actions like querying databases, deploying containers, analyzing datasets, or managing GitHub repositories.

In short: MCP is the bridge between ChatGPT’s reasoning and your developer stack. And Docker? Docker provides the guardrails that make it safe.

Why Use Docker MCP Toolkit with ChatGPT?

I’ve been working with AI tools for a while now, and this Docker MCP integration stands out for one reason: it actually makes ChatGPT productive.

Most AI integrations feel like toys: impressive demos that break in production. Docker MCP Toolkit is different. It creates a secure, containerized environment where ChatGPT can execute real tasks without touching your local machine or production systems.

Every action happens in an isolated container. Every MCP server runs in its own security boundary. When you’re done, containers are destroyed. No residue, no security debt, complete reproducibility across your entire team.

What ChatGPT Can and Can’t Do Without MCP

Let’s be clear about what changes when you add MCP.

Without MCP

You ask ChatGPT to build a system to regularly scrape product prices and store them in a database. ChatGPT responds with Python code, maybe 50 lines using BeautifulSoup and SQLite. Then you must copy the code, install dependencies, create the database schema, run the script manually, and set up a scheduler if you want it to run regularly.

Yes, ChatGPT remembers your conversation and can store memories about you. But those memories live on OpenAI’s servers—not in a database you control.

With MCP

You ask ChatGPT the same thing. Within seconds, it calls Firecrawl MCP to actually scrape the website. It calls SQLite MCP to create a database on your machine and store the data. It calls GitHub MCP to save a report to your repository. The entire workflow executes in under a minute.

Real data gets stored in a real database on your infrastructure. Real commits appear in your GitHub repository. Close ChatGPT, come back tomorrow, and ask “Show me the price trends.” ChatGPT queries your SQLite database and returns results instantly because the data lives in a database you own and control, not in ChatGPT’s conversation memory.

The data persists in your systems, ready to query anytime; no manual script execution required.

Why This Is Different from ChatGPT’s Native Shopping

ChatGPT recently released a shopping research feature that can track prices and make recommendations. Here’s what it can and cannot do:

What ChatGPT Shopping Research can do:

Track prices across retailers

Remember price history in conversation memory

Provide comparisons and recommendations

What ChatGPT Shopping Research cannot do:

Automatically update your product prices in Stripe

Execute repricing logic based on competitor changes

Store pricing data in your database (not OpenAI’s servers)

Push audit trails to your GitHub repository

Create automated competitive response workflows

With Docker MCP Toolkit, ChatGPT becomes a competitive pricing execution system. When you ask it to check prices and competitors are undercutting you, it doesn’t just inform you, it acts: updating your Stripe prices to match or beat competitors, logging decisions to your database, and pushing audit records to GitHub. The data lives in your infrastructure, not OpenAI’s servers.

Setting Up ChatGPT with Docker MCP Toolkit

Prerequisites

Before you begin, ensure you have:

A machine with at least 8 GB of RAM (16 GB recommended)

Docker Desktop installed

A ChatGPT Plus, Pro, Business, or Enterprise Account

ngrok account (free tier works) – For exposing the Gateway publicly

Step 1. Enable ChatGPT developer mode

Head over to ChatGPT and sign in to your account.

Click on your profile icon at the top left corner of the ChatGPT page and select “Settings”. Select “Apps and Connectors” and scroll down to the end of the page to select “Advanced Settings.”

Settings → Apps & Connectors → Advanced → Developer Mode (ON)

ChatGPT Developer Mode provides full Model Context Protocol (MCP) client support for all tools, both read and write operations. This feature was announced in the first week of September 2025, marking a significant milestone in AI-developer integration. ChatGPT can perform write actions—creating repositories, updating databases, modifying files—all with proper confirmation modals for safety.

Key capabilities:

Full read/write MCP tool support

Custom connector creation

OAuth and authentication support

Explicit confirmations for write operations

Available on Plus, Pro, Business, Enterprise, and Edu plans

Step 2. Create MCP Gateway

This scaffolds an MCP server project whose Compose file runs the MCP Gateway container that ChatGPT will connect to (you’ll build and start it in the next steps).

docker mcp server init --template=chatgpt-app-basic test-chatgpt-app

Successfully initialized MCP server project in test-chatgpt-app (template: chatgpt-app-basic)
Next steps:
cd test-chatgpt-app
docker build -t test-chatgpt-app:latest .

Step 3. List out all the project files

ls -la
total 64
drwxr-xr-x@   9 ajeetsraina  staff   288 16 Nov 16:53 .
drwxr-x---+ 311 ajeetsraina  staff  9952 16 Nov 16:54 ..
-rw-r--r--@   1 ajeetsraina  staff   165 16 Nov 16:53 catalog.yaml
-rw-r--r--@   1 ajeetsraina  staff   371 16 Nov 16:53 compose.yaml
-rw-r--r--@   1 ajeetsraina  staff   480 16 Nov 16:53 Dockerfile
-rw-r--r--@   1 ajeetsraina  staff    88 16 Nov 16:53 go.mod
-rw-r--r--@   1 ajeetsraina  staff  2576 16 Nov 16:53 main.go
-rw-r--r--@   1 ajeetsraina  staff  2254 16 Nov 16:53 README.md
-rw-r--r--@   1 ajeetsraina  staff  6234 16 Nov 16:53 ui.html

Step 4. Examine the Compose file

services:
  gateway:
    image: docker/mcp-gateway                      # Official Docker MCP Gateway image
    command:
      - --servers=test-chatgpt-app                 # Name of the MCP server to expose
      - --catalog=/mcp/catalog.yaml                # Path to server catalog configuration
      - --transport=streaming                      # Use streaming transport for real-time responses
      - --port=8811                                # Port the gateway listens on
    environment:
      - DOCKER_MCP_IN_CONTAINER=1                  # Tells gateway it's running inside a container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Allows gateway to spawn sibling containers
      - ./catalog.yaml:/mcp/catalog.yaml           # Mount local catalog into container
    ports:
      - "8811:8811"                                # Expose gateway port to host

Step 5. Bring up the Compose services

docker compose up -d
[+] Running 2/2
✔ Network test-chatgpt-app_default Created 0.0s
✔ Container test-chatgpt-app-gateway-1 Started

docker ps | grep test-chatgpt-app
eb22b958e09c docker/mcp-gateway "/docker-mcp gateway…" 21 seconds ago Up 20 seconds 0.0.0.0:8811->8811/tcp, [::]:8811->8811/tcp test-chatgpt-app-gateway-1

Step 6. Verify the MCP session

curl http://localhost:8811/mcp
GET requires an active session
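
That message is expected: the gateway uses MCP’s streamable HTTP transport, so a client has to open a session with a JSON-RPC initialize request before plain GETs work. If you want to poke at it by hand, here’s a rough sketch; the payload shape follows the public MCP specification rather than anything Docker-specific, so treat the exact field values as assumptions:

# Optional: open an MCP session manually (sketch based on the MCP spec).
curl -s http://localhost:8811/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
          "protocolVersion": "2025-03-26",
          "capabilities": {},
          "clientInfo": {"name": "curl-test", "version": "0.0.1"}
        }
      }'

ChatGPT performs this handshake for you once the connector is configured, so the command above is purely a sanity check.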

Step 7. Expose with Ngrok

Install ngrok and expose your local gateway. You will need to sign up for an ngrok account to obtain an auth token.

brew install ngrok
ngrok config add-authtoken <your_token_id>
ngrok http 8811

Note the public URL (like https://91288b24dc98.ngrok-free.app). Keep this terminal open.

Step 8. Connect ChatGPT

In ChatGPT, go to Settings → Apps & Connectors → Create.

Step 9. Create connector:

Settings → Apps & Connectors → Create

- Name: Test MCP Server
- Description: Testing Docker MCP Toolkit integration
- Connector URL: https://[YOUR_NGROK_URL]/mcp
- Authentication: None
- Click "Create"

Test it by asking ChatGPT to call the greet tool. If it responds, your connection works. Here’s how it looks:

Real-World Demo: Competitive Repricing Agent

Now that you’ve connected ChatGPT to Docker MCP Toolkit, let’s build something that showcases what only MCP can do—something ChatGPT’s native shopping feature cannot replicate.

We’ll create a Competitive Repricing Agent that checks competitor prices on demand, and when competitors are undercutting you, automatically adjusts your Stripe product prices to stay competitive, logs the repricing decision to SQLite, and pushes audit records to GitHub.

Time to build: 15 minutes  

Monthly cost: Free Stripe (test mode) + $1.50-$15 (Firecrawl API)

Infrastructure: $0 (SQLite is free)

The Challenge

E-commerce businesses face a constant dilemma:

Manual price checking across multiple retailers is time-consuming and error-prone

Comparing competitor prices and calculating optimal repricing requires multiple tools

Executing price changes across your payment infrastructure requires context-switching

Historical trend data is scattered across spreadsheets

Strategic insights require manual analysis and interpretation

Result: Missed opportunities, delayed reactions, and losing sales to competitors with better prices.

The Solution: On-Demand Competitive Repricing Agent

Docker MCP Toolkit transforms ChatGPT from an advisor into an autonomous agent that can actually execute. The architecture routes your requests through a secure MCP Gateway that orchestrates specialized tools: Firecrawl scrapes live prices, Stripe creates payment links and invoices, SQLite stores data on your infrastructure, and GitHub maintains your audit trail. Each tool runs in an isolated Docker container: secure, reproducible, and under your control.

The 7 MCP Servers We’ll Use

| Server | Purpose | Why It Matters |
|---|---|---|
| Firecrawl | Web scraping | Extracts live prices from any website |
| SQLite | Data persistence | Stores 30+ days of price history |
| Stripe | Payment management | Updates your product prices to match or beat competitors |
| GitHub | Version control | Audit trail for all reports |
| Sequential Thinking | Complex reasoning | Multi-step strategic analysis |
| Context7 | Documentation | Up-to-date library docs for code generation |
| Node.js Sandbox | Calculations | Statistical analysis in isolated containers |

The Complete MCP Workflow (Executes in under 3 minutes)

Step 1. Scrape and Store (30 seconds)

Agent scrapes live prices from Amazon, Walmart, and Best Buy 

Compares against your current Stripe product price

Step 2: Compare Against Your Price (15 seconds) 

Best Buy drops to $509.99—undercutting your $549.99

Agent calculates optimal repricing strategy

Determines new competitive price point

Step 3: Execute Repricing (30 seconds)

Updates your Stripe product with the new competitive price

Logs repricing decision to SQLite with full audit trail

Pushes pricing change report to GitHub

Step 4: Stay Competitive (instant)

Your product now priced competitively

Complete audit trail in your systems

Historical data ready for trend analysis

The Demo Setup: Enable Docker MCP Toolkit

Open Docker Desktop and enable the MCP Toolkit from the Settings menu.

To enable:

Open Docker Desktop

Go to Settings → Beta Features

Toggle Docker MCP Toolkit ON

Click Apply

Click MCP Toolkit in the Docker Desktop sidebar, then select Catalog to explore available servers.

For this demonstration, we’ll use seven MCP servers:

SQLite – RDBMS with advanced analytics, text and vector search, geospatial capabilities, and intelligent workflow automation

Stripe –  Updates your product prices to match or beat competitors for automated repricing workflows

GitHub – Handles version control and deployment

Firecrawl – Web scraping and content extraction

Node.js Sandbox – Runs tests, installs dependencies, validates code (in isolated containers)

Sequential Thinking – Debugs failing tests and optimizes code

Context7 – Provides code documentation for LLMs and AI code editors

Let’s configure each one step by step.

1. Configure SQLite MCP Server

The SQLite MCP Server requires no external database setup. It manages database creation and queries through its 25 built-in tools.

To setup the SQLite MCP Server, follow these steps:

Open Docker Desktop → access MCP Toolkit → Catalog

Search “SQLite”

Click + Add

No configuration needed, just click Start MCP Server

docker mcp server ls
# Should show sqlite-mcp-server as enabled

That’s it. ChatGPT can now create databases, tables, and run queries through conversation.

2. Configure Stripe MCP Server

The Stripe MCP server gives ChatGPT full access to payment infrastructure—listing products, managing prices, and updating your catalog to stay competitive.

Get Stripe API Key

Go to dashboard.stripe.com

Navigate to Developers → API Keys

Copy your Secret Key:

Use sk_test_… for sandbox/testing

Use sk_live_… for production

Configure in Docker Desktop

Open Docker Desktop → MCP Toolkit → Catalog

Search for “Stripe”

Click + Add

Go to the Configuration tab

Add your API key:

Field: stripe.api_key

Value: Your Stripe secret key

Click Save and Start Server

Or via CLI:

docker mcp secret set STRIPE.API_KEY="sk_test_your_key_here"
docker mcp server enable stripe
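
Optionally, sanity-check the key itself before the server ever uses it. A minimal sketch against Stripe’s REST API (the trailing colon sends the key as the basic-auth username with an empty password); replace the placeholder with your own test key:

# List a few products to confirm the key authenticates (test mode).
curl -s "https://api.stripe.com/v1/products?limit=3" \
  -u "sk_test_your_key_here:"

A JSON object with a "data" array (even an empty one) means the key is valid; an "Invalid API Key" error means it was copied incorrectly.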

3. Configure GitHub Official MCP Server

The GitHub MCP server lets ChatGPT create repositories, manage issues, review pull requests, and more.

Option 1: OAuth Authentication (Recommended)

OAuth is the easiest and most secure method:

In MCP Toolkit → Catalog, search “GitHub Official”

Click + Add

Go to the OAuth tab in Docker Desktop

Find the GitHub entry

Click “Authorize”

Your browser opens GitHub’s authorization page

Click “Authorize Docker” on GitHub

You’re redirected back to Docker Desktop

Return to the Catalog tab, find GitHub Official

Click Start Server

Advantage: No manual token creation. Authorization happens through GitHub’s secure OAuth flow with automatic token refresh.

Option 2: Personal Access Token

If you prefer manual control or need specific scopes:

Step 1: Create GitHub Personal Access Token

Go to https://github.com and sign in

Click your profile picture → Settings

Scroll to “Developer settings” in the left sidebar

Click “Personal access tokens” → “Tokens (classic)”

Click “Generate new token” → “Generate new token (classic)”

Name it: “Docker MCP ChatGPT”

Select scopes:

repo (Full control of repositories)

workflow (Update GitHub Actions workflows)

read:org (Read organization data)

Click “Generate token”

Copy the token immediately (you won’t see it again!)

Step 2: Configure in Docker Desktop

In MCP Toolkit → Catalog, find GitHub Official:

Click + Add (if not already added)

Go to the Configuration tab

Select “Personal Access Token” as the authentication method

Paste your token

Click Start Server

Or via CLI:

docker mcp secret set GITHUB.PERSONAL_ACCESS_TOKEN="github_pat_YOUR_TOKEN_HERE"
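
If you want to confirm the token works before starting the server, a quick call to GitHub’s REST API does the job (a sketch; substitute your own token):

# A JSON payload containing your "login" confirms the token authenticates.
curl -s -H "Authorization: Bearer github_pat_YOUR_TOKEN_HERE" \
  https://api.github.com/user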

Verify GitHub Connection

docker mcp server ls

# Should show github as enabled

4. Configure Firecrawl MCP Server

The Firecrawl MCP server gives ChatGPT powerful web scraping and search capabilities.

Get Firecrawl API Key

Go to https://www.firecrawl.dev

Create an account (or sign in)

Navigate to API Keys in the sidebar

Click “Create New API Key”

Copy the API key

Configure in Docker Desktop

Open Docker Desktop → MCP Toolkit → Catalog

Search for “Firecrawl”

Find Firecrawl in the results

Click + Add

Go to the Configuration tab

Add your API key:

Field: firecrawl.api_key

Value: Your Firecrawl API key

Leave all other entries blank

Click Save and Add Server

Or via CLI:

docker mcp secret set FIRECRAWL.API_KEY="fc-your-api-key-here"
docker mcp server enable firecrawl
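
To confirm the key works outside the Toolkit, you can call Firecrawl’s scrape endpoint directly. This is a sketch based on Firecrawl’s public v1 API; verify the endpoint path and request fields against their current docs before relying on it:

# Scrape a test page; a "success": true response means the key is good.
curl -s https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer fc-your-api-key-here" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "formats": ["markdown"]}'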

What You Get

6+ Firecrawl tools, including:

firecrawl_scrape – Scrape content from a single URL

firecrawl_crawl – Crawl entire websites and extract content

firecrawl_map – Discover all indexed URLs on a site

firecrawl_search – Search the web and extract content

firecrawl_extract – Extract structured data using LLM capabilities

firecrawl_check_crawl_status – Check crawl job status

5. Configure Node.js Sandbox MCP Server

The Node.js Sandbox enables ChatGPT to execute JavaScript in isolated Docker containers.

Note: This server requires special configuration because it uses Docker-out-of-Docker (DooD) to spawn containers.

Understanding the Architecture

The Node.js Sandbox implements the Docker-out-of-Docker (DooD) pattern by mounting /var/run/docker.sock. This gives the sandbox container access to the Docker daemon, allowing it to spawn ephemeral sibling containers for code execution.

When ChatGPT requests JavaScript execution:

Sandbox container makes Docker API calls

Creates temporary Node.js containers (with resource limits)

Executes code in complete isolation

Returns results

Auto-removes the container

Security Note: Docker socket access is a privilege escalation vector (effectively granting root-level host access). This is acceptable for local development but requires careful consideration for production use.
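
If you want to see the DooD pattern in isolation, here’s an illustrative one-liner (not the sandbox’s actual implementation): a container with the host’s Docker socket mounted can launch a resource-limited sibling container, run code in it, and let it remove itself afterwards.

# Illustrative only: spawn a throwaway Node.js sibling container via the mounted socket.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli \
  docker run --rm --memory=256m --cpus=0.5 node:20-alpine \
  node -e "console.log('isolated result:', 6 * 7)"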

Add Via Docker Desktop

MCP Toolkit → Catalog

Search “Node.js Sandbox”

Click + Add

Unfortunately, the Node.js Sandbox requires manual configuration that can’t be done entirely through the Docker Desktop UI. We’ll need to configure ChatGPT’s connector settings directly.

Prepare Output Directory

Create a directory for sandbox output:

# macOS/Linux
mkdir -p ~/Desktop/sandbox-output

# Windows
mkdir "%USERPROFILE%\Desktop\sandbox-output"

Configure Docker File Sharing

Ensure this directory is accessible to Docker:

Docker Desktop → Settings → Resources → File Sharing

Add ~/Desktop/sandbox-output (or your Windows equivalent)

Click Apply & Restart

6. Configure Sequential Thinking MCP Server

The Sequential Thinking MCP server gives ChatGPT the ability to work through problems with dynamic, reflective thought sequences. Adding it is straightforward and doesn’t require an API key: just search for Sequential Thinking in the Catalog and add it to your MCP server list.

In Docker Desktop:

Open Docker Desktop → MCP Toolkit → Catalog

Search for “Sequential Thinking”

Find Sequential Thinking in the results

Click “Add MCP Server” to add without any configuration

The Sequential Thinking MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

A single Sequential Thinking tool that includes:

sequentialthinking – A detailed tool for dynamic and reflective problem-solving through thoughts. This tool helps analyze problems through a flexible thinking process that can adapt and evolve. Each thought can build on, question, or revise previous insights as understanding deepens.

7. Configure Context7 MCP Server

The Context7 MCP server gives ChatGPT access to up-to-date code documentation for LLMs and AI code editors. Adding it is straightforward and doesn’t require an API key: just search for Context7 in the Catalog and add it to your MCP server list.

In Docker Desktop:

Open Docker Desktop → MCP Toolkit → Catalog

Search for “Context7”

Find Context7 in the results

Click “Add MCP Server” to add without any configuration

The Context7 MCP server should now appear under “My Servers” in Docker MCP Toolkit

What you get:

2 Context7 tools including:

get-library-docs – Fetches up-to-date documentation for a library.

resolve-library-id – Resolves a package/product name to a Context7-compatible library ID and returns a list of matching libraries. 

Verify that all the MCP servers are available and running:

docker mcp server ls

MCP Servers (7 enabled)

NAME                 OAUTH    SECRETS   CONFIG    DESCRIPTION
------------------------------------------------------------------------------------------------
context7             -        -         -         Context7 MCP Server — Up-to-da…
fetch                -        -         -         Fetches a URL from the internet…
firecrawl            -        ✓ done    partial   Official Firecrawl MCP Server…
github-official      ✓ done   ✓ done    -         Official GitHub MCP Server, by …
node-code-sandbox    -        -         -         A Node.js–based Model Context P…
sequentialthinking   -        -         -         Dynamic and reflective problem-…
sqlite-mcp-server    -        -         -         The SQLite MCP Server transform…
stripe               -        ✓ done    -         Interact with Stripe services o…

Tip: To use these servers, connect to a client (IE: claude/cursor) with docker mcp client connect <client-name>

Configuring ChatGPT App and Connector

Use the following compose file to let ChatGPT discover all the tools under the Docker MCP Catalog:

services:
  gateway:
    image: docker/mcp-gateway
    command:
      - --catalog=/root/.docker/mcp/catalogs/docker-mcp.yaml
      - --servers=context7,firecrawl,github-official,node-code-sandbox,sequentialthinking,sqlite-mcp-server,stripe
      - --transport=streaming
      - --port=8811
    environment:
      - DOCKER_MCP_IN_CONTAINER=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/mcp:/root/.docker/mcp:ro
    ports:
      - "8811:8811"

By now, you should be able to view all the MCP tools under ChatGPT Developer Mode.

Let’s Test it Out

Now we give ChatGPT its intelligence. Copy this system prompt and paste it into your ChatGPT conversation:

You are a Competitive Repricing Agent that monitors competitor prices, automatically adjusts your Stripe product prices, and provides strategic recommendations using 7 MCP servers: Firecrawl (web scraping), SQLite (database), Stripe (price management), GitHub (reports), Node.js Sandbox (calculations), Context7 (documentation), and Sequential Thinking (complex reasoning).

DATABASE SCHEMA

Products table: id (primary key), sku (unique), name, category, brand, stripe_product_id, stripe_price_id, current_price, created_at
Price_history table: id (primary key), product_id, competitor, price, original_price, discount_percent, in_stock, url, scraped_at
Price_alerts table: id (primary key), product_id, competitor, alert_type, old_price, new_price, change_percent, created_at
Repricing_log table: id, product_name, competitor_triggered, competitor_price, old_stripe_price, new_stripe_price, repricing_strategy, stripe_price_id, triggered_at, status

Indexes: idx_price_history_product on (product_id, scraped_at DESC), idx_price_history_competitor on (competitor)

WORKFLOW

On-demand check: Scrape (Firecrawl) → Store (SQLite) → Analyze (Node.js) → Report (GitHub)
Competitive repricing: Scrape (Firecrawl) → Compare to your price → Update (Stripe) → Log (SQLite) → Report (GitHub)

STRIPE REPRICING WORKFLOW

When competitor price drops below your current price:
1. list_products – Find your existing Stripe product
2. list_prices – Get current price for the product
3. create_price – Create new price to match/beat competitor (prices are immutable in Stripe)
4. update_product – Set the new price as default
5. Log the repricing decision to SQLite

Price strategies:
– "match": Set price equal to lowest competitor
– "undercut": Set price 1-2% below lowest competitor
– "margin_floor": Never go below your minimum margin threshold

Use Context7 when: Writing scripts with new libraries, creating visualizations, building custom scrapers, or needing latest API docs

Use Sequential Thinking when: Making complex pricing strategy decisions, planning repricing rules, investigating market anomalies, or creating strategic recommendations requiring deep analysis

EXTRACTION SCHEMAS

Amazon: title, price, list_price, rating, reviews, availability
Walmart: name, current_price, was_price, availability
Best Buy: product_name, sale_price, regular_price, availability

RESPONSE FORMAT

Price Monitoring: Products scraped, competitors covered, your price vs competitors
Repricing Triggers: Which competitor triggered, price difference, strategy applied
Price Updated: New Stripe price ID, old vs new price, margin impact
Audit Trail: GitHub commit SHA, SQLite log entry, timestamp

TOOL ORCHESTRATION PATTERNS

Simple price check: Firecrawl → SQLite → Response
Trend analysis: SQLite → Node.js → Response
Strategy analysis: SQLite → Sequential Thinking → Response
Competitive repricing: Firecrawl → Compare → Stripe → SQLite → GitHub
Custom tool development: Context7 → Node.js → GitHub
Full intelligence report: Firecrawl → SQLite → Node.js → Sequential Thinking → GitHub

KEY USAGE PATTERNS

Use Stripe for: Listing products, listing prices, creating new prices, updating product default prices

Use Sequential Thinking for: Pricing strategy decisions (match, undercut, or hold), market anomaly investigations (why did competitor prices spike), multi-factor repricing recommendations

Use Context7 for: Getting documentation before coding, learning new libraries on-the-fly, ensuring code uses latest API conventions

Use Node.js for: Statistical calculations (moving averages, standard deviation, volatility), chart generation, margin calculations

BEST PRACTICES

Space web scraping requests 2 seconds apart to respect rate limits
Calculate price difference as (your_price - competitor_price)
Trigger repricing when competitor drops below your current price
Log all repricing decisions to SQLite with Stripe IDs for audit trail
Push pricing reports to GitHub for compliance
Always use Context7 before writing code with unfamiliar libraries
Respect margin floors—never reprice below minimum acceptable margin

COMMAND RECOGNITION PATTERNS

"Check X prices and stay competitive" → Full repricing pipeline: scrape → compare → if competitor lower: Stripe update + SQLite + GitHub
"Match competitor price for X" → Stripe: list_products → list_prices → create_price (matching)
"Undercut competitors on X" → Stripe: create_price (1-2% below lowest)
"Show price history" → SQLite query → format results
"Analyze pricing strategy for X" → Sequential Thinking analysis
"Why did competitor prices change" → Query data → Sequential Thinking investigation

CORE PRINCIPLE

When competitors change prices, don't just report—ACT. Update your Stripe prices to stay competitive, log decisions to SQLite, push audit records to GitHub. Transform competitor intelligence into automated repricing. All data stays on YOUR infrastructure.
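
For reference, the schema section of that prompt translates to roughly the following SQLite DDL. This is only a sketch to make the structure concrete; the column types and the pricing.db filename are assumptions, and in practice the agent generates its own variant through the SQLite MCP server:

# Sketch of the schema described above; names come from the system prompt, types are assumed.
sqlite3 pricing.db <<'SQL'
CREATE TABLE IF NOT EXISTS products (
  id INTEGER PRIMARY KEY,
  sku TEXT UNIQUE,
  name TEXT,
  category TEXT,
  brand TEXT,
  stripe_product_id TEXT,
  stripe_price_id TEXT,
  current_price REAL,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS price_history (
  id INTEGER PRIMARY KEY,
  product_id INTEGER REFERENCES products(id),
  competitor TEXT,
  price REAL,
  original_price REAL,
  discount_percent REAL,
  in_stock INTEGER,
  url TEXT,
  scraped_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS price_alerts (
  id INTEGER PRIMARY KEY,
  product_id INTEGER REFERENCES products(id),
  competitor TEXT,
  alert_type TEXT,
  old_price REAL,
  new_price REAL,
  change_percent REAL,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS repricing_log (
  id INTEGER PRIMARY KEY,
  product_name TEXT,
  competitor_triggered TEXT,
  competitor_price REAL,
  old_stripe_price REAL,
  new_stripe_price REAL,
  repricing_strategy TEXT,
  stripe_price_id TEXT,
  triggered_at TEXT DEFAULT CURRENT_TIMESTAMP,
  status TEXT
);
CREATE INDEX IF NOT EXISTS idx_price_history_product ON price_history(product_id, scraped_at DESC);
CREATE INDEX IF NOT EXISTS idx_price_history_competitor ON price_history(competitor);
SQL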

Here’s the prompt:

Set up a competitive repricing agent:
1. I sell MacBook Air M3 on my store – current Stripe price is $549.99
2. Monitor competitor prices on Amazon, Walmart, Best Buy
3. When ANY competitor drops below my price:
- Automatically update my Stripe product price to match or beat them
- Use "undercut" strategy (price 1% below lowest competitor)
- Log the repricing decision to SQLite
- Push pricing change report to GitHub

Check prices now and reprice if needed.

Here’s how ChatGPT responds – orchestrating all 7 MCP servers in a single workflow:

Competitive Repricing Agent – Execution Complete

Within 3 minutes, ChatGPT orchestrated all 7 MCP servers to analyse the market and automatically reprice your product.

Repricing Triggered and Executed:

| Metric | Before | After |
|---|---|---|
| Your Price | $549.99 | $504.99 |
| Price Change | | -$45.00 (-8.2%) |
| Market Position | 3rd (behind Best Buy) | #1 Lowest |

Competitor Price Scan Results:

| Retailer | Price | Vs. Your New Price |
|---|---|---|
| Your Store | $504.99 | Market Leader |
| Best Buy | $509.99 | +$5.00 (you beat by 1%) |
| Walmart | $669.00 | +$164.01 higher |
| Amazon | $699.00 | +$194.01 higher |

What the Agent did (6 Steps):

Installed SQLite3 and created database schema with 4 tables

Created Stripe product (prod_TZaK0ARRJ5OJJ8) with initial $549.99 price 

Scraped live competitor prices via Firecrawl from Amazon, Best Buy, and Walmart 

Analysed pricing strategy with Sequential Thinking — detected Best Buy at $509.99 below your price

Executed repricing — created new Stripe price at $504.99 (price_1ScRCVI9l1vmUkzn0hTnrLmW)

Pushed audit report to GitHub (commit `64a488aa`)

All data stored on your infrastructure – not OpenAI’s servers. 

To check prices again, simply ask ChatGPT to ‘check MacBook Air M3 competitor prices’—it will scrape, compare, and reprice automatically. Run this check daily, weekly, or whenever you want competitive intelligence.

Explore the Full Demo

View the complete repricing report and audit trail on GitHub: https://github.com/ajeetraina/competitive-repricing-agent-mcp

Want true automation? This demo shows on-demand repricing triggered by conversation. For fully automated periodic checks, you could build a simple scheduler that calls the OpenAI API every few hours to trigger the same workflow, turning this into a hands-free competitive intelligence system.
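
Here’s what such a scheduler could look like as a rough sketch. It assumes the OpenAI Responses API’s remote MCP tool support and a publicly reachable gateway URL (your ngrok address); treat the model name, field names, and approval setting as assumptions to check against OpenAI’s current API documentation:

#!/usr/bin/env bash
# reprice-check.sh - hypothetical scheduled trigger for the repricing workflow.
GATEWAY_URL="https://YOUR_NGROK_URL/mcp"    # your MCP Gateway, exposed via ngrok

curl -s https://api.openai.com/v1/responses \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4.1",
        "tools": [{
          "type": "mcp",
          "server_label": "docker-mcp-gateway",
          "server_url": "'"$GATEWAY_URL"'",
          "require_approval": "never"
        }],
        "input": "Check MacBook Air M3 competitor prices and reprice if needed."
      }'

# Run it every 6 hours via cron:
# 0 */6 * * * /path/to/reprice-check.sh >> /var/log/reprice.log 2>&1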

Wrapping Up

You’ve just connected ChatGPT to Docker MCP Toolkit and configured multiple MCP servers. What used to require context-switching between multiple tools, manual query writing, and hours of debugging now happens through natural conversation, safely executed in Docker containers.

This is the new paradigm for AI-assisted development. ChatGPT isn’t just answering questions anymore. It’s querying your databases, managing your repositories, scraping data, and executing code—all while Docker ensures everything stays secure and contained.

Ready to try it? Open Docker Desktop and explore the MCP Catalog. Start with SQLite, add GitHub, experiment with Firecrawl. Each server unlocks new capabilities.

The future of development isn’t writing every line of code yourself. It’s having an AI partner that can execute tasks across your entire stack securely, reproducibly, and at the speed of thought.

Learn More

New to Docker? Download Docker Desktop

Explore the MCP Catalog: Discover containerized, security-hardened MCP servers

Get Started with MCP Toolkit: Official Documentation

Source: https://blog.docker.com/feed/

Is AI the New Insider Threat?

Insider threats have always been difficult to manage because they blur the line between trusted access and risky behavior. 

With generative AI, these risks aren’t tied to malicious insiders misusing credentials or bypassing controls; they come from well-intentioned employees simply trying to get work done faster. Whether it’s developers refactoring code, analysts summarizing long reports, or marketers drafting campaigns, the underlying motivation is almost always productivity and efficiency.

Unfortunately, that’s precisely what makes this risk so difficult to manage. Employees don’t see themselves as creating security problems; they’re solving bottlenecks. Security is an afterthought at best. 

This gap in perception creates an opportunity for missteps. By the time IT or security teams realize an AI tool has been widely adopted, patterns of risky use may already be deeply embedded in workflows.

Right now, AI use in the workplace is a bit of a free-for-all. And when everyone’s saying “it’s fun” and “everyone’s doing it”, it feels like being back in high school: no one wants to be *that* person telling them to stop because it’s risky. 

But, as security, we do have a responsibility.

In this article, I explore the risks of unmanaged AI use, why existing security approaches fall short, and suggest one thing I believe we can do to balance users’ enthusiasm with responsibility (without being the party pooper).

Examples of Risky AI Use

The risks of AI use in the workplace usually fall into one of three categories:

Sensitive data breaches: A single pasted transcript, log, or API key may seem minor, but once outside company boundaries, it’s effectively gone, subject to provider retention and analysis.

Intellectual property leakage: Proprietary code, designs, or research drafts fed into AI tools can erode competitive advantage if they become training data or are exposed via prompt injection.

Regulatory and compliance violations: Uploading regulated data (HIPAA, GDPR, etc.) into unsanctioned AI systems can trigger fines or legal action, even if no breach occurs.

What makes these risks especially difficult is their subtlety. They emerge from everyday workflows, not obvious policy violations, which means they often go unnoticed until the damage is done.

Shadow AI

For years, Shadow IT has meant unsanctioned SaaS apps, messaging platforms, or file storage systems. 

Generative AI is now firmly in this category. 

Employees don’t think that pasting text into a chatbot like ChatGPT introduces a new system to the enterprise. In practice, however, they’re moving data into an external environment with no oversight, logging, or contractual protection.

What’s different about Shadow AI is the lack of visibility: unlike past technologies, it often leaves no obvious logs, accounts, or alerts for security teams to follow. With cloud file-sharing, security teams could trace uploads, monitor accounts created with corporate emails, or detect suspicious network traffic. 

But AI use often looks like normal browser activity. And while some security teams do scan what employees paste into web forms, those controls are limited. 

Which brings us to the real problem: we don’t really have the tools to manage AI use properly. Not yet, at least.

Controls Are Lacking

We all see people trying to get work done faster, and we know we should be putting some guardrails in place, but the options out there are either expensive, complicated, or still figuring themselves out.

The few available AI governance and security tools have clear limitations (even though their marketing might try to convince you otherwise):

Emerging AI governance platforms offer usage monitoring, policy enforcement, and guardrails around sensitive data, but they’re often expensive, complex, or narrowly focused.

Traditional controls like DLP and XDR catch structured data such as phone numbers, IDs, or internal customer records, but they struggle with more subtle, hard-to-detect information: source code, proprietary algorithms, or strategic documents.

Even with these tools, the pace of AI adoption means security teams are often playing catch-up. The reality is that while controls are improving, they rarely keep up with how quickly employees are exploring AI.

Lessons from Past Security Blind Spots

Employees charging ahead with new tools while security teams scramble to catch up is not so different from the early days of cloud file sharing: employees flocked to Dropbox or Google Drive before IT had sanctioned solutions. Or think back to the rise of “bring your own device” (BYOD), when personal phones and laptops started connecting to corporate networks without clear policies in place.

Both movements promised productivity, but they also introduced risks that security teams struggled to manage retroactively.

Generative AI is repeating this pattern, only at a much faster rate. While cloud tools or BYOD require some setup, or at least a decision to connect a personal device, AI tools are available instantly in a browser. The barrier to entry is practically zero. That means adoption can spread through an organization long before security leaders even realize it’s happening. 

And as with cloud and BYOD, the sequence is familiar: employee adoption comes first, controls follow later, and those retroactive measures are almost always costlier, clumsier, and less effective than proactive governance.

So What Can We Do?

Remember: AI-driven insider risk isn’t about bad actors but about good people trying to be productive and efficient. (OK, maybe with a few lazy ones thrown in for good measure.) It’s ordinary rather than malicious behavior that’s unfortunately creating unnecessary exposure. 

That means there’s one measure every organization can implement immediately: educating employees.

Education works best when it’s practical and relatable. Think less “compliance checkbox,” and more “here’s a scenario you’ve probably been in.” That’s how you move from fuzzy awareness to actual behavior change.

Here are three steps that make a real difference:

Build awareness with real examples. Show how pasting code, customer details, or draft plans into a chatbot can have the same impact as posting them publicly. That’s the “aha” moment most people need.

Emphasize ownership. Employees already know they shouldn’t reuse passwords or click suspicious links; AI use should be framed in the same personal-responsibility terms. The goal is a culture where people feel they’re protecting the company, not just following rules.

Set clear boundaries. Spell out which categories of data are off-limits (PII, source code, unreleased products, regulated records) and offer safe alternatives like internal AI sandboxes. Clarity reduces guesswork and removes the temptation of convenience.

Until governance tools mature, these low-friction steps form the strongest defense we have.

If you can enable people to harness AI’s productivity while protecting your critical data, you reduce today’s risks. And you’re better prepared for the regulations and oversight that are certain to follow.

Source: https://blog.docker.com/feed/

Highlights from AWS re:Invent: Supercharging Kiro with Docker Sandboxes and MCP Catalog

At the recent AWS re:Invent, Docker focused on a very real developer problem: how to run AI agents locally without giving them access to your machine, credentials, or filesystem.

With AWS introducing Kiro, Docker demonstrated how Docker Sandboxes and MCP Toolkit allow developers to run agents inside isolated containers, keeping host environments and secrets out of reach. The result is a practical setup where agents can write code, run tests, and use tools safely, while you stay focused on building, not cleaning up accidental damage.

Local AI Agents, Isolation, and Docker at AWS re:Invent

Two weeks ago, a Reddit user posted how their filesystem was accidentally deleted by Google Antigravity. And the top comment?

Alright no more antigravity outside of a container

Another user’s home directory was wiped while using Claude Code this past week. And yet another top comment:

That’s exactly why Claude code should be used only inside an isolated container or vm

We agree that this should never happen and that containers provide the proper isolation and segmentation.

At AWS re:Invent 2025, we were able to show off this vision using Kiro running in our new Docker sandboxes, using MCP servers provided by the Docker MCP Toolkit. 

If you weren’t able to attend or visit us at the booth, fear not! I’ll share the demo with you.

Jim Clark, one of Docker’s Principal Engineers, giving a demo of a secured AI development environment using Docker’s sandboxes and MCP Toolkit

Giving Kiro safety guardrails

Docker Sandboxes provide the ability to run an agent inside an isolated environment using containers. In this environment, the agent has no access to credentials stored on the host and can only access the files of the specified project directory.

As an example, I have some demo AWS credentials on my machine:

> cat ~/.aws/credentials
[default]
aws_access_key_id=demo_access_key
aws_secret_access_key=demo_secret_key

Now, I’m going to clone the Catalog Service demo project and start a sandbox using Kiro:

git clone https://github.com/dockersamples/catalog-service-node.git
cd catalog-service-node
docker sandbox run --mount-docker-socket kiro

The --mount-docker-socket flag is added to give the sandbox the Docker socket, which will allow the agent to run my integration tests that use Testcontainers.

On the first launch, I will be required to authenticate. After that’s done, I will ask Kiro to tell me about the AWS credentials it has access to:

(Kiro prints its ASCII art banner on launch)
Model: Auto (/model to change) | Plan: KIRO FREE (/usage for more detail)

!> Tell me about the AWS credentials you have access to

From here, Kiro will search the typical places AWS credentials are configured. But, finally, it reaches the following conclusion:

Currently, there are no AWS credentials configured on your system

And why is this? The credentials on the host are not accessible inside the sandbox environment. The agent is in the isolated environment and only has access to the current project directory.

Giving Kiro secure tools with the MCP Toolkit

If we take a step back and think about it, the only credential an agent should have access to is to authenticate with the model provider. All other credentials belong to the tools (or MCP servers) around the agent.

And that’s where the MCP Toolkit comes in!

Sandboxes don’t yet have an automatic way to connect to the MCP Toolkit (it’s coming soon!). Until that’s available, I’ll start an MCP Gateway with the following command:

docker mcp gateway run --transport=streaming

There are a variety of ways to configure Kiro with MCP servers, but the project-level configuration provides an easy way that also works with sandboxes.

In the project, I will create a .kiro/settings/mcp.json file with the following contents:

{
  "mcpServers": {
    "docker-mcp-toolkit": {
      "type": "http",
      "url": "http://host.docker.internal:8811/"
    }
  }
}

After restarting Kiro, I can ask it about the available tools:

/tools

The output then shows the following tools coming from the MCP Gateway:

docker-mcp-toolkit (MCP)
– code-mode trusted
– mcp-add trusted
– mcp-config-set trusted
– mcp-create-profile trusted
– mcp-exec trusted
– mcp-find trusted
– mcp-remove trusted

These tools highlight the ability to dynamically add and remove MCP servers through the Gateway. 

By using an AGENTS.md file, I can tell the agent which MCP servers to use and give it an additional nudge to remove those servers when they’re no longer needed (which will reduce my context size and costs). This is what my file looks like:

# Special instructions

- When info is needed from GitHub, use the `github-official` MCP server. This will ensure proper auth tokens are used.
- When adding MCP servers, be sure to activate all tools.
- When you have the required info from any MCP server, remove the MCP server to reduce the number of tools in the context.

Before Kiro can use the GitHub MCP server, I need to ensure I’ve configured the MCP server with my OAuth credentials.

Screenshot of the Docker Desktop dashboard showing the GitHub Official MCP server listing within the MCP Toolkit. Specifically, it shows the OAuth configuration has been completed.

Putting it all together

With all of the setup completed, I’m ready to have Kiro do some work for me. I’ll ask it to complete one of the issues on GitHub:

> Can you help me complete the work for issue #64?

Watching the output, I’ll first see Kiro add the github-official MCP server by using the mcp-add tool.

Running tool mcp-add with the param (from mcp server: docker-mcp-toolkit)
⋮ {
⋮ "name": "github-official",
⋮ "activate": true
⋮ }
– Completed in 0.609s

From there, I’ll see that Kiro will check the git remotes to determine the GitHub organization and repository being used:

> I need to determine the repository owner and name. Let me check the current directory for git information:
I will run the following command: git remote -v (using tool: shell)
Purpose: Get git remote information to identify the repository

origin https://github.com/dockersamples/catalog-service-node.git (fetch)
origin https://github.com/dockersamples/catalog-service-node.git (push)
– Completed in 0.6s

Then, it will use the GitHub MCP server to get the issue details:

> Now I can get the issue details:
Running tool issue_read with the param (from mcp server: docker-mcp-toolkit)
⋮ {
⋮ "method": "get",
⋮ "owner": "dockersamples",
⋮ "repo": "catalog-service-node",
⋮ "issue_number": 64
⋮ }
– Completed in 0.613s

> Perfect! Issue #64 is about adding product metadata to S3 objects.

From here, Kiro will go and make the changes to my source code, update the integration tests, and even run the integration tests to ensure it all works. After it’s completed, I have the following output:

I've successfully completed the work for issue #64. Here's what was implemented:

Changes Made:

1. StorageService.js:
– Added Tagging parameter to S3 PutObjectCommand with product_id=${id}
– Added Metadata parameter with product_name: productName
– Added new AWS SDK imports for GetObjectTaggingCommand and HeadObjectCommand
– Created helper functions getObjectTags() and getObjectMetadata()

2. ProductService.js:
– Modified uploadProductImage() to fetch product name from database and pass it to uploadFile()
– Exposed new functions getProductImageTags() and getProductImageMetadata()

3. Integration Test:
– Added validation to verify S3 objects have correct tags (product_id with the product ID)
– Added validation to verify S3 objects have correct metadata (product_name with the product name)

Test Results:
All integration tests pass, confirming that:
– S3 objects are tagged with the product identifier
– S3 objects include metadata with the product name
– The existing functionality remains intact

And notably, right before that, I see that Kiro automatically removed the GitHub MCP server to remove its tools from my context:

Let me clean up by removing the GitHub MCP server since we're done with it:
Running tool mcp-remove with the param (from mcp server: docker-mcp-toolkit)
⋮ {
⋮ "name": "github-official"
⋮ }
– Completed in 0.2s

With those tools removed from the context, the model has less to tokenize and process, which means faster responses and lower cost.

Highlighting what’s important

Taking a step back to look at what we ran, we have the following:

An agent in an isolated environment. With the agent running in a container, it’s unable to access and leak credentials stored on my host machine. And rogue requests to delete my filesystem are limited to the containerized environment where it’s running as a non-root user.

Isolated and containerized MCP servers. Each MCP server runs in its own isolated container, preventing host access. In addition, I don’t have to spend any time worrying about runtime environments or configuration. With a container, “it just works!”

API credentials only where they’re needed. The only component that needs access to my GitHub credential is the GitHub MCP server, where it is securely injected. This approach further prevents potential leaks and exposures.

In other words, we have a microserviced architecture where each component runs in its own container and follows least privilege by having access to only the things it needs access to.

Looking forward

Here at Docker, we’re quite excited about this architecture and there’s still a lot to do. Two items I’m excited about include:

A network boundary for agentic workloads. This boundary would limit network access to only authorized hostnames. Then, if a prompt injection tries to send sensitive information to evildomain.com, that request is blocked.

Governance and control for organizations. With this, your organization can authorize the MCP servers that are used and even create its own custom catalogs and rule sets.

If you want to try out Sandboxes, you can do so by enabling the Experimental Feature in Docker Desktop 4.50+. We’d love to hear your feedback and thoughts!

Learn more 

Docker Sandboxes: Simplifies running AI agents securely on your local machine

Explore the MCP Catalog: Discover containerized, security-hardened MCP servers.

Get started with the MCP Toolkit: Run MCP servers easily and securely.

Source: https://blog.docker.com/feed/

Validate best practice compliance for SAP ABAP applications with AWS Systems Manager

AWS Systems Manager (AWS SSM) Configuration Manager now allows you to automatically test SAP ABAP based applications on AWS against best practices defined in the AWS Well-Architected Framework SAP Lens. Keeping SAP applications optimally configured requires SAP administrators to stay current with best practices from multiple sources including AWS, SAP, and operating system vendors and manually check their configurations to validate adherence. AWS SSM Configuration Manager automatically assesses SAP applications running on AWS against these standards, proactively identifying misconfigurations and recommending specific remediation steps, allowing you to make the necessary changes before potential impacts to business operations. With this launch, configuration checks can be scheduled or run on-demand for SAP HANA and ABAP applications. SSM for SAP Configuration Manager is available in AWS Regions where SSMSAP is available. To learn more, read the launch blog, or refer to the AWS Systems Manager for SAP documentation.
Source: aws.amazon.com

AWS DataSync introduces Terraform support for Enhanced mode

The AWS DataSync Terraform module now supports Enhanced mode for transfers between S3 locations, making it easier for you to set up high-performance data transfers at scale. AWS DataSync is a secure, high-speed file transfer service that optimizes data movement over a network. Enhanced mode uses parallel processing to deliver higher performance and scalability for datasets of any size, while removing file count limitations and providing detailed transfer metrics for better monitoring and management. You can now use Terraform to automatically provision DataSync tasks configured for Enhanced mode. This eliminates manual configuration steps that can be time-consuming and error-prone, while giving you a consistent, repeatable, version-controlled deployment process that can scale across your organization. You can access the AWS DataSync Terraform module on GitHub or through the Terraform Registry. To learn more about DataSync, see the AWS DataSync documentation. To see all Regions where DataSync is available, visit the AWS Region table.
Source: aws.amazon.com

Amazon EC2 X2iedn instances now available in AWS Asia Pacific (Thailand) region

Starting today, memory-optimized Amazon Elastic Compute Cloud (Amazon EC2) X2iedn instances are available in the AWS Asia Pacific (Thailand) Region. These instances, powered by 3rd generation Intel Xeon Scalable Processors and built on the AWS Nitro System, are designed for memory-intensive workloads. They deliver improvements in performance, price performance, and cost per GiB of memory compared to previous generation X1e instances. These instances are SAP-certified for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, SAP BW/4HANA, and SAP NetWeaver workloads on any database.
To learn more, visit the EC2 X2i Instances Page, or connect with your AWS Support contacts.
Source: aws.amazon.com

AWS Elastic Beanstalk is now available in additional regions

We are excited to announce the general availability of AWS Elastic Beanstalk in Asia Pacific (New Zealand), Asia Pacific (Melbourne), Asia Pacific (Malaysia), Asia Pacific (Hyderabad), Canada West (Calgary), and Europe (Zurich). AWS Elastic Beanstalk is a service that simplifies application deployment and management on AWS. The service automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring, allowing developers to focus on writing code. For a complete list of regions and service offerings, see AWS Regions. To get started on AWS Elastic Beanstalk, see the AWS Elastic Beanstalk Developer Guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
Source: aws.amazon.com

AWS DataSync increases scalability and performance for on-premises file transfers

AWS DataSync Enhanced mode now supports data transfers between on-premises file servers and Amazon S3, enabling customers to transfer datasets that scale to virtually unlimited numbers of files at higher levels of performance than DataSync Basic mode. AWS DataSync is a secure, high-speed file transfer service that optimizes data movement over a network. Enhanced mode uses parallel processing to deliver higher performance and scalability for datasets of any size, while removing file count limitations and providing detailed transfer metrics for better monitoring and management. Previously, Enhanced mode was available for data transfers between Amazon S3 locations and for multicloud transfers. This launch extends the capabilities of Enhanced mode to support transfers between on-premises NFS or SMB file servers, and Amazon S3. Using Enhanced mode, customers can accelerate generative AI workloads by rapidly moving training datasets to AWS, power data lake analytics by synchronizing on-premises data with cloud-based pipelines, and drive large-scale migrations for archival and cloud modernization. This new capability is available in all AWS Regions where AWS DataSync is offered. To get started, visit the AWS DataSync console. For more information, see the AWS DataSync documentation.
Source: aws.amazon.com