Amazon RDS Snapshot Export to S3 now available in AWS GovCloud (US) Regions

Amazon RDS Snapshot Export to S3 is now available in AWS GovCloud (US) Regions, enabling you to export snapshot data in Apache Parquet format for analytics, data retention, and machine learning use cases. Snapshot Export to S3 supports all DB snapshot types (manual, automated system, and AWS Backup snapshots) and runs directly on the snapshot without impacting database performance. The exported data in Apache Parquet format can be analyzed using other AWS services such as Amazon Athena, Amazon SageMaker, or Amazon Redshift Spectrum, or with big data processing frameworks such as Apache Spark. You can create a snapshot export with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. Snapshot Export to S3 is supported for Amazon Aurora PostgreSQL-Compatible Edition, Amazon Aurora MySQL-Compatible Edition, Amazon RDS for PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for MariaDB snapshots. For more information, including instructions on getting started, read the Aurora documentation or the Amazon RDS documentation.
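Programmatically, an export corresponds to the RDS `StartExportTask` operation. The sketch below shows the required parameters with boto3; all identifiers (snapshot ARN, bucket, role, KMS key) are illustrative, not taken from the announcement:

```python
# Sketch: starting an RDS snapshot export to S3 via the AWS SDK (boto3).
# Every identifier below is illustrative.

def build_export_request(export_id, snapshot_arn, bucket, iam_role_arn, kms_key_id):
    """Assemble the parameters for rds.start_export_task()."""
    return {
        "ExportTaskIdentifier": export_id,
        "SourceArn": snapshot_arn,
        "S3BucketName": bucket,
        "IamRoleArn": iam_role_arn,   # role must allow writing to the bucket
        "KmsKeyId": kms_key_id,       # exports are always encrypted with a KMS key
    }

def start_export(rds_client, **kwargs):
    """Kick off the export; the data lands in S3 in Apache Parquet format."""
    return rds_client.start_export_task(**build_export_request(**kwargs))

# Typical use (requires boto3 and a GovCloud region endpoint):
#   import boto3
#   rds = boto3.client("rds", region_name="us-gov-west-1")
#   start_export(rds, export_id="my-export", snapshot_arn="arn:...",
#                bucket="my-bucket", iam_role_arn="arn:...", kms_key_id="arn:...")
```

The export runs against the snapshot itself, so the source database is not touched.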
Source: aws.amazon.com

AWS Observability now available as a Kiro power

Today, AWS announces AWS Observability as a Kiro power, enabling developers and operators to investigate infrastructure and application health issues faster with AI agent-assisted workflows in Kiro. Kiro Powers is a repository of curated and pre-packaged Model Context Protocol (MCP) servers, steering files, and hooks validated by Kiro partners to accelerate specialized software development and deployment use cases.

The AWS Observability power packages four specialized MCP servers with targeted observability guidance: the CloudWatch MCP server for observability data; the Application Signals MCP server for application performance monitoring; the CloudTrail MCP server for security analysis and compliance; and the AWS Documentation MCP server for contextual reference access. This unified platform gives Kiro agents instant context for comprehensive workflows including alarm response, anomaly detection, distributed tracing, SLO compliance monitoring, and security investigation. Additionally, the power includes automated gap analysis that helps you identify and fix missing instrumentation.

With the AWS Observability power, developers can now accelerate troubleshooting their distributed applications and infrastructure in minutes, directly in their IDE. The power addresses two critical needs: reducing mean time to resolution (MTTR) for active incidents and proactively improving your observability stack. For faster incident response, when investigating an active alarm, the power dynamically loads relevant guidance and operational signals so AI agents receive only the context needed for the specific troubleshooting task at hand. For stack improvement, the automated gap analysis examines your code to identify missing instrumentation patterns—such as unlogged errors, missing correlation IDs, or absent distributed tracing—and provides actionable recommendations.
The power includes eight comprehensive steering guides covering incident response, alerting, performance monitoring, security auditing, and gap analysis. The AWS Observability power is available for one-click installation in the Kiro IDE and on the Kiro powers webpage in all AWS Regions, with each underlying MCP server functional based on regional support of the corresponding AWS service. To learn more about the AWS observability MCP servers, visit our documentation.
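Conceptually, the bundle corresponds to an MCP configuration wiring up the four servers. The sketch below mirrors that layout using the awslabs MCP naming conventions; the package names and launch commands are assumptions for illustration, not the power's actual manifest (the power installs and configures the servers for you):

```python
import json

# Hypothetical MCP configuration mirroring the four servers the power bundles.
# Package names follow awslabs MCP conventions but are assumptions here.
mcp_config = {
    "mcpServers": {
        name: {"command": "uvx", "args": [f"awslabs.{pkg}@latest"]}
        for name, pkg in [
            ("cloudwatch", "cloudwatch-mcp-server"),                      # observability data
            ("application-signals", "cloudwatch-appsignals-mcp-server"),  # APM
            ("cloudtrail", "cloudtrail-mcp-server"),                      # security/compliance
            ("aws-docs", "aws-documentation-mcp-server"),                 # doc lookup
        ]
    }
}

print(json.dumps(mcp_config, indent=2))
```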
Source: aws.amazon.com

AWS Compute Optimizer now applies AWS-generated tags to EBS snapshots created during automation

AWS Compute Optimizer now makes it easier to identify the snapshots it creates when snapshotting and deleting unattached Amazon Elastic Block Store (EBS) volumes, by automatically applying an AWS-generated tag at creation time. This enhancement improves visibility and tracking of EBS snapshots created through Compute Optimizer Automation.
When Compute Optimizer creates a snapshot before deleting an unattached EBS volume—whether initiated through manual actions or automation rules—the snapshot now receives the tag aws:compute-optimizer:automation-event-id with a tag value that links the snapshot to the unique identifier of the automation event that created it. This allows you to easily identify, track, and manage snapshots created through the automated optimization process, helping you maintain better governance over your backup resources and understand the source of snapshots in your environment.
This is available in all AWS Regions where AWS Compute Optimizer Automation is available. To get started with automated optimization, go to the AWS Compute Optimizer console or visit the user guide documentation.
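Because the tag is applied consistently, these snapshots can be located with a standard EC2 tag filter. A minimal sketch (the event ID is illustrative; the client is a boto3 EC2 client passed in by the caller):

```python
# The AWS-generated tag Compute Optimizer applies to snapshots it creates.
AUTOMATION_TAG = "aws:compute-optimizer:automation-event-id"

def automation_snapshot_filter(event_id=None):
    """Build an EC2 DescribeSnapshots filter for Compute Optimizer snapshots.

    With no event_id, matches every snapshot carrying the automation tag;
    with an event_id, narrows to snapshots from that specific automation event.
    """
    if event_id is None:
        return [{"Name": "tag-key", "Values": [AUTOMATION_TAG]}]
    return [{"Name": f"tag:{AUTOMATION_TAG}", "Values": [event_id]}]

def find_automation_snapshots(ec2_client, event_id=None):
    """List matching snapshots in your own account."""
    resp = ec2_client.describe_snapshots(
        OwnerIds=["self"], Filters=automation_snapshot_filter(event_id)
    )
    return resp["Snapshots"]
```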
Source: aws.amazon.com

Amazon Bedrock now supports server-side tool execution with AgentCore Gateway

Amazon Bedrock now enables server-side tool execution through Amazon Bedrock AgentCore Gateway integration with the Responses API. Customers can connect their AgentCore Gateway tools to Amazon Bedrock models, enabling server-side tool execution without client-side orchestration.
With this launch, customers can specify an AgentCore Gateway ARN as a tool connector in Responses API requests. Amazon Bedrock automatically discovers available tools from the gateway, presents them to the model during inference, and executes tool calls server-side when the model selects them, all within a single API call. This eliminates the need for customers to build and maintain client-side tool orchestration loops, reducing application complexity and latency for agentic workflows. Customers retain full control over tool access through their existing AgentCore Gateway configurations and AWS IAM permissions.
Server-side tool execution with AgentCore Gateway supports all models available through the Amazon Bedrock Responses API. Customers define tools using the MCP server connector type with their gateway ARN, and Amazon Bedrock handles tool discovery, model-driven tool selection, execution, and result injection automatically. Multiple tool calls within a single conversation turn are supported, and tool results are streamed back to the client in real time.
This capability is generally available in all AWS Regions where both Amazon Bedrock’s Responses API and Amazon Bedrock AgentCore Gateway are available. To get started, visit the Amazon Bedrock documentation or the Amazon Bedrock console. For more information about Amazon Bedrock AgentCore Gateway, see the AgentCore documentation.
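To make the flow concrete, the sketch below shows what such a request body might look like. The field names follow the OpenAI-style Responses API `mcp` tool type; the gateway ARN, model ID, and exact field names are assumptions for illustration, and Bedrock's actual schema may differ:

```python
import json

# Hypothetical Responses API request with an AgentCore Gateway as a
# server-side tool connector. All identifiers are illustrative, and the
# field names are assumptions, not the confirmed Bedrock schema.
request = {
    "model": "anthropic.claude-sonnet-4-20250514-v1:0",
    "input": "What is the current status of order 1234?",
    "tools": [
        {
            "type": "mcp",  # MCP server connector type
            "server_label": "order-tools",
            # Bedrock discovers the gateway's tools, presents them to the
            # model, and executes any selected tool call server-side.
            "server_url": "arn:aws:bedrock-agentcore:us-east-1:123456789012:gateway/my-gateway",
        }
    ],
}

print(json.dumps(request, indent=2))
```

The key point is that the client sends one request; discovery, tool selection, execution, and result injection all happen on the Bedrock side.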
Source: aws.amazon.com

From the Captain’s Chair: Kristiyan Velkov

Docker Captains are leaders from the developer community who are both experts in their field and passionate about sharing their Docker knowledge with others. “From the Captain’s Chair” is a blog series where we get a closer look at one Captain to learn more about them and their experiences.

Today we are interviewing Kristiyan Velkov, a Docker Captain and Front-end Tech Lead with over a decade of hands-on experience in web development and DevOps.

Kristiyan builds applications with React, Next.js, Angular, and Vue.js, and designs modern front-end architectures. Over the years, Docker has become a core part of his daily work — used as a practical tool for building, testing, and deploying front-end applications in a predictable way. 

He focuses on production-ready Docker setups for front-end teams, including clean Dockerfiles, multi-stage builds, and CI/CD pipelines that work consistently across environments. His work is grounded in real projects and long-term maintenance, not theoretical examples.

Kristiyan is the author of four technical books, one of which is “Docker for Front-end Developers”. He actively contributes to open-source projects and is the person behind several official Docker guides, including guides for React.js, Node.js, Angular, Vue.js, and related front-end technologies.

Through writing, open source, speaking, and mentoring, he helps developers understand Docker better — explaining not just how things work, but why they are done a certain way.

As a Docker Captain, his goal is to help bridge the gap between front-end developers and DevOps teams.

Can you share how you first got involved with Docker?

I first started using Docker because I was tired of making the excuse “it works on my machine”. We didn’t have many DevOps people, and the ones we had didn’t really know the front-end or how the application was supposed to behave. At the same time, I didn’t know Docker. That made communication difficult and problems hard to debug.

As a front-end developer, I initially thought Docker wasn’t something I needed to care about. It felt like a DevOps concern. But setting up projects and making sure they worked the same everywhere kept causing issues. Docker solved that problem and completely changed the way I work.

At first, Docker wasn’t easy to understand. But the more I used it, the more I saw how much simpler things became. My projects started running the same across environments, and that consistency saved time and reduced stress.

Over time, my curiosity grew and I went deeper — learning how to design well-structured, production-ready Dockerfiles, optimize build performance, and integrate Docker into CI/CD pipelines following clear, proven best practices, not just setups that work, but ones that are reliable and maintainable long term.

For me, Docker has never been about trends. I started using it to reduce friction between teams and avoid recurring problems, and it has since become a core part of my daily work.

What inspired you to become a Docker Captain?

What inspired me to become a Docker Captain was the desire to share the real struggles I faced as a front-end developer. When I first started using Docker, I wasn’t looking for recognition or titles — I was just trying to fix the problems that were slowing me down, and it was hard to explain to DevOps engineers what should work and why without knowing their terminology.

I clearly remember how exhausting it was to set up projects and how much time I wasted dealing with environment issues instead of real front-end work. Docker slowly changed the way I approached development and gave me a more reliable way to build and ship applications.

At some point, I realized I wasn’t the only one in this situation. Many front-end developers were avoiding Docker because they believed it was only meant for back-end or DevOps engineers. I wanted to change that perspective and show that Docker can be practical and approachable for front-end developers as well.

That’s also why I wrote the book Docker for Front-end Developers, where I explain Docker from a front-end perspective, using a real React.js application and walking through how to containerize and deploy it to AWS, with practical code examples and clear diagrams. The goal was to make Docker understandable and useful for people who build user-facing applications every day.

I also contributed official Docker guides for React.js, Angular, and Vue.js — not because I had all the answers, but because I remembered how difficult it felt when there was no clear guidance.

For me, becoming a Docker Captain was never about a title. It has always been about sharing what I’ve learned, building a bridge between front-end developers and containerization, and hopefully making someone else’s journey a little easier than mine.

What are some of your personal goals for the next year?

Over the next year, I want to continue writing books. Writing helps me structure my own knowledge, go deeper into the topics I work with, and hopefully make things clearer for other developers as well. I also want to push myself to speak at more conferences. Public speaking doesn’t come naturally to me, but it’s a good way to grow and to share real, hands-on experience with a broader audience and meet amazing people. I plan to keep contributing to open-source projects and maintaining the official Docker guides I’ve written for Angular, Vue.js, and React.js. People actively use these guides, so keeping them accurate and up to date is important to me. Alongside that, I’ll continue writing on my blog and newsletter, sharing practical insights from day-to-day work.

If you weren’t working in tech, what would you be doing instead?

If I weren’t working in tech, I’d probably be a lawyer — I’m a law graduate. Studying law gave me a strong sense of discipline and a structured approach to problem-solving, which I still rely on today. Over time, though, I realized that technology gives me a different kind of fulfillment. It allows me to build things, create practical solutions, and share knowledge in a way that has a direct and visible impact on people. I don’t think anything else would give me the same satisfaction. In tech, I get to solve problems every day, write code, contribute to open-source projects, write books, and share what I’ve learned with the community. That mix of challenge, creativity, and real impact is hard to replace. Law could have been my profession, but technology is where I truly feel at home.

Can you share a memorable story from collaborating with the Docker community?

One of my most memorable experiences with the Docker community was publishing my open-source project frontend-prod-dockerfiles, which provides production-ready Dockerfiles for most of the popular front-end applications. I originally created it to solve a gap I kept seeing: front-end developers didn’t have a clear, reliable reference for well-structured and optimized Dockerfiles.

The response from the community was better than I expected. Developers from all over the world started using it, sharing feedback and suggesting ideas I hadn’t even considered.

That experience was a strong reminder of what makes the Docker community special — openness, collaboration, and a genuine willingness to help each other grow.

The Docker Captains Conference in Turkey (2025) was amazing. It was well organized, inspiring, and full of great energy. I met great people who share the same passion for Docker.

What’s your favorite Docker product or feature right now, and why?

Right now, my favorite Docker features are Docker Offload and Docker Model Runner.

Offload is a game-changer because it lets me move heavy builds and GPU workloads to secure cloud resources directly from the same Docker CLI/Desktop flow I already use. I don’t have to change the way I work locally, but I get cloud-scale speed whenever I need it.

Model Runner lets me run open models locally in just minutes. And when I need more power, I can pair it with Offload to scale out to GPUs.

Can you walk us through a tricky technical challenge you solved recently?

A recent challenge I dealt with was reviewing Dockerfiles that had been generated with AI. A lot of developers were starting to use AI in our company, but I noticed some serious problems right away: images that were too large, broken caching, hardcoded environment variables, and containers running as root. It was a good reminder that while AI can help, we still need to carefully review and apply best practices when it comes to security and performance.
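The problems he lists have well-known fixes. As an illustration (not taken from his projects — base images, paths, and ports are assumptions), a minimal multi-stage Dockerfile for a front-end build might look like this:

```dockerfile
# Build stage: full toolchain, discarded from the final image
FROM node:20-alpine AS build
WORKDIR /app
# Copy lockfiles first so the dependency layer stays cached across code changes
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: small, unprivileged nginx image (runs as non-root, listens on 8080)
FROM nginxinc/nginx-unprivileged:alpine
COPY --from=build /app/dist /usr/share/nginx/html
# Configuration is injected at runtime (env vars or mounted files),
# not hardcoded into the image
EXPOSE 8080
```

This addresses each issue in turn: the multi-stage split keeps the final image small, the lockfile-first COPY preserves layer caching, configuration stays out of the image, and the unprivileged base avoids running as root.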

What’s one Docker tip you wish every developer knew?

One tip I wish every developer knew is that Docker is for everyone, not just DevOps or back-end developers. Front-end developers can benefit just as much by using Docker to create consistent environments, ship production-ready builds, and collaborate more smoothly with their teams. It’s not just infrastructure, it’s a productivity boost for the whole stack. I’ve also seen a growing number of tech jobs that require this kind of basic knowledge, which is positive overall.

If you could containerize any non-technical object in real life, what would it be and why?

If I could containerize any non-technical object, it would be a happy day. I’d package a perfectly joyful day and redeploy it whenever I needed it: no wasted hours, no broken routines, just a consistent, repeatable “build” of happiness.

Where can people find you online?

On LinkedIn, X (x.com), and my website. I regularly write technical articles on Medium and share insights in my newsletter Front-end World. My open-source projects, including production-ready Dockerfiles for front-end frameworks, are available on GitHub.

Rapid Fire Questions

Cats or Dogs?

Both, I love animals.

Morning person or night owl?

Morning person for study, night owl for work.

Favorite comfort food?

Pasta.

One word friends would use to describe you?

Persistent

A hobby you picked up recently?

Hiking, I love nature.

Source: https://blog.docker.com/feed/