Docker State of App Dev: Dev Ex & Productivity 

Report: What’s helping devs thrive — and what’s still holding them back? 

A look at how culture, tooling, and habits are shaping the developer experience today, per Docker’s 2025 State of Application Development Survey.

Great culture, better tools — but developers often still feel stuck. From pull requests stuck in review to tasks without clear estimates, the inner loop remains cluttered with surprisingly persistent friction points. This year’s data maps the disconnect between what developers need, where they’re blocked, and how better tooling and cultural support can keep velocity on track.

Here are six key insights into developer experience and productivity from Docker’s annual State of Application Development Survey, based on responses from over 4,500 industry professionals.

1. How devs learn — and what’s changing

Self-guided learning is on the upswing. Across all industries, fully 85% of respondents turn to online courses or certifications, far outpacing traditional sources like school (33%), books (25%), or on-the-job training (25%). 

Among IT folks, the picture is more nuanced. School is still the top venue for learning to code (65%, up from 57% in our 2024 survey), but online resources are also trending upward. Some 63% of IT pros learned coding skills via online resources (up from 54% in our 2024 survey) and 57% favored online courses or certifications (up from 45% in 2024).

Note: For this year’s report, we surveyed over three times more users across a broader spectrum of industries than for our more IT-focused 2024 report.

As for how devs prefer to learn, reading documentation tops the list, just as in last year’s report, despite the rise of new and interactive forms of learning. Some 29% say they lean on documentation, edging out videos and side projects (28% each) and slightly ahead of structured online training (26%).

AI tools play a relatively minor role in how respondents learn: GitHub Copilot is cited as a preferred learning method by just 13% overall, and by only 9% of IT pros.

2. Containers: the great divide?

Among IT pros, container usage soared to 92% — up from 80% in our 2024 survey. Zoom out to a broader view across industries, however, and adoption appears considerably lower. Just 30% of developers say they use containers in any part of their workflow. 

Why the gap? Differences in app structure may offer an explanation: IT industry respondents work with microservice-based architectures more often than those in other industries (68% versus 31%). So the higher container adoption may stem from IT pros’ need for modularity and scalability — which containers provide in spades.

And among container users, needs are evolving. They want better tools for time estimation (31%), task planning (18%), and monitoring/logging (15%) — stubborn pain points across the software lifecycle.

3. An equal-opportunity headache: estimating time

No matter the role, estimating how long a task will take is the most consistent pain point across the board. Whether you’re a front-end developer (28%), data scientist (31%), or a software decision-maker (49%), precision in time planning remains elusive.

Other top roadblocks? Task planning (26%) and pull-request review (25%) are slowing teams down. Interestingly, where people say they need better tools doesn’t always match where they’re getting stuck. Case in point: testing solutions and Continuous Delivery (CD) come up often when devs talk about tooling gaps, even though they’re not always flagged as blockers.

4. Productivity by persona: different hats, same struggles

When you break it down by role, some unique themes emerge:

Experienced developers struggle most with time estimation (42%).

Engineering managers face a three-way tie: planning, time estimation, and designing from scratch (28% each).

Data scientists are especially challenged by CD (21%) — a task not traditionally in their wheelhouse.

Front-end devs, surprisingly, list writing code (28%) as a challenge, closely followed by CI (26%).

Across personas, a common thread stands out: even seasoned professionals are grappling with foundational coordination tasks — not the “hard” tech itself, but the orchestration around it.

5. Tools vs. culture: two sides of the experience equation

On the tooling side, the biggest callouts for improvement include:

Time estimation (22%)

Task planning (18%)

Designing solutions from scratch (17%)

But productivity isn’t just about tools — it’s deeply cultural. When asked what’s working well, developers pointed to work-life balance (39%), location flexibility such as work from home policies (38%), and flexible hours (37%) as top cultural strengths.

The weak spots? Career development (38%), recognition (36%), and meaningful work (33%). In other words: developers like where, when, and how they work, but not always why.

6. What’s easy? What’s not?

While the dev world is full of moving parts, a few areas stand out as surprisingly unchallenging:

Editing config files (8%)

Debugging in dev (8%)

Writing config files (7%)

Contrast that with the most taxing areas:

Troubleshooting in production (9%)

Debugging in production (9%)

Security-related tasks (8%)

It’s a reminder that production is still where the stress — and the stakes — are highest.

Bottom line:

Developer productivity isn’t about just one thing. It’s the compound effect of better tools, smarter learning, sharper planning — and yes, a healthy team culture. For orgs to excel, they need to invest not just in platforms, but also in people. Because when you improve the experience, you unlock the performance.

Source: https://blog.docker.com/feed/

Using Gordon to Containerize Your Apps and Work with Containers

These days, almost every tech company is looking for ways to integrate AI into its apps and workflows, and Docker is no exception. They’ve been rolling out some impressive AI capabilities across their products. This is my first post as a Docker Captain, and in it I want to shine a spotlight on a feature that, in my opinion, hasn’t gotten nearly enough attention: Docker’s AI Agent Gordon (also known as Docker AI), which is built into Docker Desktop and the CLI.

Gordon is really helpful when it comes to containerizing applications. Not only does it help you understand how to package your app as a container, but it also reduces the overhead of figuring out dependencies, runtime configs, and other pieces that add to a developer’s daily cognitive load. The best part? Gordon doesn’t just guide you with responses; it can also generate or update the necessary files for you.

The Problem: Containerizing apps and optimizing containers isn’t always easy

Containerizing apps can range from super simple to a bit tricky, depending on what you’re working with. If your app has a single runtime like Node.js, Python, or .NET Core, with clearly defined dependencies and no external services, containerizing it will be straightforward.

A basic Dockerfile will usually get you up and running without much effort. But once you start adding complexity, like a backend, frontend, database, and caching layer, you’re looking at a multi-container app. At this point, you might be dealing with additional Dockerfile configurations and potentially a Docker Compose setup. That’s where things can start to get challenging.
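To ground the simple case, here is a minimal sketch of the kind of basic Dockerfile that gets a single-runtime app going. It assumes a Node.js app with an entry point of server.js listening on port 3000; the file names and port are assumptions, so adjust them for your own project.

```dockerfile
# Minimal single-runtime image (illustrative; file names and port are assumptions)
FROM node:22-slim

WORKDIR /app

# Install dependencies first so this layer caches between code changes
COPY package*.json ./
RUN npm ci --omit=dev

# Copy in the application source
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```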

This is where Gordon shines. It’s helpful in containerizing apps and can even handle multi-service container setups, guiding you through what’s needed and generating the supporting config files, such as Dockerfiles and Compose files, to get you going.
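To make the multi-container case concrete, here is a hedged sketch of the kind of Compose file such a setup might need. The service names, build contexts, ports, and credentials are all assumptions for illustration, not output from Gordon.

```yaml
# Illustrative Compose sketch: backend, frontend, database, and caching layer
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    build: ./backend
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7

volumes:
  db-data:
```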

Optimizing containers can be a headache too

Beyond just containerizing, there’s also the need to optimize your containers for performance, security, and image size. And let’s face it, optimizing can be tedious. You need to know what base images to use, how to slim them down, how to avoid unnecessary layers, and more.

Gordon can help here too. It provides optimization suggestions, shows you how to apply best practices like multi-stage builds or removing dev dependencies, and helps you create leaner, more secure images.
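As one example of what such a suggestion can look like, here is a sketch of a multi-stage build for a Node.js app. The build stage keeps the full toolchain and dev dependencies out of the final image; the stage names and paths are assumptions for illustration.

```dockerfile
# Stage 1: build with the full toolchain and dev dependencies
FROM node:22 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only production artifacts on a slim base image
FROM node:22-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/server.js"]
```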

Why not just use general-purpose Generative AI?

Sure, general-purpose AI tools like ChatGPT, Claude, Gemini, etc. are great and I use them regularly. But when it comes to containers, they can lack the context needed for accurate and efficient help. Gordon, on the other hand, is purpose-built for Docker. It has access to Docker’s ecosystem and has been trained on Docker documentation, best practices, and the nuances of Docker tooling. That means its recommendations are more likely to be precise and aligned with the latest standards.

Walkthrough of Gordon

Gordon can help with containerizing applications, optimizing your containers, and more. Gordon is still a Beta feature; to start using it, you need Docker Desktop version 4.38 or later. Gordon is powered by Large Language Models (LLMs), and it goes beyond prompt and response: as an AI agent, it can perform certain tasks for you. Gordon can access your local files and local images when you give it permission, and it will prompt you for access when a task requires it.

Please note, the examples I will show in this post are based on a single working session. Now, let’s dive in and start to explore Gordon.

Enabling Gordon / Docker AI

To turn Gordon on, go to Settings > Beta features and check the Enable Docker AI box, as shown in the following screenshot.

Figure 1: Screenshot of where to enable Docker AI in beta features

Accept the terms. The AI in Docker Desktop comes in two forms. The first is accessed through the Docker Desktop UI and is known as Gordon. The second, Docker AI, is accessed through the Docker CLI; you activate it by typing the docker ai command. I will demonstrate this later on in this blog post.

Figure 2: Screenshot of Docker AI terms acceptance dialog box

Exploring Gordon in Docker Desktop

Now Gordon will appear in your Docker Desktop UI, where you can prompt it just like any generative AI tool. Gordon also offers example prompts to help you get started working with it.

You can access Gordon throughout Docker Desktop by clicking on the AI icon as shown in the following screenshot.

Figure 3: Screenshot of Docker Desktop interface showing the AI icon for Gordon

When you click on the AI icon, a Gordon prompt box appears along with suggested prompts, as shown in the following screenshot. The suggestions are context-aware, changing based on the object the AI icon sits next to.

Figure 4: Screenshot showing Gordon’s suggestion prompt box in Docker Desktop UI

Here is another example of Docker AI suggestions being context-aware based on what area of Docker Desktop you are in. 

Figure 5: Screenshot showing Docker AI context-specific suggestions

Another common use case for Gordon is listing local images and using AI to work with them. You can see this in the following set of screenshots. Notice that Gordon will prompt you for permission before showing your local images.

Figure 6: Screenshot showing Gordon referencing local images 

You can also prompt Gordon to take action. As shown in the following screenshot, I asked Gordon to run one of my images.

Figure 7: Screenshot showing Gordon prompts 

If it can’t perform the action, it will attempt to help you. 

Figure 8: Screenshot showing Gordon prompt response to failed request 

Another cool use of Gordon is having it explain a container image to you. When you ask this, Gordon will ask you to select the directory where the Dockerfile is and for permission to access it, as shown in the following screenshot.

Figure 9: Screenshot showing Gordon’s request for particular directory access 

After you give it access to the directory where the Dockerfile is, it will then break down what’s in the Dockerfile.

Figure 10: Screenshot showing Gordon’s response to explaining a Dockerfile 

I followed up with a prompt asking Gordon to display the Dockerfile’s contents, and it did a good job of explaining them, as shown in the following screenshot.

Figure 11: Screenshot showing Gordon’s response regarding Dockerfile contents

Exploring Gordon in the Docker Desktop CLI

Let’s take a quick tour of Gordon in the CLI, where it is referred to as Docker AI. To work with Docker AI, launch the Docker CLI as shown in the following screenshot.

Figure 12: Screenshot showing how to launch Docker AI from the CLI 

Once in the CLI, you can type “docker ai” and it will bring you into the chat experience so you can prompt Gordon. In my example, I asked Gordon about one of my local images; you can see that it asked me for permission.
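For reference, a session might look like the following; the image name in the one-off prompt is a placeholder, and the exact output will vary.

```shell
# Enter the interactive chat experience
docker ai

# Or ask a one-off question directly (the prompt and image name are illustrative)
docker ai "What base image does my-app:latest use?"
```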

Figure 13: Screenshot showing Docker CLI request for access

Next, I asked Docker AI to list all of my local images as shown in the following screenshot. 

Figure 14: Screenshot showing Docker CLI response to display local images 

I then tested pulling an image using Docker AI. As you can see in the following screenshot, Gordon pulled a Node.js image for me!

Figure 15: Screenshot showing Docker CLI pulling a Node.js image

Containerizing an application with Gordon

Now let’s explore the experience of containerizing an application using Gordon.

I started by clicking on the example for containerizing an application. Gordon then prompted me for the directory where my application code is. 

Figure 16: Screenshot showing where to enable access to directory for containerizing an application 

I pointed it to my app’s directory and gave it permission. It then started to analyze and containerize my app, picking up the language and reading through my app’s README file.

Figure 17: Screenshot showing Gordon starting to analyze and containerize app 

You can see it understood that the app was written in JavaScript and worked through the packages and dependencies.

Figure 18: Screenshot showing final steps of Gordon processing

Gordon understood that my app has a backend, frontend, and a database, and recognized from this that I would need a Docker Compose file.

Figure 19: Screenshot showing successful completion of steps to complete the Dockerfiles

In the following screenshot you can see the Docker-related files needed for my app, all of which Gordon created.

Figure 20: Screenshot showing files produced from Gordon 

Gordon created the Dockerfile (on the left) and a Compose YAML file (on the right), even picking up that I needed a Postgres database for this application.

Figure 21: Screenshot showing Dockerfile and Compose yaml file produced from Gordon

I then took it a step further and asked Gordon to build and run the container for my application using the prompt “Can you build and run this application with compose?” It created the Docker Compose file, built the images, and ran the containers!

Figure 22: Screenshot showing completed containers from Gordon

Conclusion

I hope you picked up some useful insights about Docker and discovered one of its lesser-known AI features in Docker Desktop. We explored what Gordon is, how it compares to general-purpose generative AI tools like ChatGPT, Claude, and Gemini, and walked through use cases such as containerizing an application and working with local images. We also touched on how Gordon can support developers and IT professionals who work with containers. If you haven’t already, I encourage you to enable Gordon and take it for a test run. Thanks for reading and stay tuned for more blog posts coming soon.
Source: https://blog.docker.com/feed/