Containerizing a Legendary PetClinic App Built with Spring Boot

Per the latest Health for Animals Report, over half of the global population (billions of households) is estimated to own a pet. In the U.S. alone, this is true for 70% of households.
A growing pet population means a greater need for veterinary care. In a survey by the World Small Animal Veterinary Association (WSAVA), three-quarters of veterinary associations shared that subpar access to veterinary medical products hampered their ability to meet patient needs and provide quality service.
 
The Spring Framework team is taking on this challenge with its PetClinic app. The Spring PetClinic is an open source sample application developed to demonstrate the database-oriented capabilities of Spring Boot, Spring MVC, and the Spring Data Framework. It’s based on this Spring stack and built with Maven.
PetClinic’s official version also showcases how these technologies work with Spring Data JPA. Overall, the Spring PetClinic community maintains nine PetClinic app forks and 18 repositories under Docker Hub. To learn how the PetClinic app works, check out Spring’s official resource.
Deploying the PetClinic app is simple. You can clone the repository, build a JAR file, and run it from the command line:
git clone https://github.com/dockersamples/spring-petclinic-docker
cd spring-petclinic-docker
./mvnw package
java -jar target/*.jar
 
You can then access PetClinic at http://localhost:8080 in your browser:
 

 
Why does the PetClinic app need containerization?
The biggest challenge developers face with Spring Boot apps like PetClinic is concurrency — or the need to do too many things simultaneously. Spring Boot apps may also unnecessarily increase deployment binary sizes with unused dependencies. This creates bloated JARs that may increase your overall application footprint while impacting performance.
Other challenges include a steep learning curve and the complexity of building a customized logging mechanism. Developers have been seeking solutions to these problems. Unfortunately, even the Docker Compose file within Spring Boot’s official repository only shows how to containerize the database, without extending this to the complete application.
How can you offset these drawbacks? Docker simplifies and accelerates your workflows by letting you freely innovate with your choice of tools, application stacks, and deployment environments for each project. You can run your Spring Boot artifact directly within Docker containers. This lets you quickly create microservices. This guide will help you completely containerize your PetClinic solution.
Containerizing the PetClinic application
Docker helps you containerize your Spring app — letting you bundle together your complete Spring Boot application, runtime, configuration, and OS-level dependencies. This includes everything needed to ship a cross-platform, multi-architecture web application. 
We’ll explore how to easily run this app within a Docker container, using a Docker Official image. First, you’ll need to download Docker Desktop and complete the installation process. This gives you an easy-to-use UI and includes the Docker CLI, which you’ll leverage later on.
Docker uses a Dockerfile to specify each image’s layers. Each layer stores important changes stemming from your base image’s standard configuration. Let’s create an empty Dockerfile in our Spring project.
Building a Dockerfile
A Dockerfile is a text document that contains the instructions to assemble a Docker image. When we build our image by executing the docker build command, Docker reads these instructions, executes them, and creates a Docker image as a result.
Let’s walk through the process of creating a Dockerfile for our application. First create the following empty Dockerfile in the root of your Spring project.
touch Dockerfile
 
You’ll then need to define your base image.
The upstream OpenJDK image no longer provides a JRE, so no official JRE images are produced. The official OpenJDK images just contain “vanilla” builds of the OpenJDK provided by Oracle or the relevant project lead. That said, we need an alternative!
One of the most popular official images with a build-worthy JDK is Eclipse Temurin. The Eclipse Temurin project provides code and processes that support the building of runtime binaries and associated technologies. Temurin is high-performance, enterprise-caliber, and cross-platform.
FROM eclipse-temurin:17-jdk-jammy
 
Next, let’s quickly create a directory to house our image’s application code. This acts as the working directory for your application:
WORKDIR /app
 
The following COPY instruction copies the Maven wrapper and our pom.xml file from the host machine to the container image. The COPY command takes two parameters. The first tells Docker which file(s) you would like to copy into the image. The second tells Docker where you want those files to be copied. We’ll copy everything into our working directory called /app.

COPY .mvn/ .mvn
COPY mvnw pom.xml ./

 
Once we have our pom.xml file inside the image, we can use the RUN instruction to execute the command ./mvnw dependency:resolve. This works identically to running ./mvnw dependency:resolve (or mvn dependency:resolve) locally on our machine, but this time the dependencies will be installed into the image.

RUN ./mvnw dependency:resolve

 
The next thing we need to do is to add our source code into the image. We’ll use the COPY command just like we did with our pom.xml  file above.

COPY src ./src

 
Finally, we should tell Docker what command we want to run when our image is executed inside a container. We do this using the CMD instruction.

CMD ["./mvnw", "spring-boot:run"]

 
Here’s your complete Dockerfile:

FROM eclipse-temurin:17-jdk-jammy
WORKDIR /app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:resolve
COPY src ./src
CMD ["./mvnw", "spring-boot:run"]

 
Create a .dockerignore file
To increase build performance, and as a general best practice, we recommend creating a  .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain just one line:
target
 
This line excludes the target directory — which contains output from Maven — from Docker’s build context. There are many good reasons to carefully structure a .dockerignore file, but this simple file is good enough for now.
So, what’s this build context and why’s it essential? The docker build command builds Docker images from a Dockerfile and a context. This context is the set of files located in your specified PATH or URL. The build process can reference any of these files.
Meanwhile, the compilation context is where the developer works. It could be a folder on Mac, Windows, or a Linux directory. This directory contains all necessary application components like source code, configuration files, libraries, and plugins. With the .dockerignore file, you can determine which of the following elements like source code, configuration files, libraries, plugins, etc. to exclude while building your new image.
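As a project accumulates artifacts, the .dockerignore file usually grows with it. Here is a slightly fuller sketch; the entries beyond target are illustrative and depend on what actually lives in your repository:

```
# Maven build output
target

# VCS metadata and local editor settings (illustrative entries)
.git
.idea
*.iml
```

Excluding these keeps the build context small and avoids invalidating the Docker layer cache when files irrelevant to the image change.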
Building a Docker image
Let’s build our first Docker image:

docker build --tag petclinic-app .

 
Once the build process is completed, you can list out your images by running the following command:

$ docker images
REPOSITORY        TAG            IMAGE ID       CREATED             SIZE
petclinic-app     latest         76cb88b61d39   About an hour ago   559MB
eclipse-temurin   17-jdk-jammy   0bc7a4cbe8fe   5 weeks ago         455MB

 
With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit tests. A separate image holds the application runtime. This makes the final image more secure and smaller in size (since it doesn’t contain any development or debugging tools).
Multi-stage Docker builds are a great way to ensure your builds are 100% reproducible and as lean as possible. You can create multiple stages within a Dockerfile and control how you build that image.
Spring Boot uses a “fat JAR” as its default packaging format. When we inspect the fat JAR, we see that the application is a very small portion of the entire JAR. This portion changes most frequently. The remaining portion contains your Spring Framework dependencies. Optimization typically involves isolating the application into a separate layer from the Spring Framework dependencies. You only have to download the dependencies layer — which forms the bulk of the fat JAR — once. It’s also cached in the host system.
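Spring Boot 2.3 and later can perform this split itself via its layered-JAR mode (java -Djarmode=layertools). The following sketch shows how that could look; the stage names are illustrative, and this is an alternative to the multi-stage Dockerfile used in the rest of this tutorial:

```dockerfile
# Extract the fat JAR into layers: dependencies change rarely, application code often.
FROM eclipse-temurin:17-jre-jammy AS extract
WORKDIR /app
COPY target/spring-petclinic-*.jar app.jar
RUN java -Djarmode=layertools -jar app.jar extract

FROM eclipse-temurin:17-jre-jammy
WORKDIR /app
# Copy the rarely-changing layers first so they stay cached across rebuilds.
COPY --from=extract /app/dependencies/ ./
COPY --from=extract /app/spring-boot-loader/ ./
COPY --from=extract /app/snapshot-dependencies/ ./
COPY --from=extract /app/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```

With this layout, rebuilding after a source-only change re-uploads just the small application layer rather than the whole fat JAR.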
In the first stage, the base target is building the fat JAR. In the second stage, it’s copying the extracted dependencies and running the JAR:

FROM eclipse-temurin:17-jdk-jammy as base
WORKDIR /app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:resolve
COPY src ./src

FROM base as development
CMD ["./mvnw", "spring-boot:run", "-Dspring-boot.run.profiles=mysql", "-Dspring-boot.run.jvmArguments=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000"]
FROM base as build
RUN ./mvnw package

FROM eclipse-temurin:17-jre-jammy as production
EXPOSE 8080
COPY --from=build /app/target/spring-petclinic-*.jar /spring-petclinic.jar
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/spring-petclinic.jar"]

 
The first image eclipse-temurin:17-jdk-jammy is labeled base. This helps us refer to this build stage in other build stages. Next, we’ve added a new stage labeled development. We’ll leverage this stage while writing Docker Compose later on.
Notice that this Dockerfile has been split into multiple stages. The later stages contain the build configuration and the source code for the application, while the earlier stage contains the complete Eclipse JDK image itself. This small optimization also saves us from copying the target directory to a Docker image, even a temporary one used for the build. Our final image is just 318 MB, compared to the first stage build’s 567 MB size.
Now, let’s rebuild our image and run our development build. We’ll run the docker build command as above, but this time we’ll add the --target development flag so that we specifically run the development build stage.

docker build -t petclinic-app --target development .

docker images
REPOSITORY      TAG      IMAGE ID       CREATED             SIZE
petclinic-app   latest   05a13ed412e0   About an hour ago   313MB
 
Using Docker Compose to develop locally
In this section, we’ll create a Docker Compose file to start our PetClinic and the MySQL database server with a single command.
Here’s how you define your services in a Docker Compose file:

services:
  petclinic:
    build:
      context: .
      dockerfile: Dockerfile
      target: development
    ports:
      - 8000:8000
      - 8080:8080
    environment:
      - SERVER_PORT=8080
      - MYSQL_URL=jdbc:mysql://mysqlserver/petclinic
    volumes:
      - ./:/app
    depends_on:
      - mysqlserver
  mysqlserver:
    image: mysql/mysql-server:8.0
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_USER=petclinic
      - MYSQL_PASSWORD=petclinic
      - MYSQL_DATABASE=petclinic
    volumes:
      - mysql_data:/var/lib/mysql
      - mysql_config:/etc/mysql/conf.d
volumes:
  mysql_data:
  mysql_config:
 
You can clone the repository or download the YAML file directly from here.
This Compose file is super convenient, as we don’t have to enter all the parameters to pass to the docker run command. We can declaratively do that using a Compose file.
Another cool benefit of using a Compose file is that we’ve set up DNS resolution to use our service names. As a result, we’re now able to use mysqlserver in our connection string. We use mysqlserver since that’s how we’ve named our MySQL service in the Compose file.
Now, let’s start our application and confirm that it’s running properly:

docker compose up -d --build

 
We pass the --build flag so Docker will compile our image and start our containers. Your terminal output will resemble what’s shown below if this is successful:
 

 
Next, let’s test our API endpoint. Run the following curl command:
$ curl --request GET \
  --url http://localhost:8080/vets \
  --header 'content-type: application/json'

 
You should receive the following response:

{
  "vetList": [
    {
      "id": 1,
      "firstName": "James",
      "lastName": "Carter",
      "specialties": [],
      "nrOfSpecialties": 0,
      "new": false
    },
    {
      "id": 2,
      "firstName": "Helen",
      "lastName": "Leary",
      "specialties": [
        {
          "id": 1,
          "name": "radiology",
          "new": false
        }
      ],
      "nrOfSpecialties": 1,
      "new": false
    },
    {
      "id": 3,
      "firstName": "Linda",
      "lastName": "Douglas",
      "specialties": [
        {
          "id": 3,
          "name": "dentistry",
          "new": false
        },
        {
          "id": 2,
          "name": "surgery",
          "new": false
        }
      ],
      "nrOfSpecialties": 2,
      "new": false
    },
    {
      "id": 4,
      "firstName": "Rafael",
      "lastName": "Ortega",
      "specialties": [
        {
          "id": 2,
          "name": "surgery",
          "new": false
        }
      ],
      "nrOfSpecialties": 1,
      "new": false
    },
    {
      "id": 5,
      "firstName": "Henry",
      "lastName": "Stevens",
      "specialties": [
        {
          "id": 1,
          "name": "radiology",
          "new": false
        }
      ],
      "nrOfSpecialties": 1,
      "new": false
    },
    {
      "id": 6,
      "firstName": "Sharon",
      "lastName": "Jenkins",
      "specialties": [],
      "nrOfSpecialties": 0,
      "new": false
    }
  ]
}

 
Conclusion
Congratulations! You’ve successfully learned how to containerize a PetClinic application using Docker. With a multi-stage build, you can easily minimize the size of your final Docker image and improve runtime performance. Using a single YAML file, we demonstrated how Docker Compose helps you easily build and deploy your PetClinic app in seconds. With just a few extra steps, you can apply this tutorial while building applications with much greater complexity.
Happy coding.
References

Build Your Java Image
Kickstart your Spring Boot Application Development
Spring PetClinic Application Repository

Quelle: https://blog.docker.com/feed/

Virtual Desktop Support, Mac Permission Changes, & New Extensions in Docker Desktop 4.11

Docker Desktop 4.11 is now live! With this release, we added some highly-requested features designed to help make developers’ lives easier and help security-minded organizations breathe easier.
Run Docker Desktop for Windows in Virtual Desktop Environments
Docker Desktop for Windows is officially supported on VMware ESXi and Azure Windows VMs for our Docker Business subscribers. Now you can use Docker Desktop on your virtual environments and get the same experience as running it natively on Windows, Mac, or Linux machines.
Currently, we support virtual environments where the host hypervisors are VMware ESXi or Windows Hyper-V — both on-premises and in the cloud. Citrix Hypervisor support is also coming soon. As a Docker Business subscriber, you’ll receive dedicated support for running Docker Desktop in your virtual environments.
To learn more about running Docker Desktop for Windows in a virtual environment, please visit our documentation.
Changes to permission requirements on Docker Desktop for Mac
The first time you run Docker Desktop for Mac, you have to authenticate as root in order to install a privileged helper process. This process is needed to perform a limited set of privileged operations and runs in the background on the host machine while Docker Desktop is running.
In Docker Desktop 4.11, you no longer have to run this privileged helper service at all. Pass the --user flag to the install command to set everything up in advance. Docker Desktop will then run without needing root on the Mac.
For more details on Docker Desktop for Mac’s permission requirements, check out our documentation.
New Extensions in the Marketplace
We’re excited to announce the addition of two new extensions to the Extensions Marketplace:

vcluster  – Create and manage virtual Kubernetes clusters using vcluster. Learn more about vcluster here.
PGAdmin4 – Quickly admin and monitor PostgreSQL databases with PGAdmin4 tools. Learn more about PGAdmin here.

Customize your Docker Desktop Theme
Prefer dark mode on the Docker Dashboard? With 4.11 you can now customize your preference or have it respect your system settings. Go to settings in the upper right corner to try it for yourself.

Fixing the Frequency of Docker Desktop Feedback Prompts
Thanks to all your feedback, we identified a bug that was asking some users for feedback too frequently. Docker Desktop should now only request your feedback twice a year.
As we outlined here, you’ll be asked for feedback 30 days after the product is installed. You can choose to give feedback or decline. You then won’t be asked for a rating again for 180 days.
These scores help us understand user experience trends so we can keep improving Docker Desktop — and your comments have helped us make changes like this. Thanks for helping us fix this for everyone!
Have any feedback for us?
Upvote, comment, or submit new ideas via either our in-product links or our public roadmap. Check out our release notes to learn more about Docker Desktop 4.11.
Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey.
Quelle: https://blog.docker.com/feed/

Docker Captain Take 5 — Julien Maitrehenry

Docker Captain Take 5 — Julien Maitrehenry

Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Julien who recently joined the Captains Program. He is a developer and devops at Kumojin and is based in Quebec City.

How/when did you first discover Docker?
I don’t remember how, but, back in 2014, I was working in a PHP shop, and we had our dev environment running inside a VM (with Vagrant). But, it wasn’t really easy to share and maintain. So, we started experimenting with Docker for our dev env. After that, I learned about (legacy/classic) Swarm in Docker, and these tools resolved some of my issues in production for handling load balancer reconfiguration, deployment, and new version management. Check out my first conference about Docker here.
Since then, I continue to learn, use, and share about Docker. It’s useful in so many use cases — it’s amazing!
What is your favorite Docker command?
Docker help. I still need to check the documentation sometimes!
But also, when I need more space on Docker Desktop, I’ll choose docker buildx prune!
What is your top tip for working with Docker that others may not know?
If your team uses ARM (Apple M1, for example) and intel CPU and you want to share a docker image with your team, build a cross platform image:
docker buildx build --push --platform linux/arm64/v8,linux/amd64 --tag xxx/nginx-proxy:1.21-alpine .
What’s the coolest Docker demo you have done/seen?
Back during the 2018 DockerCon, I was impressed by the usage of Docker for the DART (Double Asteroid Redirection Test) project. A project as easy as building a spacecraft, hitting an asteroid with it, and saving the world!
You should check how they use Docker for space hardware emulation and testing — it’s brilliant to see how Docker could be used to help save the world: https://www.youtube.com/watch?v=RnWXOAplvjY
What have you worked on in the past six months that you’re particularly proud of?
Being a mentor for a developer school in Quebec (42 Quebec). It’s amazing to see the new generation of developers and help them with all their questions, fears, and concerns! And it’s cool when someone calls you “Mister Docker” because they watched a Docker talk I gave to answer questions about usage and more.
What do you anticipate will be Docker’s biggest announcement this year?
After Docker Extension and SBOM? It’s really hard to say. I need more time to explore and create my first extension, but, I’m sure the Docker team will find something.
What are some personal goals for the next year with respect to the Docker community?
Give my first conference in English as I always give them in French. I’d also like to update my blog with more content.
What was your favorite thing about DockerCon 2022?
The French community room. It was a pleasure to engage with Aurélie and Rachid and have so many great speakers with us! I would do it again anytime!
Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?
7 years from now, I still think Docker will continue to innovate and find new ways to simplify the life of the developer community!
Rapid fire questions…
What new skill have you mastered during the pandemic?
Using a face mask and forgetting about it! Or traveling between different countries during the pandemic with all different kinds of restrictions and rules.
Cats or Dogs?
Cats! I’m sorry, but a dog requires too much time, and I already have 3 young kids.
Salty, sour or sweet?
Salty or Umami
Beach or mountains?
Mountains!
Your most often used emoji?
🤣 or 😄
Quelle: https://blog.docker.com/feed/

Bulk User Add for Docker Business and Teams

Docker’s goal is to create a world-class product experience for our customers. We want to build a robust product that will help all teams achieve their goals. In line with that, we’ve tried to simplify the process of onboarding your team into the Docker ecosystem with our Bulk User Add feature for Docker Business and Docker Team subscriptions.
You can invite your team to their accounts by uploading a file including their email addresses to Docker Hub. The CSV file can either be a file you create for this specific purpose, or one that’s extracted from another in-house system. The sole requirement is that the file contains a column with the email addresses of the users that will be invited into Docker. Once the CSV file is uploaded using Docker Hub, each team member in the file will receive an invitation to use their account.
We’ve also updated Docker Hub’s web interface to add multiple members at once. We hope this is useful for smaller teams that can just copy and paste a list of emails directly in the web interface and onboard everyone they need. Once your team is invited, you can see both the pending and accepted invites through Docker Hub.

Bulk User Add can be used without needing to have SSO setup for your organization. This feature allows you to get the most out of your Docker Team or Business subscription, and it greatly simplifies the onboarding process.
Learn more about the feature on our docs page, and sign in to your Docker Hub account to try it for yourself.
And if you have any questions or would like to discuss this feature, please attend our upcoming
Docker Office Hours.
 
Quelle: https://blog.docker.com/feed/

July 2022 Newsletter

The latest and greatest content for developers.

Community All-Hands: September 1st
Join us at our Community All-Hands on September 1st! This virtual event is an opportunity for the community to come together with Docker staff to learn, share, and collaborate. Interested in speaking? Submit your proposal.

Register Now

News you can use and monthly highlights:
How to optimize production Docker images running Node.js with Yarn – Are the images that you’re picking up to build Node.js apps getting bloated? Here’s a quick way to improve the production lifecycle by efficiently optimizing your Docker images.
How to Containerize a Golang App With Docker for Development and Production – Need to pack your Golang project into a Docker container locally and then deploy it to production? Here’s a detailed guide for you.
NextJS in Docker – Are you planning to containerize your Next.js project? Here’s a quick tip to conquer the Next.js environmental variable problem and get the right Dockerfile working for you.
SQLcl Docker Desktop Extension – Here’s a Docker Extension that allows you to run a simple SQL command line tool to flawlessly connect to your Oracle XE 21c or any other RDBMS instance.
Nest.js — Reducing Docker container size – Dockerizing Nest.js can be done in a snap. However, many important concerns like image bloat, missing image tags, and poor build performance aren’t addressed. Here’s a survival guide for you.
How to run docker compose files in Rider – Jetbrains Rider is a fast and powerful cross-platform .NET ID. The latest release provides Docker support using the Docker plugin. Learn more about its usage.

Dear Moby with Kat and Shy
Ever wish there was an advice column just for developers? Introducing Dear Moby with Kat and Shy — the web series with content sourced by and for you, our Docker community. Join them for fun facts, tips of the week, and chats about all things app development.

Watch Dear Moby

The latest tips and tricks from the community:

How to Build and Deploy a URL Shortener Using TypeScript and Nest.js
9 Tips for Containerizing Your .NET Application
How I created my Homepage (for free) using Docker, Hugo, and Firebase
Caching Gems with Docker Multi-Stage build
Understand how to monitor Docker Metrics with Docker Stats

Tips for Using BusyBox
Our BusyBox image has been downloaded over one billion times — making it one of our most popular images! Learn more about this powerhouse featherweight (less than 2.71 MB in size), and explore use cases and best practices.

Learn More

Educational content created by the experts at Docker:

Getting Started with Visual Studio Code and IntelliJ IDEA Docker Plugins
How to Train and Deploy a Linear Regression Model Using PyTorch
Top Tips and Use Cases for Managing Your Volumes
How to Rapidly Build Multi-Architecture Images with Buildx
Why Containers and WebAssembly Work Well Together
Resources to Use Javascript, Python, Java, and Go with Docker
Quickly Spin Up New Development Projects with Awesome Compose

Docker Captain: Thorsten Hans
For this month’s Docker Captain shoutout, we’re excited to welcome Thorsten Hans, a Cloudnative Consultant at Thinktecture with a love for sweet Alabama BBQ sauce. As far back as 2015, Thorsten has had deep appreciation for Docker’s intuitive design and tried-and-true efficiency.

Meet the Captain

See what the Docker team has been up to:

Docker Hub v1 API Deprecation
New Extensions, Improved logs, and more in Docker Desktop 4.10
Key Insights from Stack Overflow’s 2022 Developer Survey
New SCIM Capabilities for Docker Business

DockerCon 2022 On-Demand
With over 50 sessions for developers by developers, watch the latest developer news, trends, and announcements from DockerCon 2022. From the keynote to product demos to technical breakout sessions, hacks, and tips & tricks, there’s something for everyone.

Watch On-Demand

Subscribe to our newsletter to get the latest news, blogs, tips, how-to guides, best practices, and more from Docker experts sent directly to your inbox once a month.

Quelle: https://blog.docker.com/feed/

How to Build and Deploy a Task Management Application Using Go

Golang is designed to let developers rapidly develop scalable and secure web applications. Go ships with an easy to use, secure, and performant web server alongside its own web templating library. Enterprise users also leverage the language for rapid, cross-platform deployment. With its goroutines, native compilation, and the URI-based package namespacing, Go code compiles to a single, small binary with zero dependencies — making it very fast.
Developers also favor Go’s performance, which stems from its concurrency model and CPU scalability. Whenever developers need to process an internal request, they use separate goroutines, which consume just one-tenth of the resources that Python threads do. Via static linking, Go actually combines all dependency libraries and modules into a single binary file based on OS and architecture.
Why is containerizing your Go application important?
Go binaries are small and self-contained executables. However, your application code inevitably grows over time as it’s adapted for additional programs and web applications. These apps may ship with templates, assets and database configuration files. There’s a higher risk of getting out-of-sync, encountering dependency hell, and pushing faulty deployments.
Containers let you synchronize these files with your binary. They also help you create a single deployable unit for your complete application. This includes the code (or binary), the runtime, and its system tools or libraries. Finally, they let you code and test locally while ensuring consistency between development and production.
We’ll walk through our Go application setup, and discuss the Docker SDK’s role during containerization.
Table of Contents

Building the Application
Key Components
Getting Started
Define a Task
Create a Task Runner
Container Manager
Sequence Diagram
Conclusion

Building the Application
In this tutorial, you’ll learn how to build a basic task system (Gopher) using Go.
First, we’ll create a system in Go that uses Docker to run its tasks. Next, we’ll build a Docker image for our application. This example will demonstrate how the Docker SDK helps you build cool projects. Let’s get started.
Key Components

Go

Go Docker SDK

Microsoft Visual Studio Code

Docker Desktop

Getting Started
Before getting started, you’ll need to install Go on your system. Once you’ve finished up, follow these steps to build a basic task management system with the Docker SDK.
Here’s the directory structure that we’ll have at the end:
➜ tree gopher
gopher
├── go.mod
├── go.sum
├── internal
│ ├── container-manager
│ │ └── container_manager.go
│ ├── task-runner
│ │ └── runner.go
│ └── types
│ └── task.go
├── main.go
└── task.yaml

4 directories, 7 files

You can click here to access the complete source code developed for this example. This guide leverages important snippets, but the full code isn’t documented throughout.  
version: v0.0.1
tasks:
  - name: hello-gopher
    runner: busybox
    command: ["echo", "Hello, Gopher!"]
    cleanup: false
  - name: gopher-loops
    runner: busybox
    command:
      [
        "sh",
        "-c",
        "for i in `seq 0 5`; do echo 'gopher is working'; sleep 1; done",
      ]
    cleanup: false
 
Define a Task
First and foremost, we need to define our task structure. Each task is a YAML definition with the structure shown above: every entry specifies a name, the runner image to use, the command to execute, and a cleanup flag that controls whether the task container is removed after it finishes.
Now that we have a task definition, let’s create some equivalent Go structs.
Structs in Go are typed collections of fields. They’re useful for grouping data together to form records. For example, this Task struct type has Name, Runner, Command, and Cleanup fields.
// internal/types/task.go

package types

// TaskDefinition represents a task definition document.
type TaskDefinition struct {
    Version string `yaml:"version,omitempty"`
    Tasks   []Task `yaml:"tasks,omitempty"`
}

// Task provides a task definition for gopher.
type Task struct {
    Name    string   `yaml:"name,omitempty"`
    Runner  string   `yaml:"runner,omitempty"`
    Command []string `yaml:"command,omitempty"`
    Cleanup bool     `yaml:"cleanup,omitempty"`
}
 
Create a Task Runner
The next thing we need is a component that can run our tasks for us. We’ll use interfaces for this, which are named collections of method signatures. For this example task runner, we’ll simply call it Runner and define it below:

// internal/task-runner/runner.go

type Runner interface {
    Run(ctx context.Context, doneCh chan<- bool)
}

Note that we’re using a done channel (doneCh). This is required for us to run our task asynchronously — and it also notifies us once this task is complete.
You can find your task runner’s complete definition here. In this example, however, we’ll stick to highlighting specific pieces of code:

// internal/task-runner/runner.go

func NewRunner(def types.TaskDefinition) (Runner, error) {
    client, err := initDockerClient()
    if err != nil {
        return nil, err
    }

    return &runner{
        def:              def,
        containerManager: cm.NewContainerManager(client),
    }, nil
}

func initDockerClient() (cm.DockerClient, error) {
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        return nil, err
    }

    return cli, nil
}

NewRunner returns an instance of the runner struct, which provides the implementation of the Runner interface. The instance also holds a connection to the Docker Engine. The initDockerClient function initializes this connection by creating a Docker API client instance from environment variables.
By default, this function creates an HTTP connection over the Unix socket unix:///var/run/docker.sock (the default Docker host). If you’d like to change the host, you can set the DOCKER_HOST environment variable. The FromEnv option reads this environment variable and configures the client accordingly.
The Run function defined below is relatively basic. It loops over a list of tasks and executes them. It also uses a channel named taskDoneCh to see when a task completes. It’s important to check if we’ve received a done signal from all the tasks before we return from this function.

// internal/task-runner/runner.go

func (r *runner) Run(ctx context.Context, doneCh chan<- bool) {
    taskDoneCh := make(chan bool)
    for _, task := range r.def.Tasks {
        go r.run(ctx, task, taskDoneCh)
    }

    taskCompleted := 0
    for {
        if <-taskDoneCh {
            taskCompleted++
        }

        if taskCompleted == len(r.def.Tasks) {
            doneCh <- true
            return
        }
    }
}

func (r *runner) run(ctx context.Context, task types.Task, taskDoneCh chan<- bool) {
    defer func() {
        taskDoneCh <- true
    }()

    fmt.Println("preparing task – ", task.Name)
    if err := r.containerManager.PullImage(ctx, task.Runner); err != nil {
        fmt.Println(err)
        return
    }

    id, err := r.containerManager.CreateContainer(ctx, task)
    if err != nil {
        fmt.Println(err)
        return
    }

    fmt.Println("starting task – ", task.Name)
    err = r.containerManager.StartContainer(ctx, id)
    if err != nil {
        fmt.Println(err)
        return
    }

    statusSuccess, err := r.containerManager.WaitForContainer(ctx, id)
    if err != nil {
        fmt.Println(err)
        return
    }

    if statusSuccess {
        fmt.Println("completed task – ", task.Name)

        // cleanup by removing the task container
        if task.Cleanup {
            fmt.Println("cleanup task – ", task.Name)
            err = r.containerManager.RemoveContainer(ctx, id)
            if err != nil {
                fmt.Println(err)
            }
        }
    } else {
        fmt.Println("failed task – ", task.Name)
    }
}

 
The internal run function does the heavy lifting for the runner. It accepts a task and hands it to a ContainerManager, which executes the task in the form of a Docker container.
Container Manager
The container manager is responsible for:

Pulling a Docker image for a task

Creating the task container

Starting the task container

Waiting for the container to complete

Removing the container, if required

Therefore, with respect to Go, we can define our container manager as shown below:
// internal/container-manager/container_manager.go

type ContainerManager interface {
    PullImage(ctx context.Context, image string) error
    CreateContainer(ctx context.Context, task types.Task) (string, error)
    StartContainer(ctx context.Context, id string) error
    WaitForContainer(ctx context.Context, id string) (bool, error)
    RemoveContainer(ctx context.Context, id string) error
}

type DockerClient interface {
    client.ImageAPIClient
    client.ContainerAPIClient
}

type ImagePullStatus struct {
    Status         string `json:"status"`
    Error          string `json:"error"`
    Progress       string `json:"progress"`
    ProgressDetail struct {
        Current int `json:"current"`
        Total   int `json:"total"`
    } `json:"progressDetail"`
}

type containermanager struct {
    cli DockerClient
}
 
The containermanager struct has a field called cli of type DockerClient. That interface, in turn, embeds two interfaces from the Docker API, namely ImageAPIClient and ContainerAPIClient. Why do we need these interfaces?
For the ContainerManager interface to work properly, it must act as a client for the Docker Engine and its API. For the client to work effectively with images and containers, it must be a type that provides the required APIs. So we embed the Docker API’s core interfaces and create a new one.
The initDockerClient function (seen above in runner.go) returns an instance that seamlessly implements those required interfaces. Check out the documentation here to better understand what’s returned upon creating a Docker client.
Meanwhile, you can view the container manager’s complete definition here.
Note: We haven’t covered every container manager function individually here; otherwise, this blog would run far too long.
Entrypoint
Since we’ve covered each individual component, let’s assemble everything in our main.go, which is our entrypoint. The package main tells the Go compiler that the package should compile as an executable program instead of a shared library. The main() function in the main package is the entry point of the program.

// main.go

package main

func main() {
    args := os.Args[1:]

    if len(args) < 2 || args[0] != argRun {
        fmt.Println(helpMessage)
        return
    }

    // read the task definition file
    def, err := readTaskDefinition(args[1])
    if err != nil {
        fmt.Printf(errReadTaskDef, err)
        return
    }

    // create a task runner for the task definition
    ctx := context.Background()
    runner, err := taskrunner.NewRunner(def)
    if err != nil {
        fmt.Printf(errNewRunner, err)
        return
    }

    doneCh := make(chan bool)
    go runner.Run(ctx, doneCh)

    <-doneCh
}

 
Here’s what our Go program does:

Validates arguments

Reads the task definition

Initializes a task runner, which in turn initializes our container manager

Creates a done channel to receive the final signal from the runner

Runs our tasks

Building the Task System
1) Clone the repository
The source code is hosted on GitHub. Use the following command to clone the repository to your local machine:
git clone https://github.com/dockersamples/gopher-task-system.git
 
2) Build your task system
The go build command compiles the packages, along with their dependencies, into a single binary named gopher:
go build -o gopher

3) Run your tasks
You can execute the gopher binary directly to run the tasks:
$ ./gopher run task.yaml

preparing task – gopher-loops
preparing task – hello-gopher
starting task – gopher-loops
starting task – hello-gopher
completed task – hello-gopher
completed task – gopher-loops

 
4) View all task containers  
You can view the full list of containers within Docker Desktop. The Dashboard clearly displays this information:

5) View all task containers via CLI
Alternatively, running docker ps -a also lets you view all task containers:
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS                     PORTS     NAMES
396e25d3cea8   busybox   "sh -c 'for i in `se…"   6 minutes ago   Exited (0) 6 minutes ago             gopher-loops
aba428b48a0c   busybox   "echo 'Hello, Gopher…"   6 minutes ago   Exited (0) 6 minutes ago             hello-gopher

Note that in task.yaml the cleanup flag is set to false for both tasks. We’ve purposefully done this to retrieve a container list after task completion. Setting this to true automatically removes your task containers.
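For context, here’s what a task definition like task.yaml might look like, based on the yaml tags of the TaskDefinition and Task structs shown earlier. The task names mirror the output above, but the exact contents of the repository’s task.yaml (version string, commands) may differ:

```yaml
version: "1.0"
tasks:
  - name: hello-gopher
    runner: busybox
    command: ["echo", "Hello, Gopher!"]
    cleanup: false
  - name: gopher-loops
    runner: busybox
    command: ["sh", "-c", "for i in `seq 1 5`; do echo gopher $i; done"]
    cleanup: false
```

Flipping cleanup to true for a task removes its container automatically after completion.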
Sequence Diagram
 

Conclusion
Docker is a collection of software development tools for building, sharing, and running individual containers. With the Docker SDK’s help, you can build and scale Docker-based apps and solutions quickly and easily. You’ll also better understand how Docker works under the hood. We look forward to sharing more such examples and showcasing other projects you can tackle with Docker SDK, soon!
Want to start leveraging the Docker SDK, yourself? Check out our documentation for install instructions, a quick-start guide, and library information.
References

Docker SDK
Go SDK Reference
Getting Started with Go

Source: https://blog.docker.com/feed/

Docker Captains Take 5 — Thorsten Hans

Docker Captains are select members of the community that are both experts in their field and are passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Thorsten, who recently joined as a Docker Captain. He’s a Cloud-Native Consultant at Thinktecture and is based in Saarbrücken, Germany.

How/when did you first discover Docker?
I started using Docker when I got a shiny new MacBook Pro back in 2015. Before unboxing the new device, I was committed to keeping my new rig as clean and efficient as possible. I didn’t want to mess up another device with numerous databases, SDKs, or other tools for every project. Docker sounded like the perfect match for my requirements. (Spoiler: It was!)
When using macOS as an operating system, Docker Toolbox was the way to go back in those days.
Although quite some time has passed since 2015, I still remember how amazed I was by Docker’s clean CLI design and how Docker made underlying (read: way more complicated) concepts easy to understand and adopt.
What’s your favorite Docker command?
To be honest, I think “favorite” is a bit too complicated to answer! Based on hard facts, it’s docker run.
According to my ZSH history, it’s the command with most invocations. By the way, if you want to find yours, use this command:

bash

history | awk 'BEGIN {FS="[ \t]+|\\|"} {print $3,$4}' | sort | uniq -c | sort -nr | grep docker | head -n 10

Besides docker run, I would go with docker sbom and docker scan. Those help me to address common requirements when it comes to shift-left security.
What’s your top tip for working with Docker that others may not know?
From a developer’s perspective, it’s definitely docker context in combination with Azure and AWS.
Adding Azure Container Instances (ACI) or Amazon Elastic Container Service (ECS) as a Docker context and running your apps straight in the public cloud within seconds is priceless.
Perhaps you want to quickly try out your application, or you have to verify that your containerized application works as expected in the desired cloud infrastructure. Serverless contexts from Azure and AWS with native integration in Docker CLI provide an incredible inner-loop experience for both scenarios.
What’s the coolest Docker demo you’ve done/seen?
It might sound a bit boring these days. However, I still remember how cool the first demo on debugging applications running in Docker containers from people at Microsoft was.
Back in those days, they demonstrated how to debug applications running in Docker containers on the local machine and attach the local debugger to Docker containers running in the cloud. Seeing the debugger stopping at the desired breakpoint, showing all necessary contextual information, and knowing about all the nitty-gritty infrastructure in-between was just mind blowing.
That was the “now we’re talking” moment for many developers in the audience.
What have you worked on in the past six months that you’re particularly proud of?
As part of my daily job, I help developers understand and master technologies. The most significant achievement is when you recognize that they don’t need your help anymore. It’s that moment when you realize they’ve grasped the technologies — which ultimately permits them to master their technology challenges without further assistance.
What do you anticipate will be Docker’s biggest announcement this year?
Wait. There is more to come? Really!? TBH, I have no clue. We’ve had so many significant announcements already in 2022. Just take a look at the summary of DockerCon 2022 and you’ll see what I mean.
Personally, I hope to see handy extensions appearing in Docker Desktop, and I would love to see new features in Docker Hub when it comes to automations.
What are some personal goals for the next year with respect to the Docker community?
I want to help more developers adopt Docker and its products to improve their day-to-day workflow. As we start to see more in-person conferences here in Europe, I can’t wait to visit new communities, meetups, and conferences to demonstrate how Docker can help them take their productivity to a whole new level.
Speaking to all the event organizers: If you want me to address inner-loop performance and shift-left security at your event, ping me on Twitter and we’ll figure out how I can contribute.
What was your favorite thing about DockerCon 2022?
I won’t pick a particular announcement. It’s more the fact that Docker as a company continually sharpens its communication, marketing, and products to address the specific needs of developers. Those actions help us as an industry build faster inner-loop workflows and address shift-left security’s everyday needs.
Looking to the distant future, what’s the technology that you’re most excited about and that you think holds a lot of promise?
Definitely cloud-native. Although the term cloud-native has been around for quite some time now, I think we haven’t nailed it yet. Vendors will abstract complex technologies to simplify the orchestration, administration, and maintenance of cloud-native applications.
Instead of thinking about technical terms, we must ensure everyone thinks about this behavior when the term cloud-native is referenced.
Additionally, the number of tools, CLIs, and technologies developers must know and master to take an idea into an actual product is too high. So I bet we’ll see many abstractions and simplifications in the cloud-native space.
Rapid fire questions…
What new skill have you mastered during the pandemic?
Although I haven’t mastered it (yet), I would answer this question with Rust. During the pandemic, I looked into some different programming languages. Rust is the language that stands out here. It has an impressive language design and helps me write secure, correct, and safe code. The compiler, the package manager, and the entire ecosystem are just excellent.
IMO, every developer should dive into new programming languages from time to time to get inspired and see how other languages address common requirements.
Cats or Dogs?
Dogs. We thought about and discussed having a dog for more than five years. Finally, in December 2022, we found Marley, the perfect dog to complete our family.

Salty, sour, or sweet?
Although I would pick salty, I love sweet Alabama sauce for BBQ.
Beach or mountains?
Beach, every time.
Your most often used emoji?
Phew, there are tons of emojis I use quite frequently. Let’s go with 🚀.

New SCIM Capabilities for Docker Business

Managing users across hundreds of applications and systems can be a painful process. And it only gets more challenging the larger your organization gets. To make it easier, we introduced Single Sign-On (SSO) earlier this year so you could securely manage Docker users through your standard identity provider (IdP).
Today, we’re excited to announce enhancements to the way you manage users with the addition of System for Cross-Domain Identity Management (SCIM) capabilities. By integrating Docker with your IdP via SCIM, you can automate the provisioning and deprovisioning of user seats. In fact, whatever user changes you make in your IdP will automatically be updated in Docker, eliminating the need to manually add or remove users as they come and go from your organization.
The best part? SCIM is now available with a Docker Business subscription!

What is System for Cross-Domain Identity Management (SCIM)?
SCIM is a provisioning system that allows customers to manage Docker users within their IdP. When SCIM is enabled, you no longer need to update both your organization’s IdP and Docker with user changes like adding/removing users or profile updates. Your IdP can be the single source of truth. Whatever updates are made there will automatically be reflected in the Members tab on Docker Hub. We recommend enabling SCIM after you verify your domain and set up the SSO connection between your IdP and Docker (SSO enforcement won’t be necessary).
For more information on SSO and SCIM, check out our docs page.
Check out SSO and SCIM in action!
View our webinar on demand. We walk through the advanced management and security tools included in your Docker Business subscription — including a demo of SSO and SCIM — and answer some questions along the way.
SSO and SCIM are available to organizations with a Docker Business subscription.
Click here to learn more about how Docker Business supercharges developer productivity and collaboration without compromising on security and compliance.

9 Tips for Containerizing Your .NET Application

Over the last five years, .NET has maintained its position as a top framework among professional developers. In Stack Overflow’s 2022 Developer Survey, .NET ranked first in the “other framework and libraries” category. Stack Overflow reserves this for developers who’ve done extensive development work with key technologies in the past year, and want to continue using them the next.
 
Data courtesy of Stack Overflow.
 
Over 60,000 developers and 3,700 companies have contributed to the .NET platform. Since its 2002 debut, .NET has supported multiple languages (C#, F#, Visual Basic), platforms (.NET Core, .NET framework, Mono), editors, and libraries for building for diverse applications. .NET provides standard sets of base class libraries and APIs common to all .NET applications.
Why is containerizing a .NET application important?
.NET was originally designed for Windows. Meanwhile, we originally based Docker around Linux. .NET has the application virtual machine (called Common Language Runtime) and other components aimed at solving build problems common to large enterprise applications from 10 to 20 years ago. The two weren’t inherently compatible on day one.
Both have since evolved to become cross-platform, open-source developer platforms. When building tiny containers with a single process running inside, using a directly compiled language is typically faster. That said, .NET has come a long way and is now container-friendly. Microsoft has made a concerted effort to enable the container system since Windows Server 2016 SP2. Its goal has been keeping up with this growing container ecosystem. Today, you can run containers on Windows hosts that aren’t just based on the Linux kernel, but also the Windows kernel.
Running your .NET application in a Docker container has numerous benefits. First, Docker containers can act as isolated test environments. .NET developers can code and test locally while ensuring consistency between development and production. Second, it eliminates deployment issues caused by missing dependencies while moving to a production environment. Third, containers let developers of all skill levels build, share, and run containerized .NET applications. Containers are immutable infrastructure, provide portability, and help improve scalability. Likewise, the modularity and lightweight nature of .NET 6 make it perfect for containers. 
Containerizing a .NET application is easy. You can do this by copying source code files and building a Docker image. We’ll also cover common concerns like image bloat, missing image tags, and poor build performance with these nine tips for containerizing your .NET application code.
Containerizing a Student Record Management Application
To better understand those concerns, let’s look at a simple student record management application. In our last blog post, you saw how easy building and deploying a student database application is via a Dockerfile and Docker Compose.
Running your application is simple. You’ll clone the GitHub project repository and use the Docker Compose CLI to bring up the complete application with the following commands:

git clone https://github.com/dockersamples/student-record-management

 
Change your directory to student-record-management to see the following Docker Compose file:

services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - postgres-data:/var/lib/postgresql/data
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - 5000:80
    depends_on:
      - db
volumes:
  postgres-data:

 
We’ve defined three services in this Compose file: db, adminer, and app. The Adminer (formerly phpMinAdmin) Docker image is a fully-featured database management tool written in PHP. We’ve set up port forwarding via the ports attribute. The depends_on attribute lets us express dependencies between services. In this case, we’ll start Postgres before our core application.
Run the following command to bring up our student record management application:

docker-compose up -d

 
Once it’s up and running, you can view the Docker Dashboard and click on the “arrow” key (shown in app-1) to quickly access the application:
 

 
Typically, developers use the following Dockerfile template to build a Docker image. A Dockerfile is a list of sequential instructions that build your container image. This image is composed of a stack of layers, and each represents an instruction in our Dockerfile. Each layer contains changes to its underlying layer.

FROM mcr.microsoft.com/dotnet/sdk:6.0

WORKDIR /src
COPY . ./

RUN dotnet build -o /app
RUN dotnet publish -o /publish

WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:80/
EXPOSE 80
CMD ["./myWebApp"]

 
The first line defines our base image, which is around 754 MB in size (or, alternatively, 994 MB for Nano Server and 6.34 GB for Windows Server). The COPY instruction copies the project files from the host system into the image’s working directory. The EXPOSE instruction tells Docker that the container listens specifically on network port 80 at runtime. Lastly, our CMD lets us configure a container that’ll run as an executable.
To build a Docker image, we’ll use the docker build command:

docker build -t student-app .

 
Let’s check the size of our new Docker image:

docker images
REPOSITORY    TAG      IMAGE ID       CREATED         SIZE
student-app   latest   d3caa8643c2c   4 minutes ago   827MB

 
One key drawback of this example is that our Docker image isn’t optimized. Crucially, optimization lets teams share smaller images, boost performance, and enables easier debugging. It’s essential at every CI/CD stage including production. If you’re using Windows base images, you can expect your images to be much larger vs. Linux base images. There must be a better build approach that lets us discard unneeded files after compilation, since these aren’t required in our final image.
1) Choosing the Right .NET Docker Images
The official .NET Docker images are publicly available in the Microsoft repositories on Docker Hub. Identifying and picking the right container base image while building applications can be confusing. To simplify the selection process, most image repositories provide extensive tagging that helps you select both a specific framework version and the right operating system, like a specific Linux distribution or Windows version.
Microsoft offers two categories of images. The first encompasses images used to develop and build .NET apps, while the second houses those used to run .NET apps. For example, mcr.microsoft.com/dotnet/sdk:6.0 is used during the development and build process. This image includes the compiler and any other .NET dependencies. Meanwhile, mcr.microsoft.com/dotnet/aspnet:6.0 is ideal for production environments. This image includes ASP.NET Core, with runtime only alongside ASP.NET Core optimizations, on Linux and Windows (multi-arch).
You can visit GitHub to browse available Docker images.
2) Optimize your Dockerfile for dotnet Restore
When building .NET Core apps with Docker, it’s important to consider how Docker caches layers while building your app.
A common way to leverage the build cache is to copy only the .csproj, .sln, and nuget.config files for your app before performing a dotnet restore, instead of copying the full source code. The NuGet package restore can be one of the slowest parts of the build, and it only depends on these files. By copying them first, Docker can cache the restore result. For example, it won’t need to run again if you only change a .cs file.

FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /src

COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish
WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:80/
EXPOSE 80
CMD ["./myWebApp"]

 
💁  The dotnet restore command uses NuGet to restore dependencies and project-specific tools that are specified in the project file.
3) Use a Multi-Stage Build
With multi-stage builds, Docker can use one base image for compilation, packaging, and unit tests. Another image then holds the application runtime. This makes the final image more secure and smaller in size (as it does not contain any development or debugging tools). Multi-stage Docker builds are a great way to ensure your builds are 100% reproducible and as lean as possible. You can create multiple stages within a Dockerfile and control how you build that image.
The .NET SDK includes .NET runtimes and tooling to develop, build, and package .NET applications. One best practice while creating docker images is keeping the image compact. You can containerize your .NET applications using a multi-layer approach. Each layer may contain different parts of the application like dependencies, source code, resources, and even snapshot dependencies. Alternatively, you can build any application as a separate image from the final image that contains the runnable application. To better understand this, let’s analyze the following Dockerfile.
The build stage uses the SDK image to build the application and create the final artifacts in the publish folder. The final stage copies the artifacts from the build stage to the app folder, exposes port 80 to incoming requests, and specifies the command to run the application, myWebApp. In the first stage, we extract the dependencies. In the second stage, we copy the extracted dependencies to the final image. Here’s a sample multi-stage Dockerfile for the student database example:

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
EXPOSE 80
CMD ["./myWebApp"]

The first stage is labeled build, where mcr.microsoft.com/dotnet/sdk is the base image.

docker images
REPOSITORY     TAG      IMAGE ID       CREATED       SIZE
mywebapp_app   latest   1d4d9778ce14   3 hours ago   229MB

 
Our final image size shrinks dramatically to 229 MB, compared to the single-stage Dockerfile’s 827 MB!
4) Use Specific Base Image tags, Instead of “Latest”
While building Docker images, we always recommend tagging them with useful tags that codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying the application in different environments. Conversely, we don’t recommend relying on the :latest tag. The :latest tag is updated frequently, and new versions can cause breaking changes. If you want to protect yourself against breaking changes, it’s best to pin to a specific version and then update to newer versions when you’re ready.
For example, we’d avoid using mcr.microsoft.com/dotnet/sdk:latest as a base image. Instead, you should use specific tags like mcr.microsoft.com/dotnet/sdk:6.0, mcr.microsoft.com/dotnet/sdk:6.0-windowsservercore-ltsc2019, or others.
5) Run as a Non-root User for Security Purposes
By default, an application running within a Docker container has root access on Linux or administrator privileges on Windows. This can undermine application security. You can solve this problem by adding USER instructions within your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) while running the image — and for any subsequent RUN, CMD, or ENTRYPOINT instructions.
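Applied to the runtime stage of the multi-stage Dockerfile from earlier, a non-root setup might look like this sketch (the appuser name and the adduser flags are illustrative assumptions; adduser presumes a Debian-based Linux image):

```dockerfile
# Final stage only; assumes a "build" stage like the one shown earlier
FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app

# Create an unprivileged user and hand the app directory to it
# (the user name "appuser" is illustrative)
RUN adduser --disabled-password --gecos "" appuser && chown -R appuser /app
USER appuser

EXPOSE 80
CMD ["./myWebApp"]
```

Note that an unprivileged user can’t bind to ports below 1024 on some configurations, so you may need to serve on a higher port (for example, 8080) and map it at runtime.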
Windows networks commonly use Active Directory (AD) to enable authentication and authorization between users, computers, and other network resources. Windows application developers often use Integrated Windows Authentication. This makes it easy for users and other services to automatically, transparently sign into the application using their credentials. Although Windows containers cannot be domain joined, they can still use Active Directory domain identities to support various authentication scenarios.
To achieve this, you can configure a Windows container to run with a group Managed Service Account (gMSA), which is a special type of service account introduced in Windows Server 2012. It’s designed to let multiple computers share an identity without requiring a password.
6) Use .dockerignore
To increase the build performance (and as a general best practice) we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain the following lines:

Dockerfile*
**/[bB]in/
**/[oO]bj/

 
These lines exclude the bin and obj files from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple version works for now. It’s also helpful to understand how the docker build command works and what the build context means.
The build context is the place or space where the developer works. It can be a folder in Windows or a directory in Linux. In this directory, you’ll find every necessary app component like source code, configuration files, libraries, and plugins. You’ll determine which of these components to include while constructing a new image.
With the .dockerignore file, we can determine which components are vital. They’ll ultimately belong to the new image that we’re building.
For example, if we don’t want to include the bin and obj directories in our image build, we just need to indicate that within our .dockerignore file.
7) Add Health Checks to Your Containers
The HEALTHCHECK instruction tells Docker how to test a container and confirm that it’s still working. This can detect (for example) when a web server is stuck in an infinite loop and unable to handle new connections — even though the server process is still running.
When an application is deployed in production, an orchestrator like Kubernetes or a service fabric will most likely manage it. By providing the health check, you’re sharing the status of your containers with the orchestrator to permit management tasks based on your configurations. Let’s look at the following example:

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app
RUN dotnet publish -o /publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
EXPOSE 80
# If you're using a Linux container
HEALTHCHECK CMD curl --fail http://localhost || exit 1
# If you're using a Windows container with PowerShell
#HEALTHCHECK CMD powershell -command `
# try { `
# $response = iwr http://localhost; `
# if ($response.StatusCode -eq 200) { return 0} `
# else {return 1}; `
# } catch { return 1 }

CMD ["./myWebApp"]

 
When HEALTHCHECK is present in a Dockerfile, you’ll see the container’s health in the STATUS column while running docker ps. A container that passes this check displays as healthy. An unhealthy container displays as unhealthy.

docker ps
CONTAINER ID   IMAGE         COMMAND        CREATED         STATUS                           PORTS                  NAMES
7bee4d6a652a   student-app   "./myWebApp"   2 seconds ago   Up 1 second (health: starting)   0.0.0.0:5000->80/tcp   modest_murdock

 
8) Optimize for Startup Performance
You can improve .NET app startup times and reduce latency by compiling your assemblies with Ready to Run (R2R) compilation. However, this will increase your build time as a compromise. You can do this by setting the PublishReadyToRun property, which takes effect when you publish an application.
You can add the PublishReadyToRun property in two ways:
1) Set it within your project file:

<PropertyGroup>
<PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>

 
2) Set it using the command line:

/p:PublishReadyToRun=true

 
The default Dockerfile that comes with the sample doesn’t use R2R compilation, since the application is too small to warrant it. The bulk of the IL code executed in this sample application is within .NET’s libraries, which are already R2R-compiled. This example enables R2R in the Dockerfile, where we pass /p:PublishReadyToRun=true to the dotnet build and dotnet publish commands.

FROM mcr.microsoft.com/dotnet/sdk:6.0 as build

WORKDIR /src
COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet build -o /app -r linux-x64 /p:PublishReadyToRun=true
RUN dotnet publish -o /publish -r linux-x64 --self-contained true --no-restore /p:PublishTrimmed=true /p:PublishReadyToRun=true /p:PublishSingleFile=true

FROM mcr.microsoft.com/dotnet/aspnet:6.0 as base
COPY --from=build /publish /app
WORKDIR /app
EXPOSE 80
HEALTHCHECK CMD curl --fail http://localhost || exit 1

CMD ["./myWebApp"]

9) Choose the Appropriate Isolation Mode For Windows Containers
There are two distinct modes of runtime isolation for Windows containers:  

Process Isolation – In this mode, multiple container instances can run concurrently in the same host with isolation on the file system, registry, network ports, process, thread ID space, and Object Manager namespace. It’s almost identical to how Linux containers run.
Hyper-V Isolation – In this mode, containers run inside a highly-optimized virtual machine, which provides hardware-level isolation between containers and hosts.

 
Most developers prefer process isolation when developing locally, as it typically consumes fewer hardware resources than Hyper-V isolation; you’ll need to account for the additional hardware required when running containers in Hyper-V mode. However, your primary reason for choosing Hyper-V isolation is security, since it provides added hardware-level isolation. While Windows Server supports both options (defaulting to process isolation), Windows 10 and 11 default to Hyper-V isolation.
To set the isolation level, pass the --isolation flag:

docker run -it --isolation=process mcr.microsoft.com/windows/servercore:ltsc2019 cmd
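To double-check which mode an existing container actually received, you can query Docker directly; a quick sketch (the container name is a placeholder):

```
docker inspect --format '{{.HostConfig.Isolation}}' my-container
```

On a Windows host this prints process or hyperv (or default when it was never set explicitly).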

Conclusion
You’ve now seen some of the many methods for optimizing your Docker images. In any case, carefully crafting your Dockerfile is essential. If you’d like to go further, check out these bonus resources that cover recommendations and best practices for building secure, production-grade Docker images:

Docker Development Best Practices
Dockerfile Best Practices
Build Images with BuildKit
Best Practices for Scanning Images
Getting Started with Docker Extensions

 
At Docker, we’re incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. Are you working on something awesome with Docker? Send your contributions to Ajeet Singh Raina (@ajeetraina) on our Docker Community Slack channel, and we might feature your work!
 

Source: https://blog.docker.com/feed/

Use Cases and Tips for Using the BusyBox Docker Official Image

While developing applications, using the slimmest possible images can help reduce build times while reducing your app’s overall footprint. Similarly, successfully deploying such compact, Linux-friendly applications means packaging them into a cross-platform unit. That’s where containers and the BusyBox Docker Official Image come in handy.
 

Maintaining the BusyBox image has also been an ongoing priority at Docker. In fact, our very first container demo used BusyBox back in 2013! Users have downloaded it over one billion times, making BusyBox one of our most popular images.
Not exceeding 2.71 MB in size — with most tags under 900 KB, depending on architecture — the BusyBox container image is incredibly lightweight. It’s even much smaller than our Alpine image, which developers gravitate towards given its slimness. BusyBox’s compact size enables quicker sharing, by greatly reducing initial upload and download times. Smaller base images, depending on changes and optimizations to their subsequent layers, can also reduce your application’s attack surface.
In this guide, we’ll introduce you to BusyBox, cover some potential use cases, explore best practices, and briefly show you how to use its container image.
What’s BusyBox?
Dubbed the “Swiss Army Knife of Embedded Linux,” BusyBox packages together multiple, common UNIX utilities (or applets) into one executable binary. It helps you create your own Linux distribution, and our associated container image helps you deploy it across different devices.
This is possible thanks to BusyBox’s ability to run in numerous POSIX environments, including FreeBSD and Android. It works in concert with the Linux kernel.
Miniature but mighty, it contains nearly 400 of UNIX’s leading commands, replacing many GNU utilities (shellutils, fileutils, and more) with comparable alternatives in its full distribution. Although some of these may not be fully featured, their core functionalities remain intact without forcing developers to make concessions.
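Curious which applets a given image actually ships? The binary can enumerate them itself; a quick check (requires a Docker host):

```
docker run --rm busybox busybox --list
```

This prints one applet name per line, so you can confirm a command exists before relying on it.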
Which BusyBox version should I use?
BusyBox helps replicate the experience of using common shell commands. Some Linux distributions use GNU’s coreutils package to ship these commands, while others have instead opted for BusyBox. Though BusyBox isn’t the most complete environment available, it checks most boxes for developers who need something approachable and lightweight.
BusyBox comes in a variety of pre-built binary versions. As a result, we support over 30 image tags on Docker Hub. Each includes its own Linux binary variant per CPU and sets of dependencies — impacting both image size and functionality.
Each is also built against various libc variants. To understand how each image’s relation to musl, uClibc, dietlibc, and glibc impacts your build, check out this comparison chart. This will help you choose the correct image for your specific use case.
That said, which use cases pair best with the BusyBox image? Let’s jump in.
BusyBox Use Cases
The Linux landscape is vast, and developer use cases will vary pretty greatly. However, we’ll tackle a few interesting examples and why they matter.
Building Distros for Embedded Systems
Known for having very limited available resources, embedded systems require distros with minute sizes that only include essential functionality. There’s very little extra room for frills or extra dependencies. Consequently, embedded Linux versions must be streamlined and purpose-built, which is where BusyBox excels.
BusyBox’s maintainers highlight its modularity. You can choose any BusyBox image that suits your build, yet you can also pick and choose commands or features during compilation. You don’t have to package together — to a point — anything you don’t need. While you can run atop the Linux kernel, containerizing your BusyBox implementation alleviates the need to include this kernel within the container itself. BusyBox will instead leverage your embedded system’s kernel by default, saving space.
Each applet’s behavior within your given image will determine how it works within a given embedded environment. BusyBox lets you modify configuration files, directories, and infrastructure to best fit your embedded system of choice.
Leveraging Kubernetes Init Containers
While the BusyBox Docker Official Image is a great base for other projects, BusyBox works well with the Kubernetes initContainer feature. These specialized Docker containers (for our example) run before app containers in a Pod. Init containers can contain scripts or other utilities that reside outside of the application image, and properly initializing these “regular” containers may depend on k8s spinning up these components first. Init containers always run until their tasks finish, and they run synchronously.
These containers also adhere to strictly-configured resource limits, support volumes, and respect your security settings. Why would you use an initContainer? According to the k8s documentation, you can do the following:

Wait for a Service to be created as Pods spin up
Register a Pod with a remote server from an API
Wait for an allotted period of time before finally starting an app container
Clone a Git repo into a volume
Generate configuration files automatically from value inputs

 
Kubernetes uses its configuration files to specify how these processes occur — alongside any shell commands. You can specify your BusyBox Docker image in this file with your chosen tag. Kubernetes will pull your BusyBox image, then create and start Docker containers from it while assigning them unique IDs.
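As a sketch, a Pod spec wiring BusyBox in as an init container might look like the following (all names and images here are illustrative, not from a real manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  initContainers:
    # Runs to completion before the app container starts.
    - name: wait-a-bit
      image: busybox:1.35
      command: ["sh", "-c", "sleep 10"]
  containers:
    - name: myapp
      image: myapp:latest
      ports:
        - containerPort: 8080
```

Kubernetes runs wait-a-bit to completion before starting myapp, which is exactly the ordering guarantee init containers provide.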
By using init containers with BusyBox and Docker, you can better prepare your app containers to run vital workflows before they spin up.
Running an HTTP Web Server
Since the BusyBox container image helps you create a basic Linux environment, we can use that environment to run compiled Linux applications.
As a result, we can use our BusyBox base image to create custom executables, which — in this case — support a web app powered by a Go server. We’d like to shoutout developer Soham Kamani for highlighting this example!
How is this possible? To simplify the process, Soham accomplished this by:

Creating a BusyBox container using the Docker CLI (enabling us to run common commands).
Running custom executables after creating a custom Golang “hello world” program, and creating a companion Dockerfile to support it.
Building and running a Docker image using BusyBox as the base.
Creating a server.go file, compiling it, and running it as a web server using Docker components.

 
BusyBox lets you tackle this workflow while creating a final image that’s very slim. It gives developers an environment where their applications can run, thrive, scale, and deploy effectively. You can even manage your images and containers easily with Docker Desktop, if you prefer a visual interface.
You can read through the entire tutorial here, and view the sample code on GitHub. Want to explore more Go-based server deployments? Check out our Caddy 2 image guide.
This is not an exhaustive list of BusyBox use cases. However, these examples do showcase how creative you can get, even with a simple Linux base image.
Getting Started with the BusyBox Image
Hopefully you’ve discovered how the BusyBox image punches above its weight in terms of functionality. Luckily, using the BusyBox image is equally simple. Here’s how to get started in a Docker context.
First, run BusyBox as a shell with the following command:

$ docker run -it --rm busybox

 
This lets you execute commands within your BusyBox system, since you’re now effectively sh-ing into your environment. The -it flag combines -i and -t, which keeps STDIN open and allocates a pseudo-tty; that pseudo-tty tells Docker to create a virtual terminal session within your BusyBox container. The --rm flag tells Docker to tidy up your container and remove the filesystem when it exits.
Next, you’ll create a Dockerfile for your statically-compiled BusyBox binary. Here’s how that basic Dockerfile could look:

FROM busybox
COPY ./my-static-binary /my-static-binary
CMD ["/my-static-binary"]

 
Note that you’ll have to complete this compilation in another location, like a Docker container. This is possible with another Linux image like Alpine, but BusyBox is perfect for situations where heavy extensibility isn’t needed.
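One common way to handle that “other location” is a multi-stage build, so the compiler never ships in the final image. A sketch, assuming a C source file my-static-binary.c (the file names mirror the Dockerfile above but are otherwise our assumption):

```dockerfile
# Build stage: compile a fully static binary with Alpine's musl toolchain
FROM alpine:3.16 AS build
RUN apk add --no-cache build-base
WORKDIR /src
COPY my-static-binary.c .
RUN gcc -static -o /my-static-binary my-static-binary.c

# Final stage: only the binary rides on top of BusyBox
FROM busybox:musl
COPY --from=build /my-static-binary /my-static-binary
CMD ["/my-static-binary"]
```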
Lastly, always choose the variant that best fits your needs. You can use busybox:uclibc, busybox:glibc, or busybox:musl as required. The uclibc and musl variants are statically compiled, while the glibc variant stems from Debian.
Docker and BusyBox Equal Simplicity
BusyBox is an essential tool for developers who love simplistic Linux. It lets you create powerful, customized Linux executables within a stripped-down (yet accommodating) Linux environment. Use cases are diverse, and the BusyBox image helps reduce bloat.
Both Docker and BusyBox work well together, while being inclusive of popular, related technologies like Kubernetes. Despite the BusyBox image’s small size, it unlocks many exciting development possibilities that are continually evolving. Visit Docker Hub to learn more and quickly pull your first BusyBox image.
Source: https://blog.docker.com/feed/