Quickly Spin Up New Development Projects with Awesome Compose

Containers optimize our daily development work. They’re standardized, so that we can easily switch between development environments — either migrating to testing or reusing container images for production workloads.
However, a challenge arises when you need more than one container. For example, you may develop a web frontend connected to a database backend with both running inside containers. While possible, this approach risks negating some (or all) of that container magic, since we must also consider storage interaction, network interaction, and port configurations. Those added complexities are tricky to navigate.
How Docker Compose Can Help
Docker Compose streamlines many development workloads based around multi-container implementations. One such example is a WordPress website that’s protected with an NGINX reverse proxy, and requires a MySQL database backend.
Alternatively, consider an eCommerce platform with a complex microservices architecture. Each service runs inside its own container — from the product catalog, to the shopping cart, to payment processing, and, finally, product shipping. These services share a database backend container and use a Redis container for caching and performance.
Maintaining a functional eCommerce platform therefore means running several container instances, and that alone doesn't address the added challenges of scalability or reliable performance.
While Docker Compose lets us create our own solutions, building the necessary Dockerfile scripts and YAML files can take some time. To simplify these processes, Docker introduced the open source Awesome Compose library in March 2020. Developers can now access pre-built samples to kickstart their Docker Compose projects.
What does that look like in practice? Let’s first take a more detailed look at Docker Compose. Next, we’ll explore step-by-step how to spin up a new development project using Awesome Compose.
Having some practical knowledge of Docker concepts and base commands is helpful while following along. However, this isn’t required! If you’d like to brush up or become familiarized with Docker, check out our orientation page and our CLI reference page.
How Docker Compose Works
Docker Compose is based on a compose.yaml file. This file specifies the platform’s building blocks — typically referencing active ports and the necessary, standalone Docker container images.
The following snippets from a compose.yaml file describe a WordPress site with a MySQL database, a WordPress frontend, and an NGINX reverse proxy:
We’re using three separate Docker images in this example: MySQL, WordPress, and NGINX. Each of these three containers has its own characteristics, such as network ports and volumes.

mysql:
  image: mysql:8.0.28
  container_name: demomysql
  networks:
    - network
wordpress:
  depends_on:
    - mysql
  image: wordpress:5.9.1-fpm-alpine
  container_name: demowordpress
  networks:
    - network
nginx:
  depends_on:
    - wordpress
  image: nginx:1.21.4-alpine
  container_name: nginx
  ports:
    - 80:80
  volumes:
    - wordpress:/var/www/html

 
Originally, you’d have to use the docker run command to start each individual container. However, this makes it tricky to manage network and storage interactions across containers. It’s much more efficient to consolidate all necessary objects into a single Docker Compose file.
To help developers deploy baseline scenarios faster, Docker provides a GitHub repository of reusable sample environments called Awesome Compose. Let’s explore how to run these on your own machine.
How to Use Docker Compose
Getting Started
First, you’ll need to download and install Docker Desktop (for macOS, Windows, or Linux). Note that all example outputs in this article, however, come from a Windows Docker host.
You can verify that Docker is installed by running a simple docker run hello-world command:
C:>docker run hello-world
 
This should produce a welcome message from the hello-world image, indicating that things are working correctly.
You’ll also need Docker Compose on your machine (it ships with Docker Desktop). Similarly, you can verify this installation by running a basic docker compose command, which prints the command’s usage information:
 
C:>docker compose
Next, either locally download or clone the Awesome Compose GitHub repository. If you have Git running locally, simply enter the following command:
git clone https://github.com/docker/awesome-compose.git
If you’re not running Git, you can download the Awesome Compose repository as a ZIP file. You’ll then extract it within its own folder.
Adjusting Your Awesome Compose Code
After downloading Awesome Compose, jump into the appropriate subfolder and spin up your sample environment. For this example, we’ll use WordPress with MariaDB. You’ll then want to access your wordpress-mysql subfolder.
Next, open your compose.yaml file within your favorite editor and inspect its contents. Make the following changes in your provided YAML file:
 

Update line 9: volumes: - mariadb:/var/lib/mysql
Provide a complex password for the following variables:

MYSQL_ROOT_PASSWORD (line 12)
MYSQL_PASSWORD (line 15)
WORDPRESS_DB_PASSWORD (line 27)

Update line 30: volumes: mariadb (to reflect the name used in line 9 for this volume)

 
While this example has mariadb enabled, you can switch to a mysql example by commenting out image: mariadb:10.7 and uncommenting #image: mysql:8.0.27.
Your updated file should look like this:

services:
  db:
    # We use a mariadb image which supports both amd64 & arm64 architecture
    image: mariadb:10.7
    # If you really want to use MySQL, uncomment the following line
    #image: mysql:8.0.27
    #command: '--default-authentication-plugin=mysql_native_password'
    volumes:
      - mariadb:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=P@55W.RD123
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=P@55W.RD123
    expose:
      - 3306
      - 33060
  wordpress:
    image: wordpress:latest
    ports:
      - 80:80
    restart: always
    environment:
      - WORDPRESS_DB_HOST=db
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=P@55W.RD123
      - WORDPRESS_DB_NAME=wordpress

volumes:
  mariadb:

 
Save these file changes and close your editor.
Running Docker Compose
Starting up Docker Compose is easy. To begin, ensure you’re in the wordpress-mysql folder and run the following from the Command Prompt:
docker compose up -d
 
This command kicks off the startup process. It downloads your various container images from Docker Hub and then runs them. Now, enter the following Docker command to confirm your containers are running as intended:
docker compose ps
 
This command shows each running container and its active ports.
Verify that your WordPress app is active by navigating to http://localhost:80 in your browser — which should display the WordPress welcome page.
If you complete the required fields, it’ll redirect you to the WordPress dashboard, where you can start using WordPress. This experience is identical to running on a server or hosting environment.
Once testing is complete (or you’ve finished your daily development work), you can shut down your environment by entering the docker compose down command.
Reusing Your Environment
If you want to continue developing in this environment later, simply re-enter docker compose up -d. This brings the environment back up, with all of the previous information still in the MySQL database. Startup takes just a few seconds.
However, what if you want to reuse the same environment with a fresh database?
To bring down the environment and remove the volume — which we defined within compose.yaml — run the following command:
docker compose down -v
Now, if you restart your environment with docker compose up, Docker Compose will spin up a fresh WordPress instance. WordPress will have you configure your settings again, including the WordPress user, password, and website name.
While Awesome Compose sample projects work out of the box, always start with the README.md instructions file. You’ll typically need to update your sample YAML file with some environmental specifics — such as a password, username, or chosen database name. If you skip this step, the runtime won’t start correctly.
Awesome Compose Simplifies Multi-Container Management
Agile developers always need access to various application development-and-testing environments. Containers have been immensely helpful in providing this. However, more complex microservices architectures — which rely on containers running in tandem — are still quite challenging. Luckily, Docker Compose makes these management processes far more approachable.
Awesome Compose is Docker’s open-source library of sample workloads that empowers developers to quickly start using Docker Compose. The extensive library includes popular industry workloads such as ASP.NET, WordPress, and React web frontends. These can connect to MySQL, MariaDB, or MongoDB backends.
You can spin up samples from the Awesome Compose library in minutes. This lets you quickly deploy new environments locally or virtually. Our example also highlighted how easy it is to customize your Docker Compose YAML files and get started.
Now that you understand the basics of Awesome Compose, check out our other samples and explore how Docker Compose can streamline your next development project.
Source: https://blog.docker.com/feed/

Resources to Use Javascript, Python, Java, and Go with Docker

With so many programming and scripting languages out there, developers can tackle development projects any number of ways. However, some languages — like JavaScript, Python, and Java — have been perennial favorites. (We’ve previously touched on this while unpacking Stack Overflow’s 2022 Developer Survey results.)
 
Image courtesy of Joan Gamell, via Unsplash.
 
Many developers use Docker in tandem with these languages. We’ve seen our users create some amazing applications! Here are some resources and recommendations to level up your container game with these languages.
Getting Started with Docker
If you’ve never used Docker, you may want to familiarize yourself with some basic concepts first. You can learn the technical fundamentals of Docker and containerization via our “Orientation and Setup” guide and our introductory page. You’ll learn how containers work, and even how to harness tools like the Docker CLI or Docker Desktop.
Our Orientation page also serves as a foundation for many of our own official walkthroughs. This is a great resource if you’re completely new to Docker!
If you prefer hands-on learning, look no further than Shy Ruparel’s “Getting Started with Docker” video guide. Shy will introduce you to Docker’s architecture, essential CLI commands, Docker Desktop tips, and sample applications.
If you’re feeling comfortable with Docker, feel free to jump to your language-specific section using the links below. We’ve created language-specific workflows for each top language within our documentation (AKA “Our Language Modules” in this blog). These steps are linked below alongside some extra exploratory resources. We’ll also include some awesome-compose code samples to accelerate similar development projects — or to serve as inspiration.
Table of Contents

How to Use Docker with JavaScript
How to Use Docker with Python
How to Use Docker with Java
How to Use Docker with Go

 
 
How to Use Docker with JavaScript
JavaScript has been the programming world’s leading language for 10 years running. Luckily, there are also many ways to use JavaScript and Docker together. Check out these resources to harness JavaScript, Node.js, and other runtimes or frameworks with Docker.
Docker Node.js Modules
Before exploring further, it’s worth completing our learning modules for Node. These take you through the basics and set you up for increasingly-complex projects later on. We recommend completing these in order:

Overview for Node.js (covering learning objectives and containerization of your Node application)
Build your Node image
Run your image as a container
Use containers for development
Run your tests using Node.js and Mocha frameworks
Configure CI/CD for your application
Deploy your app

It’s also possible that you’ll want to explore more processes for building minimum viable products (MVPs) or pulling container images. You can read more by visiting the following links.
Other Essential Node Resources

Docker Docs: Building a Simple Todo List Manager with Node.js (creating a minimum viable product)
Docker Hub: The Node.js Official Image
Docker Hub: The docker/dev-environments-javascript image (contains Dockerfiles for building images used by the Docker Dev Environments feature)
GitHub: Official Docker and Node.js Best Practices (via the OpenJS Foundation)
GitHub: Awesome Compose sample #1 (building a Node.js application with an NGINX proxy and a Redis database)
GitHub: Awesome Compose samples #2 and #3 (building a React app with a Node backend and either a MySQL or MongoDB database)

 
How to Use Docker with Python
Python has consistently been one of our developer community’s favorite languages. From building simple sample apps to leveraging machine learning frameworks, the language supports a variety of workloads. You can learn more about the dynamic duo of Python and Docker via these links.
Docker Python Modules
Similar to Node.js, these pages from our documentation are a great starting point for harnessing Python and Docker:

Overview for Python
Build your Python image
Run your image as a container
Use containers for development (featuring Python and MySQL)
Configure CI/CD for your application
Deploy your app

Other Essential Python Resources

Docker Hub: The Python Official Image
Docker Hub: The PyPy Official Image (a fast, compliant alternative implementation of the Python language)
Docker Hub: The Hylang Official Image (for converting expressions and data structures into Python’s abstract syntax tree (AST))
Docker Blog: How to “Dockerize” Your Python Applications (tips for using CLI commands, Docker Desktop, and third-party libraries to containerize your app)
Docker Blog: Tracking Global Vaccination Rates with Docker, Python, and IoT (an informative, beginner-friendly tutorial for running Python containers atop Raspberry Pis)
GitHub: Awesome Compose sample #1 (building a sample app using both Python/Flask and a Redis database)
GitHub: Awesome Compose samples #2 and #3 (building a Python/Flask app with an NGINX proxy and either a MongoDB or MySQL database)

 
How to Use Docker with Java
Both its maturity and the popularity of Spring Boot have contributed to Java’s growth over the years. It’s easy to pair Java with Docker! Here are some resources to help you do it.
Docker Java Modules
Like with Python, these modules can help you hit the ground running with Java and Docker:

Overview for Java
Build your Java image
Run your image as a container
Use containers for development
Run your tests
Configure CI/CD for your application
Deploy your app

Other Essential Java Resources

Docker Hub: The openjdk Official Image (use this instead of the Java Official Image, which is now deprecated)
Docker Hub: The Apache Tomcat Official Image (an open source web server that implements the Java Servlet and JavaServer Pages (JSP) specifications)
Docker Hub: The ibmjava Official Image (implementing IBM’s SDK, Java Technology Edition Docker Image)
Docker Hub: The Apache Groovy Official Image (an optionally-typed, dynamic language for statically compiling Java applications and boosting productivity)
Docker Hub: The eclipse-temurin Official Image (provides code and processes for building runtime binaries or associated technologies, featured in the following “9 Tips” blog post)
Docker Blog: 9 Tips for Containerizing Your Spring Boot Code
Docker Blog: Kickstart Your Spring Boot Application Development
GitHub: Awesome Compose sample #1 (building a React app with a Spring backend and a MySQL database)
GitHub: Awesome Compose sample #2 (building a Java Spark application with a MySQL database)
GitHub: Awesome Compose sample #3 (building a simple Spark Java application)
GitHub: Awesome Compose sample #4 (building a Java app with the Spring Framework and a Postgres database)

 
How to Use Docker with Go
Last, but not least, Go has become a popular language for Docker users. According to Stack Overflow’s 2022 Developer Survey, over 10,000 JavaScript users (of roughly 46,000) want to start or continue developing in Go or Rust. It’s often positioned as an alternative to C++, yet many Go users originally transition over from Python and Ruby.
There’s tremendous overlap there. Go’s ecosystem is growing, and it’s become increasingly useful for scaling workloads. Check out these links to jumpstart your Go and Docker development.
Docker Go Modules

Overview for Go
Build your Go image
Run your image as a container
Use containers for development
Run your tests using Go test
Configure CI/CD for your application
Deploy your app

Other Essential Go Resources

Docker Hub: The Golang Official Image
Docker Hub: The Caddy Official Image (for building enterprise-ready web servers with automatic HTTPS)
Docker Hub: The circleci/golang image (for extending the Golang Official Image to work better with CircleCI)
Docker Blog: Deploying Web Applications Quicker and Easier with Caddy 2 (creating a Caddy 2 web server and Dockerizing any associated applications)
GitHub: Awesome Compose samples #1 and #2 (building a Go server with an NGINX proxy and either a Postgres or MySQL database)
GitHub: Awesome Compose sample #3 (building an NGINX proxy with a Go backend)
GitHub: Awesome Compose sample #4 (building a TRAEFIK proxy with a Go backend)

 
Build in the Language You Want with Docker
Docker supports all of today’s leading languages. It’s easy to containerize your application and deploy cross-platform without having to make concessions. You can bring your workflows, your workloads, and, ultimately, your users along.
And that’s just the tip of the iceberg. We welcome developers who work in other languages like Rust, TypeScript, C#, and many more. Docker images make it easy to create these applications from scratch.
We hope these resources have helped you discover and explore how Docker works with your preferred language. Visit our language-specific guides page to learn key best practices and image management tips for using these languages with Docker Desktop.
Source: https://blog.docker.com/feed/

How to Build and Deploy a URL Shortener Using TypeScript and Nest.js

At Docker, we’re incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. Are you working on something awesome with Docker? Send your contributions to Ajeet Singh Raina (@ajeetraina) on the Docker Community Slack and we might feature your work!
 
Over the last five years, TypeScript’s popularity has surged among enterprise developers. In Stack Overflow’s 2022 Developer Survey, TypeScript ranked third in the “most wanted” category. Stack Overflow reserves this distinction for technologies that developers aren’t yet using, but have expressed interest in adopting.
 
Data courtesy of Stack Overflow.
 
TypeScript’s incremental adoption is attributable to enhancements in developer code quality and comprehensibility. Overall, TypeScript encourages developers to thoroughly document their code and inspires greater confidence through ease of use. TypeScript offers every modern JavaScript feature while introducing powerful concepts like interfaces, unions, and intersection types. It improves developer productivity by clearly displaying syntax errors during compilation, rather than letting things fail at runtime.
However, remember that every programming language comes with certain drawbacks, and TypeScript is no exception. Long compilation times and a steeper learning curve for new JavaScript users are most noteworthy.
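The interface, union, and intersection types mentioned above are easiest to see in a small sketch. Every name below is illustrative, not part of any real API:

```typescript
interface ShortLink {
  url: string;
}

interface Persisted {
  id: string;
}

// Union type: a lookup either succeeds with a URL or fails with an error code.
type LookupResult = { url: string } | { error: string; code: number };

// Intersection type: a stored link carries both a URL and a persistence id.
type StoredLink = ShortLink & Persisted;

function describeResult(result: LookupResult): string {
  // The compiler forces us to narrow the union before touching either field;
  // reading result.code in the first branch would be a compile-time error.
  return 'url' in result ? `found ${result.url}` : `error ${result.code}`;
}

const saved: StoredLink = { id: 'abc123', url: 'https://docker.com' };
console.log(describeResult({ url: saved.url }));              // found https://docker.com
console.log(describeResult({ error: 'missing', code: 404 })); // error 404
```

Mistakes like passing a plain string where a LookupResult is expected surface at compile time, which is exactly the productivity win described above.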
Building Your Application
In this tutorial, you’ll learn how to build a basic URL shortener from scratch using TypeScript and Nest.
First, you’ll create a basic application in Nest without using Docker. You’ll see how the application lets you build a simple URL shortening service in Nest and TypeScript, with a Redis backend. Next, you’ll learn how Docker Compose can help you jointly run a Nest.js, TypeScript, and Redis backend to power microservices. Let’s jump in.
Getting Started
The following key components are essential to completing this walkthrough:

Node.js
NPM
VS Code
Docker Desktop 

 
Before starting, make sure you have Node installed on your system. Then, follow these steps to build a simple web application with TypeScript.
Creating a Nest Project
Nest is currently the fastest growing server-side development framework in the JavaScript ecosystem. It’s ideal for writing scalable, testable, and loosely-coupled applications. Nest provides a level of abstraction above common Node.js frameworks and exposes their APIs to the developer. Under the hood, Nest makes use of robust HTTP server frameworks like Express (the default) and can optionally use Fastify as well! It supports databases like PostgreSQL, MongoDB, and MySQL. NestJS is heavily influenced by Angular, React, and Vue — while offering dependency injection right out of the box.
For first-time users, we recommend creating a new project with the Nest CLI. First, enter the following command to install the Nest CLI.

npm install -g @nestjs/cli

 
Next, let’s create a new Nest.js project directory called backend.

mkdir backend

 
It’s time to populate the directory with the initial core Nest files and supporting modules. From your new backend directory, run Nest’s bootstrapping command. We’ll call our new application link-shortener:

nest new link-shortener

⚡ We will scaffold your app in a few seconds..

CREATE link-shortener/.eslintrc.js (665 bytes)
CREATE link-shortener/.prettierrc (51 bytes)
CREATE link-shortener/README.md (3340 bytes)
CREATE link-shortener/nest-cli.json (118 bytes)
CREATE link-shortener/package.json (1999 bytes)
CREATE link-shortener/tsconfig.build.json (97 bytes)
CREATE link-shortener/tsconfig.json (546 bytes)
CREATE link-shortener/src/app.controller.spec.ts (617 bytes)
CREATE link-shortener/src/app.controller.ts (274 bytes)
CREATE link-shortener/src/app.module.ts (249 bytes)
CREATE link-shortener/src/app.service.ts (142 bytes)
CREATE link-shortener/src/main.ts (208 bytes)
CREATE link-shortener/test/app.e2e-spec.ts (630 bytes)
CREATE link-shortener/test/jest-e2e.json (183 bytes)

? Which package manager would you ❤️ to use? (Use arrow keys)
❯ npm
yarn
pnpm

 
All three package managers are usable, but we’ll choose npm for the purposes of this walkthrough.

Which package manager would you ❤️ to use? npm
✔ Installation in progress… ☕

🚀 Successfully created project link-shortener
👉 Get started with the following commands:

$ cd link-shortener
$ npm run start

Thanks for installing Nest 🙏
Please consider donating to our open collective
to help us maintain this package.

🍷 Donate: https://opencollective.com/nest

 
Once the command is executed successfully, it creates a new link-shortener project directory with node modules and a few other boilerplate files. It also creates a new src/ directory populated with several core files as shown in the following directory structure:
tree -L 2 -a
.
└── link-shortener
├── dist
├── .eslintrc.js
├── .gitignore
├── nest-cli.json
├── node_modules
├── package.json
├── package-lock.json
├── .prettierrc
├── README.md
├── src
├── test
├── tsconfig.build.json
└── tsconfig.json

5 directories, 9 files
 
Let’s look at the core files ending with .ts (TypeScript) under the /src directory:
src % tree
.
├── app.controller.spec.ts
├── app.controller.ts
├── app.module.ts
├── app.service.ts
└── main.ts

0 directories, 5 files
 
Nest embraces modularity. Accordingly, two of the most important Nest app components are controllers and providers. Controllers determine how you handle incoming requests: they accept a request, perform some kind of operation, and return the response. Meanwhile, providers are extra classes that you can inject into controllers or into other providers, granting various supplemental functionality. We always recommend reading up on providers and controllers to better understand how they work.
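A framework-free sketch can make the controller/provider split concrete. All names here are illustrative; Nest’s decorators automate this wiring for you:

```typescript
class GreetingProvider {
  // A provider: a plain class offering reusable functionality.
  greet(name: string): string {
    return `Hello ${name}!`;
  }
}

class GreetingController {
  // The provider is injected through the constructor instead of being
  // constructed here, which makes it easy to swap (e.g. for a test double).
  constructor(private readonly provider: GreetingProvider) {}

  // The controller accepts a request, delegates the work, returns a response.
  handleRequest(name: string): string {
    return this.provider.greet(name);
  }
}

const controller = new GreetingController(new GreetingProvider());
console.log(controller.handleRequest('Docker')); // Hello Docker!
```

In Nest, the @Injectable and @Module decorators take over this constructor wiring, which is what the files below rely on.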
The app.module.ts is the root module of the application; it bundles up the controllers and providers the application uses.

cat app.module.ts
import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';

@Module({
imports: [],
controllers: [AppController],
providers: [AppService],
})
export class AppModule {}

 
As shown in the above file, AppModule is just an empty class. Nest’s @Module decorator is responsible for providing the config that lets Nest build a functional application from it.
First, app.controller.ts exports a basic controller with a single route. The app.controller.spec.ts is the unit test for the controller. Second, app.service.ts is a basic service with a single method. Third, main.ts is the entry file of the application. It bootstraps the application by calling NestFactory.create, then starts the new application by having it listen for inbound HTTP requests on port 3000.

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
const app = await NestFactory.create(AppModule);
await app.listen(3000);
}
bootstrap();

 
Running the Application
Once the installation is completed, run the following command to start your application:

npm run start

> link-shortener@0.0.1 start
> nest start

[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [NestFactory] Starting Nest application...
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [InstanceLoader] AppModule dependencies initialized +24ms
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [RoutesResolver] AppController {/}: +4ms
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [RouterExplorer] Mapped {/, GET} route +2ms
[Nest] 68686 - 05/31/2022, 5:50:59 PM LOG [NestApplication] Nest application successfully started +1ms

This command starts the app with the HTTP server listening on the port defined in the src/main.ts file. Once the application is successfully running, open your browser and navigate to http://localhost:3000. You should see the “Hello World!” message:

 
Let’s now add a new test for our new endpoint in app.service.spec.ts:

import { Test, TestingModule } from "@nestjs/testing";
import { AppService } from "./app.service";
import { AppRepositoryTag } from "./app.repository";
import { AppRepositoryHashmap } from "./app.repository.hashmap";
import { mergeMap, tap } from "rxjs";

describe('AppService', () => {
  let appService: AppService;

  beforeEach(async () => {
    const app: TestingModule = await Test.createTestingModule({
      providers: [
        { provide: AppRepositoryTag, useClass: AppRepositoryHashmap },
        AppService,
      ],
    }).compile();

    appService = app.get<AppService>(AppService);
  });

  describe('retrieve', () => {
    it('should retrieve the saved URL', done => {
      const url = 'docker.com';
      appService.shorten(url)
        .pipe(mergeMap(hash => appService.retrieve(hash)))
        .pipe(tap(retrieved => expect(retrieved).toEqual(url)))
        .subscribe({ complete: done });
    });
  });
});

 
Before running our tests, let’s implement the function in app.service.ts:

import { Inject, Injectable } from '@nestjs/common';
import { map, Observable } from 'rxjs';
import { AppRepository, AppRepositoryTag } from './app.repository';

@Injectable()
export class AppService {
  constructor(
    @Inject(AppRepositoryTag) private readonly appRepository: AppRepository,
  ) {}

  getHello(): string {
    return 'Hello World!';
  }

  shorten(url: string): Observable<string> {
    const hash = Math.random().toString(36).slice(7);
    return this.appRepository.put(hash, url).pipe(map(() => hash)); // <-- here
  }

  retrieve(hash: string): Observable<string> {
    return this.appRepository.get(hash); // <-- and here
  }
}
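One detail worth pausing on: the hash in shorten() comes from Math.random(). A standalone sketch of that expression, with a caveat:

```typescript
// Math.random().toString(36) yields something like "0.k3j9xq2w"; slice(7)
// keeps only the tail, so the result's length varies and collisions are
// possible. That's fine for a demo, but a real service would use a dedicated
// ID generator (nanoid, for instance; that swap is our suggestion, not the
// tutorial's).
const shortHash = (): string => Math.random().toString(36).slice(7);

console.log(shortHash());
```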

 
Run these tests once more to confirm that everything passes, before we begin storing the data in a real database.
Add a Database
So far, we’re just storing our mappings in memory. That’s fine for testing, but we’ll need to store them somewhere more centralized and durable in production. We’ll use Redis, a popular key-value store available on Docker Hub.
Let’s install this Redis client by running the following command from the backend/link-shortener directory:

npm install redis@4.1.0 --save

 
Inside /src, create a new implementation of the AppRepository interface that uses Redis. We’ll call this file app.repository.redis.ts:

import { AppRepository } from './app.repository';
import { Observable, from, mergeMap } from 'rxjs';
import { createClient, RedisClientType } from 'redis';

export class AppRepositoryRedis implements AppRepository {
  private readonly redisClient: RedisClientType;

  constructor() {
    const host = process.env.REDIS_HOST || 'redis';
    const port = +process.env.REDIS_PORT || 6379;
    this.redisClient = createClient({
      url: `redis://${host}:${port}`,
    });
    from(this.redisClient.connect()).subscribe({ error: console.error });
    this.redisClient.on('connect', () => console.log('Redis connected'));
    this.redisClient.on('error', console.error);
  }

  get(hash: string): Observable<string> {
    return from(this.redisClient.get(hash));
  }

  put(hash: string, url: string): Observable<string> {
    return from(this.redisClient.set(hash, url)).pipe(
      mergeMap(() => from(this.redisClient.get(hash))),
    );
  }
}

 
Finally, it’s time to change the provider in app.module.ts to our new Redis repository from the in-memory version:

import { Module } from '@nestjs/common';
import { AppController } from './app.controller';
import { AppService } from './app.service';
import { AppRepositoryTag } from './app.repository';
import { AppRepositoryRedis } from './app.repository.redis';

@Module({
  imports: [],
  controllers: [AppController],
  providers: [
    AppService,
    { provide: AppRepositoryTag, useClass: AppRepositoryRedis }, // <-- here
  ],
})
export class AppModule {}

 
Finalize the Backend
Head back to app.controller.ts and create another endpoint for redirect:

import { Body, Controller, Get, Param, Post, Redirect } from '@nestjs/common';
import { AppService } from './app.service';
import { map, Observable, of } from 'rxjs';

interface ShortenResponse {
  hash: string;
}

interface ErrorResponse {
  error: string;
  code: number;
}

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Get()
  getHello(): string {
    return this.appService.getHello();
  }

  @Post('shorten')
  shorten(@Body('url') url: string): Observable<ShortenResponse | ErrorResponse> {
    if (!url) {
      return of({ error: `No url provided. Please provide in the body. E.g. {'url':'https://google.com'}`, code: 400 });
    }
    return this.appService.shorten(url).pipe(map(hash => ({ hash })));
  }

  @Get(':hash')
  @Redirect()
  retrieveAndRedirect(@Param('hash') hash): Observable<{ url: string }> {
    return this.appService.retrieve(hash).pipe(map(url => ({ url })));
  }
}

 
Containerizing the TypeScript Application
Docker helps you containerize your TypeScript app, letting you bundle together your complete TypeScript application, runtime, configuration, and OS-level dependencies. This includes everything needed to ship a cross-platform, multi-architecture web application. 
Let’s see how you can easily run this app inside a Docker container using a Docker Official Image. First, you’ll need to download Docker Desktop. Docker Desktop accelerates the image-building process while making useful images more discoverable. Complete the installation process once your download is finished.
Docker uses a Dockerfile to specify an image’s “layers.” Each layer stores important changes building upon the base image’s standard configuration. Create the following empty Dockerfile in your Nest project.
 
touch Dockerfile
 
Use your favorite text editor to open this Dockerfile. You’ll then need to define your base image. Let’s also quickly create a directory to house our image’s application code. This acts as the working directory for your application:
 
FROM node:16
WORKDIR /app
 
The following COPY instruction copies the files from the host machine to the container image:
 
COPY . .
 
Next, a RUN instruction installs your application’s dependencies inside the image:
 
RUN npm install
 
Finally, this closing line tells Docker to compile and run your application:
 
CMD ["npm", "run", "start:dev"]
 
Here’s your complete Dockerfile:

FROM node:16
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["npm", "run", "start:dev"]

 
You’ve effectively learned how to build a Dockerfile for a sample TypeScript app. Next, let’s see how to create an associated Docker Compose file for this application. Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you’ll use a YAML file to configure your services. Then, with a single command, you can create and start every service from your configuration.
Defining Services Using a Compose File
It’s time to define your services in a Docker Compose file:

services:
  redis:
    image: 'redis/redis-stack'
    ports:
      - '6379:6379'
      - '8001:8001'
    networks:
      - urlnet
  dev:
    build:
      context: ./backend/link-shortener
      dockerfile: Dockerfile
    environment:
      REDIS_HOST: redis
      REDIS_PORT: 6379
    ports:
      - '3000:3000'
    volumes:
      - './backend/link-shortener:/app'
    depends_on:
      - redis
    networks:
      - urlnet

networks:
  urlnet:

 
Your example application has the following parts:

Two services backed by Docker images: your frontend dev app and your backend database redis
The redis/redis-stack Docker image is an extension of Redis that adds modern data models and processing engines to provide a complete developer experience. We use port 8001 for RedisInsight — a visualization tool for understanding and optimizing Redis data.
The frontend, accessible via port 3000
The depends_on parameter, letting you create your backend service before the frontend service starts
One persistent volume, attached to the backend
The environment variables for your Redis database

 
Once you’ve stopped the frontend and backend services that we ran in the previous section, let’s build and start our services using the docker compose up command:

docker compose up -d --build

 
Note: If you’re using Docker Compose v1, the command line syntax is docker-compose with a hyphen. If you’re using v2, which ships with Docker Desktop, the hyphen is omitted and docker compose is correct. 

docker compose ps
NAME COMMAND SERVICE STATUS PORTS
link-shortener-js-dev-1 "docker-entrypoint.s…" dev running 0.0.0.0:3000->3000/tcp
link-shortener-js-redis-1 "/entrypoint.sh" redis running 0.0.0.0:6379->6379/tcp, 0.0.0.0:8001->8001/tcp

 
Just like that, you’ve created and deployed your TypeScript URL shortener! You can use this in your browser like before. If you visit the application at http://localhost:3000, you should see a friendly “Hello World!” message. Use the following curl command to shorten a new link:

curl -XPOST -d "url=https://docker.com" localhost:3000/shorten

 
Here’s your response:

{"hash":"l6r71d"}

 
This hash may differ on your machine. You can use it to redirect to the original link. Open any web browser and visit http://localhost:3000/l6r71d to access Docker’s website.
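The service generates that short hash itself. Here’s a minimal sketch of one common approach — purely illustrative, and not necessarily how app.service.ts implements it:

```typescript
// Hypothetical sketch: build a short, URL-safe hash from a fixed alphabet.
// A real service would also check Redis for collisions before storing it.
function generateHash(length = 6): string {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let hash = '';
  for (let i = 0; i < length; i++) {
    hash += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return hash;
}

console.log(generateHash()); // e.g. a value shaped like "l6r71d"
```

The short key is then stored alongside the full URL, so the GET /:hash route can look it up and redirect.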
Viewing the Redis Keys
You can view the Redis keys with the RedisInsight tool by visiting http://localhost:8001.
 

Viewing the Compose Logs
You can use docker compose logs -f to check and view your Compose logs:

link-shortener-js-dev-1 | [6:17:19 AM] Starting compilation in watch mode…
link-shortener-js-dev-1 |
link-shortener-js-dev-1 | [6:17:22 AM] Found 0 errors. Watching for file changes.
link-shortener-js-dev-1 |
link-shortener-js-dev-1 | [Nest] 31 – 06/18/2022, 6:17:23 AM LOG [NestFactory] Starting Nest application…
link-shortener-js-dev-1 | [Nest] 31 – 06/18/2022, 6:17:23 AM LOG [InstanceLoader] AppModule dependencies initialized +21ms
link-shortener-js-dev-1 | [Nest] 31 – 06/18/2022, 6:17:23 AM LOG [RoutesResolver] AppController {/}: +3ms
link-shortener-js-dev-1 | [Nest] 31 – 06/18/2022, 6:17:23 AM LOG [RouterExplorer] Mapped {/, GET} route +1ms
link-shortener-js-dev-1 | [Nest] 31 – 06/18/2022, 6:17:23 AM LOG [RouterExplorer] Mapped {/shorten, POST} route +0ms
link-shortener-js-dev-1 | [Nest] 31 – 06/18/2022, 6:17:23 AM LOG [RouterExplorer] Mapped {/:hash, GET} route +1ms
link-shortener-js-dev-1 | [Nest] 31 – 06/18/2022, 6:17:23 AM LOG [NestApplication] Nest application successfully started +1ms
link-shortener-js-dev-1 | Redis connected

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application:
 

 
You can also inspect important logs via the Docker Dashboard:
 

Conclusion
Congratulations! You’ve successfully learned how to build and deploy a URL shortener with TypeScript and Nest. Using a single YAML file, we demonstrated how Docker Compose helps you easily build and deploy a TypeScript-based URL shortener app in seconds. With just a few extra steps, you can apply this tutorial while building applications with much greater complexity.
Happy coding.
Quelle: https://blog.docker.com/feed/

Why Containers and WebAssembly Work Well Together

Developers favor the path of least resistance when building, shipping, and deploying their applications. It’s one of the reasons why containerization exists — to help developers easily run cross-platform apps by bundling their code and dependencies together.
While we’ve built upon that with Docker Desktop and Docker Hub, other groups like the World Wide Web Consortium (W3C) have also created complementary tools. This is how WebAssembly (AKA “Wasm”) was born.
Though some have asserted that Wasm is a replacement for Docker, we actually view Wasm as a companion technology. Let’s look at WebAssembly, then dig into how it and Docker together can support today’s demanding workloads.
What is WebAssembly?
WebAssembly is a compact binary format for packaging code to a portable compilation target. It interoperates with JavaScript and serves as a compilation target for languages like C++ and Rust, helping developers deploy client-server web applications. In a cloud context, Wasm can also access the filesystem, environment variables, or the system clock.
Wasm uses modules — which contain stateless, browser-compiled WebAssembly code — and host runtimes to operate. Guest applications (another type of module) can run within these host applications as executables. Finally, the WebAssembly System Interface (WASI) brings a standardized set of APIs to enable greater functionality and access to system resources.
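To get a feel for how compact these modules are, here’s a quick Node-compatible TypeScript snippet that validates the minimal eight-byte Wasm module — just the magic number and the format version:

```typescript
// The smallest valid WebAssembly module: the "\0asm" magic bytes plus version 1.
// WebAssembly.validate() checks the binary without instantiating anything.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic bytes
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

console.log(WebAssembly.validate(emptyModule)); // true
```

Real modules add sections for functions, memory, and exports on top of this header, but even full applications commonly stay far smaller than container images.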
Developers use WebAssembly and WASI to do things like:

Build cross-platform applications and games
Reuse code between platforms and applications
Run applications that are Wasm- and WASI-compilable on one runtime
Compile WebAssembly files to a single target for dependencies and code

 
How does WebAssembly fit into a containerized world?
If you’re familiar with Docker, you may already see some similarities. And that’s okay! Matt Butcher, CEO of Fermyon, explained how Docker and Wasm can unite to achieve some pretty powerful development outcomes. Given the rise of cloud computing, having multiple ways to securely run any software atop any hardware is critical. That’s what makes virtualized, isolated runtime environments like Docker containers and Wasm so useful.
 

 
Matt highlights Docker Co-Founder Solomon Hykes’ original tweet on Docker and Wasm, yet is quick to mention Solomon’s follow-up message regarding Wasm. This sheds some light on how Docker and Wasm might work together in the near future:
 

“So will wasm replace Docker?” No, but imagine a future where Docker runs linux containers, windows containers and wasm containers side by side. Over time wasm might become the most popular container type. Docker will love them all equally, and run it all https://t.co/qVq3fomv9d
— Solomon Hykes (@solomonstre) March 28, 2019

 
Accordingly, Docker and Wasm can be friends — not foes — as cloud computing and microservices grow more sophisticated. Here are some key points that Matt shared on the subject.
Let Use Cases Drive Your Technology Choices
We have to remember that the sheer variety of use cases out there far exceeds the capabilities of any one tool. This means that Docker will be a great match for some applications, WebAssembly for others, and so on. While Docker excels at building and deploying cross-platform cloud applications, Wasm is well-suited to portable, binary code compilation for browser-based applications.
Developers have long favored WebAssembly while creating their multi-architecture builds. This remains a sticking point for Wasm users, but the comparative gap has been narrowed with the launch of Docker Buildx. This helps developers achieve very similar results as those using Wasm. You can learn more about this process in our recent blog post.
During his presentation, Matt introduced what he called “three different categories of compute infrastructure.” Each serves a different purpose, and has unique relevance both currently and historically:

Virtual machines (heavyweight class) – AKA the “workhorse” of the cloud, VMs package together an entire operating system — kernels and drivers included, plus code or data — to run an application virtually on compatible hardware. VMs are also great for OS testing and solving infrastructure challenges related to servers, but they’re often multiple GB in size and consequently start up very slowly.
Containers (middleweight class) – Containers make it remarkably easy to package all application code, dependencies, and other components together and run cross-platform. Container images measure just tens to hundreds of MB in size, and start up in seconds.
WebAssembly (lightweight class) – A step smaller than containers, WebAssembly binaries are minuscule, can run in a secure sandbox, and start up nearly instantly since they were initially built for web browsers.

 
Matt is quick to point out that he and many others expected containers to blow VMs out of the water as the next big thing. However, despite Docker’s rapid rise in popularity, VMs have kept growing. There’s no zero-sum game when it comes to technology. Docker hasn’t replaced VMs, and similarly, WebAssembly isn’t poised to displace the containers that came before it. As Matt says, “each has its niche.”
Industry Experts Praise Both Docker and WebAssembly
A recent New Stack article digs into this further. Focusing on how WebAssembly can replace Docker is “missing the point,” since the main drivers behind these adoption decisions should be business use cases. One important WebAssembly advantage revolves around edge computing. However, Docker containers are now working more and more harmoniously with edge use cases. For example, exciting IoT possibilities await, while edge containers can power streaming, real-time processing, analytics, augmented reality, and more.
If we reference Solomon’s earlier tweet, he alludes to this when envisioning Docker running Wasm containers. The trick is identifying which apps are best suited for which technology. Applications that need heavy filesystem control and IO might favor Docker. The same applies if they need sockets layer access. Meanwhile, Wasm is optimal for fuss-free web server setup, easy configuration, and minimizing costs.
With both technologies, developers are continuously unearthing both new and existing use cases.
Docker and Wasm Team Up: The Finicky Whiskers Game
Theoretical applications are promising, but let’s see something in action. Near the end of his talk, Matt revealed that the Finicky Whiskers game he demoed to start the session actually leveraged Docker, WebAssembly, and Redis. These three technologies comprised the game engine to form a lightning-fast backend:
 
Matt’s terminal displays backend activity as he interacts with the game.
 
Finicky Whiskers relies on eight separate WebAssembly modules, five of which Matt covered during his session. In this example, each button click sends an HTTP request to Spin — Fermyon’s framework for running web apps, microservices, and server applications.
These clicks generate successively more Wasm modules to help the game run. These modules spin up or shut down almost instantly in response to every user action. The total number of invocations changes with each module. Modules also grab important files that support core functionality within the application. Though masquerading as a game, Finicky Whiskers is actually a load generator.
A Docker container has a running instance of Redis and pubsub, which are used to broker messages and access key/value pairs. This forms a client-server bridge, and lets Finicky Whiskers communicate. Modules perform data validation before pushing it to the Redis pubsub implementation. Each module can communicate with the services within this Docker container — along with the file server — proving that Docker and Wasm can jointly power applications:
 
Specifically, Matt used Wasm to rapidly start and stop his microservices. It also helped these services perform simple tasks. Meanwhile, Docker helped keep the state and facilitate communication between Wasm modules and user input. It’s the perfect mix of low resource usage, scalability, long-running pieces, and load predictability.
Containers and WebAssembly are Fast Friends, Not Mortal Enemies
As we’ve demonstrated, containers and WebAssembly are companion technologies. One isn’t meant to defeat the other. They’re meant to coexist, and in many cases, work together to power some pretty interesting applications. While Finicky Whiskers wasn’t the most complex example, it illustrates this point perfectly.
In instances where these technologies stand apart, they do so because they’re supporting unique workloads. Instead of declaring one technology better than the other, it’s best to question where each has its place.
We’re excited to see what’s next for Wasm at Docker. We also want Docker to lend a helping hand where it can with Wasm applications. Our own Jake Levirne, Head of Product, says it best:
“Wasm is complementary to Docker — in whatever way developers choose to architect and implement parts of their application, Docker will be there to support their development experience,” Levirne said.
Development, testing, and deployment toolchains that use Docker make it easier to maintain reproducible pipelines for application delivery regardless of application architecture, Levirne said. Additionally, the millions of pre-built Docker images, including thousands of official and verified images, provide “a backbone of core services (e.g. data stores, caches, search, frameworks, etc.)” that can be used hand-in-hand with Wasm modules.
We even maintain a collection of WebAssembly/Wasm images on Docker Hub! Download Docker Desktop to start experimenting with these images and building your first Docker-backed Wasm application. Container runtimes and registries are also expanding to include native WebAssembly support.
Quelle: https://blog.docker.com/feed/

New Extensions, Improved logs, and more in Docker Desktop 4.10

We’re excited to announce the launch of Docker Desktop 4.10. We’ve listened to your feedback, and here’s what you can expect from this release. 
Easily find what you need in container logs
If you’re going through logs to find specific error codes and the requests that triggered them — or gathering all logs in a given timeframe — the process should feel frictionless. To make logs more usable, we’ve made a host of improvements to this functionality within the Docker Dashboard. 
First, we’ve improved the search functionality in a few ways:

You can begin searching simply by typing Cmd + F / Ctrl + F (for Mac and Windows).
Log search matches are now highlighted. You can use the right/left arrows or Enter / Shift + Enter to jump between matches, while still keeping previous and subsequent logs in view.
We’ve added regular expression search, in case you want to do things like find all error codes in a range.
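For instance, a pattern like the following — a hypothetical example, so adapt it to your own log format — matches 4xx and 5xx HTTP status codes in log lines:

```typescript
// Hypothetical pattern: match 4xx and 5xx HTTP status codes in log lines.
const errorCode = /\b[45]\d{2}\b/;

console.log(errorCode.test('GET /api/items -> 404')); // true
console.log(errorCode.test('GET /api/items -> 200')); // false
```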

Second, we’ve also made some usability enhancements:

Smart scroll, so that you don’t have to manually disable “stick to bottom” of logs. When you’re at the bottom of the logs, we’ll automatically stick to the bottom, but the second you scroll up, the view stops sticking. If you want to restore this sticky behavior, simply click the arrow in the bottom right corner.

You can now select any external links present within your logs.
Selecting something in the terminal automatically copies that selection to the clipboard.

Third, we’ve added a new feature:

You can now clear a running container’s logs, making it easy to start fresh after you’ve made a change.

Take a tour of the functionality: https://drive.google.com/file/d/12TZjYwQgKcFrIaor1rMLkQxaUfR7KELA/view?usp=sharing
Adding Environment Variables on Image Run 
Previously you could easily add environment variables while starting a container from the CLI, but you’d quickly encounter roadblocks while starting your container afterwards from the Docker Dashboard. It wasn’t possible to add these variables while running an image. Now, when running a new container from an image, you can add environment variables that immediately take effect at runtime.

We’re also looking into adding some more features that let you quickly edit environment variables in running containers. Please share your feedback or other ideas on this roadmap item.
Containers Overview: bringing back ports, container name, and status
We want to give a big thanks to everyone who left feedback on the new Containers tab. It helped highlight where our changes missed the mark, and helped us quickly address them. In 4.10, we’ve:

Made container names and image names more legible, so you can quickly identify which container you need to manage
Brought back ports on the Overview page
Restored the “container status” icon so you can easily see which ones are running.

Easy Management with Bulk Container Actions
Many people loved the addition of bulk container deletion, which lets users delete everything at once. You can now simultaneously start, stop, pause, and resume multiple containers or apps you’re working on rather than going one by one. You can start your day and every app you need in a few clicks. You also have more flexibility while pausing and resuming — since you may want to pause all containers at once, while still keeping the Docker Engine running. This lets you tackle tasks in other parts of the Dashboard.

What’s up with the Homepage?
We’ve heard your feedback! When we first rolled out the new Homepage, we wanted to make it easier and faster to run your first container. Based on community feedback, we’re updating how we deliver that Homepage content. In this release, we’ve removed the Homepage so your default starting page is once again the Containers tab. 
But, don’t worry! While we rework this functionality, you can still access some of our most popular Docker Official Images while no containers are running. If you’d like to share any feedback, please leave it here.

New Extensions are Joining the Lineup
We’re happy to announce the addition of two new extensions to the Extensions Marketplace:

Ddosify – a simple, high performance, and open-source tool for load testing, written in Golang. Learn more about Ddosify here. 
Lacework Scanner – enables developers to leverage Lacework Inline Scanner directly within Docker Desktop. Learn more about Lacework here. 

Please help us keep improving
Your feedback and suggestions are essential to keeping us on the right track! You can upvote, comment, or submit new ideas via either our in-product links or our public roadmap. Check out our release notes to learn even more about Docker Desktop 4.10. 
Looking to become a new Docker Desktop user? Visit our Get Started page to jumpstart your development journey. 
Quelle: https://blog.docker.com/feed/

Key Insights from Stack Overflow’s 2022 Developer Survey

Continually taking the pulse of the development community is key to understanding development trends. Less than a week ago, Stack Overflow published the results of its 2022 Developer Survey. We eagerly reviewed these findings to discover how tech trends have changed over the past year.
While a lot of the players in the top 10 spots have remained consistent from last year, some trends spoke volumes. It’s clear that application development demands flexibility, agility, and tools that let you bring your own tech stack. Here’s why.
No Single Language Rules Them All
Nothing is certain but death, taxes…and JavaScript. The language has claimed the top spot among popular programming, scripting, and markup languages for the 10th year running. There’s also similar stability near the top of the rankings — where the usual suspects like HTML/CSS, SQL, and Python reign supreme. We’ve discussed the leaders, but what about the remaining languages?
2022’s top languages. Data courtesy of Stack Overflow.
 
Overall, the sheer variety of languages used has grown substantially in the past five years. While Stack Overflow tracked the popularity of 25 different technologies in 2017, this year’s most popular technologies list featured 42!
Given that hundreds of programming languages alone exist today — with roughly 50 considered “most popular” overall — it’s interesting to see such representation. There’s truly something out there for every imaginable use case. We love variety because it encourages innovation. However, it’s also worth noting that variety can represent added complexity for developers.
Diversity in Databases and Frameworks
Similar things can be said for databases and web frameworks, where no single technology claims 50% or greater usage. Developers demand flexible tools that let them innovate, since so many technologies are highly popular:
2022’s top database technologies. Data courtesy of Stack Overflow.
 
2022’s top web frameworks and technologies. Data courtesy of Stack Overflow.
 
Stack’s findings also underscore a key element of current development. Developers are using innumerable combinations of languages, frameworks, tools — and even OSes — during the development lifecycle. There’s no widespread consensus across these categories. Tech stacks are also increasingly use case driven, and not the other way around. Developers are also trying to reach even wider audiences.
The growing complexities stemming from these trends are major pain points. Therefore, having a consistent environment where you can universally build and deploy with any of your preferred technologies is incredibly valuable.
Cross-Platform Development Is Essential
Even if Windows holds the majority in personal use, there’s no clear OS winner. Developers are creating applications across a wide variety of platforms, which means that a consistent environment must be able to build and package cross-platform applications.
 

 
Our users have shared that tools like Docker Desktop, Docker Hub, and others have noticeably simplified and accelerated their cross-platform development projects. It’s much easier to package all application code, dependencies, and essential components together when deploying atop varied operating systems and CPU architectures.
For experienced developers and even container newcomers, Docker CLI commands are both plentiful and usable. Alternatively, you can start, stop, and manage your containers using Docker Desktop’s Dashboard UI. Volume management and image management options are also included. Our goal is to equip all developers with the tools they need to get more done, faster — while enjoying themselves in the process.
Containers Are Going Strong
Gartner believes that 70% of organizations will be running multiple containerized apps by 2023. Momentum has definitely grown, and it’s led us to some very humbling discoveries in 2022: Docker is the #1 most loved development tool, and remains the #1 most-wanted tool.
 
Data courtesy of Stack Overflow.
 
Data courtesy of Stack Overflow.
 
Over 75% of devs who’ve regularly used Docker in the past year want to keep using it. More developers (37% vs. 30% in 2021) who haven’t yet used Docker are now interested. Additionally, professional developers currently view Docker as the most fundamental tool, jumping 14 percentage points since last year.
First and foremost, this couldn’t have been possible without countless contributions from our developer community. Your continual feedback across all of our products and tools has helped drive development forward — and improve developer experiences. Most new features have stemmed directly from community engagement and contributions to our public roadmap. Your collective input has been so impactful.
We also know how increasingly saturated the tooling market is becoming every day, which makes us feel even luckier to have a strong following. Thank you so much for your support, and for letting us streamline your critical daily workflows! If you’re considering using Docker for the first time, check out our Orientation documentation and Shy Ruparel’s “Getting Started with Docker” workshop:
 

Developers Value Flexibility, Ease, and Stability
We’ve noticed some main themes from Stack Overflow’s 2022 Developer Survey. Firstly, there’s a massive variety of technologies currently used across the industry. Second, developers are using these technologies both because they’re essential and because they love them so much. Third, containers are becoming even more popular as teams better understand their benefits.
Docker maintains a number of tools that make development easier. Beyond our container technology, each tool supports rapid development and deployment across highly-diverse environments. You can harness your favorite tech stack without encountering hiccups.
We think this can benefit millions upon millions of developers, and we couldn’t be happier that you agree. And if you haven’t given Docker a try, remember to download Docker Desktop. Here’s looking forward to another successful year!
Hungry for more data? You can view Stack Overflow’s complete survey results here, or read the official summary here.
Quelle: https://blog.docker.com/feed/

June 2022 Newsletter

The latest and greatest content for developers.

Introducing Dear Moby
Moby has accrued a “whaleth” of knowledge over the years, and as it turns out, can’t wait to share his advice and best practices with you — the Docker community. Submit your questions for the opportunity to be featured in our Dear Moby column or videos!

Learn More

News you can use and monthly highlights:
Serving Machine Learning Models With Docker: 5 Mistakes You Should Avoid – Here are a few quick tips on what to do and what not to do when serving your machine learning models with Docker.
Efficient Python Docker Image from any Poetry Project – Need to pack your python project into a Docker container? Using Poetry as a package manager? Check out how this Dockerfile can be a starting point for creating a small, efficient image out of your Poetry project, with no additional operations to perform.
NestJS and Postgres local development with Docker Compose – Modern applications demand high-performing frameworks that allow developers to build efficient and scalable server-side apps. Learn how you can use Docker Compose to build a local development environment for Nest.js and Postgresql with hot reloading.
Building a live chart with Deno, WebSockets, Chart.js, and Materialize – Here’s a quick step-by-step guide that helps you to build a simple live dashboard app that displays real-time data from a Deno Web Socket server in a real-time chart using Chart.js powered with Docker Compose.

Supporting the LGBTQ+ Community
Happy Pride! We’re always proud to swim alongside our LGBTQ+ community, colleagues, family, and friends. Learn more about eight organizations supporting the LGBTQ+ tech community.

Learn More

The latest tips and tricks from the community:

Merge+Diff: Building DAGs More Efficiently and Elegantly
Docker Technology Enables the Next Generation of Desktop as a Service
Kickstart Your Spring Boot Application Development
Building Your First Dockerized MERN Stack Web App
NestJS and Postgres local development with Docker Compose
9 Tips for Containerizing Your Spring Boot Code
How to Build and Deploy a Django-based URL Shortener App from Scratch

Creating Flappy Dock
The feedback from our community has been overwhelmingly positive for our latest feature releases, including Docker Extensions. To demonstrate the limitless potential of the SDK, our team had a little fun and created a game: Flappy Dock. See how we built it and try it for yourself.

Learn More

Educational content created by the experts at Docker:

Deploying Web Applications Quicker and Easier with Caddy 2
JumpStart Your Node.js Development
6 Development Tool Features that Backend Developers Need
Build Your First Docker Extension
Simplify Your Deployments Using the Rust Official Image
Cross Compiling Rust Code for Multiple Architectures
From Edge to Mainstream: Scaling to 100K+ IoT Devices
How to Quickly Build, Deploy, and Serve Your HTML5 Game
Connecting Decentralized Storage Solutions to Your Web 3.0 Applications

Docker Captain: Damian Naprawa
This month we’re welcoming a new Captain into our crew: Damian Naprawa. Damian started writing blogs for the Polish Docker community to share his knowledge. His favorite command is docker sbom and he’s very interested in improving developers’ productivity.

Meet the Captain

See what the Docker team has been up to:

Dockerfiles now Support Multiple Build Contexts
Dockershim not needed: Docker Desktop with Kubernetes 1.24+
Introducing Registry Access Management for Docker Business
New Extensions and Container Interface Enhancements in Docker Desktop 4.9
Securing the Software Supply Chain: Atomist Joins Docker
Docker advances container isolation and workloads with acquisition of Nestybox
Welcome Tilt: Fixing the pains of microservice development for Kubernetes

DockerCon 2022 On-Demand
With over 50 sessions for developers by developers, watch the latest developer news, trends, and announcements from DockerCon 2022. From the keynote to product demos to technical breakout sessions, hacks, and tips & tricks, there’s something for everyone.

Watch On-Demand

Quelle: https://blog.docker.com/feed/

Docker Captain Take 5 – Damian Naprawa

Docker Captains are select community members that are both experts in their field and passionate about sharing their Docker knowledge with others. “Docker Captains Take 5” is a regular blog series where we get a closer look at our Captains and ask them the same broad set of questions, ranging from what their best Docker tip is to whether they prefer cats or dogs (personally, we like whales and turtles over here). Today, we’re interviewing Damian Naprawa, who recently became a Docker Captain. He’s a Software Architect at Capgemini and is based in Mielec, Poland.

How/when did you first discover Docker?
It was a long time ago! 
For the first time, I saw some blog posts about Docker and also participated in Docker introductory workshops (thanks to Bart & Dan!). However, I do remember that at the beginning I couldn’t understand how it works and what the benefits are from the developer’s perspective. Since I always want to not only use, but also understand how the technology I use works under the hood, I spent a lot of time on learning and practicing. 
After some time, the “aha” moment happened. I remember telling myself, “That’s awesome!”
After a couple of years, I decided to launch my own blog dedicated to the Polish community: szkoladockera.pl (in English it means “Docker School”). I wanted to help others understand Docker and containers, and hoped to share this great technology across the Polish community. I still remember how difficult it was for me – before that “aha” moment came, and before I started to know what I’m doing while using docker run commands.
What is your favorite Docker command?
It used to be docker exec (to see the container file system or for debugging purposes), but now the winner is docker sbom.
Why? Because one of my top interests is container security. 
With docker sbom, I can see every installed package inside my final Docker image – which I couldn’t see before. Every time we use a FROM command in the Dockerfile, we’re referring to some base image. In most cases, we don’t create them ourselves, and we aren’t aware of what packages are installed on an OS level (like curl) and application level (like Log4j). There could be a lot of packages that your app doesn’t need anymore, and you should be aware of that.
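For example, a single command lists everything baked into an image (the image tag here is just an illustration):

```shell
# List every OS- and application-level package inside the image
docker sbom eclipse-temurin:17-jdk-jammy
```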
What is your top tip for working with Docker that others may not know?
Using Docker in combination with Ngrok lets developers expose their containerized, microservices-based apps to the internet directly from their machines. It’s very helpful when we want to present what code changes we made to our teammates, stakeholders, and clients, plus how it works from a user perspective – without needing to build and publish the app in the destination environment. You can find an example here.
What’s the coolest Docker demo you have done/seen?
I have seen and done a lot of demos. However, if I need to mention just one, there’s one I’m really proud of.

In 2021, I organized an online conference for the Polish community called “Docker & Kubernetes Festival”. During that, I held a talk called  “Docker for Developers”, where I presented quite a large amount of tips for working with Docker and how to speed up developer productivity. 
There were around 700 Polish community members watching it live and thousands who watched the recording.
What have you worked on in the past six months that you’re particularly proud of?
I’ve been working closely with developer teams on containerizing microservices-based apps written in Java and Python (ML). Since I used to code mostly with JavaScript and the .NET platform, it was a very interesting experience. I had to dive deeply into the Java and Python code to understand architecture and implementation details. I then advised developers on refactoring the code and smoothly migrating to containers.
What do you anticipate will be Docker’s biggest announcement this year?
Docker SBOM. It’s a game changer for me to have an overview of the packages installed in my final Docker image, both at the OS level (like curl) and the application level (like Log4j).
What are some personal goals for the next year with respect to the Docker community?
I’d like to share more knowledge on my blog about specific technologies (like NestJS, Java, Python etc.) – how to prepare the Dockerfiles using best practices, and how to refactor apps to smoothly migrate them into containers.
What was your favorite thing about DockerCon 2022?
Since I’m working closely with development teams, everything related to microservices and speeding up developer productivity.
Looking to the distant future, what is the technology that you’re most excited about and that you think holds a lot of promise?
Containers, of course! I do see a huge demand for container experts and I predict this demand will increase. While speaking with the clients or with my students (of online courses), I’ve learned that companies have started to appreciate the benefits of containers, and they just want them in their workflows. 
Apart from that, I’m excited about web3 and NFT. I guess there’ll also be a demand for blockchain/web3 developers and security specialists in the next few years.
Rapid fire questions…
What new skill have you mastered during the pandemic?
I gave a lot of online demos and conducted a lot of webinars, but now I’m really keen to meet with people offline! I also started my podcast, More Than Containers, but I need to go back to regular recordings!
Cats or Dogs?
Both!
Salty, sour or sweet?
Salty. Nobody believes me, but I can live without sweet.
Beach or mountains?
I love to travel, discover new things, and visit new places. Life is too short to choose between beach and mountains.
Your most often used emoji?
Captain emoji! 

9 Tips for Containerizing Your Spring Boot Code

At Docker, we’re incredibly proud of our vibrant, diverse and creative community. From time to time, we feature cool contributions from the community on our blog to highlight some of the great work our community does. Are you working on something awesome with Docker? Send your contributions to Ajeet Singh Raina (@ajeetraina) on the Docker Community Slack and we might feature your work!
Tons of developers use Docker containers to package their Spring Boot applications. According to VMware’s State of Spring 2021 report, the number of organizations running containerized Spring apps spiked to 57%, compared to 44% in 2020.
What’s driving this significant growth? The ever-increasing demand to reduce startup times of web applications and optimize resource usage, which greatly boosts developer productivity.
Why is containerizing a Spring Boot app important?
Running your Spring Boot application in a Docker container has numerous benefits. First, Docker’s friendly, CLI-based workflow lets developers build, share, and run containerized Spring applications for other developers of all skill levels. Second, developers can install their app from a single package and get it up and running in minutes. Third, Spring developers can code and test locally while ensuring consistency between development and production.
Containerizing a Spring Boot application is easy. You can do this by copying the .jar or .war file right into a JDK base image and then packaging it as a Docker image. There are numerous articles online that can help you effectively package your apps. However, many important concerns like Docker image vulnerabilities, image bloat, missing image tags, and poor build performance aren’t addressed. We’ll tackle those common concerns while sharing nine tips for containerizing your Spring Boot code.
A Simple “Hello World” Spring Boot application
To better understand these concerns, let’s build a sample “Hello World” application. In our last blog post, you saw how easy it is to build the “Hello World!” application by downloading this pre-initialized project and generating a ZIP file. You’d then unzip it and complete the following steps to run the app.
 

 
Under the src/main/java/com/example/dockerapp/ directory, you can modify your DockerappApplication.java file with the following content:

package com.example.dockerapp;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class DockerappApplication {

@RequestMapping("/")
public String home() {
return "Hello World!";
}

public static void main(String[] args) {
SpringApplication.run(DockerappApplication.class, args);
}

}

 
The following command takes your compiled code and packages it into a distributable format, like a JAR:

./mvnw package
java -jar target/*.jar

 
By now, you should be able to access “Hello World” via http://localhost:8080.
In order to Dockerize this app, you’d use a Dockerfile.  A Dockerfile is a text document that contains every instruction a user could call on the command line to assemble a Docker image. A Docker image is composed of a stack of layers, each representing an instruction in our Dockerfile. Each subsequent layer contains changes to its underlying layer.
Typically, developers use the following Dockerfile template to build a Docker image.

FROM eclipse-temurin
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

 
The first line defines the base image, which is around 457 MB. The ARG instruction specifies variables that are available to the COPY instruction. The COPY instruction copies the JAR file from the target/ folder to your Docker image’s root. The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. Lastly, the ENTRYPOINT instruction lets you configure a container that runs as an executable; it corresponds to your java -jar target/*.jar command.
You’d build your image using the docker build command, which looks like this:

$ docker build -t spring-boot-docker .
Sending build context to Docker daemon 15.98MB
Step 1/5 : FROM eclipse-temurin
 ---> a3562aa0b991
Step 2/5 : ARG JAR_FILE=target/*.jar
 ---> Running in a8c13e294a66
Removing intermediate container a8c13e294a66
 ---> aa039166d524
Step 3/5 : COPY ${JAR_FILE} app.jar
COPY failed: no source files were specified

 
One key drawback of our above example is that it isn’t fully containerized. You must first create a JAR file by running the ./mvnw package command on the host system. This requires you to manually install Java, set up the  JAVA_HOME environment variable, and install Maven. In a nutshell, your JDK must reside outside of your Docker container — adding even more complexity into your build environment. There has to be a better way.
1) Automate all the manual steps
We recommend building the JAR during the build process, within your Dockerfile itself. The following RUN instruction triggers a goal that resolves all project dependencies, including plugins, reports, and their dependencies:

FROM eclipse-temurin
WORKDIR /app

COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline

COPY src ./src

CMD ["./mvnw", "spring-boot:run"]

 
💡 Avoid copying the JAR file manually while writing a Dockerfile
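With every manual step moved into the Dockerfile, building and running the app needs nothing on the host but Docker itself (the image name spring-helloworld is our own choice):

```shell
# Maven and the JDK live inside the build, not on your host
docker build -t spring-helloworld .

# Run it and expose Spring Boot's default port
docker run -p 8080:8080 spring-helloworld
```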
2) Use a specific base image tag, instead of latest
When building Docker images, it’s always recommended to specify useful tags which codify version information, intended destination (prod or test, for instance), stability, or other useful information for deploying your application in different environments. Don’t rely on the automatically-created latest tag. Using latest is unpredictable and may cause unexpected behavior. Every time you pull the latest image, it might contain a new build or untested release that could break your application.
For example, using the eclipse-temurin:latest Docker image as a base image isn’t ideal. Instead, you should use specific tags like eclipse-temurin:17-jdk-jammy, eclipse-temurin:8u332-b09-jre-alpine, etc.
 
💡 Avoid using FROM eclipse-temurin:latest in your Dockerfile
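If you need even stronger immutability than a specific tag, you can also pin the base image by digest. A sketch: the digest below is a placeholder, not a real value, so substitute the one reported by docker images --digests:

```dockerfile
# Pinning by tag: a predictable JDK version and OS variant
FROM eclipse-temurin:17-jdk-jammy

# Pinning by digest: fully immutable (replace <digest> with a real value
# from `docker images --digests`)
# FROM eclipse-temurin:17-jdk-jammy@sha256:<digest>
```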
3) Use Eclipse Temurin instead of JDK, if possible
On the OpenJDK Docker Hub page, you’ll find a list of recommended Docker Official Images that you should use while building Java applications. The upstream OpenJDK image no longer provides a JRE, so no official JRE images are produced. The official OpenJDK images just contain “vanilla” builds of the OpenJDK provided by Oracle or the relevant project lead.
One of the most popular official images with a build-worthy JDK is Eclipse Temurin. The Eclipse Temurin project provides code and processes that support the building of runtime binaries and associated technologies. These are high performance, enterprise-caliber, and cross-platform.

FROM eclipse-temurin:17-jdk-jammy

WORKDIR /app

COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline

COPY src ./src

CMD ["./mvnw", "spring-boot:run"]

 
4) Use a Multi-Stage Build
With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit tests. Another image holds the runtime of the application. This makes the final image more secure and smaller in size (as it does not contain any development or debugging tools). Multi-stage Docker builds are a great way to ensure your builds are 100% reproducible and as lean as possible. You can create multiple stages within a Dockerfile and control how you build that image.
You can containerize your Spring Boot applications using a multi-layer approach. Each layer may contain different parts of the application such as dependencies, source code, resources, and even snapshot dependencies. Alternatively, you can build any application as a separate image from the final image that contains the runnable application. To better understand this, let’s consider the following Dockerfile:

FROM eclipse-temurin:17-jdk-jammy
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]

 
Spring Boot uses a “fat JAR” as its default packaging format. When we inspect the fat JAR, we see that the application forms a very small part of the entire JAR. This portion changes most frequently. The remaining portion contains the Spring Framework dependencies. Optimization typically involves isolating the application into a separate layer from the Spring Framework dependencies. You only have to download the dependencies layer — which forms the bulk of the fat JAR — once, plus it’s cached in the host system.
The above Dockerfile assumes that the fat JAR was already built on the command line. You can also do this in Docker using a multi-stage build and copying the results from one image to another. Instead of using the Maven or Gradle plugin, we can also create a layered JAR Docker image with a Dockerfile. While using Docker, we must follow two more steps to extract the layers and copy those into the final image.
In the first stage, we’ll extract the dependencies. In the second stage, we’ll copy the extracted dependencies to the final image:

FROM eclipse-temurin:17-jdk-jammy as builder
WORKDIR /opt/app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline
COPY ./src ./src
RUN ./mvnw clean install

FROM eclipse-temurin:17-jre-jammy
WORKDIR /opt/app
EXPOSE 8080
COPY --from=builder /opt/app/target/*.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]

 
The first stage is labeled builder. It runs on eclipse-temurin:17-jdk-jammy and builds the fat JAR.
Notice that this Dockerfile has been split into two stages. In the builder stage, the earlier layers contain the complete Eclipse JDK image itself, while the later layers contain the build configuration and the application’s source code. This small optimization also saves us from copying the target directory to a Docker image, even a temporary one used for the build. Our final image is just 277 MB, compared to the builder stage’s 450 MB.
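The layered-JAR split described earlier can also be done with Spring Boot’s built-in layertools jar mode (Spring Boot 2.3+), which extracts the fat JAR into dependencies, spring-boot-loader, snapshot-dependencies, and application layers. A sketch, with stage names and paths of our own choosing:

```dockerfile
FROM eclipse-temurin:17-jdk-jammy AS builder
WORKDIR /opt/app
COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline
COPY src ./src
RUN ./mvnw clean install
# Split the fat JAR into layers that change at different rates
RUN java -Djarmode=layertools -jar target/*.jar extract

FROM eclipse-temurin:17-jre-jammy
WORKDIR /opt/app
# Copy the least-frequently-changing layers first for better caching
COPY --from=builder /opt/app/dependencies/ ./
COPY --from=builder /opt/app/spring-boot-loader/ ./
COPY --from=builder /opt/app/snapshot-dependencies/ ./
COPY --from=builder /opt/app/application/ ./
EXPOSE 8080
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```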
5) Use .dockerignore
To increase build performance, we recommend creating a .dockerignore file in the same directory as your Dockerfile. For this tutorial, your .dockerignore file should contain just one line:

target

 
This line excludes the target directory, which contains output from Maven, from the Docker build context. There are many good reasons to carefully structure a .dockerignore file, but this simple file is good enough for now. Let’s now explain the build context and why it’s essential. The docker build command builds Docker images from a Dockerfile and a “context.” This context is the set of files located in your specified PATH or URL. The build process can reference any of these files.
Meanwhile, on the developer’s side, the context is simply the directory where they work. It could be a folder on Mac, Windows, or Linux. This directory contains all the necessary application components, such as source code, configuration files, libraries, and plugins. With the .dockerignore file, we can choose which of these elements (source code, configuration files, libraries, plugins, and so on) to exclude while building your new image.
Here’s how your .dockerignore file might look if you choose to exclude the conf, libraries, and plugins directory from your build:
conf
libraries
plugins
6) Favor Multi-Architecture Docker Images
Your CPU can only run binaries for its native architecture. For example, Docker images built for an x86 system can’t run on an Arm-based system. With Apple fully transitioning to their custom Arm-based silicon, it’s possible that your x86 (Intel or AMD) Docker image won’t work with Apple’s recent M-series chips. Consequently, we always recommend building multi-arch container images. Below is the mplatform/mquery Docker image that lets you query the multi-platform status of any public image, in any public registry:

docker run --rm mplatform/mquery eclipse-temurin:17-jre-alpine
Image: eclipse-temurin:17-jre-alpine (digest: sha256:ac423a0315c490d3bc1444901d96eea7013e838bcf7cc09978cf84332d7afc76)
* Manifest List: Yes (Image type: application/vnd.docker.distribution.manifest.list.v2+json)
* Supported platforms:
– linux/amd64

 
We introduced the docker buildx command to help you build multi-architecture images. Buildx is a Docker component that enables many powerful build features with a familiar Docker user experience. All builds executed via Buildx run via the Moby BuildKit builder engine. BuildKit is designed to excel at multi-platform builds, or those not just targeting the user’s local platform. When you invoke a build, you can set the --platform flag to specify the build output’s target platform (like linux/amd64, linux/arm64, or darwin/amd64):

docker buildx build --platform linux/amd64,linux/arm64 -t spring-helloworld .

7) Run as non-root user for security purposes
Running applications without root privileges is safer, since it helps mitigate risk. The same applies to Docker containers. By default, Docker containers and their running apps have root privileges. It’s therefore best to run Docker containers as non-root users. You can do this by adding a USER instruction to your Dockerfile. The USER instruction sets the preferred user name (or UID) and optionally the user group (or GID) used when running the image, and for any subsequent RUN, CMD, or ENTRYPOINT instructions:

FROM eclipse-temurin:17-jdk-alpine
RUN addgroup demogroup && adduser -D -G demogroup demo
USER demo

WORKDIR /app

COPY .mvn/ .mvn
COPY mvnw pom.xml ./
RUN ./mvnw dependency:go-offline

COPY src ./src

CMD ["./mvnw", "spring-boot:run"]
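You can verify the change by overriding the command; assuming you built the image above as spring-helloworld, the container should report demo rather than root:

```shell
# Prints the effective user inside the container; should be "demo", not "root"
docker run --rm spring-helloworld whoami
```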

8) Fix security vulnerabilities in your Java image
Today’s developers rely on third-party code and applications while building their services. By using external software without care, your code may be more vulnerable. Leveraging trusted images and continually monitoring your containers is essential to combatting this. Whenever you build a “Hello World” Docker image, Docker Desktop prompts you to run security scans of the image to detect any known vulnerabilities, like Log4Shell:

=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:cf6d952a1ece4eddcb80c8d29e0c5dd4d3531c1268291 0.0s
=> => naming to docker.io/library/spring-boot1 0.0s

Use ‘docker scan’ to run Snyk tests against images to find vulnerabilities and learn how to fix them

 
Let’s use the Snyk Extension for Docker Desktop to inspect our Spring Boot application. To begin, install Docker Desktop 4.8.0+ on your Mac, Windows, or Linux machine and enable the Extension Marketplace.
Snyk’s extension lets you rapidly scan both local and remote Docker images to detect vulnerabilities.
Install the Snyk extension and supply the “Hello World” Docker Image.
Snyk’s tool uncovers 70 vulnerabilities of varying severity. Once you’re aware of these, you can begin remediation to harden your image.
 
💡 In order to perform a vulnerability check, you can use the following command directly against the Dockerfile: docker scan -f Dockerfile spring-helloworld
 
9) Use the OpenTelemetry API to measure Java performance
How do Spring Boot developers ensure that their apps are fast and performant? Generally, developers rely on third-party observability tools to measure the performance of their Java applications. Application performance monitoring is essential for all kinds of Java applications, and developers need it to create top-notch user experiences.
Observability isn’t just limited to application performance. With the rise of microservices architectures, the three pillars of observability (metrics, traces, and logs) are front and center. Metrics help developers understand what’s wrong with the system, while traces help you discover how it’s wrong. Logs tell you why it’s wrong, letting developers dig into particular metrics or traces to holistically understand system behavior.
Observing Java applications requires monitoring your Java VM metrics via JMX, underlying host metrics, and Java app traces. Java developers should monitor, analyze, and diagnose application performance using the Java OpenTelemetry API. OpenTelemetry provides a single set of APIs, libraries, agents, and collector services to capture distributed traces and metrics from your application. Check out this video to learn more.
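As a sketch of what this can look like in a container, the OpenTelemetry Java agent can be attached at JVM startup to auto-instrument a Spring Boot app. The service name and collector endpoint below are illustrative assumptions, not values from this article:

```dockerfile
FROM eclipse-temurin:17-jre-jammy
WORKDIR /opt/app

# Fetch the OpenTelemetry Java auto-instrumentation agent
ADD https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar /opt/otel/opentelemetry-javaagent.jar

COPY target/*.jar app.jar

# Attach the agent and point it at a (hypothetical) OTLP collector
ENV JAVA_TOOL_OPTIONS="-javaagent:/opt/otel/opentelemetry-javaagent.jar"
ENV OTEL_SERVICE_NAME=spring-helloworld
ENV OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```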
Conclusion
In this blog post, you saw some of the many ways to optimize your Docker images by carefully crafting your Dockerfile and securing your image by using Snyk Docker Extension Marketplace. If you’d like to go further, check out these bonus resources that cover recommendations and best practices for building secure, production-grade Docker images.

Docker Development Best Practices
Dockerfile Best Practices
Build Images with BuildKit
Best Practices for Scanning Images
Getting Started with Docker Extensions

 
