Docker Desktop 4.20: Docker Engine and CLI Updated to Moby 24.0

We are happy to announce the major release of Moby 24.0 in Docker Desktop 4.20. We have dedicated significant effort to this release, marking a major milestone in the open source Moby project.

This effort began in September of last year when we announced we were extending the integration of the Docker Engine with containerd. Since then, we have continued working on the image store integration and contributing this work to the Moby project. 

In this release, we are adding support for image history, pulling images from private repositories, image importing, and support for the classic builder when using the containerd image store.

To explore and test these new features, activate the option Use containerd for pulling and storing images in the Features in Development panel within the Docker Desktop Settings (Figure 1). 

Figure 1: Select “Use containerd for pulling and storing images” in the “Features in development” screen.

When enabling this feature, you will notice your existing images are no longer listed, but there’s no need to worry. The containerd image store functions differently from the classic store, and you can only view one of them at a time. Images in the classic store can be accessed again by disabling the feature.

On the other hand, if you want to transfer your existing images to the new containerd store, simply push your images to Docker Hub using the command docker push <my image name>. Then, activate the containerd store in Settings and pull the image using docker pull <my image name>.
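For example, with a hypothetical image name:

docker push myuser/myapp:1.0   # push while the classic image store is still active
# enable "Use containerd for pulling and storing images" in Settings, then:
docker pull myuser/myapp:1.0   # the image is now available in the containerd store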

You can read all bug fixes and enhancements in the Moby release notes.

SBOM and provenance attestations using BuildKit v0.11

BuildKit v0.11 helps you secure your software supply chain by allowing you to add an SBOM (Software Bill of Materials) and provenance attestations in the SLSA format to your container images. This can be done with the container driver as described in the blog post Generating SBOMs for Your Image with BuildKit or by enabling the containerd image store and following the steps in the SBOM attestations documentation.
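As a rough sketch, assuming Buildx with a container builder (or the containerd image store enabled) and a hypothetical image name, both attestations can be requested at build time:

docker buildx build --sbom=true --provenance=true -t myuser/myapp:1.0 --push .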

New Dockerfile inspection features

In version 4.20, you can use the docker build command to preview the configuration of your upcoming build or view a list of available build targets in your Dockerfile. This is particularly beneficial when dealing with multi-stage Dockerfiles or when diving into new projects.

When you run the build command, it processes your Dockerfile, evaluates all environment variables, and determines reachable build stages, but stops before running the build steps. You can use --print=outline to get detailed information on all build arguments, secrets, and SSH forwarding keys, along with their current values. Alternatively, use --print=targets to list all the possible stages that can be built with the --target flag.

We also aim to present textual descriptions for these elements by parsing the comments in your Dockerfile. Currently, you need to set the BUILDX_EXPERIMENTAL environment variable to use the --print flag (Figure 2).

Figure 2: Using the new `--print=outline` command outputs detailed information on build arguments, secrets, SSH forwarding keys, and associated values.
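For example, from a directory containing a Dockerfile:

BUILDX_EXPERIMENTAL=1 docker build --print=outline .
BUILDX_EXPERIMENTAL=1 docker build --print=targets .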

Check out the Dockerfile 1.5 changelog for more updates, such as loading images from on-disk OCI layout and improved Git source and checksum validation for ADD commands in the labs channel. Read the Highlights from the BuildKit v0.11 Release blog post to learn more.

Docker Compose dry run

In our continued efforts to make Docker Compose better for developer workflows, we’re addressing a long-standing user request. You can now dry run any Compose command by adding the --dry-run flag. This gives you insight into what exactly Compose will generate or execute, so nothing unexpected happens.
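For example, to preview what docker compose up would do without changing anything:

docker compose --dry-run up --build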

We’re also excited to highlight contributions from the community, such as First implementation of viz subcommand #10376 by @BenjaminGuzman, which generates a graph of your stack, with details like networks and ports. Try docker compose alpha viz on your project to check out your own graph.

Conclusion

We love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. Check out the Docker Desktop 4.20 release notes for a full breakdown of what’s new in the latest release.

Source: https://blog.docker.com/feed/

160% Year-over-Year Growth in Pulls of Red Hat’s Universal Base Image on Docker Hub

Red Hat’s Universal Base Image eliminates “works on my machine” headaches for developers.

It’s Red Hat Summit week, and we wanted to use this as an opportunity to highlight several aspects of Docker’s partnership with Red Hat. In this post, we highlight Docker Hub and Red Hat’s Universal Base Images (UBI). Also check out our new post on simplifying Kubernetes development with Docker Desktop + Red Hat OpenShift.

Docker Hub is the world’s largest public registry of artifacts for developers, providing the fundamental building blocks — web servers, runtimes, databases, and more — for any application. It offers more than 15 million images for containers, serverless functions, and Wasm with support for multiple operating systems (Linux, Windows) and architectures (x86, ARM). Altogether, these 15 million images are pulled more than 16 billion times per month by over 20 million IPs.

While the majority of the 15 million images are community images, a subset are trusted content, both open source and commercial, curated and actively maintained by Docker, upstream open source communities, and Docker’s ISV partners.

Docker and Red Hat have been partners since 2013, and Red Hat started distributing Linux images through Docker Hub in 2014. To help developers reduce the “works on my machine” finger-pointing and ensure consistency between development and production environments, in 2019 Red Hat launched Universal Base Image (UBI). Based on Red Hat Enterprise Linux (RHEL), UBI provides the same reliability, security, and performance as RHEL. Furthermore, to meet the different use cases of developers and ISVs, UBI comes in four sizes — standard, minimal, multi-service, and micro — and offers channels so that additional packages can be added as needed.

Given Docker’s reach in the developer community and the breadth and depth of developer content on Docker Hub, Docker and Red Hat agreed that distributing Red Hat UBI on Docker Hub made a lot of sense. Thus, in May 2021 Red Hat became a Docker Verified Publisher (DVP) and launched Red Hat UBIs on Docker Hub. With Red Hat as a DVP, its UBIs are easier for developers to discover, and developers get an extra level of assurance that the images they’re using are accessible, safe, and maintained.

The results? Tens of thousands of developers are pulling Red Hat UBI millions of times every month. Furthermore, pulls of the Red Hat Universal Base Image have grown 2.6x in the last 12 months alone. Such growth points to the value Docker and Red Hat together bring to the developer community.

… and we’re not finished! Having provided Red Hat UBIs directly to developers, now Docker and Red Hat are working together with Docker’s ISV partners and open source communities to bring the value of UBI to those software stacks as well. Stay tuned for more!

Learn More

Docker Verified Publisher (DVP)

Red Hat Universal Base Images (UBIs)

Docker Official Images (DOI)

Docker-sponsored Open Source (DSOS)

Source: https://blog.docker.com/feed/

Simplifying Kubernetes Development: Docker Desktop + Red Hat OpenShift

It’s Red Hat Summit week, and we wanted to use this as an opportunity to highlight several aspects of the Docker and Red Hat partnership. In this post, we highlight Docker Desktop and Red Hat OpenShift. In another post, we talk about 160% year-over-year growth in pulls of Red Hat’s Universal Base Image on Docker Hub.

Why Docker Desktop + Red Hat OpenShift?

Docker Desktop + Red Hat OpenShift lets the thousands of enterprises that depend on Red Hat OpenShift leverage the Docker Desktop platform, which more than 20 million active developers already know and trust, to eliminate daily friction and deliver results.

What problem it solves

Sometimes, it feels like coding is easy compared to the sprint demo and getting everybody’s approval to move forward. Docker Desktop does the yak shaving to make developing, using, and testing containerized applications on Mac and Windows local environments easy, and the Red Hat OpenShift extension for Docker Desktop extends that with one-click pushes to Red Hat’s cloud container platform.

One especially convenient use of the Red Hat OpenShift extension for Docker Desktop is for quickly previewing or sharing work, where the ease of access and friction-free deploys without waiting for CI can reduce cycle times early in the dev process and enable rapid iteration leading up to full CI and production deployment.

Getting started

If you don’t already have Docker Desktop installed, refer to our guide on Getting Started with Docker.

Installing the extension

The Red Hat OpenShift extension is one of many in the Docker Extensions Marketplace. Select Install to install the extension.

👋 Pro tip: Have an idea for an extension that isn’t in the marketplace? Build your own and let us know about it! Or don’t tell us about it and keep it for internal use only — private plugins for company or team-specific workflows are also a thing.

Signing into the OpenShift cluster in the extension

From within your Red Hat OpenShift cluster web console, select the copy login command from the user menu:

👋 Don’t have an OpenShift cluster? Red Hat supports OpenShift exploration and development with a developer sandbox program that offers immediate access to a cluster, guided tutorials, and more. To sign up, go to their Developer Sandbox portal.

That leads to a page with the details you need to fetch the Kubernetes context used by the Red Hat OpenShift extension and elsewhere:

From there, you can come back to the Red Hat OpenShift extension in Docker Desktop. In the Deploy to OpenShift screen, you can change or log in to a Kubernetes context. In the login screen, paste the whole oc login line, including the token and server URL from the previous step:
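The pasted command takes the general form below; the token and server values are placeholders, so use the ones shown in your own cluster console:

oc login --token=sha256~<your-token> --server=https://api.<your-cluster-domain>:6443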

Your first deploy

Using the extension is even easier than installing it; you just need a containerized app. A sample racing-game-app is ready to go with a Dockerfile (docs) and OpenShift manifest (docs), and it’s a joy to play.

To get started, clone https://github.com/container-demo/HexGL.

👋 Pro tip: Have an app you want to containerize? Docker’s newly introduced docker init command automates the creation of Dockerfiles, Compose manifests, and .dockerignore files.

Build your image

Using your local clone of sample racing-game-app, cd into the repo on your command line, where we’ll build the image:

docker build --platform linux/amd64 -t sample-racing-game .

The --platform linux/amd64 flag forces an x86-compatible image, even if you’re building on a Mac with an Apple Silicon/ARM CPU. The -t sample-racing-game flag names and tags your image with something more recognizable than a SHA256 digest. Finally, the trailing dot (“.”) tells Docker to build from the current directory.

Starting with Docker version 23, docker build uses BuildKit by default, bringing better performance, better caching, and support for features like build secrets that make our lives as developers easier and more secure.

Test the image locally

Now that you’ve built your image, you can test it out locally. Just run the image using:

docker run -d -p 8080:8080 sample-racing-game

And open your browser to http://localhost:8080/ to take a look at the result.

Deploy to OpenShift

With the image built, it’s time to test it out on the cluster. We don’t even have to push it to a registry first.

Visit the Red Hat OpenShift extension in Docker Desktop, select our new sample-racing-game image, and select Push to OpenShift and Deploy from the action button pulldown:

The Push to OpenShift and Deploy action will push the image to the OpenShift cluster’s internal private registry without any need to push to another registry first. It’s not a substitute for a private registry, but it’s convenient for iterative development where you don’t plan on keeping the iterative builds.

Once you click the button, the extension will do exactly as you intend and keep you updated about progress:

Finally, the extension will display the URL for the app and attempt to open your default browser to show it off:

Cleanup

Once you’re done with your app, or before attempting to redeploy it, use the web terminal to remove all traces of your deployment with this one-liner:

oc get all -oname | grep sample-racing-game | xargs oc delete

That avoids unnecessary resource usage and prevents errors if you try to redeploy to the same namespace later.

So…what did we learn?

The Red Hat OpenShift extension gives you a one-click deploy from Docker Desktop to an OpenShift cluster. It’s especially convenient for previewing iterative work in a shared cluster for sprint demos and approval as a way to accelerate iteration.

The above steps showed you how to install and configure the extension, build an image, and deploy it to the cluster with a minimum of clicks. Now show us what you’ll build next with Docker and Red Hat, tag @docker on Twitter or me @misterbisson@techub.social to share!

Learn More

Learn how to build and share a containerized app

Integrating Docker with your IDE

AI and ML with Docker

Connect with Docker

Roadmap

Forums

Slack

Source: https://blog.docker.com/feed/

Boost Your Local Testing Game with the LambdaTest Tunnel Docker Extension

As the demand for web applications continues to rise, so does the importance of testing them thoroughly. One challenge that testers face is how to test applications that are hosted locally on their machines. This is where the LambdaTest Tunnel Docker Extension comes in handy. This extension allows you to establish a secure connection between your local environment and the LambdaTest platform, making it possible to test your locally hosted pages and applications on a remote browser. 

In this article, we’ll explore the benefits of using the LambdaTest Tunnel Docker Extension and describe how it can streamline your testing workflow.

Overview of LambdaTest Tunnel

LambdaTest Tunnel is a secure and encrypted tunneling feature that allows devs and QAs to test their locally hosted web applications or websites on cloud-based real machines. It establishes a secure connection between the user’s local machine and the real machine in the cloud (Figure 1).

By downloading the LambdaTest Tunnel binary, you can securely connect your local machine to LambdaTest cloud servers even when behind corporate firewalls. This allows you to test locally hosted websites or web applications across various browsers, devices, and operating systems available on the LambdaTest platform. Whether your web files are written in HTML, CSS, PHP, Python, or similar languages, you can use LambdaTest Tunnel to test them.

Figure 1: Overview of LambdaTest Tunnel.

Why use LambdaTest Tunnel?

LambdaTest Tunnel offers numerous benefits for web developers, testers, and QA professionals. These include secure and encrypted connection, cross-browser compatibility testing, localhost testing, etc.

Let’s see the LambdaTest Tunnel benefits one by one:

It provides a secure and encrypted connection between your local machine and the virtual machines in the cloud, thereby ensuring the privacy of your test data and online communications.

With LambdaTest Tunnel, you can test your web applications or websites, local folders, and files across a wide range of browsers and operating systems without setting up complex and expensive local testing environments.

It lets you test your locally hosted web applications or websites on cloud-based real OS machines.

You can even run accessibility tests on desktop browsers while testing locally hosted web applications and pages.

Why run LambdaTest Tunnel as a Docker Extension?

With Docker Extensions, you can build and integrate software applications into your daily workflow. Using LambdaTest Tunnel as a Docker extension provides a seamless and hassle-free experience for establishing a secure connection and performing cross-browser testing of locally hosted websites and web applications on the LambdaTest platform without manually launching the tunnel through the command line interface (CLI).

The LambdaTest Tunnel Docker Extension opens up a world of options for your testing workflows by adding a variety of features. Docker Desktop has an easy-to-use one-click installation feature that allows you to use the LambdaTest Tunnel Docker Extension directly from Docker Desktop.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a LambdaTest account. Note: You must ensure that Docker Extensions are enabled (Figure 2).

Figure 2: Enable Docker Extensions.

Step 1: Install the LambdaTest Docker Extension

In the Extensions Marketplace, search for the LambdaTest Tunnel extension and select Install (Figure 3).

Figure 3: Install LambdaTest Tunnel.

Step 2: Set up the Docker LambdaTest Tunnel

Open the LambdaTest Tunnel extension and select Setup Tunnel to configure the tunnel (Figure 4).

Figure 4: Configure the tunnel.

Step 3: Enter your LambdaTest credentials

Provide your LambdaTest Username, Access Token, and preferred Tunnel Name. You can get your Username and Access Token from your LambdaTest Profile under Password & Security. Once these details have been entered, click Launch Tunnel (Figure 5).

Figure 5: Launch LambdaTest Tunnel.

The LambdaTest Tunnel will be launched, and you can see the running tunnel logs (Figure 6).

Figure 6: Running logs.

Once you have configured the LambdaTest Tunnel via Docker Extension, it should appear on the LambdaTest Dashboard (Figure 7).

Figure 7: New active tunnel.

Local testing using LambdaTest Tunnel Docker Extension

Let’s walk through a scenario using LambdaTest Tunnel. Suppose a web developer has created a new web application that allows users to upload and view images. The developer needs to ensure that the application can handle a variety of different image formats and sizes and that it renders and displays images correctly across different browsers and devices.

To do this, the developer first sets up a local development environment and installs the LambdaTest Tunnel Docker Extension. They then use the web application to open and manipulate local image files.

Next, the developer uses the LambdaTest Tunnel to securely expose their local development environment to the internet. This step allows them to test the application in real-time on different browsers and devices using LambdaTest’s cloud-based digital experience testing platform.

Now let’s see the steps to perform local testing using the LambdaTest Tunnel Docker Extension.

1. Go to the LambdaTest Dashboard and navigate to Real Time Testing > Browser Testing (Figure 8).

Figure 8: Navigate to Browser Testing.

2. In the console, enter the localhost URL, select browser, browser version, operating system, etc. and select START (Figure 9).

Figure 9: Configure testing.

3. A cloud-based real operating system will fire up where you can perform testing of local files or folders (Figure 10).

Figure 10: Perform local testing.

Learn more about how to set up the LambdaTest Tunnel Docker Extension in the documentation. 

Conclusion

The LambdaTest Tunnel Docker Extension makes it easy to perform local testing without launching the tunnel from the CLI. You can run localhost tests over an online cloud grid of 3,000+ real browser and operating system combinations. You don’t have to worry about the challenges of local infrastructure because LambdaTest provides you with a cloud grid with zero downtime.

Check out the LambdaTest Tunnel Docker Extension on Docker Hub. The LambdaTest Tunnel Docker Extension source code is available on GitHub, and contributions are welcome.
Source: https://blog.docker.com/feed/

Docker Init: Initialize Dockerfiles and Compose files with a single CLI command

Docker has revolutionized the way developers build, package, and deploy their applications. Docker containers provide a lightweight, portable, and consistent runtime environment that can run on any infrastructure. And now, the Docker team has developed docker init, a new command-line interface (CLI) command introduced as a beta feature that simplifies the process of adding Docker to a project (Figure 1).

Note: Docker Init should not be confused with the internally used docker-init executable, which is invoked by Docker when using the --init flag with the docker run command.

Figure 1: With one command, all required Docker files are created and added to your project.

Create assets automatically

The new docker init command automates the creation of necessary Docker assets, such as Dockerfiles, Compose files, and .dockerignore files, based on the characteristics of the project. By executing the docker init command, developers can quickly containerize their projects. Docker init is a valuable tool for developers who want to experiment with Docker, learn about containerization, or integrate Docker into their existing projects.

To use docker init, developers need to upgrade to Docker Desktop version 4.19.0 or later and execute the command in the target project folder. Docker init will detect the project definitions and automatically generate the necessary files to run the project in Docker.
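A minimal run looks like this (the project path is hypothetical, and the command may prompt you for details about your application):

cd path/to/my-project
docker init                  # generates Dockerfile, Compose file, and .dockerignore
docker compose up --build    # build and run the newly containerized project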

The current Beta release of docker init supports Go, Node, and Python, and our development team is actively working to extend support for additional languages and frameworks, including Java, Rust, and .NET. If there is a language or stack that you would like to see added or if you have other feedback about docker init, let us know through our Google form.

In conclusion, docker init is a valuable tool for developers who want to simplify the process of adding Docker support to their projects. It automates the creation of necessary Docker assets and can help standardize the creation of Docker assets across different projects. By enabling developers to focus on developing their applications and reducing the risk of errors and inconsistencies, Docker init can help accelerate the adoption of Docker and containerization.

See Docker Init in action

To see docker init in action, check out the following overview video by Francesco Ciulla, which demonstrates adding the required Docker assets to your project.

Check out the documentation to learn more.
Source: https://blog.docker.com/feed/

Building a Local Application Development Environment for Kubernetes with the Gefyra Docker Extension 

If you’re using a Docker-based development approach, you’re already well on your way toward creating cloud-native software. Containerizing your software ensures that you have all the system-level dependencies, language-specific requirements, and application configurations managed in a containerized way, bringing you closer to the environment in which your code will eventually run. 

In complex systems, however, you may need to connect your code with several auxiliary services, such as databases, storage volumes, APIs, caching layers, message brokers, and others. In modern Kubernetes-based architectures, you also have to deal with service meshes and cloud-native deployment patterns, such as probes, configuration, and structural and behavioral patterns. 

Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up Kubernetes clusters. That’s where Gefyra comes in, making it easier for developers to work with Kubernetes and improve the process of creating secure, reliable, and scalable software.

What is Gefyra? 

Gefyra, named after the Greek word for “bridge,” is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it’s essential to work with the same environment during development. This approach ensures that you have the highest possible “dev/prod-parity,” minimizing friction when transitioning from development to production. 

Gefyra is an open source project that provides docker run on steroids. It allows you to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it would run in the cluster. You can write code locally in your favorite code editor using the tools you love. 

Additionally, Gefyra does not require you to build a container image from your code changes, push the image to a registry, or trigger a restart in the cluster. Instead, it saves you from this tedious cycle by connecting your local code right into the cluster without any changes to your existing Dockerfile. This approach is useful not only for new code but also when introspecting existing code with a debugger that you can attach to a running container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.

How does Gefyra work?

Gefyra installs several cluster-side components that enable it to control the local development machine and the development cluster. These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync, to build on top of these components.

The local development setup involves running a container instance of the application on the developer machine, with a sidecar container called Cargo that acts as a network gateway and provides a CoreDNS server that forwards all requests to the cluster (Figure 1). Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.

Figure 1: Local development setup.

Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster, making the connection robust and fast without stressing the Kubernetes API server (Figure 2). Additionally, the client side of Gefyra manages a local Docker network with a VPN endpoint, allowing the container to join the VPN that directs all traffic into the cluster.

Figure 2: Connecting developer machine and cluster.

Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup eliminates the need for building container images in a continuous integration pipeline and rolling out a cluster update for simple changes.

Why run Gefyra as a Docker Extension?

Gefyra’s core functionality is contained in a Python library available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make it more accessible, Gefyra developed the Docker Desktop extension, which is easy for developers to use without having to delve into the intricacies of Gefyra.

The Gefyra extension for Docker Desktop enables developers to work with a variety of Kubernetes clusters, including the built-in Kubernetes cluster, local providers such as Minikube, K3d, or Kind, Getdeck Beiboot, or any remote clusters. Let’s get started.

Installing the Gefyra Docker Desktop

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings | Extensions select the Enable Docker Extensions box (Figure 3).

Figure 3: Enable Docker Extensions.

You must also enable Kubernetes under Settings (Figure 4).

Figure 4: Enable Kubernetes.

Gefyra is in the Docker Extensions Marketplace. In the following instructions, we’ll install Gefyra in Docker Desktop. 

Step 2: Add the Gefyra extension

Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).

Figure 5: Locate Gefyra in the Docker Extensions Marketplace.

Once Gefyra is installed, you can open the extension and find the start screen of Gefyra that lists all containers that are connected to a Kubernetes cluster. Of course, this section is empty on a fresh install. To launch a local container with Gefyra, just like with Docker, you need to click the Run Container button at the top right (Figure 6).

Figure 6: Gefyra start screen.

The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, simply select the matching kubeconfig file and optionally set the context (Figure 7). 

For remote clusters, you may need to manually specify additional parameters. Don’t worry if you’re unsure how to do this, as the next section will provide a detailed example for you to follow along with.

Figure 7: Selecting Kubeconfig.

The Kubernetes demo workloads

The following example showcases how Gefyra leverages the Kubernetes functionality included in Docker Desktop to create a development environment for a simple application that consists of two services — a backend and a frontend (Figure 8). 

Both services are implemented as Python processes, and the frontend service uses a color property obtained from the backend to generate an HTML document. Communication between the two services is established via HTTP, with the backend address being passed to the frontend as an environment variable.

Figure 8: Frontend and backend services.

The Gefyra team has created a repository for the Kubernetes demo workloads, which can be found on GitHub. 

If you prefer to watch a video explaining what’s covered in this tutorial, check out this video on YouTube. 

Prerequisite

Ensure that the current Kubernetes context is switched to Docker Desktop. This step allows the user to interact with the Kubernetes cluster and deploy applications to it using kubectl.

kubectl config current-context
docker-desktop
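If a different context is active, you can switch to the Docker Desktop cluster with:

kubectl config use-context docker-desktop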

Clone the repository

The next step is to clone the repository:

git clone https://github.com/gefyrahq/gefyra-demos

Applying the workload

The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service with communication between the two services established via the SVC_URL environment variable passed to the frontend container. 

It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002. The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container also includes an environment variable named SVC_URL, which is set to the value backend.default.svc.cluster.local:5002.

The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.
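The following condensed sketch reconstructs what the description above implies for manifests/demo.yaml; it is illustrative only, and field details may differ from the actual file in the gefyra-demos repository:

apiVersion: v1
kind: Pod
metadata:
  name: backend
  labels:
    app: backend
spec:
  containers:
    - name: backend
      image: quay.io/gefyra/gefyra-demo-backend
      ports:
        - containerPort: 5002
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: quay.io/gefyra/gefyra-demo-frontend
      ports:
        - containerPort: 5003
      env:
        - name: SVC_URL
          value: "backend.default.svc.cluster.local:5002"
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 5002
      targetPort: 5002
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5003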

/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created

Let’s watch the workload getting ready:

kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m6s
frontend   1/1     Running   0          2m6s

After ensuring that the backend and frontend pods have finished initializing (check for the READY column in the output), you can access the application by navigating to http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop. 

Upon loading the page, you will see the application’s output displayed in your browser. Although the output may not be visually stunning, it is functional and should provide the necessary functionality for your needs.

Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.

Using Gefyra “Run Container” with the frontend process

In the first part of this section, you will see how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or any other service utilized in the architecture.

Kick off a local container with Run Container from the Gefyra start screen (Figure 9).

Figure 9: Run a local container.

Once you’ve entered the first step of this process, you will find the kubeconfig and context set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.

Just hit the Next button and proceed with the container settings (Figure 10).

Figure 10: Container settings.

In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11). 

In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images that are being used in the selected namespace (from the Namespace selector). Isn’t that convenient? You don’t need to worry about the images being used in the cluster or find them yourself. Instead, you get a suggestion to work with the image at hand, as we want to do in this example (Figure 12). You could still specify any arbitrary images if you like, for example, a completely new image you just built on your machine.

Figure 11: Select namespace and workload.

Figure 12: Select image to run.

To copy the environment of the frontend container running in the cluster, you will need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.

Finally, in the upper part of the container settings, you need to overwrite the container image’s run command with the following to enable code reloading:

poetry run flask --app app --debug run --port 5002 --host 0.0.0.0

Figure 13: Copy environment of frontend container.

Let’s start the container process on port 5002 and expose this port on the local machine. In addition, let’s mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. That’s it for now. A click on the Run button starts the process.

Figure 14: Installing Gefyra components.

It takes a few seconds to install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once this is ready, you will get redirected to the native container view of Docker Desktop from this container (Figure 15).

Figure 15: Log view.

You can look around in the container using the Terminal tab (Figure 16). Type in the env command in the shell, and you will see all the environment variables coming with Kubernetes.

Figure 16: Terminal view.

We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is, of course, still running in the cluster. Now, when browsing to the URL http://localhost:5002, you will get a slightly different output:

Why is that? Let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).

Figure 17: App.py code.

The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real-time without having to rebuild or redeploy the container.

Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port. 

These URLs are commonly used by applications in Kubernetes to communicate with each other. The local container process is capable of sending requests to services running in Kubernetes using the native connection parameters, without the need for developers to make any changes, which may seem like magic at times.

In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.

Gefyra “bridge” with the backend process

We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. However, this approach may not always be necessary or desirable, especially for backend developers who may not be interested in the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.

First of all, we have to run a local instance of the backend service. It’s the same as with the frontend, but this time with the backend container image (Figure 18).

Figure 18: Running a backend container image.

As in the frontend example above, you can run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which is suggested by the drop-down selector. This time we need to copy the environment from the backend pod running in Kubernetes. Note that the volume mount is now set to the code of the backend service to make it work.

After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).

Figure 19: Checking the color.

At this point, keep in mind that we’re only running a local instance of the backend service. This time, a connection to a Kubernetes-based resource is not needed as this container runs without any external dependency.

The idea is to make the frontend process that serves from the Kubernetes cluster on http://localhost (still blue) pick up our backend information to render its output. That’s done using Gefyra’s bridge feature. In the next step, we will overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.

Getting back to the Gefyra container list on the start screen, you can find the Bridge column on each locally running container (Figure 20). Once you click this button, you can create a bridge of your local container into the cluster.

Figure 20: The Bridge column is visible on the far right.

In the next dialog, we need to enter the bridge configuration (Figure 21).

Figure 21: Enter the bridge configuration.

Let’s set the “Target” for the bridge to the backend pod, which is currently serving the frontend process in the cluster, and set a timeout for the bridge to 60 seconds. We also need to map the port of the proxy running in the cluster with the local instance. 

If your local container is configured to listen on a different port from the cluster, you can specify the mapping here (Figure 22). In this example, the service is running on port 5003 in both the cluster and on the local machine, so we need to map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.

Figure 22: Specify port mapping.

Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is now operational and can be terminated by simply clicking this button again.

Figure 23: The Bridge column showing a stop symbol.

At this point, the local code is able to handle requests from the frontend process in the cluster by using the URL stored in the SVC_URL variable, without making any changes to the frontend process itself. To confirm this, you can open http://localhost in your browser (which is served from Docker Desktop’s Kubernetes cluster) and check that the output is now green. This is because the local code is returning the value green for the color property. You can change this value to any valid one in your IDE, and it will be immediately reflected in the cluster. This is the amazing power of this tool.

Remember to release the bridge of your container once you are finished making changes to your backend. This resets the cluster to its original state, and the frontend displays the original “beautiful” blue H1 again. With this approach, we intercepted a container running in Kubernetes with our local code, without modifying the Kubernetes cluster itself, and released the intercept once we were done.

Conclusion

Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity. 

The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.

About the Author

Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He talks about Kubernetes in general and how his team uses Kubernetes for development. Follow him on LinkedIn to stay connected.
Source: https://blog.docker.com/feed/

Docker Desktop 4.19: Compose v2, the Moby project, and more

Docker Desktop release 4.19 is now available. In this post, we highlight features added to Docker Desktop in the past month, including performance enhancements, new language support, and a Moby update.

5x faster container-to-host networking on macOS

In Docker Desktop 4.19, we’ve made container-to-host networking performance 5x faster on macOS by replacing vpnkit with the TCP/IP stack from the gVisor project.

Many users work on projects that have containers communicating with a server outside their local Docker network. One example of this would be workloads that download packages from the internet via npm install or apt-get. This performance improvement should help a lot in these cases.

Over the next month, we’ll keep track of the stability of this new networking stack. If you notice any issues, you can revert to using the legacy vpnkit networking stack by setting "networkType": "vpnkit" in Docker Desktop’s settings.json config file.
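For reference, the relevant fragment of settings.json looks roughly like this (on macOS the file is typically found at ~/Library/Group Containers/group.com.docker/settings.json; other settings omitted):

{
  "networkType": "vpnkit"
}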

Docker Init (Beta): Support for Node and Python

In our 4.18 release, we introduced docker init, a CLI command in Beta that helps you easily add Docker to any of your projects by creating the required assets for you. In the 4.19 release, we’re happy to add to this and share that the feature now includes support for Python and Node.js. 

You can try docker init with Python and Node.js by updating to the latest version of Docker Desktop (4.19) and typing docker init in the command line while inside a target project folder. 

The Docker team is working on adding more languages and frameworks for this command, including Java, Rust, and .NET. Let us know if you would like us to support a specific language or framework. We welcome any feedback you may have as we continue to develop and improve Docker Init (Beta).

Docker Scout (Early Access)

With Docker Desktop release 4.19, we’ve made it easier to view Docker Scout data for all of your images directly in Docker Desktop. Whether you’re using an image stored locally in Docker Desktop or a remote image from Docker Hub, you can see all that data without leaving Docker Desktop.

A nudge toward Compose v2

Compose v1 has reached end-of-life and will no longer be bundled with Docker Desktop after June 2023.

In preparation, a new warning will be shown in the terminal when running Compose v1 commands. Set the COMPOSE_V1_EOL_SILENT=1 environment variable to suppress this message.

You can upgrade by enabling Use Compose v2 in the Docker Desktop settings. When active, Docker Desktop aliases docker-compose to Compose v2 and supports the recommended docker compose syntax.
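For example:

export COMPOSE_V1_EOL_SILENT=1   # suppress the Compose v1 end-of-life warning
docker compose version           # confirm that Compose v2 is active
docker compose up -d             # the recommended v2 syntax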

Moby 23

We updated the Docker Engine and the CLI to Moby 23.0, where we are upstreaming open source internal developments such as the containerd integration and Wasm support, which will ship with Moby 24.0. Moby 23.0 includes additional enhancements such as the --format=json shorthand variant of --format="{{ json . }}" and support for relative source paths to the run command in the -v/--volume and --mount flags. You can read more about Moby 23.0 in the release notes.
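Two quick examples of these conveniences (image and paths are hypothetical):

docker ps --format=json                                 # shorthand for --format="{{ json . }}"
docker run --rm -v ./config:/config alpine ls /config   # relative source paths are now accepted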

Conclusion

We love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. Check out the Docker Desktop 4.19 release notes for a full breakdown of what’s new in the latest release.

Source: https://blog.docker.com/feed/

Docker Compose Experiment: Sync Files and Automatically Rebuild Services with Watch Mode

We often hear how indispensable Docker Compose is as a development tool. Running docker compose up offers a streamlined experience and scales from quickly standing up a PostgreSQL database to building 50+ services across multiple platforms.

And, although “building a Docker image” was previously considered a last step in release pipelines, it’s safe to say that containers have since become an essential part of our daily workflow. Still, concerns around slow builds and developer experience have often been a barrier towards the adoption of containers for local development.

We’ve come a long way, though. For starters, Docker Compose v2 now has deep integration with BuildKit, so you can use features like RUN cache mounts, SSH Agent forwarding, and efficient COPY with --link to speed up and simplify your builds. We’re also constantly making quality-of-life tweaks like enhanced progress reporting and improving consistency across the Docker CLI ecosystem.

As a result, more developers are running docker compose build && docker compose up to keep their running development environment up-to-date as they make code changes. In some cases, you can even use bind mounts combined with a framework that supports hot reload to avoid the need for an image rebuild, but this approach often comes with its own set of caveats and limitations.

An early look at the watch command

Starting with Compose v2.17, we’re excited to share an early look at the new development-specific configuration in Compose YAML as well as an experimental file watch command (Figure 1) that will automatically update your running Compose services as you edit and save your code.

This preview is brought to you in no small part by Compose developer Nicolas De Loof (in addition to more than 10 bugfixes in this release alone).

Figure 1: Preview of the new watch command.

An optional new section, x-develop, can be added to a Compose service to configure options specific to your project’s daily flow. In this release, the only available option is watch, which allows you to define file or directory paths to monitor on your computer and a corresponding action to take inside the service container.

Currently, there are two possible actions: 

sync — Copy changed files matching the pattern into the running service container(s).

rebuild — Trigger an image build and recreate the running service container(s).

services:
  web:
    build: .
    x-develop:
      watch:
        - action: sync
          path: ./web
          target: /src/web
        - action: rebuild
          path: package.json

In the preceding example, whenever a source file in the web/ directory is changed, Compose will copy the file to the corresponding location under /src/web inside the container. Because Webpack supports Hot Module Reload, the changes are automatically detected and applied.

Unlike source code files, adding a new dependency cannot be done on the fly, so whenever package.json is changed, Compose will rebuild the image and recreate the web service container.

Behind the scenes, the file watching code shares its core with Tilt. The intricacies and surprises of file watching have always been near and dear to the Tilt team’s hearts, and, as Dockhands, the geeking out has continued. 

We are going to continue to build out the experience while gated behind the new docker compose alpha command and x-develop Compose YAML section. This approach will allow us to respond to community feedback early in the development process while still providing a clear path to stabilization as part of the Compose Spec.

Docker Compose powers countless workflows today, and its lightweight approach to containerized development is not going anywhere — it’s just learning a few new tricks.

Try it out

Follow the instructions at dockersamples/avatars to quickly run a small demo app, as follows:

git clone https://github.com/dockersamples/avatars.git
cd avatars
docker compose up -d
docker compose alpha watch
# open http://localhost:5735 in your browser

If you try it out on your own project, you can comment on the proposed specification on GitHub issue #253 in the compose-spec repository.
Source: https://blog.docker.com/feed/

Docker Desktop 4.18: Docker Scout Updates, Container File Explorer GA

We’re always looking for ways to enhance your experience with Docker, whether you’re using an integration, extension, or directly in product. Docker Desktop 4.18 focuses on improvements in the command line and in Docker Desktop. 

Read on to learn about new CLI features in Docker Scout, and find out about Docker init, an exciting CLI Beta feature to help you quickly add Docker to any project. We also review new features to help you get up and running with Docker faster: Container File Explorer, adminless macOS install, and a new experimental feature in Docker Compose.

Docker Scout CLI

In Docker Desktop 4.17, we introduced Docker Scout, a tool that provides visibility into image vulnerabilities and recommendations for quick remediation. We are delighted to announce the release of several new features into the Docker Scout command line, which ships with Docker Desktop 4.18. These updates come after receiving an overwhelming amount of community feedback. 

The 4.18 release of Docker Scout includes a vulnerability quickview, image recommendations directly on the command line, improved remediation guidance with BuildKit SBOM utilization, and a preview feature comparing images (imagine using diff, but for container images).

Quickview 

Suppose that you have created a new container image and would like to assess its security posture. You can now run docker scout quickview for an instant, high-level security insight into your image. If any issues are found, Docker Scout will guide you on what to do next.
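For example, against a hypothetical image:

docker scout quickview myorg/myapp:latest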

`docker scout quickview` output showing image vulnerability information

Command-line recommendations

If you’ve previously used docker scout cves to understand which CVEs exist in your images, you may have wondered what course of action to take next. With the new docker scout recommendations command, you receive a list of recommendations that directly suggest available updates for the base image. 

The docker scout recommendations command analyzes the image and displays recommendations to refresh or update the base image, along with a list of benefits, including opportunities to reduce vulnerabilities or how to achieve smaller image sizes.
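For example:

docker scout recommendations myorg/myapp:latest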

‘docker scout recommendations’ output showing available image updates for vulnerable images

BuildKit provenance and SBOM attestations 

In January 2023, BuildKit was extended to support building images with attestations. These images can now use the docker scout command line to process this information and determine relevant next steps. We can support this because the docker scout command-line tool knows exactly which base image you built with and can provide more accurate recommendations. If an image was built and pushed with an attached SBOM attestation, docker scout reads the package information from the SBOM attestation instead of creating a new local SBOM.

To learn how to build images with attestations using BuildKit, read “Generating SBOMs for Your Image with BuildKit.” 

Compare images

Note: This is an experimental Docker Scout feature and may change and evolve over time. 

Retrospectively documenting the changes made to address a security issue after completing a vulnerability remediation is considered a good practice. Docker Desktop 4.18 introduces an early preview of image comparison. 

Comparison of vulnerability differences between two images

This feature highlights the vulnerability differences between two images and how packages compare. These details include the package version, environment variables in each image, and more. Additionally, the command-line output can be formatted as Markdown. This information can then be used to generate diff previews in pull requests through GitHub Actions.

We’d love to know what scenarios you could imagine using this diff feature in. You can do this by opening up Docker Desktop, navigating to the Images tab, and selecting Give feedback.

Read the documentation to learn more about these features. 

Container File Explorer 

Another feature we’re happy to announce is the GA release of Container File Explorer. When you need to check or quickly replace files within a container, Container File Explorer will help you do this — and much more — straight from Docker Desktop’s UI. 

You won’t need to remember long CLI commands, fiddle with long path parameters on the docker cp command, or get frustrated that your container has no shell at all to check the files. Container File Explorer provides a simple UI that allows you to:

Check a container file system

Copy files and folders between host and containers

Easily drag and drop files to a container

Quickly edit files with syntax highlighting

Delete files

With Container File Explorer, you can view your containers’ files at any state (stopped/running/paused/…) and for any container type, including slim-containers/slim-images (containers without a shell). Open the dashboard, go to the Containers tab, open the container action menu, and check your files:

Container File Explorer UI in Docker Desktop

Adminless install on macOS

We’ve adjusted our macOS install flow to make it super easy for developers to install Docker Desktop without granting them admin privileges. Some developers work in environments with elevated security requirements where local admin access may be prohibited on their machines. We wanted to make sure that users in these environments are able to opt out of Docker Desktop functionality that requires admin privileges.

The default install flow on macOS will still ask for admin privileges, as we believe this allows us to provide an optimized experience for the vast majority of developer use cases. Upon granting admin privileges, Docker Desktop automatically installs the Docker CLI tools, enabling third-party libraries to seamlessly integrate with Docker (by enabling the default Docker socket) and allowing users to bind to privileged ports between 1 and 1024. 

If you want to change the settings you configured at install, you can do so easily within the Advanced tab of Docker Desktop’s Settings.

Docker init (Beta)

Another exciting feature we’re releasing in Beta is docker init. This is a new CLI command that lets you quickly add Docker to your project by automatically creating the required assets: Dockerfiles, Compose files, and .dockerignore. Using this feature, you can easily update existing projects to run using containers or set up new projects even if you’re not familiar with Docker.

You can try docker init by updating to the latest version of Docker Desktop (4.18.0) and typing docker init in the command line while inside a target project folder. docker init will create all the required files to run your project in Docker. 
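For example, a first run might look like the following sketch; the project path is hypothetical, and the interactive prompts may vary between versions:

cd path/to/my-go-project
docker init
# Answer the prompts; docker init then generates a Dockerfile, a Compose
# file, and a .dockerignore tailored to your project.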

Refer to the docker init documentation to learn more.

The Beta version of docker init ships with Go support, and the Docker team is already working on adding more languages and frameworks, including Node.js, Python, Java, Rust, and .NET, plus other features in the coming months. If there is a specific language or framework you would like us to support, let us know. Submit other feedback and suggestions in our public roadmap.

Note: Please be aware that docker init should not be confused with the internally-used docker-init executable, which is invoked by Docker when utilizing the --init flag with the docker run command. Refer to the docs to learn more. 

`docker init` command-line output on how to get started

Docker Compose File Watch (Experimental)

Docker Compose has a new trick! Docker Compose File Watch is available now as an Experimental feature to automatically keep all your service containers up-to-date while you work.

With the 4.18 release, you can optionally add a new x-develop section to your services in compose.yaml:

services:
  web:
    build: .
    # !!! x-develop is experimental !!!
    x-develop:
      watch:
        - action: sync
          path: ./web
          target: /app/web
        - action: rebuild
          path: ./package.json

Once configured, the new docker compose alpha watch command will start monitoring for file edits within your project:

On a change to ./web/App.jsx, for example, Compose will automatically synchronize it to /app/web/App.jsx inside the container.

Meanwhile, if you modify package.json (such as by installing a new npm package), Compose will rebuild the image and replace the existing service with an updated version.
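Putting it together, a typical session might look like this minimal sketch; in the experimental release you may need the services running before you start watch mode:

# Start the services, then watch the project for file changes
docker compose up -d
docker compose alpha watch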

Compose File Watch mode is just the start. With nearly 100 commits since the last Docker Compose release, we’ve squashed bugs and made a lot of quality-of-life improvements. (A special shout-out to all our recent first-time contributors!)

We’re excited about Docker Compose File Watch and are actively working on the underlying mechanics and configuration format.

Conclusion

That’s a wrap for our Docker Desktop 4.18 update. This release includes many cool, new features, including some that you can help shape! We also updated the Docker Engine to address some CVEs. As always, we love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. 

Check out the release notes for a full breakdown of what’s new in Docker Desktop 4.18.

Containerizing an Event Posting App Built with the MEAN Stack

This article is a result of open source collaboration. During Hacktoberfest 2022, the project was announced in the Black Forest Docker meetup group and received contributions from members of the meetup group and other Hacktoberfest contributors. Almost all of the code in the GitHub repo was written by Stefan Ruf, Himanshu Kandpal, and Sreekesh Iyer.

The MEAN stack is a fast-growing, open source JavaScript stack used to develop web applications. MEAN is a diverse collection of robust technologies — MongoDB, Express.js, Angular, and Node.js — for developing scalable web applications. 

The stack is a popular choice for web developers because it lets them work with a single language throughout the development process, and it also provides plenty of flexibility and scalability. Node, Express, and Angular even claimed top spots as popular frameworks or technologies in Stack Overflow’s 2022 Developer Survey.

In this article, we’ll describe how the MEAN stack works using an Event Posting app as an example.

How does the MEAN stack work?

MEAN consists of the following four components:

MongoDB — A NoSQL database 

Express.js — A backend web application framework for Node.js

Angular — A JavaScript-based front-end web development framework for building dynamic, single-page web applications

Node.js — A JavaScript runtime environment that enables running JavaScript code outside the browser, among other things

Here’s a brief overview of how the different components might work together:

A user interacts with the frontend, via the web browser, which is built with Angular components. 

The backend server delivers frontend content via Express.js running atop Node.js.

Data is fetched from the MongoDB database and returned to the frontend, where your application displays it for the user.

Any interaction that causes a data-change request is sent to the Node-based Express server.

Why is the MEAN stack so popular?

The MEAN stack is often used to build full-stack, JavaScript web applications, where the same language is used for both the client-side and server-side of the application. This approach can make development more efficient and consistent and make it easier for developers to work on both the frontend and backend of the application.

The MEAN stack is popular for a few reasons, including the following:

Easy learning curve — If you’re familiar with JavaScript and JSON, then it’s easy to get started. MEAN’s structure lets you easily build a three-tier architecture (frontend, backend, database) with just JavaScript and JSON.

Model-view-controller architecture — MEAN supports the model-view-controller (MVC) pattern, which makes for a smooth and seamless development process.

Reduces context switching — Because MEAN uses JavaScript for both frontend and backend development, developers don’t need to worry about switching languages. This capability boosts development efficiency.

Open source and active community support — The MEAN stack is purely open source, so any developer can use it to build robust web applications. Its frameworks improve coding efficiency and promote faster app development.

Running the Event Posting app

Here are the key components of the Event Posting app:

MongoDB

Express.js

Angular

Node.js

Docker Desktop

Deploying the Event Posting app is a fast process. You’ll clone the repository, set up the client and backend, and then bring up the application.

To start, run the following commands to set up and run the backend:

git clone https://github.com/dockersamples/events
cd events/backend
npm install
npm run dev

General flow of the Event Posting app

The flow of information through the Event Posting app is illustrated in Figure 1 and described in the following steps.

Figure 1: General flow of the Event Posting app.

A user visits the event posting app’s website on their browser.

Angular, the frontend framework, retrieves the necessary HTML, CSS, and JavaScript files from the server and renders the initial view of the website.

When the user wants to view a list of events or create a new event, Angular sends an HTTP request to the backend server.

Express.js, the backend web framework, receives the request and processes it. This step includes interacting with the MongoDB database to retrieve or store data and providing an API for the frontend to access the data.

The backend server sends a response to the frontend, which Angular receives and uses to update the view.

When a user creates a new event, Angular sends a POST request to the backend server, which Express.js receives and processes. Express.js stores the new event in the MongoDB database.

The backend server sends a confirmation response to the frontend, which Angular receives and uses to update the view and display the new event.

Node.js, the JavaScript runtime, handles the server-side logic for the application and allows for real-time updates. This includes running the Express.js server, handling real-time updates using WebSockets, and handling any other server-side tasks.

You can then access Event Posting at http://localhost:80 in your browser (Figure 2):

Figure 2: Add a new event.

Select Add New Event to add the details (Figure 3).

Figure 3: Add event details.

Save the event details to see the final results (Figure 4).

Figure 4: Display upcoming events.

Why containerize the MEAN stack?

Containerizing the MEAN stack gives the application a consistent, portable, and easily scalable environment, along with improved security and ease of deployment. Its benefits include the following:

Consistency: Containerization ensures that the environment for the application is consistent across different development, testing, and production environments. This approach eliminates issues that can arise from differences in the environment, such as different versions of dependencies or configurations.

Portability: Containers are designed to be portable, which means that they can be easily moved between different environments. This capability makes it easy to deploy the MEAN stack application to different environments, such as on-premises or in the cloud.

Isolation: Containers provide a level of isolation between the application and the host environment. Thus, the application has access only to the resources it needs and does not interfere with other applications running on the same host.

Scalability: Containers can be easily scaled up or down depending on the needs of the application, resulting in more efficient use of resources and better performance.
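As a quick illustration of the scalability point, Compose can scale a stateless service with a single flag. The sketch below assumes a service named backend that does not publish a fixed host port (as in the Compose file shown later in this article):

# Run three replicas of the backend service
docker compose up -d --scale backend=3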

Containerizing your Event Posting app

Docker helps you containerize your MEAN Stack — letting you bundle your complete Event Posting application, runtime, configuration, and operating system-level dependencies. The container then includes everything needed to ship a cross-platform, multi-architecture web application. 

We’ll explore how to run this app within a Docker container using Docker Official Images. To begin, you’ll need to download Docker Desktop and complete the installation process. This step includes the Docker CLI, Docker Compose, and a user-friendly management UI, which will each be useful later on.

Docker uses a Dockerfile to create each image’s layers. Each layer stores important changes stemming from your base image’s standard configuration. Next, we’ll create an empty Dockerfile in the root of our project repository.

Containerizing your Angular frontend

We’ll build a multi-stage Dockerfile to containerize our Angular frontend. 

A Dockerfile is a plain-text file that contains instructions for assembling a Docker container image. When Docker builds our image via the docker build command, it reads these instructions, executes them, and creates a final image. 

With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit testing. A separate image holds the application’s runtime. This setup makes the final image more secure and shrinks its footprint (because it doesn’t contain development or debugging tools). 

Let’s walk through the process of creating a Dockerfile for our application. First, create the following empty file with the name Dockerfile in the root of your frontend app.

touch Dockerfile

Then you’ll need to define your base image in the Dockerfile file. Here we’ve chosen the stable LTS version of the Node Docker Official Image. This image comes with every tool and package needed to run a Node.js application:

FROM node:lts-alpine AS build

Next, let’s create a directory to house our image’s application code. This acts as the working directory for your application:

WORKDIR /usr/src/app

The following COPY instructions copy the package.json and package-lock.json files from the host machine to the container image. 

The COPY command takes two parameters. The first tells Docker which file(s) you’d like to copy into the image. The second tells Docker where you want those files to be copied. Because we already set WORKDIR, the . destination resolves to our working directory, /usr/src/app.

COPY package.json .
COPY package-lock.json .
RUN npm ci

Next, we need to add our source code into the image. We’ll use the COPY command just like we previously did with our package.json file. 

Note: It’s common practice to copy the package.json file separately from the application code when building a Docker image. This step allows Docker to cache the node_modules layer separately from the application code layer, which can significantly speed up the Docker build process and improve the development workflow.

COPY . .

Then, use npm run build to run the build script from package.json:

RUN npm run build

In the next step, we need to specify the second stage of the build that uses an Nginx image as its base and copies the nginx.conf file to the /etc/nginx directory. It also copies the compiled TypeScript code from the build stage to the /usr/share/nginx/html directory.

FROM nginx:stable-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/events /usr/share/nginx/html

Finally, the EXPOSE instruction tells Docker which port the container listens on at runtime. You can specify whether the port listens on TCP or UDP. The default is TCP if the protocol isn’t specified.

EXPOSE 80

Here is our complete Dockerfile:

# Builder container to compile typescript
FROM node:lts-alpine AS build
WORKDIR /usr/src/app

# Install dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci

# Copy the application source
COPY . .
# Build typescript
RUN npm run build

FROM nginx:stable-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/events /usr/share/nginx/html

EXPOSE 80

Now, let’s build our image. We’ll run the docker build command as above, but with the -f Dockerfile flag. The -f flag specifies your Dockerfile name. The “.” command will use the current directory as the build context and read a Dockerfile from stdin. The -t tags the resulting image.

docker build . -f Dockerfile -t events-fe:1
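To sanity-check the result, you can list the image and, optionally, give it a quick local run; the host port 8080 is an arbitrary choice, and the standalone run assumes your nginx.conf can serve the static bundle without the other services being up:

# Confirm the image was built
docker image ls events-fe
# Optionally serve the frontend on http://localhost:8080
docker run --rm -p 8080:80 events-fe:1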

Containerizing your Node.js backend

Let’s walk through the process of creating a Dockerfile for our backend as the next step. First, create the following empty Dockerfile in the root of your backend Node app:

# Builder container to compile typescript
FROM node:lts-alpine AS build
WORKDIR /usr/src/app

# Install dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci

# Copy the application source
COPY . .
# Build typescript
RUN npm run build

FROM node:lts-alpine
WORKDIR /app
COPY package.json .
COPY package-lock.json .
COPY .env.production .env

RUN npm ci --production

COPY –from=build /usr/src/app/dist /app

EXPOSE 8000
CMD [ "node", "src/index.js"]

This Dockerfile is useful for building and running TypeScript applications in a containerized environment, allowing developers to package and distribute their applications more easily.

The first stage of the build process, named build, is based on the official Node.js LTS Alpine Docker image. It sets the working directory to /usr/src/app and copies the package.json and package-lock.json files to install dependencies with the npm ci command. It then copies the entire application source code and builds TypeScript with the npm run build command.

The second stage of the build process, which produces the production image, also uses the official Node.js LTS Alpine Docker image. It sets the working directory to /app and copies the package.json, package-lock.json, and .env.production files. It then installs only production dependencies with the npm ci --production command and copies the compiled TypeScript output of the previous stage from /usr/src/app/dist to /app.

Finally, it exposes port 8000 and runs the command node src/index.js when the container is started.
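You can build the backend image the same way we built the frontend; the tag below is just an example:

# Run this from the backend directory
docker build . -f Dockerfile -t events-be:1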

Defining services using a Compose file

Here’s how our services appear within a Docker Compose file:

services:
  frontend:
    build:
      context: "./frontend/events"
      dockerfile: "./Dockerfile"
    networks:
      - events_net
  backend:
    build:
      context: "./backend"
      dockerfile: "./Dockerfile"
    networks:
      - events_net
  db:
    image: mongo:latest
    ports:
      - 27017:27017
    networks:
      - events_net
  proxy:
    image: nginx:stable-alpine
    environment:
      - NGINX_ENVSUBST_TEMPLATE_SUFFIX=.conf
      - NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx
    volumes:
      - ${PWD}/nginx.conf:/etc/nginx/templates/nginx.conf.conf
    ports:
      - 80:80
    networks:
      - events_net

networks:
  events_net:

Your example application has the following parts:

Four services backed by Docker images: your Angular frontend, Node.js backend, MongoDB database, and Nginx as a proxy server

The frontend and backend services are built from Dockerfiles located in ./frontend/events and ./backend directories, respectively. Both services are attached to a network called events_net.

The db service is based on the latest version of the MongoDB Docker image and exposes port 27017. It is attached to the same events_net network as the frontend and backend services.

The proxy service is based on the stable-alpine version of the Nginx Docker image. It has two environment variables defined, NGINX_ENVSUBST_TEMPLATE_SUFFIX and NGINX_ENVSUBST_OUTPUT_DIR, that enable environment variable substitution in Nginx configuration files. 

The proxy service also has a volume defined that maps the local nginx.conf file to /etc/nginx/templates/nginx.conf.conf in the container. Finally, it exposes port 80 and is attached to the events_net network.

The events_net network is defined at the end of the file, and all services are attached to it. This setup enables communication between the containers using their service names as hostnames.
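For example, once the stack is running (see the next section), you can confirm that one service can reach another by name; this assumes the images include BusyBox’s ping, which these Alpine-based images typically do:

# From the backend container, resolve and reach the MongoDB service by name
docker compose exec backend ping -c 1 db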

You can clone the repository or download the docker-compose.yml file directly from Dockersamples on GitHub.

Bringing up the container services

You can start the MEAN application stack by running the following command:

docker compose up -d

Next, use the docker compose ps command to confirm that your stack is running properly. Your terminal will produce the following output:

$ docker compose ps
NAME                IMAGE                 COMMAND                  SERVICE    CREATED          STATUS          PORTS
events-backend-1    events-backend        "docker-entrypoint.s…"   backend    29 minutes ago   Up 29 minutes   8000/tcp
events-db-1         mongo:latest          "docker-entrypoint.s…"   db         5 seconds ago    Up 4 seconds    0.0.0.0:27017->27017/tcp
events-frontend-1   events-frontend       "/docker-entrypoint.…"   frontend   29 minutes ago   Up 29 minutes   80/tcp
events-proxy-1      nginx:stable-alpine   "/docker-entrypoint.…"   proxy      29 minutes ago   Up 29 minutes   0.0.0.0:80->80/tcp

Viewing the containers via Docker Dashboard

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application (Figure 5):

Figure 5: Viewing running containers in Docker Dashboard.
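When you’re finished exploring, you can stop and remove everything the Compose file created:

# Tear down the containers and the events_net network
docker compose down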

Conclusion

Congratulations! You’ve successfully learned how to containerize a MEAN-backed Event Posting application with Docker. With a single YAML file, we’ve demonstrated how Docker Compose helps you easily build and deploy your MEAN stack in seconds. With just a few extra steps, you can apply this tutorial while building applications with even greater complexity. Happy developing!