Boost Your Local Testing Game with the LambdaTest Tunnel Docker Extension

As the demand for web applications continues to rise, so does the importance of testing them thoroughly. One challenge that testers face is how to test applications that are hosted locally on their machines. This is where the LambdaTest Tunnel Docker Extension comes in handy. This extension allows you to establish a secure connection between your local environment and the LambdaTest platform, making it possible to test your locally hosted pages and applications on a remote browser. 

In this article, we’ll explore the benefits of using the LambdaTest Tunnel Docker Extension and describe how it can streamline your testing workflow.

Overview of LambdaTest Tunnel

LambdaTest Tunnel is a secure and encrypted tunneling feature that allows devs and QAs to test their locally hosted web applications or websites on cloud-based real machines. It establishes a secure connection between the user’s local machine and the real machine in the cloud (Figure 1).

By downloading the LambdaTest Tunnel binary, you can securely connect your local machine to LambdaTest cloud servers even when behind corporate firewalls. This allows you to test locally hosted websites or web applications across various browsers, devices, and operating systems available on the LambdaTest platform. Whether your web files are written in HTML, CSS, PHP, Python, or similar languages, you can use LambdaTest Tunnel to test them.

Figure 1: Overview of LambdaTest Tunnel.

Why use LambdaTest Tunnel?

LambdaTest Tunnel offers numerous benefits for web developers, testers, and QA professionals, including a secure and encrypted connection, cross-browser compatibility testing, and localhost testing.

Let’s look at these benefits one by one:

It provides a secure and encrypted connection between your local machine and the virtual machines in the cloud, thereby ensuring the privacy of your test data and online communications.

With LambdaTest Tunnel, you can test your web applications or websites, local folder, and files across a wide range of browsers and operating systems without setting up complex and expensive local testing environments.

It lets you test your locally hosted web applications or websites on cloud-based real OS machines.

You can even run accessibility tests on desktop browsers while testing locally hosted web applications and pages.

Why run LambdaTest Tunnel as a Docker Extension?

With Docker Extensions, you can build and integrate software applications into your daily workflow. Running LambdaTest Tunnel as a Docker extension provides a seamless, hassle-free way to establish a secure connection and perform cross-browser testing of locally hosted websites and web applications on the LambdaTest platform, without manually launching the tunnel through the command-line interface (CLI).
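For comparison, launching the tunnel manually means downloading the tunnel binary and starting it from a terminal with your credentials, roughly like this (a sketch based on LambdaTest’s documented flags; the username, access key, and tunnel name are placeholders):

./LT --user your-username --key your-access-key --tunnelName my-local-tunnel

The Docker Extension wraps this step in a point-and-click UI, so you never have to manage the binary yourself.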

The LambdaTest Tunnel Docker Extension opens up a world of options for your testing workflows by adding a variety of features. Docker Desktop has an easy-to-use one-click installation feature that allows you to use the LambdaTest Tunnel Docker Extension directly from Docker Desktop.

Getting started

Prerequisites: Docker Desktop 4.8 or later and a LambdaTest account. Note: Make sure the Docker Extensions feature is enabled (Figure 2).

Figure 2: Enable Docker Extensions.

Step 1: Install the LambdaTest Docker Extension

In the Extensions Marketplace, search for the LambdaTest Tunnel extension and select Install (Figure 3).

Figure 3: Install LambdaTest Tunnel.

Step 2: Set up the Docker LambdaTest Tunnel

Open the LambdaTest Tunnel extension and select Setup Tunnel to configure the tunnel (Figure 4).

Figure 4: Configure the tunnel.

Step 3: Enter your LambdaTest credentials

Provide your LambdaTest Username, Access Token, and preferred Tunnel Name. You can find your Username and Access Token in your LambdaTest Profile under Password & Security. Once these details have been entered, select Launch Tunnel (Figure 5).

Figure 5: Launch LambdaTest Tunnel.

The LambdaTest Tunnel will be launched, and you can see the running tunnel logs (Figure 6).

Figure 6: Running logs.

Once you have configured the LambdaTest Tunnel via Docker Extension, it should appear on the LambdaTest Dashboard (Figure 7).

Figure 7: New active tunnel.

Local testing using LambdaTest Tunnel Docker Extension

Let’s walk through a scenario using LambdaTest Tunnel. Suppose a web developer has created a new web application that allows users to upload and view images. The developer needs to ensure that the application can handle a variety of image formats and sizes and that it renders and displays images correctly across different browsers and devices.

To do this, the developer first sets up a local development environment and installs the LambdaTest Tunnel Docker Extension. They then use the web application to open and manipulate local image files.

Next, the developer uses the LambdaTest Tunnel to securely expose their local development environment to the internet. This step allows them to test the application in real-time on different browsers and devices using LambdaTest’s cloud-based digital experience testing platform.

Now let’s see the steps to perform local testing using the LambdaTest Tunnel Docker Extension.

1. Go to the LambdaTest Dashboard and navigate to Real Time Testing > Browser Testing (Figure 8).

Figure 8: Navigate to Browser Testing.

2. In the console, enter the localhost URL, choose the browser, browser version, operating system, and other options, and select START (Figure 9).

Figure 9: Configure testing.

3. A cloud-based real operating system fires up, where you can test your local files or folders (Figure 10).

Figure 10: Perform local testing.

Learn more about how to set up the LambdaTest Tunnel Docker Extension in the documentation. 

Conclusion

The LambdaTest Tunnel Docker Extension makes it easy to perform local testing without launching the tunnel from the CLI. You can run localhost tests over an online cloud grid of 3000+ real browser and operating system combinations. You don’t have to worry about the challenges of local infrastructure because LambdaTest provides a zero-downtime cloud grid.

Check out the LambdaTest Tunnel Docker Extension on Docker Hub. The LambdaTest Tunnel Docker Extension source code is available on GitHub, and contributions are welcome.
Source: https://blog.docker.com/feed/

Docker Init: Initialize Dockerfiles and Compose files with a single CLI command

Docker has revolutionized the way developers build, package, and deploy their applications. Docker containers provide a lightweight, portable, and consistent runtime environment that can run on any infrastructure. And now, the Docker team has developed docker init, a new command-line interface (CLI) command introduced as a beta feature that simplifies the process of adding Docker to a project (Figure 1).

Note: Docker Init should not be confused with the internally used docker-init executable, which Docker invokes when you use the --init flag with the docker run command.

Figure 1: With one command, all required Docker files are created and added to your project.

Create assets automatically

The new  docker init command automates the creation of necessary Docker assets, such as Dockerfiles, Compose files, and .dockerignore files, based on the characteristics of the project. By executing the docker init command, developers can quickly containerize their projects. Docker init is a valuable tool for developers who want to experiment with Docker, learn about containerization, or integrate Docker into their existing projects.

To use docker init, developers need to upgrade to Docker Desktop version 4.19.0 or later and execute the command in the target project folder. Docker init will detect the project definitions and automatically generate the necessary files to run the project in Docker.
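For example, from the root of a project (a minimal sketch; the exact prompts and generated files depend on what docker init detects in your project):

cd path/to/your/project
docker init

After answering the prompts, you should find a Dockerfile, a Compose file, and a .dockerignore file in the folder, ready for docker compose up.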

The current Beta release of docker init supports Go, Node, and Python, and our development team is actively working to extend support for additional languages and frameworks, including Java, Rust, and .NET. If there is a language or stack that you would like to see added or if you have other feedback about docker init, let us know through our Google form.

In conclusion, docker init is a valuable tool for developers who want to simplify the process of adding Docker support to their projects. It automates the creation of necessary Docker assets  and can help standardize the creation of Docker assets across different projects. By enabling developers to focus on developing their applications and reducing the risk of errors and inconsistencies, Docker init can help accelerate the adoption of Docker and containerization.

See Docker Init in action

To see docker init in action, check out the following overview video by Francesco Ciulla, which demonstrates adding the required Docker assets to a project.

Check out the documentation to learn more.
Source: https://blog.docker.com/feed/

Building a Local Application Development Environment for Kubernetes with the Gefyra Docker Extension 

If you’re using a Docker-based development approach, you’re already well on your way toward creating cloud-native software. Containerizing your software ensures that you have all the system-level dependencies, language-specific requirements, and application configurations managed in a containerized way, bringing you closer to the environment in which your code will eventually run. 

In complex systems, however, you may need to connect your code with several auxiliary services, such as databases, storage volumes, APIs, caching layers, message brokers, and others. In modern Kubernetes-based architectures, you also have to deal with service meshes and cloud-native deployment patterns, such as probes, configuration, and structural and behavioral patterns. 

Kubernetes offers a uniform interface for orchestrating scalable, resilient, and services-based applications. However, its complexity can be overwhelming, especially for developers without extensive experience setting up Kubernetes clusters. That’s where Gefyra comes in, making it easier for developers to work with Kubernetes and improve the process of creating secure, reliable, and scalable software.

What is Gefyra? 

Gefyra, named after the Greek word for “bridge,” is a comprehensive toolkit that facilitates Docker-based development with Kubernetes. If you plan to use Kubernetes as your production platform, it’s essential to work with the same environment during development. This approach ensures that you have the highest possible “dev/prod-parity,” minimizing friction when transitioning from development to production. 

Gefyra is an open source project that provides docker run on steroids. It allows you to connect your local Docker with any Kubernetes cluster and run a container locally that behaves as if it were running in the cluster. You can write code locally in your favorite code editor using the tools you love.

Additionally, Gefyra does not require you to build a container image from your code changes, push the image to a registry, or trigger a restart in the cluster. Instead, it saves you from this tedious cycle by connecting your local code right into the cluster without any changes to your existing Dockerfile. This approach is useful not only for new code but also when introspecting existing code with a debugger that you can attach to a running container. That makes Gefyra a productivity superstar for any Kubernetes-based development work.

How does Gefyra work?

Gefyra installs several cluster-side components that enable it to control the local development machine and the development cluster. These components include a tunnel between the local development machine and the Kubernetes cluster, a local DNS resolver that behaves like the cluster DNS, and sophisticated IP routing mechanisms. Gefyra uses popular open source technologies, such as Docker, WireGuard, CoreDNS, Nginx, and Rsync, to build on top of these components.

The local development setup involves running a container instance of the application on the developer machine, with a sidecar container called Cargo that acts as a network gateway and provides a CoreDNS server that forwards all requests to the cluster (Figure 1). Cargo encrypts all the passing traffic with WireGuard using ad hoc connection secrets. Developers can use their existing tooling, including their favorite code editor and debuggers, to develop their applications.

Figure 1: Local development setup.

Gefyra manages two ends of a WireGuard connection and automatically establishes a VPN tunnel between the developer and the cluster, making the connection robust and fast without stressing the Kubernetes API server (Figure 2). Additionally, the client side of Gefyra manages a local Docker network with a VPN endpoint, allowing the container to join the VPN that directs all traffic into the cluster.

Figure 2: Connecting developer machine and cluster.

Gefyra also allows bridging existing traffic from the cluster to the local container, enabling developers to test their code with real-world requests from the cluster and collaborate on changes in a team. The local container instance remains connected to auxiliary services and resources in the cluster while receiving requests from other Pods, Services, or the Ingress. This setup eliminates the need for building container images in a continuous integration pipeline and rolling out a cluster update for simple changes.

Why run Gefyra as a Docker Extension?

Gefyra’s core functionality is contained in a Python library available in its repository. The CLI that comes with the project has a long list of arguments that may be overwhelming for some users. To make Gefyra more accessible, its developers created a Docker Desktop extension that is easy to use without having to delve into the intricacies of the CLI.
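For context, a typical CLI session looks roughly like the following (an illustrative sketch; exact flag spellings vary between Gefyra versions, and the image, container, and workload names are placeholders):

# connect the local Docker host with the cluster
gefyra up

# run a container locally that behaves as if it ran in the cluster
gefyra run -i my-image:latest -N my-container -n default

# overlay a cluster workload with the locally running container
gefyra bridge -N my-container -n default --target deployment/my-app

The Docker Desktop extension drives these same operations through a guided UI.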

The Gefyra extension for Docker Desktop enables developers to work with a variety of Kubernetes clusters: the built-in Kubernetes cluster of Docker Desktop, local providers such as Minikube, K3d, and Kind, Getdeck Beiboot, or any remote cluster. Let’s get started.

Installing the Gefyra Docker Desktop extension

Prerequisites: Docker Desktop 4.8 or later.

Step 1: Initial setup

In Docker Desktop, confirm that the Docker Extensions feature is enabled. (Docker Extensions should be enabled by default.) In Settings | Extensions select the Enable Docker Extensions box (Figure 3).

Figure 3: Enable Docker Extensions.

You must also enable Kubernetes under Settings (Figure 4).

Figure 4: Enable Kubernetes.

Gefyra is in the Docker Extensions Marketplace. In the following instructions, we’ll install Gefyra in Docker Desktop. 

Step 2: Add the Gefyra extension

Open Docker Desktop and select Add Extensions to find the Gefyra extension in the Extensions Marketplace (Figure 5).

Figure 5: Locate Gefyra in the Docker Extensions Marketplace.

Once Gefyra is installed, you can open the extension and find the start screen of Gefyra, which lists all containers that are connected to a Kubernetes cluster. Of course, this section is empty on a fresh install. To launch a local container with Gefyra, just like with Docker, click the Run Container button at the top right (Figure 6).

Figure 6: Gefyra start screen.

The next steps will vary based on whether you’re working with a local or remote Kubernetes cluster. If you’re using a local cluster, simply select the matching kubeconfig file and optionally set the context (Figure 7). 

For remote clusters, you may need to manually specify additional parameters. Don’t worry if you’re unsure how to do this, as the next section will provide a detailed example for you to follow along with.

Figure 7: Selecting Kubeconfig.

The Kubernetes demo workloads

The following example showcases how Gefyra leverages the Kubernetes functionality included in Docker Desktop to create a development environment for a simple application that consists of two services — a backend and a frontend (Figure 8). 

Both services are implemented as Python processes, and the frontend service uses a color property obtained from the backend to generate an HTML document. Communication between the two services is established via HTTP, with the backend address being passed to the frontend as an environment variable.

Figure 8: Frontend and backend services.

The Gefyra team has created a repository for the Kubernetes demo workloads, which can be found on GitHub. 

If you prefer to watch a video explaining what’s covered in this tutorial, check out this video on YouTube. 

Prerequisite

Ensure that the current Kubernetes context is switched to Docker Desktop. This step allows the user to interact with the Kubernetes cluster and deploy applications to it using kubectl.

kubectl config current-context
docker-desktop

Clone the repository

The next step is to clone the repository:

git clone https://github.com/gefyrahq/gefyra-demos

Applying the workload

The following YAML file sets up a simple two-tier app consisting of a backend service and a frontend service with communication between the two services established via the SVC_URL environment variable passed to the frontend container. 

It defines two pods, named backend and frontend, and two services, named backend and frontend, respectively. The backend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-backend image on port 5002. The frontend pod is defined with a container that runs the quay.io/gefyra/gefyra-demo-frontend image on port 5003. The frontend container also includes an environment variable named SVC_URL, which is set to the value backend.default.svc.cluster.local:5002.

The backend service is defined to select the backend pod using the app: backend label, and expose port 5002. The frontend service is defined to select the frontend pod using the app: frontend label, and expose port 80 as a load balancer, which routes traffic to port 5003 of the frontend container.
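Condensed, the frontend half of that manifest looks roughly like this (a sketch reconstructed from the description above; the authoritative version is manifests/demo.yaml in the gefyra-demos repository, and the backend pod and service follow the same pattern on port 5002):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: quay.io/gefyra/gefyra-demo-frontend
      ports:
        - containerPort: 5003
      env:
        - name: SVC_URL
          value: "backend.default.svc.cluster.local:5002"
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 5003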

/gefyra-demos/kcd-munich> kubectl apply -f manifests/demo.yaml
pod/backend created
pod/frontend created
service/backend created
service/frontend created

Let’s watch the workload getting ready:

kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
backend    1/1     Running   0          2m6s
frontend   1/1     Running   0          2m6s

After ensuring that the backend and frontend pods have finished initializing (check for the READY column in the output), you can access the application by navigating to http://localhost in your web browser. This URL is served from the Kubernetes environment of Docker Desktop. 

Upon loading the page, you will see the application’s output displayed in your browser. Although the output may not be visually stunning, it is functional and does everything this demo needs.

Now, let’s explore how we can correct or adjust the color of the output generated by the frontend component.

Using Gefyra “Run Container” with the frontend process

In the first part of this section, you will see how to execute a frontend process on your local machine that is associated with a resource based on the Kubernetes cluster: the backend API. This can be anything ranging from a database to a message broker or any other service utilized in the architecture.

Kick off a local container with Run Container from the Gefyra start screen (Figure 9).

Figure 9: Run a local container.

Once you’ve entered the first step of this process, you will find the kubeconfig and context to be set automatically. That’s a lifesaver if you don’t know where to find the default kubeconfig on your host.

Just hit the Next button and proceed with the container settings (Figure 10).

Figure 10: Container settings.

In the Container Settings step, you can configure the Kubernetes-related parameters for your local container. In this example, everything happens in the default Kubernetes namespace. Select it in the first drop-down input (Figure 11). 

In the drop-down input below Image, you can specify the image to run locally. Note that it lists all images that are being used in the selected namespace (from the Namespace selector). Isn’t that convenient? You don’t need to worry about the images being used in the cluster or find them yourself. Instead, you get a suggestion to work with the image at hand, as we want to do in this example (Figure 12). You could still specify any arbitrary images if you like, for example, a completely new image you just built on your machine.

Figure 11: Select namespace and workload.

Figure 12: Select image to run.

To copy the environment of the frontend container running in the cluster, you will need to select pod/frontend from the Copy Environment From selector (Figure 13). This step is important because you need the backend service address, which is passed to the pod in the cluster using an environment variable.

Finally, for the upper part of the container settings, you need to overwrite the following run command of the container image to enable code reloading:

poetry run flask --app app --debug run --port 5002 --host 0.0.0.0

Figure 13: Copy environment of frontend container.

Let’s start the container process on port 5002 and expose this port on the local machine. In addition, let’s mount the code directory (/gefyra-demos/kcd-munich/frontend) to make code changes immediately visible. That’s it for now. A click on the Run button starts the process.

Figure 14: Installing Gefyra components.

It takes a few seconds to install Gefyra’s cluster-side components, prepare the local networking part, and pull the container image to start locally (Figure 14). Once this is ready, you will get redirected to the native container view of Docker Desktop for this container (Figure 15).

Figure 15: Log view.

You can look around in the container using the Terminal tab (Figure 16). Type in the env command in the shell, and you will see all the environment variables coming with Kubernetes.

Figure 16: Terminal view.

We’re particularly interested in the SVC_URL variable that points the frontend to the backend process, which is, of course, still running in the cluster. Now, when browsing to the URL http://localhost:5002, you will get a slightly different output:

Why is that? Let’s look at the code that we already mounted into the local container, specifically the app.py that runs a Flask server (Figure 17).

Figure 17: App.py code.

The last line of the code in the Gefyra example displays the text Hello KCD!, and any changes made to this code are immediately updated in the local container. This feature is noteworthy because developers can freely modify the code and see the changes reflected in real-time without having to rebuild or redeploy the container.

Line 12 of the code in the Gefyra example sends a request to a service URL, which is stored in the variable SVC. The value of SVC is read from an environment variable named SVC_URL, which is copied from the pod in the Kubernetes cluster. The URL, backend.default.svc.cluster.local:5002, is a fully qualified domain name (FQDN) that points to a Kubernetes service object and a port. 

These URLs are commonly used by applications in Kubernetes to communicate with each other. The local container process is capable of sending requests to services running in Kubernetes using the native connection parameters, without the need for developers to make any changes, which may seem like magic at times.

In most development scenarios, the capabilities of Gefyra we just discussed are sufficient. In other words, you can use Gefyra to run a local container that can communicate with resources in the Kubernetes cluster, and you can access the app on a local port. However, what if you need to modify the backend while the frontend is still running in Kubernetes? This is where the “bridge” feature of Gefyra comes in, which we will explore next.

Gefyra “bridge” with the backend process

We could choose to run the frontend process locally and connect it to the backend process running in Kubernetes through a bridge. However, this approach may not always be necessary or desirable, especially for backend developers who may not be interested in the frontend. In this case, it may be more convenient to leave the frontend running in the cluster and stop the local instance by selecting the stop button in Docker Desktop’s container view.

First of all, we have to run a local instance of the backend service. It’s the same as with the frontend, but this time with the backend container image (Figure 18).

Figure 18: Running a backend container image.

As in the frontend example above, you can run the backend container image (quay.io/gefyra/gefyra-demo-backend:latest), which is suggested by the drop-down selector. This time we need to copy the environment from the backend pod running in Kubernetes. Note that the volume mount is now set to the code of the backend service to make it work.

After starting the container, you can check http://localhost:5002/color, which serves the backend API response. Looking at the app.py of the backend service shows the source of this response. In line 8, this app returns a JSON response with the color property set to green (Figure 19).

Figure 19: Checking the color.

At this point, keep in mind that we’re only running a local instance of the backend service. This time, a connection to a Kubernetes-based resource is not needed as this container runs without any external dependency.

The idea is to make the frontend process that serves from the Kubernetes cluster on http://localhost (still blue) pick up our backend information to render its output. That’s done using Gefyra’s bridge feature. In the next step, we will overlay the backend process running in the cluster with our local container instance so that the local code becomes effective in the cluster.

Getting back to the Gefyra container list on the start screen, you can find the Bridge column on each locally running container (Figure 20). Once you click this button, you can create a bridge of your local container into the cluster.

Figure 20: The Bridge column is visible on the far right.

In the next dialog, we need to enter the bridge configuration (Figure 21).

Figure 21: Enter the bridge configuration.

Let’s set the “Target” for the bridge to the backend pod, which is currently serving the frontend process in the cluster, and set a timeout for the bridge to 60 seconds. We also need to map the port of the proxy running in the cluster with the local instance. 

If your local container is configured to listen on a different port from the cluster, you can specify the mapping here (Figure 22). In this example, the service is running on port 5003 in both the cluster and on the local machine, so we need to map that port. After clicking the Bridge button, it takes a few seconds to return to the container list on Gefyra’s start view.

Figure 22: Specify port mapping.

Observe the change in the icon of the Bridge button, which now depicts a stop symbol (Figure 23). This means the bridge function is now operational and can be terminated by simply clicking this button again.

Figure 23: The Bridge column showing a stop symbol.

At this point, the local code is able to handle requests from the frontend process in the cluster by using the URL stored in the SVC_URL variable, without making any changes to the frontend process itself. To confirm this, you can open http://localhost in your browser (which is served from the Kubernetes of Docker Desktop) and check that the output is now green. This is because the local code is returning the value green for the color property. You can change this value to any valid one in your IDE, and it will be immediately reflected in the cluster. This is the amazing power of this tool.

Remember to release the bridge of your container once you are finished making changes to your backend. This will reset the cluster to its original state, and the frontend will display the original “beautiful” blue H1 again. Because we never modified the Kubernetes cluster itself, we merely intercepted containers running in Kubernetes with our local code and released that intercept afterwards.

Conclusion

Gefyra is an easy-to-use Docker Desktop extension that connects with Kubernetes to improve development workflows and team collaboration. It lets you run containers as usual while being connected with Kubernetes, thereby saving time and ensuring high dev/prod parity. 

The Blueshoe development team would appreciate a star on GitHub and welcomes you to join their Discord community for more information.

About the Author

Michael Schilonka is a strong believer that Kubernetes can be a software development platform, too. He is the co-founder and managing director of the Munich-based agency Blueshoe and the technical lead of Gefyra and Getdeck. He talks about Kubernetes in general and how his team uses Kubernetes for development. Follow him on LinkedIn to stay connected.
Source: https://blog.docker.com/feed/

Docker Desktop 4.19: Compose v2, the Moby project, and more

Docker Desktop release 4.19 is now available. In this post, we highlight features added to Docker Desktop in the past month, including performance enhancements, new language support, and a Moby update.

5x faster container-to-host networking on macOS

In Docker Desktop 4.19, we’ve made container-to-host networking performance 5x faster on macOS by replacing vpnkit with the TCP/IP stack from the gVisor project.

Many users work on projects that have containers communicating with a server outside their local Docker network. One example of this would be workloads that download packages from the internet via npm install or apt-get. This performance improvement should help a lot in these cases.

Over the next month, we’ll keep track of the stability of this new networking stack. If you notice any issues, you can revert to using the legacy vpnkit networking stack by setting "networkType": "vpnkit" in Docker Desktop’s settings.json config file.
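For example, on macOS the file usually lives at ~/Library/Group Containers/group.com.docker/settings.json (the exact path may differ per installation). Add the key and restart Docker Desktop; other settings in the file are omitted here:

{
  "networkType": "vpnkit"
}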

Docker Init (Beta): Support for Node and Python

In our 4.18 release, we introduced docker init, a CLI command in Beta that helps you easily add Docker to any of your projects by creating the required assets for you. In the 4.19 release, we’re happy to add to this and share that the feature now includes support for Python and Node.js. 

You can try docker init with Python and Node.js by updating to the latest version of Docker Desktop (4.19) and typing docker init in the command line while inside a target project folder. 

The Docker team is working on adding more languages and frameworks for this command, including Java, Rust, and .NET. Let us know if you would like us to support a specific language or framework. We welcome any feedback you may have as we continue to develop and improve Docker Init (Beta).

Docker Scout (Early Access)

With Docker Desktop release 4.19, we’ve made it easier to view Docker Scout data for all of your images directly in Docker Desktop. Whether you’re using an image stored locally in Docker Desktop or a remote image from Docker Hub, you can see all that data without leaving Docker Desktop.

A nudge toward Compose v2

Compose v1 has reached end-of-life and will no longer be bundled with Docker Desktop after June 2023.

In preparation, a new warning will be shown in the terminal when running Compose v1 commands. Set the COMPOSE_V1_EOL_SILENT=1 environment variable to suppress this message.
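For example, to suppress the warning for the current shell session:

export COMPOSE_V1_EOL_SILENT=1
docker-compose up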

You can upgrade by enabling Use Compose v2 in the Docker Desktop settings. When active, Docker Desktop aliases docker-compose to Compose v2 and supports the recommended docker compose syntax.

Moby 23

We updated the Docker Engine and the CLI to Moby 23.0, where we are upstreaming open source internal developments such as the containerd integration and Wasm support, which will ship with Moby 24.0. Moby 23.0 includes additional enhancements, such as the --format=json shorthand variant of --format="{{ json . }}" and support for relative source paths to the run command in the -v/--volume and -m/--mount flags. You can read more about Moby 23.0 in the release notes.

Conclusion

We love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. Check out the Docker Desktop 4.19 release notes for a full breakdown of what’s new in the latest release.

Source: https://blog.docker.com/feed/

Docker Compose Experiment: Sync Files and Automatically Rebuild Services with Watch Mode

We often hear how indispensable Docker Compose is as a development tool. Running docker compose up offers a streamlined experience and scales from quickly standing up a PostgreSQL database to building 50+ services across multiple platforms.

And, although “building a Docker image” was previously considered a last step in release pipelines, it’s safe to say that containers have since become an essential part of our daily workflow. Still, concerns around slow builds and developer experience have often been a barrier towards the adoption of containers for local development.

We’ve come a long way, though. For starters, Docker Compose v2 now has deep integration with BuildKit, so you can use features like RUN cache mounts, SSH Agent forwarding, and efficient COPY with --link to speed up and simplify your builds. We’re also constantly making quality-of-life tweaks like enhanced progress reporting and improving consistency across the Docker CLI ecosystem.

As a result, more developers are running docker compose build && docker compose up to keep their running development environment up-to-date as they make code changes. In some cases, you can even use bind mounts combined with a framework that supports hot reload to avoid the need for an image rebuild, but this approach often comes with its own set of caveats and limitations.

An early look at the watch command

Starting with Compose v2.17, we’re excited to share an early look at the new development-specific configuration in Compose YAML as well as an experimental file watch command (Figure 1) that will automatically update your running Compose services as you edit and save your code.

This preview is brought to you in no small part by Compose developer Nicolas De Loof (in addition to more than 10 bugfixes in this release alone).

Figure 1: Preview of the new watch command.

An optional new section, x-develop, can be added to a Compose service to configure options specific to your project’s daily flow. In this release, the only available option is watch, which allows you to define file or directory paths to monitor on your computer and a corresponding action to take inside the service container.

Currently, there are two possible actions: 

sync — Copy changed files matching the pattern into the running service container(s).

rebuild — Trigger an image build and recreate the running service container(s).

services:
  web:
    build: .
    x-develop:
      watch:
        - action: sync
          path: ./web
          target: /src/web
        - action: rebuild
          path: package.json

In the preceding example, whenever a source file in the web/ directory is changed, Compose will copy the file to the corresponding location under /src/web inside the container. Because Webpack supports Hot Module Reload, the changes are automatically detected and applied.

Unlike source code files, adding a new dependency cannot be done on the fly, so whenever package.json is changed, Compose will rebuild the image and recreate the web service container.

Behind the scenes, the file watching code shares its core with Tilt. The intricacies and surprises of file watching have always been near and dear to the Tilt team’s hearts, and, as Dockhands, the geeking out has continued. 

We are going to continue to build out the experience while gated behind the new docker compose alpha command and x-develop Compose YAML section. This approach will allow us to respond to community feedback early in the development process while still providing a clear path to stabilization as part of the Compose Spec.

Docker Compose powers countless workflows today, and its lightweight approach to containerized development is not going anywhere — it’s just learning a few new tricks.

Try it out

Follow the instructions at dockersamples/avatars to quickly run a small demo app, as follows:

git clone https://github.com/dockersamples/avatars.git
cd avatars
docker compose up -d
docker compose alpha watch
# open http://localhost:5735 in your browser

If you try it out on your own project, you can comment on the proposed specification on GitHub issue #253 in the compose-spec repository.
Source: https://blog.docker.com/feed/

Docker Desktop 4.18: Docker Scout Updates, Container File Explorer GA

We’re always looking for ways to enhance your experience with Docker, whether you’re using an integration, extension, or directly in product. Docker Desktop 4.18 focuses on improvements in the command line and in Docker Desktop. 

Read on to learn about new CLI features in Docker Scout, and find out about Docker init, an exciting CLI Beta feature to help you quickly add Docker to any project. We also review new features to help you get up and running with Docker faster: Container File Explorer, adminless macOS install, and a new experimental feature in Docker Compose.

Docker Scout CLI

In Docker Desktop 4.17, we introduced Docker Scout, a tool that provides visibility into image vulnerabilities and recommendations for quick remediation. We are delighted to announce the release of several new features into the Docker Scout command line, which ships with Docker Desktop 4.18. These updates come after receiving an overwhelming amount of community feedback. 

The 4.18 release of Docker Scout includes a vulnerability quickview, image recommendations directly on the command line, improved remediation guidance with BuildKit SBOM utilization, and a preview feature comparing images (imagine using diff, but for container images).

Quickview 

Suppose that you have created a new container image and would like to assess its security posture. You can now run docker scout quickview for an instant, high-level security insight into your image. If any issues are found, Docker Scout will guide you on what to do next.
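For example (the image name is a placeholder for one of your own images):

docker scout quickview myorg/myapp:latest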

`docker scout quickview` output showing image vulnerability information

Command-line recommendations

If you’ve previously used docker scout cves to understand which CVEs exist in your images, you may have wondered what course of action to take next. With the new docker scout recommendations command, you receive a list of recommendations that directly suggest available updates for the base image. 

The docker scout recommendations command analyzes the image and displays recommendations to refresh or update the base image, along with a list of benefits, including opportunities to reduce vulnerabilities or how to achieve smaller image sizes.
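For example (again with a placeholder image name):

docker scout recommendations myorg/myapp:latest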

‘docker scout recommendations’ output showing available image updates for vulnerable images

BuildKit provenance and SBOM attestations 

In January 2023, BuildKit was extended to support building images with attestations. These images can now use the docker scout command line to process this information and determine relevant next steps. We can support this because the docker scout command-line tool knows exactly which base image you built with and can provide more accurate recommendations. If an image was built and pushed with an attached SBOM attestation, docker scout reads the package information from the SBOM attestation instead of creating a new local SBOM.

To learn how to build images with attestations using BuildKit, read “Generating SBOMs for Your Image with BuildKit.” 

Compare images

Note: This is an experimental Docker Scout feature and may change and evolve over time. 

Retrospectively documenting the changes made to address a security issue after completing a vulnerability remediation is considered a good practice. Docker Desktop 4.18 introduces an early preview of image comparison. 

Comparison of vulnerability differences between two images

This feature highlights the vulnerability differences between two images and how packages compare. These details include the package version, environment variables in each image, and more. Additionally, the command-line output can be set up in a markdown format. This information can then be used to generate diff previews in pull requests through GitHub Actions. 
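A sketch of what that might look like (the feature is experimental, so treat the exact flags as assumptions that may change between releases; the image names are placeholders):

docker scout compare myorg/myapp:v2 --to myorg/myapp:v1 --format markdown

This would print a markdown-formatted diff of vulnerabilities and packages between the two image versions, suitable for posting in a pull request.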

We’d love to know what scenarios you could imagine using this diff feature in. You can do this by opening up Docker Desktop, navigating to the Images tab, and selecting Give feedback.

Read the documentation to learn more about these features. 

Container File Explorer 

Another feature we’re happy to announce is the GA release of Container File Explorer. When you need to check or quickly replace files within a container, Container File Explorer will help you do this — and much more — straight from Docker Desktop’s UI. 

You won’t need to remember long CLI commands, fiddle with long path parameters on the docker cp command, or get frustrated that your container has no shell at all to check the files. Container File Explorer provides a simple UI that allows you to:

Check a container file system

Copy files and folders between host and containers

Easily drag and drop files to a container

Quickly edit files with syntax highlighting

Delete files

With Container File Explorer, you can view your containers’ files at any state (stopped/running/paused/…) and for any container type, including slim-containers/slim-images (containers without a shell). Open the dashboard, go to the Containers tab, open the container action menu, and check your files:

Container File Explorer UI in Docker Desktop

Adminless install on macOS

We’ve adjusted our macOS install flow to make it super easy for developers to install Docker Desktop without granting them admin privileges. Some developers work in environments with elevated security requirements where local admin access may be prohibited on their machines. We wanted to make sure that users in these environments are able to opt out of Docker Desktop functionality that requires admin privileges.

The default install flow on macOS will still ask for admin privileges, as we believe this allows us to provide an optimized experience for the vast majority of developer use cases. Upon granting admin privileges, Docker Desktop automatically installs the Docker CLI tools, enabling third-party libraries to seamlessly integrate with Docker (by enabling the default Docker socket) and allowing users to bind to privileged ports between 1 and 1024. 

If you want to change the settings you configured at install, you can do so easily within the Advanced tab of Docker Desktop’s Settings.

Docker init (Beta)

Another exciting feature we’re releasing in Beta is docker init. This is a new CLI command that lets you quickly add Docker to your project by automatically creating the required assets: Dockerfiles, Compose files, and .dockerignore. Using this feature, you can easily update existing projects to run using containers or set up new projects even if you’re not familiar with Docker.

You can try docker init by updating to the latest version of Docker Desktop (4.18.0) and typing docker init in the command line while inside a target project folder. docker init will create all the required files to run your project in Docker. 

Refer to the docker init documentation to learn more.

The Beta version of docker init ships with Go support, and the Docker team is already working on adding more languages and frameworks, including Node.js, Python, Java, Rust, and .NET, plus other features in the coming months. If there is a specific language or framework you would like us to support, let us know. Submit other feedback and suggestions in our public roadmap.

Note: Please be aware that docker init should not be confused with the internally used docker-init executable, which is invoked by Docker when utilizing the --init flag with the docker run command. Refer to the docs to learn more.

`docker init` command-line output on how to get started

Docker Compose File Watch (Experimental)

Docker Compose has a new trick! Docker Compose File Watch is available now as an Experimental feature to automatically keep all your service containers up-to-date while you work.

With the 4.18 release, you can optionally add a new x-develop section to your services in compose.yaml:

services:
  web:
    build: .
    # !!! x-develop is experimental !!!
    x-develop:
      watch:
        - action: sync
          path: ./web
          target: /app/web
        - action: rebuild
          path: package.json

Once configured, the new docker compose alpha watch command will start monitoring for file edits within your project:

On a change to ./web/App.jsx, for example, Compose will automatically synchronize it to /app/web/App.jsx inside the container.

Meanwhile, if you modify package.json (such as by installing a new npm package), Compose will rebuild the image and replace the existing service with an updated version.

Compose File Watch mode is just the start. With nearly 100 commits since the last Docker Compose release, we’ve squashed bugs and made a lot of quality-of-life improvements. (A special shout-out to all our recent first-time contributors!)

We’re excited about Docker Compose File Watch and are actively working on the underlying mechanics and configuration format.

Conclusion

That’s a wrap for our Docker Desktop 4.18 update. This release includes many cool, new features, including some that you can help shape! We also updated the Docker Engine to address some CVEs. As always, we love hearing your feedback. Please leave any feedback on our public GitHub roadmap and let us know what else you’d like to see. 

Check out the release notes for a full breakdown of what’s new in Docker Desktop 4.18.
Source: https://blog.docker.com/feed/

Containerizing an Event Posting App Built with the MEAN Stack

This article is a result of open source collaboration. During Hacktoberfest 2022, the project was announced in the Black Forest Docker meetup group and received contributions from members of the meetup group and other Hacktoberfest contributors. Almost all of the code in the GitHub repo was written by Stefan Ruf, Himanshu Kandpal, and Sreekesh Iyer.

The MEAN stack is a fast-growing, open source JavaScript stack used to develop web applications. MEAN is a diverse collection of robust technologies — MongoDB, Express.js, Angular, and Node.js — for developing scalable web applications. 

The stack is a popular choice for web developers as it allows them to work with a single language throughout the development process and it also provides a lot of flexibility and scalability. Node, Express, and Angular even claimed top spots as popular frameworks or technologies in Stack Overflow’s 2022 Developer Survey.

In this article, we’ll describe how the MEAN stack works using an Event Posting app as an example.

How does the MEAN stack work?

MEAN consists of the following four components:

MongoDB — A NoSQL database 

ExpressJS —  A backend web-application framework for NodeJS

Angular — A JavaScript-based front-end web development framework for building dynamic, single-page web applications

NodeJS — A JavaScript runtime environment that enables running JavaScript code outside the browser, among other things

Here’s a brief overview of how the different components might work together:

A user interacts with the frontend, via the web browser, which is built with Angular components. 

The backend server delivers frontend content, via ExpressJS running atop NodeJS.

Data is fetched from the MongoDB database before it returns to the frontend. Here, your application displays it for the user.

Any interaction that causes a data-change request is sent to the Node-based Express server.

Why is the MEAN stack so popular?

The MEAN stack is often used to build full-stack, JavaScript web applications, where the same language is used for both the client-side and server-side of the application. This approach can make development more efficient and consistent and make it easier for developers to work on both the frontend and backend of the application.

The MEAN stack is popular for a few reasons, including the following:

Easy learning curve — If you’re familiar with JavaScript and JSON, then it’s easy to get started. MEAN’s structure lets you easily build a three-tier architecture (frontend, backend, database) with just JavaScript and JSON.

Model-view-controller architecture — MEAN supports the model-view-controller (MVC) architecture, which makes for a smooth and seamless development process.

Reduces context switching — Because MEAN uses JavaScript for both frontend and backend development, developers don’t need to worry about switching languages. This capability boosts development efficiency.

Open source and active community support — The MEAN stack is purely open source. All developers can build robust web applications. Its frameworks improve the coding efficiency and promote faster app development.

Running the Event Posting app

Here are the key components of the Event Posting app:

MongoDB

Express.js

Angular

Node.js

Docker Desktop

Deploying the Event Posting app is a fast process. To start, you’ll clone the repository, set up the client and backend, then bring up the application. 

Then, complete the following steps:

git clone https://github.com/dockersamples/events
cd events/backend
npm install
npm run dev

General flow of the Event Posting app

The flow of information through the Event Posting app is illustrated in Figure 1 and described in the following steps.

Figure 1: General flow of the Event Posting app.

A user visits the event posting app’s website on their browser.

Angular, the frontend framework, retrieves the necessary HTML, CSS, and JavaScript files from the server and renders the initial view of the website.

When the user wants to view a list of events or create a new event, Angular sends an HTTP request to the backend server.

Express.js, the backend web framework, receives the request and processes it. This step includes interacting with the MongoDB database to retrieve or store data and providing an API for the frontend to access the data.

The backend server sends a response to the frontend, which Angular receives and uses to update the view.

When a user creates a new event, Angular sends a POST request to the backend server, which Express.js receives and processes. Express.js stores the new event in the MongoDB database.

The backend server sends a confirmation response to the front-end, which AngularJS receives and uses to update the view and display the new event.

Node.js, the JavaScript runtime, handles the server-side logic for the application and allows for real-time updates. This includes running the Express.js server, handling real-time updates using WebSockets, and handling any other server-side tasks.

You can then access Event Posting at http://localhost:80 in your browser (Figure 2):

Figure 2: Add a new event.

Select Add New Event to add the details (Figure 3).

Figure 3: Add event details.

Save the event details to see the final results (Figure 4).

Figure 4: Display upcoming events.

Why containerize the MEAN stack?

Containerizing the MEAN stack provides a consistent, portable, and easily scalable environment for the application, as well as improved security and ease of deployment. It offers several benefits, such as:

Consistency: Containerization ensures that the environment for the application is consistent across different development, testing, and production environments. This approach eliminates issues that can arise from differences in the environment, such as different versions of dependencies or configurations.

Portability: Containers are designed to be portable, which means that they can be easily moved between different environments. This capability makes it easy to deploy the MEAN stack application to different environments, such as on-premises or in the cloud.

Isolation: Containers provide a level of isolation between the application and the host environment. Thus, the application has access only to the resources it needs and does not interfere with other applications running on the same host.

Scalability: Containers can be easily scaled up or down depending on the needs of the application, resulting in more efficient use of resources and better performance.

Containerizing your Event Posting app

Docker helps you containerize your MEAN Stack — letting you bundle your complete Event Posting application, runtime, configuration, and operating system-level dependencies. The container then includes everything needed to ship a cross-platform, multi-architecture web application. 

We’ll explore how to run this app within a Docker container using Docker Official Images. To begin, you’ll need to download Docker Desktop and complete the installation process. This install includes the Docker CLI, Docker Compose, and a user-friendly management UI, which will each be useful later on.

Docker uses a Dockerfile to create each image’s layers. Each layer stores important changes stemming from your base image’s standard configuration. Next, we’ll create an empty Dockerfile in the root of our project repository.

Containerizing your Angular frontend

We’ll build a multi-stage Dockerfile to containerize our Angular frontend. 

A Dockerfile is a plain-text file that contains instructions for assembling a Docker container image. When Docker builds our image via the docker build command, it reads these instructions, executes them, and creates a final image. 

With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit testing. A separate image holds the application’s runtime. This setup makes the final image more secure and shrinks its footprint (because it doesn’t contain development or debugging tools). 

Let’s walk through the process of creating a Dockerfile for our application. First, create the following empty file with the name Dockerfile in the root of your frontend app.

touch Dockerfile

Then you’ll need to define your base image in the Dockerfile file. Here we’ve chosen the stable LTS version of the Node Docker Official Image. This image comes with every tool and package needed to run a Node.js application:

FROM node:lts-alpine AS build

Next, let’s create a directory to house our image’s application code. This acts as the working directory for your application:

WORKDIR /usr/src/app

The following COPY instructions copy the package.json and package-lock.json files from the host machine to the container image.

The COPY command takes two parameters. The first tells Docker which file(s) you’d like to copy into the image. The second tells Docker where you want those files to be copied. We’ll copy everything into our working directory called /usr/src/app.

COPY package.json .

COPY package-lock.json .
RUN npm ci

Next, we need to add our source code into the image. We’ll use the COPY command just like we previously did with our package.json file. 

Note: It’s common practice to copy the package.json file separately from the application code when building a Docker image. This step allows Docker to cache the node_modules layer separately from the application code layer, which can significantly speed up the Docker build process and improve the development workflow.

COPY . .

Then, use npm run build to run the build script from package.json:

RUN npm run build

In the next step, we need to specify the second stage of the build that uses an Nginx image as its base and copies the nginx.conf file to the /etc/nginx directory. It also copies the compiled TypeScript code from the build stage to the /usr/share/nginx/html directory.

FROM nginx:stable-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/events /usr/share/nginx/html

Finally, the EXPOSE instruction tells Docker which port the container listens on at runtime. You can specify whether the port listens on TCP or UDP. The default is TCP if the protocol isn’t specified.

EXPOSE 80

Here is our complete Dockerfile:

# Builder container to compile typescript
FROM node:lts-alpine AS build
WORKDIR /usr/src/app

# Install dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci

# Copy the application source
COPY . .
# Build typescript
RUN npm run build

FROM nginx:stable-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/events /usr/share/nginx/html

EXPOSE 80

Now, let’s build our image by running the docker build command. The -f flag specifies the Dockerfile name, the "." argument tells Docker to use the current directory as the build context, and the -t flag tags the resulting image:

docker build . -f Dockerfile -t events-fe:1
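Once the build completes, you can quickly verify the image by running it and publishing a host port to the container’s port 80 (the host port 8080 here is just an example):

docker run -d -p 8080:80 events-fe:1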

Containerizing your Node.js backend

Let’s walk through the process of creating a Dockerfile for our backend as the next step. First, create a Dockerfile in the root of your backend Node app with the following contents:

# Builder container to compile typescript
FROM node:lts-alpine AS build
WORKDIR /usr/src/app

# Install dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci

# Copy the application source
COPY . .
# Build typescript
RUN npm run build

FROM node:lts-alpine
WORKDIR /app
COPY package.json .
COPY package-lock.json .
COPY .env.production .env

RUN npm ci --production

COPY --from=build /usr/src/app/dist /app

EXPOSE 8000
CMD [ "node", "src/index.js"]

This Dockerfile is useful for building and running TypeScript applications in a containerized environment, allowing developers to package and distribute their applications more easily.

The first stage of the build process, named build, is based on the official Node.js LTS Alpine Docker image. It sets the working directory to /usr/src/app and copies the package.json and package-lock.json files to install dependencies with the npm ci command. It then copies the entire application source code and builds TypeScript with the npm run build command.

The second stage of the build process also uses the official Node.js LTS Alpine Docker image. It sets the working directory to /app and copies the package.json, package-lock.json, and .env.production files. It then installs only production dependencies with the npm ci --production command and copies the compiled TypeScript output from the previous stage, from /usr/src/app/dist to /app.

Finally, it exposes port 8000 and runs the command node src/index.js when the container is started.
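As with the frontend, you can build and tag the backend image from the root of the backend project (the tag events-be:1 is an arbitrary example):

docker build . -f Dockerfile -t events-be:1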

Defining services using a Compose file

Here’s how our services appear within a Docker Compose file:

services:
  frontend:
    build:
      context: "./frontend/events"
      dockerfile: "./Dockerfile"
    networks:
      - events_net
  backend:
    build:
      context: "./backend"
      dockerfile: "./Dockerfile"
    networks:
      - events_net
  db:
    image: mongo:latest
    ports:
      - 27017:27017
    networks:
      - events_net
  proxy:
    image: nginx:stable-alpine
    environment:
      - NGINX_ENVSUBST_TEMPLATE_SUFFIX=.conf
      - NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx
    volumes:
      - ${PWD}/nginx.conf:/etc/nginx/templates/nginx.conf.conf
    ports:
      - 80:80
    networks:
      - events_net

networks:
  events_net:

Your example application has the following parts:

Four services backed by Docker images: Your Angular frontend, Node.js backend, MongoDB database, and Nginx as a proxy server

The frontend and backend services are built from Dockerfiles located in ./frontend/events and ./backend directories, respectively. Both services are attached to a network called events_net.

The db service is based on the latest version of the MongoDB Docker image and exposes port 27017. It is attached to the same events_net network as the frontend and backend services.

The proxy service is based on the stable-alpine version of the Nginx Docker image. It has two environment variables defined, NGINX_ENVSUBST_TEMPLATE_SUFFIX and NGINX_ENVSUBST_OUTPUT_DIR, that enable environment variable substitution in Nginx configuration files. 

The proxy service also has a volume defined that maps the local nginx.conf file to /etc/nginx/templates/nginx.conf.conf in the container. Finally, it exposes port 80 and is attached to the events_net network.

The events_net network is defined at the end of the file, and all services are attached to it. This setup enables communication between the containers using their service names as hostnames.
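For example, the backend can reach MongoDB by using the db service name instead of an IP address. A connection string in the backend’s configuration might look like the following (the events database name is illustrative):

MONGODB_URI=mongodb://db:27017/events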

You can clone the repository or download the docker-compose.yml file directly from Dockersamples on GitHub.

Bringing up the container services

You can start the MEAN application stack by running the following command:

docker compose up -d

Next, use the docker compose ps command to confirm that your stack is running properly. Your terminal will produce output similar to the following:

$ docker compose ps
NAME                IMAGE                 COMMAND                  SERVICE    CREATED          STATUS          PORTS
events-backend-1    events-backend        "docker-entrypoint.s…"   backend    29 minutes ago   Up 29 minutes   8000/tcp
events-db-1         mongo:latest          "docker-entrypoint.s…"   db         5 seconds ago    Up 4 seconds    0.0.0.0:27017->27017/tcp
events-frontend-1   events-frontend       "/docker-entrypoint.…"   frontend   29 minutes ago   Up 29 minutes   80/tcp
events-proxy-1      nginx:stable-alpine   "/docker-entrypoint.…"   proxy      29 minutes ago   Up 29 minutes   0.0.0.0:80->80/tcp
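When you’re finished, you can follow a service’s logs or tear the stack down with the usual Compose commands:

docker compose logs -f backend
docker compose down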

Viewing the containers via Docker Dashboard

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application (Figure 5):

Figure 5: Viewing running containers in Docker Dashboard.

Conclusion

Congratulations! You’ve successfully learned how to containerize a MEAN-backed Event Posting application with Docker. With a single YAML file, we’ve demonstrated how Docker Compose helps you easily build and deploy your MEAN stack in seconds. With just a few extra steps, you can apply this tutorial while building applications with even greater complexity. Happy developing!

Enabling a No-Code Performance Testing Platform Using the Ddosify Docker Extension

Performance testing is a critical component of software testing. It involves simulating a large number of users accessing a system simultaneously to determine the system’s behavior under high user loads. This process helps organizations understand how their systems will perform in real-world scenarios, identify potential performance bottlenecks under different load conditions, and improve their application’s performance. 

In this article, we provide an introduction to the Ddosify Docker Extension and show how to get started using it for performance testing.  

The importance of performance testing

Performance testing should be performed regularly to ensure that your application holds up under different load conditions and that your customers have a great experience. Kissmetrics found that a 1-second delay in page response time can lead to a seven percent decrease in conversions, and that half of customers expect a website to load in less than 2 seconds. For an e-commerce site, a 1-second delay in page response could translate into several million dollars in lost annual sales.

Meet Ddosify

Ddosify is a high-performance, open-core performance testing platform that focuses on load and latency testing. Ddosify offers a suite of three products:

1. Ddosify Engine: An open source, single-node load-testing tool (6K+ GitHub stars) that can be used to test your application from your terminal using a simple JSON file (see the configuration sketch after this list). Ddosify is written in Golang and can be deployed on Linux, macOS, and Windows. Developers and small companies use Ddosify Engine to test their applications. The tool is available on GitHub.

2. Ddosify Cloud: An open core SaaS platform that allows you to test your application without any programming expertise. Ddosify Cloud uses Ddosify Engine in a distributed manner and provides a web interface for generating load test scenarios without code. Users can test their applications from different locations around the world and generate advanced reports. The platform is built with technologies including Docker, Kubernetes, InfluxDB, RabbitMQ, React.js, Golang, AWS, and PostgreSQL, all working together transparently for the user. This tool is available on the Ddosify website.

3. Ddosify Docker Extension: This tool is similar to Ddosify Engine but adds an easy-to-use interface thanks to the extension capability of Docker Desktop, allowing you to test your application from within Docker Desktop. The extension’s repository is open source and available on GitHub, and the tool is available free of charge from the Docker Extensions Marketplace.
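To give a sense of what that JSON file looks like, here is a minimal sketch of a Ddosify Engine configuration (the field names follow the open source engine’s documented format, but the exact schema may differ between versions):

{
  "request_count": 100,
  "load_type": "linear",
  "duration": 5,
  "steps": [
    {
      "id": 1,
      "url": "https://testserver.ddosify.com/account/register/",
      "method": "POST",
      "timeout": 10
    }
  ]
}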

In this article, we will focus on the Ddosify Docker Extension.

The architecture of Ddosify

Ddosify Docker Extension uses the Ddosify Engine as a base image under the hood. We collect settings, including request count, duration, and headers, from the extension UI and send them to the Ddosify Engine. 

The Ddosify Engine performs the load testing and returns the results to the extension. The extension then displays the results to the user (Figure 1). 

Figure 1: Overview of Ddosify.

Why Ddosify?

Ddosify is easy to use and offers many features, including dynamic variables, CSV data import, various load types, correlation, and assertion. Ddosify also has different options for different use cases. If you are an individual developer, you can use the Ddosify Engine or Ddosify Docker Extension free of charge. If you need code-free load testing, advanced reporting, multi-geolocation, and more requests per second (RPS), you can use the Ddosify Cloud. 

With Ddosify, you can: 

Identify performance issues of your application by simulating high user traffic.

Optimize your infrastructure and ensure that you are only paying for the resources that you need.

Identify bugs before your customers do. Some bugs are only triggered under high load.

Measure your system capacity and identify its limitations.

Why run Ddosify as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Ddosify Docker Extension, you can easily perform load testing on your application from within Docker Desktop. You don’t need to install anything on your machine except Docker Desktop. Features of Ddosify Docker Extension include:

Strong community with 6K+ GitHub stars and a total of 1M+ downloads on all platforms. Community members contribute by proposing/adding features and fixing bugs.

Currently supports HTTP and HTTPS protocols. Other protocols are on the way.

Supports various load types. Test your system’s limits across different load types, including:

Linear

Incremental

Waved

Dynamic variables (parameterization) support. Just like Postman, Ddosify supports dynamic variables.

Save load testing results as PDF.

Getting started

As a prerequisite, you need Docker Desktop 4.10.0 or higher installed on your machine. You can download Docker Desktop from our website.

Step 1: Install Ddosify Docker Extension

Because Ddosify is an extension partner of Docker, you can easily install Ddosify Docker Extension from the Docker Extensions Marketplace (Figure 2). Start Docker Desktop and select Add Extensions. Next, filter by Testing Tools and select Ddosify. Click on the Install button to install the Ddosify Docker Extension. After a few seconds, Ddosify Docker Extension will be installed on your machine.

Figure 2: Installing Ddosify.

Step 2: Start load testing

You can start load testing your application from Docker Desktop (Figure 3). Start Docker Desktop and click the Ddosify icon in the Extensions section to open the extension’s UI.

Figure 3: Starting load testing.

You can start load testing by entering the target URL of your application. You can choose HTTP Methods (GET, POST, PUT, DELETE, etc.), protocol (HTTP, HTTPS), request count, duration, load type (linear, incremental, waved), timeout, body, headers, basic auth, and proxy settings. We chose the following values: 

URL: https://testserver.ddosify.com/account/register/
Method: POST
Protocol: HTTPS
Request Count: 100
Duration: 5
Load Type: Linear
Timeout: 10
Body: {"username": "{{_randomUserName}}", "email": "{{_randomEmail}}", "password": "{{_randomPassword}}"}
Headers:
User-Agent: DdosifyDockerExtension/0.1.2
Content-Type: application/json

In this configuration, we are sending 100 requests to the target URL for 5 seconds (Figure 4). The RPS is 20. The target URL is a test server that is used to register new users with body parameters. We are using dynamic variables (random) for username, email, and password in the body. You can learn more about dynamic variables from the Ddosify documentation.

Figure 4: Parameters for sample load test.

Then click on the Start Load Test button to begin load testing. The results will be displayed in the UI (Figure 5).

Figure 5: Ddosify test results.

The test results include the following information:

48 requests successfully created users. Response Code: 201

20 requests failed to create users because the username or email already existed on the server. Response Code: 400

32 requests failed to create users because of a timeout: the server could not respond within 10 seconds, so we should either increase the timeout value or optimize the server.

You can also save the load test results. Click on the Report button to save the results as a PDF file (Figure 6).

Figure 6: Save results as PDF.

Conclusion

In this article, we showed how to install Ddosify Docker Extension and quickly start load testing your application from Docker Desktop. We created random users on a test server with 100 requests for 5 seconds, and we saw that the server could not handle all the requests because of the timeout. 

If you need help with Ddosify, you can create an issue on our GitHub repository or join our Discord server.

Resources

Ddosify

Ddosify Engine

Ddosify Docker Extension 

Ddosify Docker Extension source codes


We’re No Longer Sunsetting the Free Team Plan

After listening to feedback and consulting our community, it’s clear that we made the wrong decision in sunsetting our Free Team plan. Last week we felt our communications were terrible but our policy was sound. It’s now clear that both the communications and the policy were wrong, so we’re reversing course and no longer sunsetting the Free Team plan:

If you’re currently on the Free Team plan, you no longer have to migrate to another plan by April 14. 

Customers who upgraded from a Free Team subscription to a paid subscription between the sunsetting announcement on March 14 and today’s announcement will automatically receive a full refund for the transaction in the next 30 days, allowing them to use their new paid subscription for free for the duration of the term they purchased.

Customers who requested a migration to a Personal or Pro plan will be kept on their current Free Team plan. (Or they can choose to open a new Personal or Pro account via our website.)

In the past 10 days we received & accepted more applications for our Docker-Sponsored Open Source program (DSOS) than we did in the previous year. We encourage eligible open source projects to continue to apply and are currently processing applications within a couple of business days.

For more details, you can visit our FAQ. We apologize for both the communications and the policy, and vow to be an ever-more trustworthy community member in the future.

If you have any questions, you’re welcome to contact me directly on Twitter @scottcjohnston or by emailing scott@docker.com.

Docker and Ambassador Labs Announce Telepresence for Docker, Improving the Kubernetes Development Experience

I’ve been a long-time user and avid fan of both Docker and Kubernetes, and have many happy memories of attending the Docker Meetups in London in the early 2010s. I closely watched as Docker revolutionized the developers’ container-building toolchain and Kubernetes became the natural target to deploy and run these containers at scale. 

Today we’re happy to announce Telepresence for Docker, simplifying how teams develop and test on Kubernetes for faster app delivery. Docker and Ambassador Labs both help cloud-native developers to be super-productive, and we’re excited about this partnership to accelerate the developer experience on Kubernetes. 

What exactly does this mean? 

When building with Kubernetes, you can now use Telepresence alongside the Docker toolchain you know and love.

You can buy Telepresence directly from Docker, and log in to Ambassador Cloud using your Docker ID and credentials.

You can get installation and product support from your current Docker support and services team.

Kubernetes development: Flexibility, scale, complexity

Kubernetes revolutionized the platform world, providing operational flexibility and scale for most organizations that have adopted it. But Kubernetes also introduces complexity when configuring local development environments.

We know you like building applications using your own local tools, where the feedback is instant, you can iterate quickly, and the environment you’re working in mirrors production. This combination increases velocity and reduces the time to successful deployment. But you can face slow, painful development and troubleshooting obstacles when trying to integrate and test your code within a real-world application running on Kubernetes. You end up having to replicate all of the services locally or remotely to test changes, which requires you to understand Kubernetes and the services built by others. The result, which we’ve seen at many organizations, is siloed teams, deferred deployments, and delayed time to value.

Bridging remote environments with local development toolchains

Telepresence for Docker seamlessly bridges local dev machines to remote dev and staging Kubernetes clusters, so you don’t have to manage the complexity of Kubernetes, be a Kubernetes expert, or worry about consuming laptop resources when deploying large services locally. 

The remote-to-local approach helps your teams to quickly collaborate and iterate on code locally while testing the effects of those code changes interactively within the full context of your distributed application. This way, you can work locally on services using the tools you know and love while also being connected to a remote Kubernetes cluster.

How does Telepresence for Docker work?

Telepresence for Docker works by running a traffic manager pod in Kubernetes and Telepresence client daemons on developer workstations. The traffic manager acts as a two-way network proxy that can intercept connections and route traffic between the cluster and containers running on developer machines.

Once you have connected your development machine to a remote Kubernetes cluster, you have several options for how the local containers can integrate with the cluster. These options are based on the concepts of intercepts, where Telepresence for Docker can re-route — or intercept — traffic destined to and from a remote service to your local machine. Intercepts enable you to interact with an application in a remote cluster and see the results from the local changes you made on an intercepted service.

Here’s how you can use intercepts:

No intercepts: The most basic integration involves no intercepts at all, simply establishing a connection between the container and the cluster. This enables the container to access cluster resources, such as APIs and databases.

Global intercepts: You can set up global intercepts for a service. This means all traffic for a service will be re-routed from Kubernetes to your local container.

Personal intercepts: The more advanced alternative to global intercepts is personal intercepts. Personal intercepts let you define conditions for when a request should be routed to your local container. The conditions could be anything from only routing requests that include a specific HTTP header, to requests targeting a specific route of an API.
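In practice, the workflow looks roughly like the following with the Telepresence CLI (a sketch — the service name and header values are illustrative, and exact flags can vary between Telepresence versions):

# Connect your local machine to the remote cluster
telepresence connect

# Global intercept: send all traffic for a service to local port 8080
telepresence intercept my-service --port 8080:80

# Personal intercept: only route requests that carry a matching header
telepresence intercept my-service --port 8080:80 --http-header x-dev-user=jane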

Benefits for platform teams: Reduce maintenance and cloud costs

On top of increasing the velocity of individual developers and development teams, Telepresence for Docker also enables platform engineers to maintain a separation of concerns (and provide appropriate guardrails). Platform engineers can define, configure, and manage shared remote clusters that multiple Telepresence for Docker users can interact within during their day-to-day development and testing workflows. Developers can easily intercept or selectively reroute remote traffic to the service on their local machine, and test (and share with stakeholders) how their current changes look and interact with remote dependencies. 

Compared to static staging environments, this offers a simple way to connect local code into a shared dev environment and fuels easy, secure collaboration with your team or other stakeholders. Instead of provisioning cloud virtual machines for every developer, this approach offers a more cost-effective way to have a shared cloud development environment.

Get started with Telepresence for Docker today

We’re excited that the Docker and Ambassador Labs partnership brings Telepresence for Docker to the 12-million-strong (and growing) community of registered Docker developers. Telepresence for Docker is available now. Keep using the local tools and development workflow you know and love, but with faster feedback, easier collaboration, and reduced cloud environment costs.

You can quickly get started with your Docker ID, or contact us to learn more. 