Containerizing an Event Posting App Built with the MEAN Stack

This article is a result of open source collaboration. During Hacktoberfest 2022, the project was announced in the Black Forest Docker meetup group and received contributions from members of the meetup group and other Hacktoberfest contributors. Almost all of the code in the GitHub repo was written by Stefan Ruf, Himanshu Kandpal, and Sreekesh Iyer.

The MEAN stack is a fast-growing, open source JavaScript stack used to develop web applications. MEAN is a diverse collection of robust technologies — MongoDB, Express.js, Angular, and Node.js — for developing scalable web applications. 

The stack is a popular choice for web developers as it allows them to work with a single language throughout the development process and it also provides a lot of flexibility and scalability. Node, Express, and Angular even claimed top spots as popular frameworks or technologies in Stack Overflow’s 2022 Developer Survey.

In this article, we’ll describe how the MEAN stack works using an Event Posting app as an example.

How does the MEAN stack work?

MEAN consists of the following four components:

MongoDB — A NoSQL database 

Express.js — A backend web-application framework for Node.js

Angular — A JavaScript-based front-end web development framework for building dynamic, single-page web applications

Node.js — A JavaScript runtime environment that enables running JavaScript code outside the browser, among other things

Here’s a brief overview of how the different components might work together:

A user interacts with the frontend, built with Angular components, via the web browser.

The backend server delivers frontend content via Express.js running atop Node.js.

Data is fetched from the MongoDB database and returned to the frontend, where the application displays it for the user.

Any interaction that causes a data-change request is sent to the Node-based Express server, as the sketch below illustrates.
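To make this flow concrete, here is a minimal sketch of a Node-based Express server that serves event data from MongoDB via Mongoose. The route paths, schema, port, and database name are illustrative assumptions, not the actual Event Posting code:

// server.js: a minimal, illustrative Express + Mongoose API
const express = require("express");
const mongoose = require("mongoose");

const app = express();
app.use(express.json()); // parse JSON request bodies

// Connect to MongoDB (the "events" database name is an assumption)
mongoose.connect("mongodb://localhost:27017/events");

// A minimal event model
const Event = mongoose.model("Event", new mongoose.Schema({
  name: String,
  date: Date,
}));

// The frontend fetches events from this endpoint...
app.get("/api/events", async (req, res) => {
  res.json(await Event.find());
});

// ...and sends data-change requests (new events) here
app.post("/api/events", async (req, res) => {
  res.status(201).json(await Event.create(req.body));
});

app.listen(8000, () => console.log("API listening on port 8000"));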

Why is the MEAN stack so popular?

The MEAN stack is often used to build full-stack, JavaScript web applications, where the same language is used for both the client-side and server-side of the application. This approach can make development more efficient and consistent and make it easier for developers to work on both the frontend and backend of the application.

The MEAN stack is popular for a few reasons, including the following:

Easy learning curve — If you’re familiar with JavaScript and JSON, then it’s easy to get started. MEAN’s structure lets you easily build a three-tier architecture (frontend, backend, database) with just JavaScript and JSON.

Model-View-Controller architecture — MEAN supports the Model-View-Controller (MVC) architecture, which enables a smooth and seamless development process.

Reduces context switching — Because MEAN uses JavaScript for both frontend and backend development, developers don’t need to worry about switching languages. This capability boosts development efficiency.

Open source and active community support — The MEAN stack is fully open source, so any developer can use it to build robust web applications, and its frameworks improve coding efficiency and promote faster app development.

Running the Event Posting app

Here are the key components of the Event Posting app:

MongoDB

Express.js

Angular

Node.js

Docker Desktop

Deploying the Event Posting app is a fast process. To start, you’ll clone the repository, set up the client and backend, then bring up the application. 

Complete the following steps:

git clone https://github.com/dockersamples/events
cd events/backend
npm install
npm run dev
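The commands above install dependencies and start the backend. From the repository root, setting up the Angular client follows the same pattern. The path below matches the Compose build context used later in this article, and the dev-server port is Angular's usual default, so treat both as assumptions:

cd events/frontend/events
npm install
npm start

npm start typically serves the Angular dev build on http://localhost:4200.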

General flow of the Event Posting app

The flow of information through the Event Posting app is illustrated in Figure 1 and described in the following steps.

Figure 1: General flow of the Event Posting app.

A user visits the event posting app’s website on their browser.

Angular, the frontend framework, retrieves the necessary HTML, CSS, and JavaScript files from the server and renders the initial view of the website.

When the user wants to view a list of events or create a new event, Angular sends an HTTP request to the backend server.

Express.js, the backend web framework, receives the request and processes it. This step includes interacting with the MongoDB database to retrieve or store data and providing an API for the frontend to access the data.

The backend server sends a response to the frontend, which Angular receives and uses to update the view.

When a user creates a new event, Angular sends a POST request to the backend server, which Express.js receives and processes. Express.js stores the new event in the MongoDB database.

The backend server sends a confirmation response to the frontend, which Angular receives and uses to update the view and display the new event.

Node.js, the JavaScript runtime, handles the server-side logic for the application and allows for real-time updates. This includes running the Express.js server, handling real-time updates using WebSockets, and handling any other server-side tasks.

You can then access Event Posting at http://localhost:80 in your browser (Figure 2):

Figure 2: Add a new event.

Select Add New Event to add the details (Figure 3).

Figure 3: Add event details.

Save the event details to see the final results (Figure 4).

Figure 4: Display upcoming events.

Why containerize the MEAN stack?

Containerizing the MEAN stack provides a consistent, portable, and easily scalable environment for the application, as well as improved security and ease of deployment. Its benefits include the following:

Consistency: Containerization ensures that the environment for the application is consistent across different development, testing, and production environments. This approach eliminates issues that can arise from differences in the environment, such as different versions of dependencies or configurations.

Portability: Containers are designed to be portable, which means that they can be easily moved between different environments. This capability makes it easy to deploy the MEAN stack application to different environments, such as on-premises or in the cloud.

Isolation: Containers provide a level of isolation between the application and the host environment. Thus, the application has access only to the resources it needs and does not interfere with other applications running on the same host.

Scalability: Containers can be easily scaled up or down depending on the needs of the application, resulting in more efficient use of resources and better performance.

Containerizing your Event Posting app

Docker helps you containerize your MEAN Stack — letting you bundle your complete Event Posting application, runtime, configuration, and operating system-level dependencies. The container then includes everything needed to ship a cross-platform, multi-architecture web application. 

We’ll explore how to run this app within a Docker container using Docker Official Images. To begin, you’ll need to download Docker Desktop and complete the installation process. Docker Desktop includes the Docker CLI, Docker Compose, and a user-friendly management UI, which will each be useful later on.

Docker uses a Dockerfile to create each image’s layers. Each layer stores important changes stemming from your base image’s standard configuration. Next, we’ll create an empty Dockerfile in the root of our project repository.

Containerizing your Angular frontend

We’ll build a multi-stage Dockerfile to containerize our Angular frontend. 

A Dockerfile is a plain-text file that contains instructions for assembling a Docker container image. When Docker builds our image via the docker build command, it reads these instructions, executes them, and creates a final image. 

With multi-stage builds, a Docker build can use one base image for compilation, packaging, and unit testing. A separate image holds the application’s runtime. This setup makes the final image more secure and shrinks its footprint (because it doesn’t contain development or debugging tools). 

Let’s walk through the process of creating a Dockerfile for our application. First, create the following empty file with the name Dockerfile in the root of your frontend app.

touch Dockerfile

Then you’ll need to define your base image in the Dockerfile file. Here we’ve chosen the stable LTS version of the Node Docker Official Image. This image comes with every tool and package needed to run a Node.js application:

FROM node:lts-alpine AS build

Next, let’s create a directory to house our image’s application code. This acts as the working directory for your application:

WORKDIR /usr/src/app

The following COPY instructions copy the package.json and package-lock.json files from the host machine to the container image.

The COPY command takes two parameters. The first tells Docker which file(s) you’d like to copy into the image. The second tells Docker where you want those files to be copied. Here, that destination is our working directory, /usr/src/app.

COPY package.json .

COPY package-lock.json .
RUN npm ci

Next, we need to add our source code into the image. We’ll use the COPY command just like we previously did with our package.json file. 

Note: It’s common practice to copy the package.json file separately from the application code when building a Docker image. This step allows Docker to cache the node_modules layer separately from the application code layer, which can significantly speed up the Docker build process and improve the development workflow.

COPY . .

Then, use npm run build to run the build script from package.json:

RUN npm run build

In the next step, we need to specify the second stage of the build that uses an Nginx image as its base and copies the nginx.conf file to the /etc/nginx directory. It also copies the compiled TypeScript code from the build stage to the /usr/share/nginx/html directory.

FROM nginx:stable-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/events /usr/share/nginx/html

Finally, the EXPOSE instruction tells Docker which port the container listens on at runtime. You can specify whether the port listens on TCP or UDP. The default is TCP if the protocol isn’t specified.

EXPOSE 80

Here is our complete Dockerfile:

# Builder container to compile typescript
FROM node:lts-alpine AS build
WORKDIR /usr/src/app

# Install dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci

# Copy the application source
COPY . .
# Build typescript
RUN npm run build

FROM nginx:stable-alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/events /usr/share/nginx/html

EXPOSE 80

Now, let’s build our image. We’ll run the docker build command with the -f flag, which specifies the Dockerfile name. The “.” argument sets the current directory as the build context, and the -t flag tags the resulting image.

docker build . -f Dockerfile -t events-fe:1
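To smoke-test the image before composing the full stack, you can run it on its own and map an arbitrary host port to the container’s port 80 (API-backed pages won’t work until the backend and database are up):

docker run --rm -p 8080:80 events-fe:1

Then browse to http://localhost:8080.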

Containerizing your Node.js backend

Let’s walk through the process of creating a Dockerfile for our backend as the next step. First, create the following empty Dockerfile in the root of your backend Node app:

# Builder container to compile typescript
FROM node:lts-alpine AS build
WORKDIR /usr/src/app

# Install dependencies
COPY package.json .
COPY package-lock.json .
RUN npm ci

# Copy the application source
COPY . .
# Build typescript
RUN npm run build

FROM node:lts-alpine
WORKDIR /app
COPY package.json .
COPY package-lock.json .
COPY .env.production .env

RUN npm ci --production

COPY --from=build /usr/src/app/dist /app

EXPOSE 8000
CMD [ "node", "src/index.js"]

This Dockerfile is useful for building and running TypeScript applications in a containerized environment, allowing developers to package and distribute their applications more easily.

The first stage of the build process, named build, is based on the official Node.js LTS Alpine Docker image. It sets the working directory to /usr/src/app and copies the package.json and package-lock.json files to install dependencies with the npm ci command. It then copies the entire application source code and builds TypeScript with the npm run build command.

The second stage of the build process also uses the official Node.js LTS Alpine Docker image. It sets the working directory to /app and copies the package.json, package-lock.json, and .env.production files. It then installs only production dependencies with the npm ci --production command and copies the compiled TypeScript output of the build stage from /usr/src/app/dist to /app.

Finally, it exposes port 8000 and runs the command node src/index.js when the container is started.
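Building the backend image mirrors the frontend build; the image tag below is just an example:

docker build . -f Dockerfile -t events-be:1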

Defining services using a Compose file

Here’s how our services appear within a Docker Compose file:

services:
  frontend:
    build:
      context: "./frontend/events"
      dockerfile: "./Dockerfile"
    networks:
      - events_net
  backend:
    build:
      context: "./backend"
      dockerfile: "./Dockerfile"
    networks:
      - events_net
  db:
    image: mongo:latest
    ports:
      - 27017:27017
    networks:
      - events_net
  proxy:
    image: nginx:stable-alpine
    environment:
      - NGINX_ENVSUBST_TEMPLATE_SUFFIX=.conf
      - NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx
    volumes:
      - ${PWD}/nginx.conf:/etc/nginx/templates/nginx.conf.conf
    ports:
      - 80:80
    networks:
      - events_net

networks:
  events_net:

Your example application has the following parts:

Four services backed by Docker images: Your Angular frontend, Node.js backend, MongoDB database, and Nginx as a proxy server

The frontend and backend services are built from Dockerfiles located in ./frontend/events and ./backend directories, respectively. Both services are attached to a network called events_net.

The db service is based on the latest version of the MongoDB Docker image and exposes port 27017. It is attached to the same events_net network as the frontend and backend services.

The proxy service is based on the stable-alpine version of the Nginx Docker image. It has two environment variables defined, NGINX_ENVSUBST_TEMPLATE_SUFFIX and NGINX_ENVSUBST_OUTPUT_DIR, that enable environment variable substitution in Nginx configuration files. 

The proxy service also has a volume defined that maps the local nginx.conf file to /etc/nginx/templates/nginx.conf.conf in the container. Finally, it exposes port 80 and is attached to the events_net network.

The events_net network is defined at the end of the file, and all services are attached to it. This setup enables communication between the containers using their service names as hostnames.
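This also means the backend doesn’t need to know the database container’s IP address. For example, its MongoDB connection string (typically supplied via the .env.production file copied in the backend Dockerfile) would look something like the following; the variable name and database name are assumptions:

MONGODB_URI=mongodb://db:27017/events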

You can clone the repository or download the docker-compose.yml file directly from Dockersamples on GitHub.

Bringing up the container services

You can start the MEAN application stack by running the following command:

docker compose up -d

Next, use the docker compose ps command to confirm that your stack is running properly. Your terminal will produce the following output:

$ docker compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
events-backend-1 events-backend "docker-entrypoint.s…" backend 29 minutes ago Up 29 minutes 8000/tcp
events-db-1 mongo:latest "docker-entrypoint.s…" db 5 seconds ago Up 4 seconds 0.0.0.0:27017->27017/tcp
events-frontend-1 events-frontend "/docker-entrypoint.…" frontend 29 minutes ago Up 29 minutes 80/tcp
events-proxy-1 nginx:stable-alpine "/docker-entrypoint.…" proxy 29 minutes ago Up 29 minutes 0.0.0.0:80->80/tcp
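When you’re done experimenting, you can stop and remove the stack’s containers and the events_net network with a single command:

docker compose down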

Viewing the containers via Docker Dashboard

You can also leverage the Docker Dashboard to view your container’s ID and easily access or manage your application (Figure 5):

Figure 5: Viewing running containers in Docker Dashboard.

Conclusion

Congratulations! You’ve successfully learned how to containerize a MEAN-backed Event Posting application with Docker. With a single YAML file, we’ve demonstrated how Docker Compose helps you easily build and deploy your MEAN stack in seconds. With just a few extra steps, you can apply this tutorial while building applications with even greater complexity. Happy developing!

Enabling a No-Code Performance Testing Platform Using the Ddosify Docker Extension

Performance testing is a critical component of software testing. It involves simulating a large number of users accessing a system simultaneously to determine how the system behaves under high user loads. This process helps organizations understand how their systems will perform in real-world scenarios, identify potential performance bottlenecks, and improve application performance under different load conditions.

In this article, we provide an introduction to the Ddosify Docker Extension and show how to get started using it for performance testing.  

The importance of performance testing

Performance testing should be performed regularly to ensure that your application performs well under different load conditions, so that your customers have a great experience. Kissmetrics found that a 1-second delay in page response time can lead to a seven percent decrease in conversions, and that half of customers expect a website to load in less than 2 seconds. For an e-commerce site, a 1-second delay in page response could translate into several million dollars in lost annual sales.

Meet Ddosify

Ddosify is a high-performance, open-core performance testing platform that focuses on load and latency testing. Ddosify offers a suite of three products:

1. Ddosify Engine: An open source, single-node, load-testing tool (6K+ stars) that can be used to test your application from your terminal using a simple JSON file. Ddosify is written in Golang and can be deployed on Linux, macOS, and Windows. Developers and small companies are using Ddosify Engine to test their applications. The tool is available on GitHub.

2. Ddosify Cloud: An open core SaaS platform that allows you to test your application without any programming expertise. Ddosify Cloud uses Ddosify Engine in a distributed manner and provides a web interface to generate load-test scenarios without code. Users can test their applications from different locations around the world and generate advanced reports. Under the hood, the platform combines technologies including Docker, Kubernetes, InfluxDB, RabbitMQ, React.js, Golang, AWS, and PostgreSQL, all working together transparently for the user. This tool is available on the Ddosify website.

3. Ddosify Docker Extension: This tool has similarities to Ddosify Engine but provides an easy-to-use interface thanks to the extension capability of Docker Desktop, allowing you to test your application from within Docker Desktop. The Ddosify Docker Extension is available free of charge from the Docker Extensions Marketplace, and its repository is open source and available on GitHub.

In this article, we will focus on the Ddosify Docker Extension.

The architecture of Ddosify

Ddosify Docker Extension uses the Ddosify Engine as a base image under the hood. We collect settings, including request count, duration, and headers, from the extension UI and send them to the Ddosify Engine. 

The Ddosify Engine performs the load testing and returns the results to the extension. The extension then displays the results to the user (Figure 1). 

Figure 1: Overview of Ddosify.

Why Ddosify?

Ddosify is easy to use and offers many features, including dynamic variables, CSV data import, various load types, correlation, and assertion. Ddosify also has different options for different use cases. If you are an individual developer, you can use the Ddosify Engine or Ddosify Docker Extension free of charge. If you need code-free load testing, advanced reporting, multi-geolocation, and more requests per second (RPS), you can use the Ddosify Cloud. 

With Ddosify, you can: 

Identify performance issues of your application by simulating high user traffic.

Optimize your infrastructure and ensure that you are only paying for the resources that you need.

Identify bugs before your customers do. Some bugs are only triggered under high load.

Measure your system capacity and identify its limitations.

Why run Ddosify as a Docker Extension?

Docker Extensions help you build and integrate software applications into your daily workflows. With Ddosify Docker Extension, you can easily perform load testing on your application from within Docker Desktop. You don’t need to install anything on your machine except Docker Desktop. Features of Ddosify Docker Extension include:

Strong community with 6K+ GitHub stars and a total of 1M+ downloads on all platforms. Community members contribute by proposing/adding features and fixing bugs.

Currently supports HTTP and HTTPS protocols. Other protocols are on the way.

Supports various load types. Test your system’s limits across different load types, including:

Linear

Incremental

Waved

Dynamic variables (parameterization) support. Just like Postman, Ddosify supports dynamic variables.

Save load testing results as PDF.

Getting started

As a prerequisite, you need Docker Desktop 4.10.0 or higher installed on your machine. You can download Docker Desktop from our website.

Step 1: Install Ddosify Docker Extension

Because Ddosify is an extension partner of Docker, you can easily install Ddosify Docker Extension from the Docker Extensions Marketplace (Figure 2). Start Docker Desktop and select Add Extensions. Next, filter by Testing Tools and select Ddosify. Click on the Install button to install the Ddosify Docker Extension. After a few seconds, Ddosify Docker Extension will be installed on your machine.

Figure 2: Installing Ddosify.

Step 2: Start load testing

You can start load testing your application from the Docker Desktop (Figure 3). Start Docker Desktop and click on the Ddosify icon in the Extensions section. The UI of the Ddosify Docker Extension will be opened.

Figure 3: Starting load testing.

You can start load testing by entering the target URL of your application. You can choose HTTP Methods (GET, POST, PUT, DELETE, etc.), protocol (HTTP, HTTPS), request count, duration, load type (linear, incremental, waved), timeout, body, headers, basic auth, and proxy settings. We chose the following values: 

URL: https://testserver.ddosify.com/account/register/
Method: POST
Protocol: HTTPS
Request Count: 100
Duration: 5
Load Type: Linear
Timeout: 10
Body: {"username": "{{_randomUserName}}", "email": "{{_randomEmail}}", "password": "{{_randomPassword}}"}
Headers:
User-Agent: DdosifyDockerExtension/0.1.2
Content-Type: application/json

In this configuration, we are sending 100 requests to the target URL for 5 seconds (Figure 4). The RPS is 20. The target URL is a test server that is used to register new users with body parameters. We are using dynamic variables (random) for username, email, and password in the body. You can learn more about dynamic variables from the Ddosify documentation.

Figure 4: Parameters for sample load test.
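For reference, each of the 100 generated requests is equivalent to a curl call like the one below, with the dynamic variables replaced by random values (the literal values shown here are made up):

curl -X POST https://testserver.ddosify.com/account/register/ \
  -H 'User-Agent: DdosifyDockerExtension/0.1.2' \
  -H 'Content-Type: application/json' \
  -d '{"username": "jane_doe42", "email": "jane42@example.com", "password": "S3cretPass!"}'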

Then click on the Start Load Test button to begin load testing. The results will be displayed in the UI (Figure 5).

Figure 5: Ddosify test results.

The test results include the following information:

48 requests successfully created users (response code: 201).

20 requests failed to create users because the randomly generated username or email already existed on the server (response code: 400).

32 requests failed to create users because of timeouts. The server could not respond within 10 seconds, so we should either increase the timeout value or optimize the server.

You can also save the load test results. Click on the Report button to save the results as a PDF file (Figure 6).

Figure 6: Save results as PDF.

Conclusion

In this article, we showed how to install Ddosify Docker Extension and quickly start load testing your application from Docker Desktop. We created random users on a test server with 100 requests for 5 seconds, and we saw that the server could not handle all the requests because of the timeout. 

If you need help with Ddosify, you can create an issue on our GitHub repository or join our Discord server.

Resources

Ddosify

Ddosify Engine

Ddosify Docker Extension 

Ddosify Docker Extension source codes


We’re No Longer Sunsetting the Free Team Plan

After listening to feedback and consulting our community, it’s clear that we made the wrong decision in sunsetting our Free Team plan. Last week we felt our communications were terrible but our policy was sound. It’s now clear that both the communications and the policy were wrong, so we’re reversing course and no longer sunsetting the Free Team plan:

If you’re currently on the Free Team plan, you no longer have to migrate to another plan by April 14. 

Customers who upgraded from a Free Team subscription to a paid subscription between the sunsetting announcement on March 14 and today’s announcement will automatically receive a full refund for the transaction in the next 30 days, allowing them to use their new paid subscription for free for the duration of the term they purchased.

Customers who requested a migration to a Personal or Pro plan will be kept on their current Free Team plan. (Or they can choose to open a new Personal or Pro account via our website.)

In the past 10 days we received & accepted more applications for our Docker-Sponsored Open Source program (DSOS) than we did in the previous year. We encourage eligible open source projects to continue to apply and are currently processing applications within a couple of business days.

For more details, you can visit our FAQ. We apologize for both the communications and the policy, and vow to be an ever-more trustworthy community member in the future.

If you have any questions, you’re welcome to contact me directly on Twitter @scottcjohnston or by emailing scott@docker.com.

Docker and Ambassador Labs Announce Telepresence for Docker, Improving the Kubernetes Development Experience

I’ve been a long-time user and avid fan of both Docker and Kubernetes, and have many happy memories of attending the Docker Meetups in London in the early 2010s. I closely watched as Docker revolutionized the developers’ container-building toolchain and Kubernetes became the natural target to deploy and run these containers at scale. 

Today we’re happy to announce Telepresence for Docker, simplifying how teams develop and test on Kubernetes for faster app delivery. Docker and Ambassador Labs both help cloud-native developers to be super-productive, and we’re excited about this partnership to accelerate the developer experience on Kubernetes. 

What exactly does this mean? 

When building with Kubernetes, you can now use Telepresence alongside the Docker toolchain you know and love.

You can buy Telepresence directly from Docker, and log in to Ambassador Cloud using your Docker ID and credentials.

You can get installation and product support from your current Docker support and services team.

Kubernetes development: Flexibility, scale, complexity

Kubernetes revolutionized the platform world, providing operational flexibility and scale for most organizations that have adopted it. But Kubernetes also introduces complexity when configuring local development environments.

We know you like building applications using your own local tools, where the feedback is instant, you can iterate quickly, and the environment you’re working in mirrors production. This combination increases velocity and reduces the time to successful deployment. But, you can face slow and painful development and troubleshooting obstacles when trying to integrate and test code into a real-world application running on Kubernetes. You end up having to replicate all of the services locally or remotely to test changes, which requires you to know about Kubernetes and the services built by others. The result, which we’ve seen at many organizations, is siloed teams, deferred deploying changes, and delayed organizational time to value.

Bridging remote environments with local development toolchains

Telepresence for Docker seamlessly bridges local dev machines to remote dev and staging Kubernetes clusters, so you don’t have to manage the complexity of Kubernetes, be a Kubernetes expert, or worry about consuming laptop resources when deploying large services locally. 

The remote-to-local approach helps your teams to quickly collaborate and iterate on code locally while testing the effects of those code changes interactively within the full context of your distributed application. This way, you can work locally on services using the tools you know and love while also being connected to a remote Kubernetes cluster.

How does Telepresence for Docker work?

Telepresence for Docker works by running a traffic manager pod in Kubernetes and Telepresence client daemons on developer workstations. The traffic manager acts as a two-way network proxy that can intercept connections and route traffic between the cluster and containers running on developer machines.

Once you have connected your development machine to a remote Kubernetes cluster, you have several options for how the local containers can integrate with the cluster. These options are based on the concepts of intercepts, where Telepresence for Docker can re-route — or intercept — traffic destined to and from a remote service to your local machine. Intercepts enable you to interact with an application in a remote cluster and see the results from the local changes you made on an intercepted service.

Here’s how you can use intercepts (a brief command-line sketch follows this list):

No intercepts: The most basic integration involves no intercepts at all, simply establishing a connection between the container and the cluster. This enables the container to access cluster resources, such as APIs and databases.

Global intercepts: You can set up global intercepts for a service. This means all traffic for a service will be re-routed from Kubernetes to your local container.

Personal intercepts: The more advanced alternative to global intercepts is personal intercepts. Personal intercepts let you define conditions for when a request should be routed to your local container. The conditions could be anything from only routing requests that include a specific HTTP header, to requests targeting a specific route of an API.
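As a rough command-line sketch of these concepts (the service name and ports are placeholders, and exact flags can vary between Telepresence versions):

telepresence connect                               # connect your machine to the cluster (no intercepts)
telepresence list                                  # see which services can be intercepted
telepresence intercept my-service --port 8080:80   # route the service's cluster traffic to localhost:8080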

Benefits for platform teams: Reduce maintenance and cloud costs

On top of increasing the velocity of individual developers and development teams, Telepresence for Docker also enables platform engineers to maintain a separation of concerns (and provide appropriate guardrails). Platform engineers can define, configure, and manage shared remote clusters that multiple Telepresence for Docker users can interact within during their day-to-day development and testing workflows. Developers can easily intercept or selectively reroute remote traffic to the service on their local machine, and test (and share with stakeholders) how their current changes look and interact with remote dependencies. 

Compared to static staging environments, this offers a simple way to connect local code into a shared dev environment and fuels easy, secure collaboration with your team or other stakeholders. Instead of provisioning cloud virtual machines for every developer, this approach offers a more cost-effective way to have a shared cloud development environment.

Get started with Telepresence for Docker today

We’re excited that the Docker and Ambassador Labs partnership brings Telepresence for Docker to the 12-million-strong (and growing) community of registered Docker developers. Telepresence for Docker is available now. Keep using the local tools and development workflow you know and love, but with faster feedback, easier collaboration, and reduced cloud environment costs.

You can quickly get started with your Docker ID, or contact us to learn more. 

Docker and Hugging Face Partner to Democratize AI

Today, Hugging Face and Docker are announcing a new partnership to democratize AI and make it accessible to all software engineers. Hugging Face is the most used open platform for AI, where the machine learning (ML) community has shared more than 150,000 models; 25,000 datasets; and 30,000 ML apps, including Stable Diffusion, Bloom, GPT-J, and open source ChatGPT alternatives. These apps enable the community to explore models, replicate results, and lower the barrier of entry for ML — anyone with a browser can interact with the models.

Docker is the leading toolset for easy software deployment, from infrastructure to applications. Docker is also the leading platform for software teams’ collaboration.

Docker and Hugging Face partner so you can launch and deploy complex ML apps in minutes. With the recent support for Docker on Hugging Face Spaces, folks can create any custom app they want by simply writing a Dockerfile. What’s great about Spaces is that once you’ve got your app running, you can easily share it with anyone worldwide! 🌍 Spaces provides an unparalleled level of flexibility and enables users to build ML demos with their preferred tools — from MLOps tools and FastAPI to Go endpoints and Phoenix apps.

Spaces also come with pre-defined templates of popular open source projects for members that want to get their end-to-end project in production in a matter of seconds with just a few clicks.

Spaces enable easy deployment of ML apps in all environments, not just on Hugging Face. With “Run with Docker,” millions of software engineers can access more than 30,000 machine learning apps and run them locally or in their preferred environment.

“At Hugging Face, we’ve worked on making AI more accessible and more reproducible for the past six years,” says Clem Delangue, CEO of Hugging Face. “Step 1 was to let people share models and datasets, which are the basic building blocks of AI. Step 2 was to let people build online demos for new ML techniques. Through our partnership with Docker Inc., we make great progress towards Step 3, which is to let anyone run those state-of-the-art AI models locally in a matter of minutes.”

You can also discover popular Spaces in the Docker Hub and run them locally with just a couple of commands.

To get started, read Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces. Or try Hugging Face Spaces now.

Effortlessly Build Machine Learning Apps with Hugging Face’s Docker Spaces

The Hugging Face Hub is a platform that enables collaborative open source machine learning (ML). The hub works as a central place where users can explore, experiment, collaborate, and build technology with machine learning. On the hub, you can find more than 140,000 models, 50,000 ML apps (called Spaces), and 20,000 datasets shared by the community.

Using Spaces makes it easy to create and deploy ML-powered applications and demos in minutes. Recently, the Hugging Face team added support for Docker Spaces, enabling users to create any custom app they want by simply writing a Dockerfile.

Another great thing about Spaces is that once you have your app running, you can easily share it with anyone around the world. 🌍

This guide will step through the basics of creating a Docker Space, configuring it, and deploying code to it. We’ll show how to build a basic FastAPI app for text generation that will be used to demo the google/flan-t5-small model, which can generate text given input text. Models like this are used to power text completion in all sorts of apps. (You can check out a completed version of the app at Hugging Face.)

Prerequisites

To follow along with the steps presented in this article, you’ll need to be signed in to the Hugging Face Hub — you can sign up for free if you don’t have an account already.

Create a new Docker Space 🐳

To get started, create a new Space as shown in Figure 1.

Figure 1: Create a new Space.

Next, you can choose any name you prefer for your project, select a license, and use Docker as the software development kit (SDK) as shown in Figure 2. 

Spaces provides pre-built Docker templates like Argilla and Livebook that let you quickly start your ML projects using open source tools. If you choose the “Blank” option, that means you want to create your Dockerfile manually. Don’t worry, though; we’ll provide a Dockerfile to copy and paste later. 😅

Figure 2: Adding details for the new Space.

When you finish filling out the form and click on the Create Space button, a new repository will be created in your Spaces account. This repository will be associated with the new space that you have created.

Note: If you’re new to the Hugging Face Hub 🤗, check out Getting Started with Repositories for a nice primer on repositories on the hub.

Writing the app

Ok, now that you have an empty space repository, it’s time to write some code. 😎

The sample app will consist of the following three files:

requirements.txt — Lists the dependencies of a Python project or application

app.py — A Python script where we will write our FastAPI app

Dockerfile — Sets up our environment, installs requirements.txt, then launches app.py

To follow along, create each file shown below via the web interface. To do that, navigate to your Space’s Files and versions tab, then choose Add file → Create a new file (Figure 3). Note that, if you prefer, you can also utilize Git.

Figure 3: Creating new files.

Make sure that you name each file exactly as we have done here. Then, copy the contents of each file from here and paste them into the corresponding file in the editor. After you have created and populated all the necessary files, commit each new file to your repository by clicking on the Commit new file to main button.

Listing the Python dependencies 

It’s time to list all the Python packages and their specific versions that are required for the project to function properly. The contents of the requirements.txt file typically include the name of the package and its version number, which can be specified in a variety of formats such as exact version numbers, version ranges, or compatible versions. The file lists FastAPI, requests, and uvicorn for the API along with sentencepiece, torch, and transformers for the text-generation model.

fastapi==0.74.*
requests==2.27.*
uvicorn[standard]==0.17.*
sentencepiece==0.1.*
torch==1.11.*
transformers==4.*

Defining the FastAPI web application

The following code defines a FastAPI web application that uses the transformers library to generate text based on user input. The app itself is a simple single-endpoint API. The /generate endpoint takes in text and uses a transformers pipeline to generate a completion, which it then returns as a response.

To give folks something to see, we reroute FastAPI’s interactive Swagger docs from the default /docs endpoint to the root of the app. This way, when someone visits your Space, they can play with it without having to write any code.

from fastapi import FastAPI
from transformers import pipeline

# Create a new FastAPI app instance
app = FastAPI()

# Initialize the text generation pipeline
# This function will be able to generate text
# given an input.
pipe = pipeline("text2text-generation",
                model="google/flan-t5-small")

# Define a function to handle the GET request at `/generate`
# The generate() function is defined as a FastAPI route that takes a
# string parameter called text. The function generates text based on the
# input using the pipeline() object, and returns a JSON response
# containing the generated text under the key "output"
@app.get("/generate")
def generate(text: str):
    """
    Using the text2text-generation pipeline from `transformers`, generate text
    from the given input text. The model used is `google/flan-t5-small`, which
    can be found [here](https://huggingface.co/google/flan-t5-small).
    """
    # Use the pipeline to generate text from the given input text
    output = pipe(text)

    # Return the generated text in a JSON response
    return {"output": output[0]["generated_text"]}

Writing the Dockerfile

In this section, we will write a Dockerfile that sets up a Python 3.9 environment, installs the packages listed in requirements.txt, and starts a FastAPI app on port 7860.

Let’s go through this process step by step:

FROM python:3.9

The preceding line specifies that we’re going to use the official Python 3.9 Docker image as the base image for our container. This image is provided by Docker Hub, and it contains all the necessary files to run Python 3.9.

WORKDIR /code

This line sets the working directory inside the container to /code. This is where we’ll copy our application code and dependencies later on.

COPY ./requirements.txt /code/requirements.txt

The preceding line copies the requirements.txt file from our local directory to the /code directory inside the container. This file lists the Python packages that our application depends on.

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

This line uses pip to install the packages listed in requirements.txt. The --no-cache-dir flag tells pip to not use any cached packages, the --upgrade flag tells pip to upgrade any already-installed packages if newer versions are available, and the -r flag specifies the requirements file to use.

RUN useradd -m -u 1000 user
USER user
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

These lines create a new user named user with a user ID of 1000, switch to that user, and then set the home directory to /home/user. The ENV command sets the HOME and PATH environment variables. PATH is modified to include the .local/bin directory in the user’s home directory so that any binaries installed by pip will be available on the command line. Refer to the documentation to learn more about user permissions.

WORKDIR $HOME/app

This line sets the working directory inside the container to $HOME/app, which is /home/user/app.

COPY --chown=user . $HOME/app

The preceding line copies the contents of our local directory into the /home/user/app directory inside the container, setting the owner of the files to the user that we created earlier.

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]

This line specifies the command to run when the container starts. It starts the FastAPI app using uvicorn and listens on port 7860. The --host flag specifies that the app should listen on all available network interfaces, and the app:app argument tells uvicorn to look for the app object in the app module in our code.

Here’s the complete Dockerfile:

# Use the official Python 3.9 image
FROM python:3.9

# Set the working directory to /code
WORKDIR /code

# Copy the current directory contents into the container at /code
COPY ./requirements.txt /code/requirements.txt

# Install requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# Set up a new user named "user" with user ID 1000
RUN useradd -m -u 1000 user
# Switch to the "user" user
USER user
# Set home to the user's home directory
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

# Set the working directory to the user's home directory
WORKDIR $HOME/app

# Copy the current directory contents into the container at $HOME/app setting the owner to the user
COPY --chown=user . $HOME/app

# Start the FastAPI app on port 7860, the default port expected by Spaces
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]

Once you commit this file, your space will switch to Building, and you should see the container’s build logs pop up so you can monitor its status. 👀

If you want to double-check the files, you can find all the files at our app Space.

Note: For a more basic introduction on using Docker with FastAPI, you can refer to the official guide from the FastAPI docs.
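Nothing in this image is Spaces-specific, so you can also build and run it locally before committing; the image tag below is arbitrary:

docker build -t flan-t5-api .
docker run -p 7860:7860 flan-t5-api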

Using the app 🚀

If all goes well, your space should switch to Running once it’s done building, and the Swagger docs generated by FastAPI should appear in the App tab. Because these docs are interactive, you can try out the endpoint by expanding the details of the /generate endpoint and clicking Try it out! (Figure 4).

Figure 4: Trying out the app.
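If you prefer the command line, you can also exercise the endpoint directly, whether against your Space’s URL or a local run on port 7860 (the prompt text here is just an example, URL-encoded):

curl "http://localhost:7860/generate?text=Hello%20world"

The response is a JSON object with the generated text under the "output" key.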

Conclusion

This article covered the basics of creating a Docker Space, building and configuring a basic FastAPI app for text generation that uses the google/flan-t5-small model. You can use this guide as a starting point to build more complex and exciting applications that leverage the power of machine learning.

If you’re interested in learning more about Docker templates and seeing curated examples, check out the Docker Examples page. There you’ll find a variety of templates to use as a starting point for your own projects, as well as tips and tricks for getting the most out of Docker templates. Happy coding!

Announcing Docker+Wasm Technical Preview 2

We recently announced the first Technical Preview of Docker+Wasm, a special build that makes it possible to run Wasm containers with Docker using the WasmEdge runtime. Starting from version 4.15, everyone can try out the features by activating the containerd image store experimental feature.

We didn’t want to stop there, however. Since October, we’ve been working with our partners to make running Wasm workloads with Docker easier and to support more runtimes.

Now we are excited to announce a new Technical Preview of Docker+Wasm with the following three new runtimes:

spin from Fermyon

slight from Deislabs

wasmtime from Bytecode Alliance

All of these runtimes, including WasmEdge, use the runwasi library.

What is runwasi?

Runwasi is a multi-company effort to make a library in Rust that makes it easier to write containerd shims for Wasm workloads. Last December, the runwasi project was donated and moved to the Cloud Native Computing Foundation’s containerd organization in GitHub.

With a lot of work from people at Microsoft, Second State, Docker, and others, we now have enough features in runwasi to run Wasm containers with Docker or in a Kubernetes cluster. We still have a lot of work to do, but there are enough features for people to start testing.

If you would like to chat with us or other runwasi maintainers, join us on the CNCF’s #runwasi channel.

Get the update

Ready to dive in and try it for yourself? Great! Before you do, understand that this is a technical preview build of Docker Desktop, so things might not work as expected. Be sure to back up your containers and images before proceeding.

Download and install the appropriate version for your system, then activate the containerd image store (Settings > Features in development > Use containerd for pulling and storing images), and you’ll be ready to go.

Figure 1: Docker Desktop beta features in development.

Mac (Intel)

Mac (Arm)

Linux (deb, Intel)

Linux (deb, Arm)

Linux (rpm, Intel)

Linux (Arch)

Windows

Let’s take Wasm for a spin 

The WasmEdge runtime is still present in Docker Desktop, so you can run: 

$ docker run --rm --runtime=io.containerd.wasmedge.v1 \
    --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

You can even run the same image with the wasmtime runtime:

$ docker run --rm --runtime=io.containerd.wasmtime.v1 \
    --platform=wasi/wasm secondstate/rust-example-hello:latest
Hello WasmEdge!

In the next example, we will deploy a Wasm workload to Docker Desktop’s Kubernetes cluster using the slight runtime. To begin, make sure to activate Kubernetes in Docker Desktop’s settings, then create an example.yaml file:

cat > example.yaml <<EOT
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-slight
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-slight
  template:
    metadata:
      labels:
        app: wasm-slight
    spec:
      runtimeClassName: wasmtime-slight-v1
      containers:
        - name: hello-slight
          image: dockersamples/slight-rust-hello:latest
          command: ["/"]
          resources:
            requests:
              cpu: 10m
              memory: 10Mi
            limits:
              cpu: 500m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-slight
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: wasm-slight
EOT

Note the runtimeClassName; Kubernetes will use this to select the right runtime for your application.

You can now run:

$ kubectl apply -f example.yaml

Once Kubernetes has downloaded the image and started the container, you should be able to curl it:

$ curl localhost/hello
hello

You now have a Wasm container running locally in Kubernetes. How exciting! 🎉

Note: You can take this same yaml file and run it in AKS.

Now let’s see how we can use this to run Bartholomew. Bartholomew is a micro-CMS made by Fermyon that works with the spin runtime. You’ll need to clone this repository; it’s a slightly modified Bartholomew template. 

The repository already contains a Dockerfile that you can use to build the Wasm container:

FROM scratch
COPY . .
ENTRYPOINT [ "/modules/bartholomew.wasm" ]

The Dockerfile copies the contents of the repository into the image and defines the built bartholomew.wasm module as the entry point of the image.

$ cd docker-wasm-bartholomew
$ docker build -t my-cms .
[+] Building 0.0s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 147B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2.84kB 0.0s
=> CACHED [1/1] COPY . . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => exporting manifest sha256:cf85929e5a30bea9d436d447e6f2f2e 0.0s
=> => exporting config sha256:0ce059f2fe907a91a671f37641f4c5d73 0.0s
=> => naming to docker.io/library/my-cms:latest 0.0s
=> => unpacking to docker.io/library/my-cms:latest 0.0s

You are now ready to run your first WebAssembly micro-CMS:

$ docker run --runtime=io.containerd.spin.v1 -p 3000:80 my-cms

If you go to http://localhost:3000, you should be able to see the Bartholomew landing page (Figure 2).

Figure 2: Bartholomew landing page.

We’d love your feedback

All of this work is fresh from the oven and relies on the containerd image store in Docker, which is an experimental feature we’ve been working on for almost a year now. The good news is that we already see how this hard work can benefit everyone by adding more features to Docker. We’re still working on it, so let us know what you need. 

If you want to help us shape the future of WebAssembly with Docker, try it out, let us know what you think, and leave feedback on our public roadmap.


Scaling Kubernetes to 7,500 nodes

openai.com – We’ve scaled Kubernetes clusters to 7,500 nodes, producing a scalable infrastructure for large models like GPT-3, CLIP, and DALL·E, but also for rapid small-scale iterative research such as Scaling L…