Containerized Python Development – Part 3

This is the last part in the series of blog posts showing how to set up and optimize a containerized Python development environment. The first part covered how to containerize a Python service and the best development practices for it. The second part showed how to easily set up different components that our Python application needs and how to easily manage the lifecycle of the overall project with Docker Compose.

In this final part, we review the development cycle of the project and discuss in more detail how to apply code updates and debug failures of the containerized Python services. The goal is to speed up these recurrent phases of the development process so that we get an experience similar to local development.

Applying Code Updates

In general, our containerized development cycle consists of writing/updating code, building, running and debugging it.

The build and run phases are mostly waiting time, so we want them to go as quickly as possible and leave us free to focus on coding and debugging.

We now analyze how to optimize the build phase during development. The build phase corresponds to the image build time triggered when we change the Python source code: the image needs to be rebuilt to get the code updates into the container before launching it.

We can, however, apply code changes without having to rebuild the image. We do this simply by bind-mounting the local source directory to its path in the container. For this, we update the Compose file as follows:

docker-compose.yaml
…
  app:
    build: app
    restart: always
    volumes:
      - ./app/src:/code

With this, we have direct access to the updated code and therefore we can skip the image build and restart the container to reload the Python process.

Furthermore, we can avoid restarting the container altogether by running a reloader process inside it that watches for file changes and restarts the Python process whenever a change is detected. We just need to make sure the source code is bind-mounted in the Compose file as described previously.

In our example, we use the Flask framework, which, in debug mode, runs a very convenient module called the reloader. The reloader watches all the source code files and automatically restarts the server when it detects that a file has changed. To enable debug mode we only need to set the debug parameter as below:

server.py
server.run(debug=True, host='0.0.0.0', port=5000)
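To make this concrete, here is a hypothetical minimal server.py; the route and response body are placeholders we made up for illustration, not the project's actual code:

```python
from flask import Flask

server = Flask(__name__)

@server.route('/')
def index():
    # Placeholder response; the real app would talk to the db service here.
    return 'Hello from the app service!'

if __name__ == '__main__':
    # debug=True enables both the interactive debugger and the reloader.
    server.run(debug=True, host='0.0.0.0', port=5000)
```

The `if __name__ == '__main__'` guard keeps the server from starting when the module is merely imported, e.g. by tests.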

If we check the logs of the app container, we can see that the Flask server is running in debug mode.

$ docker-compose logs app
Attaching to project_app_1
app_1 | * Serving Flask app "server" (lazy loading)
app_1 | * Environment: production
app_1 | WARNING: This is a development server. Do not use it in a production deployment.
app_1 | Use a production WSGI server instead.
app_1 | * Debug mode: on
app_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 315-974-099

Once we update the source code and save, we should see the change notification in the logs, followed by a reload.

$ docker-compose logs app
Attaching to project_app_1
app_1 | * Serving Flask app "server" (lazy loading)

app_1 | * Debugger PIN: 315-974-099
app_1 | * Detected change in '/code/server.py', reloading
app_1 | * Restarting with stat
app_1 | * Debugger is active!
app_1 | * Debugger PIN: 315-974-099

Debugging Code

There are essentially two ways to debug code.

The first is the old-fashioned way of placing print statements all over the code to check the runtime values of objects and variables. Applying this to containerized processes is straightforward, and we can easily check the output with a docker-compose logs command.

The second, more serious approach is to use a debugger. With a containerized process, we need to run the debugger inside the container and then connect to it remotely to inspect the program state.

We again take our Flask application as an example. When running in debug mode, aside from the reloader module it also includes an interactive debugger. If we update the code to raise an exception, the Flask service returns a detailed response containing the exception traceback.
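As a sketch, a deliberately failing route is enough to trigger this behavior; the route name and exception below are our own invention, not the project's code:

```python
from flask import Flask

server = Flask(__name__)

@server.route('/fail')
def fail():
    # Any unhandled exception makes Flask, in debug mode, render an
    # interactive traceback page in the browser instead of a bare 500.
    raise RuntimeError('something went wrong')
```

With debug mode off, the same request would instead produce a plain "500 Internal Server Error" response.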

Another interesting case to exercise is interactive debugging, where we place breakpoints in the code and inspect it live. For this we need an IDE with Python and remote debugging support. Here we rely on Visual Studio Code to show how to debug Python code running in containers, connecting to the remote debugger directly from VS Code.

First, we need to map locally the port we use to connect to the debugger. We can easily do this by adding the port mapping to the Compose file:

docker-compose.yaml
…
  app:
    build: app
    restart: always
    volumes:
      - ./app/src:/code
    ports:
      - 5678:5678
…

Next, we need to import the debugger module in the source code and make it listen on the port we defined in the Compose file. We should not forget to also add it to the dependencies file and rebuild the image for the app service so that the debugger package gets installed. For this exercise, we choose the ptvsd debugger package, which VS Code supports.

server.py
…
import ptvsd
ptvsd.enable_attach(address=('0.0.0.0', 5678))
…

requirements.txt
Flask==1.1.1
mysql-connector==2.2.9
ptvsd==4.3.2

We need to remember that whenever we change the Compose file, we must run docker-compose down to remove the current container setup and then docker-compose up to redeploy with the new configuration.

Finally, we need to create a 'Remote Attach' configuration in VS Code to launch the debugging mode.

The launch.json for our project should look like:

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "port": 5678,
            "host": "localhost",
            "pathMappings": [
                {
                    "localRoot": "${workspaceFolder}/app/src",
                    "remoteRoot": "/code"
                }
            ]
        }
    ]
}

We need to make sure the path mapping matches the local source directory and its mount point in the container.

Once we do this, we can place breakpoints in the IDE, start debugging with the configuration we created, and finally trigger the code path that reaches the breakpoint.

Conclusion

This series of blog posts showed how to quickly set up a containerized Python development environment, manage the project lifecycle, apply code updates, and debug containerized Python services. Putting into practice all we discussed should make the containerized development experience very close to the local one.

Resources

Project sample: https://github.com/aiordache/demos/tree/master/dockercon2020-demo
Best practices for writing Dockerfiles: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ and https://www.docker.com/blog/speed-up-your-development-flow-with-these-dockerfile-best-practices/
Docker Desktop: https://docs.docker.com/desktop/
Docker Compose: https://docs.docker.com/compose/
Project skeleton samples: https://github.com/docker/awesome-compose
The post Containerized Python Development – Part 3 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

How to Talk Back to CVEs in ChartCenter

jfrog.com – If your Helm charts could talk, what would they say to potential users? Would they boast of the power in the Kubernetes apps they deploy? Would they warn of their dangers? Would they offer advice? In…
Source: news.kubernauts.io

Stash by AppsCode

stash.run – InterSystems was delighted to engage with AppsCode in the delicate, yet fundamental task of supporting durable, non-ephemeral workloads with Kubernetes. We needed the best-prepared, most-proficient d…
Source: news.kubernauts.io

Multi-arch build, what about GitLab CI?

Following the previous article, where we saw how to build multi-arch images using GitHub Actions, we will now show how to do the same thing using another CI. In this article, we'll use GitLab CI, which is part of GitLab.

To start building your image with GitLab CI, you will first need to create a .gitlab-ci.yml file at the root of your repository, commit it and push it.

image: docker:stable
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
build:
  stage: build
  script:
    - docker version

This should result in a build output that shows the version of the Docker CLI and Engine.

We will now install Docker buildx. Because GitLab CI runs everything in containers and lets you pick any image for that container, we can use one with buildx preinstalled, like the one we used for CircleCI. And as with CircleCI, we need to start a builder instance.

image: jdrouet/docker-with-buildx:stable
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
build:
  stage: build
  script:
    - docker buildx create --use
    - docker buildx build --platform linux/arm/v7,linux/arm64/v8,linux/amd64 --tag your-username/multiarch-example:gitlab .

And that’s it, your image will now be built for both ARM and x86 platforms.

The last step is to store the image on Docker Hub. To do so, we'll need a Docker Hub access token with write access.

Once you have created it, you'll have to set it in your project's CI/CD settings, in the Variables section.

We can then add DOCKER_USERNAME and DOCKER_PASSWORD variables to GitLab CI so that we can log in and push our images.

Once this is done, you can add the login step and the --push option to the buildx command as follows.

build:
  stage: build
  script:
    - docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
    - docker buildx create --use
    - docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/386,linux/amd64 --tag your-username/multiarch-example:gitlab .

And voilà, a multi-arch image is now built every time you push a change to your codebase.
The post Multi-arch build, what about GitLab CI? appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Containerized Python Development – Part 2

This is the second part of the blog post series on how to containerize our Python development. In part 1, we showed how to containerize a Python service and the best practices for it. In this part, we discuss how to set up and wire other components to a containerized Python service. We show a good way to organize project files and data and how to manage the overall project configuration with Docker Compose. We also cover best practices for writing Compose files to speed up our containerized development process.

Managing Project Configuration with Docker Compose

Let's take as an example an application whose functionality we separate into three tiers, following a microservice architecture. This is a pretty common architecture for multi-service applications. Our example application consists of:

a UI tier – running on an nginx service
a logic tier – the Python component we focus on
a data tier – a mysql database storing the data the logic tier needs

The reason for splitting an application into tiers is that we can easily modify or add new ones without having to rework the entire project.

A good way to structure the project files is to isolate the files and configuration for each service. We can easily do this by having a dedicated directory per service inside the project directory. This gives a clean view of the components, makes it easy to containerize each service, and lets us manipulate service-specific files without worrying about modifying other services' files by mistake.

For our example application, we have the following directories:

Project
├─── web
├─── app
└─── db

We have already covered how to containerize a Python component in the first part of this blog post series. The same applies to the other project components, but we skip the details for them, as samples implementing the structure we discuss here are easy to access. The nginx-flask-mysql example provided by the awesome-compose repository is one of them.

This is the updated project structure with the Dockerfile in place. Assume we have a similar setup for the web and db components.

Project
├─── web
├─── app
│    ├─── Dockerfile
│    ├─── requirements.txt
│    └─── src
│         └─── server.py
└─── db

We could now start the containers manually for all our containerized project components. However, to make them communicate we would have to handle the network creation manually and attach the containers to it. Doing this frequently would be complicated and would eat precious development time.

Here is where Docker Compose offers a very easy way of coordinating containers and spinning up and taking down services in our local environment. For this, all we need to do is write a Compose file containing the configuration for our project’s services. Once we have it, we can get the project running with a single command.

Compose file

Let's see the structure of a Compose file and how we can manage the project services with it.

Below is a sample file for our project. As you can see, we define a list of services. In the db section we specify the base image directly, as we don't have any particular configuration to apply to it. Meanwhile, our web and app services have their images built from their Dockerfiles. Depending on where the service image comes from, we set either the build or the image field; the build field requires a path containing a Dockerfile.

docker-compose.yaml
version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD=password
  app:
    build: app
    restart: always
  web:
    build: web
    restart: always
    ports:
      - 80:80

To initialize the database, we pass environment variables with the DB name and password, while for our web service we map the container port to a local port so that we can access the web interface of our project.

Let’s see how to deploy the project with Docker Compose. 

All we need to do now is place the docker-compose.yaml at the root directory of the project and then issue the deployment command with docker-compose.

Project
├─── docker-compose.yaml
├─── web
├─── app
└─── db

Docker Compose takes care of pulling the mysql image from Docker Hub and launching the db container, while for our web and app services it builds the images locally and then runs the containers from them. It also creates a default network and places all containers in it so that they can reach each other.

All this is triggered with only one command.

$ docker-compose up -d
Creating network "project_default" with the default driver
Pulling db (mysql:8.0.19)...
...
Status: Downloaded newer image for mysql:8.0.19
Building app
Step 1/6 : FROM python:3.8
 ---> 7f5b6ccd03e9
Step 2/6 : WORKDIR /code
 ---> Using cache
 ---> c347603a917d
Step 3/6 : COPY requirements.txt .
 ---> fa9a504e43ac
Step 4/6 : RUN pip install -r requirements.txt
 ---> Running in f0e93a88adb1
Collecting Flask==1.1.1
...
Successfully tagged project_app:latest
WARNING: Image for service app was built because it did not already exist. To rebuild this image you must use docker-compose build or docker-compose up --build.
Building web
Step 1/3 : FROM nginx:1.13-alpine
1.13-alpine: Pulling from library/nginx
...
Status: Downloaded newer image for nginx:1.13-alpine
 ---> ebe2c7c61055
Step 2/3 : COPY nginx.conf /etc/nginx/nginx.conf
 ---> a3b2a7c8853c
Step 3/3 : COPY index.html /usr/share/nginx/html/index.html
 ---> 9a0713a65fd6
Successfully built 9a0713a65fd6
Successfully tagged project_web:latest
Creating project_web_1 ... done
Creating project_db_1 ... done
Creating project_app_1 ... done

Check the running containers:

$ docker-compose ps
    Name          Command                          State   Ports
-------------------------------------------------------------------------
project_app_1   /bin/sh -c python server.py      Up
project_db_1    docker-entrypoint.sh --def ...   Up      3306/tcp, 33060/tcp
project_web_1   nginx -g daemon off;             Up      0.0.0.0:80->80/tcp

To stop and remove all project containers run:

$ docker-compose down
Stopping project_db_1 ... done
Stopping project_web_1 ... done
Stopping project_app_1 ... done
Removing project_db_1 ... done
Removing project_web_1 ... done
Removing project_app_1 ... done
Removing network project_default

To rebuild the images, we run a build and then an up command to update the state of the project containers:

$ docker-compose build
$ docker-compose up -d

As we can see, it is quite easy to manage the lifecycle of the project containers with docker-compose.

Best practices for writing Compose files

Let us analyse the Compose file and see how we can optimise it by applying some best practices.

Network separation

When we have several containers, we need to control how they are wired together. We need to keep in mind that, since we do not set any network in the Compose file, all our containers end up in the same default network.

This may not be desirable if, for instance, only our Python service should be able to reach the database. To address this, we can define separate networks in the Compose file for each pair of components that need to communicate. In that case, the web component won't be able to access the DB.

Docker Volumes

Every time we take down our containers, we remove them and therefore lose the data stored in previous sessions. To avoid this and persist DB data across container lifecycles, we can use named volumes. For this, we simply define a named volume in the Compose file and specify a mount point for it in the db service, as shown below:

version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - backend-network
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD=password
  app:
    build: app
    restart: always
    networks:
      - backend-network
      - frontend-network
  web:
    build: web
    restart: always
    ports:
      - 80:80
    networks:
      - frontend-network
volumes:
  db-data:
networks:
  backend-network:
  frontend-network:

If we want, we can explicitly remove the named volumes by passing the --volumes flag to docker-compose down.

Docker Secrets

As we can observe in the Compose file, we set the db password in plain text. To avoid this, we can use Docker secrets to store the password and share it securely with the services that need it. We define secrets and reference them in services as below. The password is stored locally in the project/db/password.txt file and mounted in the containers under /run/secrets/<secret-name>.

version: "3.7"
services:
  db:
    image: mysql:8.0.19
    command: '--default-authentication-plugin=mysql_native_password'
    restart: always
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - backend-network
    environment:
      - MYSQL_DATABASE=example
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db-password
  app:
    build: app
    restart: always
    secrets:
      - db-password
    networks:
      - backend-network
      - frontend-network
  web:
    build: web
    restart: always
    ports:
      - 80:80
    networks:
      - frontend-network
volumes:
  db-data:
secrets:
  db-password:
    file: db/password.txt
networks:
  backend-network:
  frontend-network:
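On the application side, the Python service can read the mounted secret as a plain file under /run/secrets. A minimal sketch, assuming a helper of our own devising (read_secret is not part of the project code):

```python
import os

def read_secret(name, secrets_dir='/run/secrets'):
    # Docker mounts each secret as a file named after the secret;
    # strip the trailing newline that editors usually add.
    with open(os.path.join(secrets_dir, name)) as f:
        return f.read().strip()

# Example use in the app service:
# db_password = read_secret('db-password')
```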

We now have a well-defined Compose file for our project that follows best practices. An example application exercising all the aspects we discussed can be found here.

What’s next?

This blog post showed how to set up a container-based multi-service project where a Python service is wired to other services and how to deploy it locally with Docker Compose.

In the next and final part of this series, we show how to update and debug the containerized Python component.

Resources

Project sample: https://github.com/aiordache/demos/tree/master/dockercon2020-demo
Docker Compose: https://docs.docker.com/compose/
Project skeleton samples: https://github.com/docker/awesome-compose
The post Containerized Python Development – Part 2 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/