Towards a Kubernetes native Future

medium.com – Cloud native technologies have now become mainstream. Enterprises are increasingly looking at CNCF and its flagship conference KubeCon/CloudNativeCon to keep up-to-date on the latest advancements in …
Source: news.kubernauts.io

Stuck between ECS and Kubernetes?

towardsdatascience.com – Both Amazon EC2 Container Service (ECS) and Kubernetes are fast, highly scalable container management solutions that allow you to run containerized applications in a cluster of managed servers. Befor…
Source: news.kubernauts.io

The Complete DevOps RoadMap

medium.com – DevOps is really hot at the moment and most of my friends, colleagues, and senior developers I know are working hard to become a DevOps engineer and project themselves as DevOps champion in their org…
Source: news.kubernauts.io

Up Your Productivity with these DockerCon LIVE 2020 Sessions

DockerCon LIVE 2020 is the opportunity for the Docker Community to connect while socially distancing, have fun, and learn from each other. From beginner to expert, DockerCon speakers are getting ready to share their tips and tricks for becoming more productive and finding joy while building apps.

From the Docker team, Engineer Dave Scott will share how recent updates to Docker Desktop help deliver quicker edit-compile-test cycles to developers. He’ll dive into the New Docker Desktop Filesharing Features that enabled this change and how to use them effectively to speed up your dev workflow.

Docker Engineer Anca Lordache will cover Best Practices for Compose-managed Python Applications, from bootstrapping your project to making your builds reproducible and optimized.

If you are a Node.js developer, DigitalOcean’s Kathleen Juell will share How to Build and Run Node Apps with Docker and Compose.

And for PHP devs, Erika Heidi from DigitalOcean will demonstrate How to Create PHP Development Environments with Docker Compose, using a Laravel 6 application as a case study. She’ll show how to define and integrate services, share files between containers, and manage your environment with Docker Compose commands.

Or if it’s VS Code or bust, Docker Captain Brian Christner will share how to Become a Docker Power User with Microsoft Visual Studio Code. He’ll cover streamlining your Docker Desktop development workflow with less context switching and built-in shortcuts, bootstrapping new projects, quickly writing Dockerfiles using templates, and building, running, and interacting with containers, all from VS Code.

Apply automation to your workflow and Build a Docker Image Packaging Pipeline Using GitHub Actions. Gareth Rushgrove, Director of Product at Snyk, will take attendees through a hello-world example of building and publishing a single Docker image using the new build-push-action from Docker. Then he will cover best practices for building reusable Dockerfiles, building up to show how multi-stage Dockerfiles and BuildKit help you build smaller images faster.

Speaking of advanced build techniques, Docker Captain Nicolas Dille will dive into How to Improve your Image Builds Using Advanced Docker Build. If you aren’t using BuildKit as your image builder, you will be after this session! Nicolas will teach you how to use Docker’s advanced builder to make image builds faster and more secure.

But what about testing? Today’s applications carry a monumental number of external dependencies, and unit testing alone isn’t enough to deliver quality code. Andrew Crumpler from Global Freight Solutions shares how Using Docker and Compose to Simplify Integration Testing makes Docker an invaluable part of a developer’s toolkit, delivering consistency, convenience, and certainty in integration testing.

We’ve highlighted just 8 of the 38+ sessions in this post. Check out what else is on the agenda and start saving sessions to your calendar.
Source: https://blog.docker.com/feed/

Speed Up Your Development Flow With These Dockerfile Best Practices

The Dockerfile is the starting point for creating a Docker image. The file format provides a well-defined set of directives that allow you to copy files or folders, run commands, set environment variables, and do other tasks required to create a container image. It’s really important to craft your Dockerfile well to keep the resulting image secure, small, quick to build, and quick to update.

In this post, we’ll see how to write good Dockerfiles that speed up your development flow, ensure build reproducibility, and produce images that can be confidently deployed to production.

Note: for this blog post we’ll base our Dockerfile examples on the react-java-mysql sample from the awesome-compose repository.

Development flow

As developers, we want to match our development environment to the target production context as closely as possible to ensure that what we build will work when deployed.

We also want to develop quickly, which means we want builds to be fast and to be able to use developer tools like debuggers. Containers are a great way to codify our development environment, but we need to define our Dockerfile correctly to be able to interact quickly with our containers.

Incremental builds

A Dockerfile is a list of instructions for building your container image. The Docker builder caches the result of each step as an image layer, but the cache can be invalidated; when that happens, the step that invalidated the cache and all subsequent steps must be rerun and the corresponding layers regenerated.

The cache is invalidated when files in the build context that are referenced by COPY or ADD change. The ordering of the steps can therefore have drastic effects on performance.

Let’s take a look at an example where we build a Node.js project in a Dockerfile. In this project, the dependencies are specified in the package.json file and are fetched when the npm ci command is run.

The simplest Dockerfile would be:

FROM node:lts
ENV CI=true
ENV PORT=3000
WORKDIR /code
COPY . /code
RUN npm ci
CMD [ "npm", "start" ]

Structuring the Dockerfile as above causes the cache to be invalidated at the COPY line any time a file in the build context changes. This means the dependencies are re-fetched and the node_modules directory repopulated whenever any file changes, not just package.json, and that can take a long time.

To avoid this and only fetch the dependencies when they change (i.e., when package.json or package-lock.json changes), we should consider separating the dependency installation from the build and run of our application.

A more optimized Dockerfile would be this:

FROM node:lts
ENV CI=true
ENV PORT=3000
WORKDIR /code
COPY package.json package-lock.json /code/
RUN npm ci
COPY src /code/src
CMD [ "npm", "start" ]

Using this separation, if there are no changes in package.json or package-lock.json, then the cache will be used for the layer generated by the RUN npm ci instruction. This means that when you edit your application source and rebuild, the dependencies won’t be redownloaded, which saves time.

We also limit the second COPY to the src directory as explained in a previous post.
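If you do need a broad COPY . in a Dockerfile, a .dockerignore file is another way to keep unneeded files out of the build context and avoid needless cache invalidation. A minimal sketch for this kind of Node.js project (not part of the original example) might be:

# .dockerignore: keep the build context, and cache churn, small
node_modules
build
.git
*.md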

Keep live reload active between the host and the container

This tip is not directly related to the Dockerfile but we often hear this kind of question: How do I keep live reload active while running the app in a container and modifying the source code from my IDE on the host machine?

With our example, we need to mount our project directory into the container and pass an environment variable to enable polling in Chokidar, the library that wraps Node.js file-change events, because filesystem events from the host are not always propagated into the container.

$ docker run -e CHOKIDAR_USEPOLLING=true -v ${PWD}/src/:/code/src/ -p 3000:3000 repository/image_name
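The same setup can also be captured in a Compose file so the flags don’t have to be retyped each time. A hypothetical docker-compose.yml fragment (an assumption, not part of the original post; the service name is illustrative) could look like:

# docker-compose.yml: hypothetical sketch of the same dev setup
version: "3.8"
services:
  frontend:
    image: repository/image_name
    environment:
      - CHOKIDAR_USEPOLLING=true
    volumes:
      - ./src:/code/src
    ports:
      - "3000:3000"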

Consistent builds

One of the most important properties of a Dockerfile is that it builds the exact same image from the same build context (sources, dependencies…).

We’ll continue to improve the Dockerfile defined in the previous section.

Build consistently from sources

As we saw in the previous section, we’re able to build an application by adding the source files and dependencies in the Dockerfile description and then running commands on them.

But in our previous example we aren’t able to confirm that the image generated will be the same each time we run a docker build… Why? Because each time a new Node.js LTS version is released, the lts tag moves to point at the latest LTS version of the Node.js image, which changes over time and could introduce breaking changes. We can easily fix this by using a more specific tag for the base image (we’ll let you choose between LTS and the latest stable version).

FROM node:13.12.0
ENV CI=true
ENV PORT=3000
WORKDIR /code
COPY package.json package-lock.json /code/
RUN npm ci
COPY src /code/src
CMD [ "npm", "start" ]

We’ll see in the No more latest image version section that there are other advantages to using more specific base image tags and avoiding the latest tag.

Multi-stage and targets to match the right environment

We made the development build consistent, but how can we do this for the production artifact?

Since Docker 17.05, we can use multi-stage builds to define steps to produce our final image. Using this mechanism in our Dockerfile, we’ll be able to split the image we use for our development flow from that used to build the application and that used in production.

FROM node:13.12.0 AS development
ENV CI=true
ENV PORT=3000
WORKDIR /code
COPY package.json package-lock.json /code/
RUN npm ci
COPY src /code/src
CMD [ "npm", "start" ]

FROM development AS builder
RUN npm run build

FROM nginx:1.17.9 AS production
COPY --from=builder /code/build /usr/share/nginx/html

Each time you see FROM … AS … it’s a build stage. So we now have a development, a builder, and a production stage. We can continue to use a container for our development flow by building the specific development stage image with the --target flag.

$ docker build --target development -t repository/image_name:development .

And use it as usual:

$ docker run -e CHOKIDAR_USEPOLLING=true -v ${PWD}/src/:/code/src/ repository/image_name:development

A docker build without the --target flag will build the final stage, which in this case is the production image. Our production image is simply an nginx image with the binaries built in the previous stages placed where they will be served.
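For example, the production image could be built and run like this (the tag name is just an illustration):

$ docker build -t repository/image_name:production .
# nginx listens on port 80 inside the container
$ docker run -p 80:80 repository/image_name:production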

Production ready

It’s really important to keep your production image as lean and as secure as possible. Here are a few things to check before running a container in production.

No more latest image version

As we previously saw in the Build consistently from sources section, using a specific base image tag helps make the image build reproducible. There are at least two other very good reasons to use more specific tags for your images:

You can easily find all the containers running with an image version in your favorite orchestrator (Swarm, Kubernetes…)

# Search in Docker engine containers using our repository/image_name:development image
$ docker inspect $(docker ps -q) | jq -c '.[] | select(.Config.Image == "repository/image_name:development") | "\(.Id) \(.State) \(.Config)"'
"89bf376620b0da039715988fba42e78d42c239446d8cfd79e4fbc9fbcc4fd897 {"Status":"running","Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"Dead":false,"Pid":25463,"ExitCode":0,"Error":"","StartedAt":"2020-04-20T09:38:31.600777983Z","FinishedAt":"0001-01-01T00:00:00Z"}{"Hostname":"89bf376620b0","Domainname":"","User":"","AttachStdin":false,"AttachStdout":true,"AttachStderr":true,"ExposedPorts":{"3000/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["CHOKIDAR_USEPOLLING=true","PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","NODE_VERSION=12.16.2","YARN_VERSION=1.22.4","CI=true","PORT=3000"],"Cmd":["npm","start"],"Image":"repository/image_name:development","Volumes":null,"WorkingDir":"/code","Entrypoint":["docker-entrypoint.sh"],"OnBuild":null,"Labels":{}}"

# Search in k8s pods running a container with our repository/image_name:development image (using the jq cli)
$ kubectl get pods --all-namespaces -o json | jq -c '.items[] | select(.spec.containers[].image == "repository/image_name:development") | .metadata'
{"creationTimestamp":"2020-04-10T09:41:55Z","generateName":"image_name-78f95d4f8c-","labels":{"com.docker.default-service-type":"","com.docker.deploy-namespace":"docker","com.docker.fry":"image_name","com.docker.image-tag":"development","pod-template-hash":"78f95d4f8c"},"name":"image_name-78f95d4f8c-gmlrz","namespace":"docker","ownerReferences":[{"apiVersion":"apps/v1","blockOwnerDeletion":true,"controller":true,"kind":"ReplicaSet","name":"image_name-78f95d4f8c","uid":"5ad21a59-e691-4873-a6f0-8dc51563de8d"}],"resourceVersion":"532","selfLink":"/api/v1/namespaces/docker/pods/image_name-78f95d4f8c-gmlrz","uid":"5c70f340-05f1-418f-9a05-84d0abe7009d"}

In case of a CVE (Common Vulnerabilities and Exposures) report, you can quickly tell whether or not you need to patch your containers and image descriptions.
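For instance, when a patched base image tag is published, you can list the tags you have locally to decide what needs rebuilding (the repository name is just an illustration):

# List the locally present tags of our image to see what may need a rebuild
$ docker image ls --filter=reference='repository/image_name:*' --format 'table {{.Repository}}\t{{.Tag}}\t{{.CreatedAt}}'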

From our example, we could specify the Alpine variants of our development and production images:

FROM node:13.12.0-alpine AS development
ENV CI=true
ENV PORT=3000
WORKDIR /code
COPY package.json package-lock.json /code/
RUN npm ci
COPY src /code/src
CMD [ "npm", "start" ]

FROM development AS builder
RUN npm run build

FROM nginx:1.17.9-alpine
COPY --from=builder /code/build /usr/share/nginx/html
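The size difference is easy to verify locally, for example (a quick check, not from the original post):

# Pull both variants and compare their on-disk sizes
$ docker pull node:13.12.0 && docker pull node:13.12.0-alpine
$ docker image ls node --format 'table {{.Tag}}\t{{.Size}}'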

Use official images

You can use Docker Hub to search for base images to use in your Dockerfile; some of these are officially supported by Docker. We strongly recommend using these images, as:

their content has been verified
they're updated quickly when a CVE is fixed

You can add an image_filter request query param to only get the official images.

https://hub.docker.com/search?q=nginx&type=image&image_filter=official

All the previous examples in this post used official images of Node.js and NGINX.

Just enough permissions!

All applications, whether running in a container or not, should adhere to the principle of least privilege, which means an application should only access the resources it needs.

In the case of malicious behavior, or simply because of bugs, a process running with too many privileges may have unexpected consequences for the whole system at runtime.

Because the official Node.js image already ships with a non-root user, we'll switch to the backend's Dockerfile for this example.

Configuring an image to run as an unprivileged user is very easy:

FROM maven:3.6.3-jdk-11 AS builder
WORKDIR /workdir/server
COPY pom.xml /workdir/server/pom.xml
RUN mvn dependency:go-offline
# Copy the application sources before packaging (mvn package has nothing to build otherwise)
COPY src /workdir/server/src
RUN mvn package

FROM openjdk:11-jre-slim
# Create a dedicated group and user with the Debian flags (-S/-G are BusyBox/Alpine syntax)
RUN addgroup --system java && adduser --system --ingroup java javauser
USER javauser
EXPOSE 8080
COPY --from=builder /workdir/server/target/project-0.0.1-SNAPSHOT.jar /project-0.0.1-SNAPSHOT.jar
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/project-0.0.1-SNAPSHOT.jar"]

Simply by creating a new group, adding a user to it, and using the USER directive, we can run our container as a non-root user.
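You can quickly confirm which user a container will run as, for example (the image name is just an illustration):

# Print the uid/gid of the container's main process; it should report javauser, not root
$ docker run --rm repository/backend_image_name id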

Conclusion

In this blog post we just showed some of the many ways to optimize and secure your Docker images by carefully crafting your Dockerfile. If you’d like to go further you can take a look at: 

Our official documentation about Dockerfile best practices
A previous post on the subject by Tibor Vass
A session during DockerCon 2019 by Tibor Vass and Sebastiaan van Stijn
Another session during Devoxx 2019 by Jérémie Drouet and myself
Source: https://blog.docker.com/feed/