Containerize Your Go Developer Environment – Part 2

This is the second part in a series of posts where we show how to use Docker to define your Go development environment in code. The goal of this is to make sure that you, your team, and the CI are all using the same environment. In part 1, we explained how to start a containerized development environment for local Go development, building an example CLI tool for different platforms and shrinking the build context to speed up builds. Now we are going to go one step further and learn how to add dependencies to make the project more realistic, caching to make the builds faster, and unit tests.

Adding dependencies

The Go program from part 1 is very simple and doesn't have any Go dependencies. Let's add a simple dependency – the commonly used github.com/pkg/errors package:

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/pkg/errors"
)

func echo(args []string) error {
	if len(args) < 2 {
		return errors.New("no message to echo")
	}
	_, err := fmt.Println(strings.Join(args[1:], " "))
	return err
}

func main() {
	if err := echo(os.Args); err != nil {
		fmt.Fprintf(os.Stderr, "%+v\n", err)
		os.Exit(1)
	}
}
```

Our example program is now a simple echo program that writes out the arguments the user passes, or prints "no message to echo" with a stack trace if nothing is specified.

We will use Go modules to handle this dependency. Running the following commands will create the go.mod and go.sum files:

```console
$ go mod init
$ go mod tidy
```
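For reference, the generated go.mod will look something like this (a sketch only: the module path here is an assumption, and the version is whatever go mod tidy resolves at the time — v0.9.1 matches the build output shown later):

```
module example

go 1.14

require github.com/pkg/errors v0.9.1
```

The go.sum file, which holds checksums for the modules, is generated alongside it and should also be committed.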

Now when we run the build, we will see that the dependencies are downloaded each time we build:

```console
$ make
[+] Building 8.2s (7/9)
 => [internal] load build definition from Dockerfile        0.0s
…
 => [build 3/4] COPY . .                                    0.1s
 => [build 4/4] RUN GOOS=darwin GOARCH=amd64 go build -o /out/example .   7.9s
 => => # go: downloading github.com/pkg/errors v0.9.1
```

This is clearly inefficient and slows things down. We can fix this by downloading our dependencies as a separate step in our Dockerfile:

```dockerfile
FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
ARG TARGETOS
ARG TARGETARCH
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM scratch AS bin-unix
COPY --from=build /out/example /
…
```

Notice that we copy the go.* files and download the modules before adding the rest of the source. This allows Docker to cache the modules, as it will only rerun these steps if the go.* files change.

Caching

Separating the downloading of our dependencies from our build is a great improvement, but each time we run the build, we start the compile from scratch. For small projects this might not be a problem, but as your project grows you will want to leverage Go's compiler cache.

To do this, you will need to use BuildKit’s Dockerfile frontend (https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md). Our updated Dockerfile is as follows:

```dockerfile
# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS build
ARG TARGETOS
ARG TARGETARCH
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build \
    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM scratch AS bin-unix
COPY --from=build /out/example /
…
```

Notice the `# syntax` line at the top of the Dockerfile that selects the experimental Dockerfile frontend, and the `--mount` option attached to the RUN command. This mount option means that each time the go build command is run, the container will have the cache mounted to Go's compiler cache folder.

Benchmarking this change for the example binary on a 2017 MacBook Pro 13”, I see that a small code change takes 11 seconds to build without the cache and less than 2 seconds with it. This is a huge improvement!

Adding unit tests

All projects need tests! We’ll add a simple test for our echo function in a main_test.go file:

```go
package main

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestEcho(t *testing.T) {
	// Test happy path
	err := echo([]string{"bin-name", "hello", "world!"})
	require.NoError(t, err)
}

func TestEchoErrorNoArgs(t *testing.T) {
	// Test empty arguments
	err := echo([]string{})
	require.Error(t, err)
}
```

These tests ensure that echo writes out its arguments without error, and that it returns an error when passed an empty list of arguments. As testify is a new dependency, you will need to run go mod tidy again to add it to the go.mod and go.sum files.

We will now want another build target for our Dockerfile so that we can run the tests and build the binary separately. This will require a refactor into a base stage and then unit-test and build stages:

```dockerfile
# syntax = docker/dockerfile:1-experimental

FROM --platform=${BUILDPLATFORM} golang:1.14.3-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY go.* .
RUN go mod download
COPY . .

FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=type=cache,target=/root/.cache/go-build \
    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/example .

FROM base AS unit-test
RUN --mount=type=cache,target=/root/.cache/go-build \
    go test -v .

FROM scratch AS bin-unix
COPY --from=build /out/example /
…
```

Note that go test uses the same cache as the build, so we mount the cache for this stage too. This allows Go to only rerun tests if there have been code changes, which makes the test runs quicker.

We can also update our Makefile to add a test target:

```makefile
all: bin/example
test: lint unit-test

PLATFORM=local

.PHONY: bin/example
bin/example:
	@docker build . --target bin \
	--output bin/ \
	--platform ${PLATFORM}

.PHONY: unit-test
unit-test:
	@docker build . --target unit-test
```

What’s next?

In this post we have seen how to add Go dependencies efficiently, caching to make builds faster, and unit tests to our containerized Go development environment. In the next and final post of the series, we are going to complete our journey and learn how to add a linter, set up GitHub Actions CI, and apply some extra build optimizations.

You can find the finalized source for this example on my GitHub: https://github.com/chris-crone/containerized-go-dev

You can read more about the experimental Dockerfile syntax here: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md

If you're interested in builds at Docker, take a look at the Buildx repository: https://github.com/docker/buildx

Read the whole blog post series here.
The post Containerize Your Go Developer Environment – Part 2 appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

5 ways to boost your Kubernetes knowledge

opensource.com – When the cloud was still in its formative years, developers discovered that it was convenient to write applications in small, atomic, minimal Linux images that shared resources with the server they r…
Source: news.kubernauts.io

How to Develop Inside a Container Using Visual Studio Code Remote Containers

This is a guest post from Jochen Zehnder. Jochen is a Docker Community Leader and works as a Site Reliability Engineer for 56K.Cloud. He started his career as a Software Developer, where he learned the ins and outs of creating software. He is focused not only on development but also on automation, to bridge the gap to the operations side. At 56K.Cloud he helps companies to adopt technologies and concepts like Cloud, Containers, and DevOps. 56K.Cloud is a technology company from Switzerland focusing on Automation, IoT, Containerization, and DevOps.

Jochen Zehnder joined 56K.Cloud in February, after working as a software developer for several years. He always tries to make life easier for everybody involved in the development process. One VS Code feature that excels at this is the Visual Studio Code Remote – Containers extension. It is one of many extensions of the Visual Studio Code Remote Development feature.

This post is based on the work Jochen did for the 56K.Cloud internal handbook. It uses Jekyll to generate a static website out of Markdown files. This is a perfect example of how to make life easier for everybody: nobody should have to know how to install or configure Jekyll to make changes to the handbook. With the Remote Development feature, you add all the configurations and customizations to the version control system of your project. This means a small group implements it, and the whole team benefits.

One thing I need to mention is that, as of now, this feature is still in preview. However, I never ran into any issues while using it, and I hope that it will get out of preview soon.

## Prerequisites

You need to fulfil the following prerequisites to use this feature:

* Install Docker and Docker Compose
* Install Visual Studio Code
* Install the Remote – Containers extension

## Enable it for an existing folder

The Remote – Containers extension provides several ways to develop in a container. You can find more information in the documentation, with several quick start sections. In this post, I will focus on how to enable this feature for an existing local folder.

As with all the other VS Code extensions, you also manage this with the Command Palette. You can either use the shortcut or the green button in the bottom left corner to open it. In the popup, search for Remote-Containers and select Open Folder in Container…

VS Code Command Palette

In the next popup, you have to select the folder which you want to open in the container. For this folder, you then need to Add the Development Container Configuration Files. VS Code shows you a list of predefined container configurations. In my case, I selected the Jekyll configuration. After that, VS Code starts building the container image and opens the folder in the container.

Add Development Container Configuration Files

If you now have a look at the Explorer, you can see that there is a new folder called `.devcontainer`. In my case, it added two files. The `Dockerfile` contains all the instructions to build the container image. The `devcontainer.json` contains all the needed runtime configurations. Some of the predefined containers will add more files, for example in the `.vscode` folder, to add useful Tasks. You can have a look at the GitHub repo to find out more about the existing configurations. There you can also find information about how to use the provided template to write your own.

## Customizations

The predefined container definitions provide a basic configuration, but you can customize them. Making these adjustments is easy, and I explain the two changes I had to make below. The first was to install extra packages in the operating system. To do so, I added the instructions to the `Dockerfile`. The second change was to configure the port mappings. In the `devcontainer.json`, I uncommented the `forwardPorts` attribute and added the needed ports. Be aware that for some changes you just need to restart the container, whereas for others you need to rebuild the container image.
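To make this concrete, the two customizations could look like the following (a sketch only: it assumes the generated image is Debian-based, `git` is just an example package, and 4000 is Jekyll's default serve port — your packages and ports will differ):

```dockerfile
# In .devcontainer/Dockerfile: install extra OS packages
RUN apt-get update \
    && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*
```

```jsonc
// In .devcontainer/devcontainer.json: forward the ports you need
{
  "forwardPorts": [4000]
}
```

Restarting the container picks up the `devcontainer.json` change, while the `Dockerfile` change requires rebuilding the container image.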

## Using and sharing

After you opened the folder in the container, you can keep on working as you are used to. Even the terminal connects to the shell in the container. Whenever you open a new terminal, it will set the working directory to the folder you opened in the container. In my case, it allows me to type in the Jekyll commands to build and serve the site.

After I made all the configurations and customizations, I committed and pushed the new files to the git repository. This made them available to my colleagues, and they can benefit from my work.

## Summary

Visual Studio Code supports multiple ways to do remote development. The Visual Studio Code Remote – Containers extension allows you to develop inside a container. The configuration and customizations are all part of your code. You can add them to the version control system and share them with everybody working on the project.

## More Information

For more information about the topic, you can head over to the following links:

* VS Code Remote Development
* Visual Studio Code Remote – Containers
* VS Code Remote Development Container Definitions – GitHub Repo

The Remote – Containers extension uses Docker as the container runtime. There is also a Docker extension, called Docker for Visual Studio Code. Brian gave a very good introduction at DockerCon LIVE 2020. The recording of his talk, Become a Docker Power User With Microsoft Visual Studio Code, is available online.

## Find out more about 56K.Cloud

We love Cloud, IoT, Containers, DevOps, and Infrastructure as Code. If you are interested in chatting connect with us on Twitter or drop us an email: info@56K.Cloud. We hope you found this article helpful. If there is anything you would like to contribute or you have questions, please let us know!

This post originally appeared here.
The post How to Develop Inside a Container Using Visual Studio Code Remote Containers appeared first on Docker Blog.
Source: https://blog.docker.com/feed/