Deploying Dockerized .NET Apps Without Being a DevOps Guru

This is a guest post by Julie Lerman. She is a Docker Captain, published author, Microsoft Regional Director and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can follow Julie on her blog at thedatafarm.com/blog, or on Twitter at @julielerman.

.NET Developers who use Visual Studio have access to a great extension to help them create Docker images for their apps. The Visual Studio Tools for Docker simplify the task of developing and debugging apps destined for Docker images. But what happens when you are ready to move from debugging in Visual Studio to deploying your image to a container in the cloud?

This blog post will demonstrate first using the tooling to publish a simple ASP.NET Core API in an image to Docker Hub, and then creating a Linux virtual machine in Azure to host the API. It will also use Docker Compose and Microsoft SQL Server for Linux in a Docker container, along with a Docker volume for persistence. The goal is a simple test environment and a low-stress path to your first experience publishing an app with Docker.

Using the Docker Tools to aid in building and debugging the API is the focus of a series of articles published in the April, May and June 2019 issues of MSDN Magazine, so I'll provide only a high-level look at the solution here.

Overview of the Sample App

The API allows me to track the names of Docker Captains. It’s not a real-world solution, but enough to give me something to work with. You can download the solution from github.com/julielerman/dockercaptains. I’ll provide a few highlights here.

public class Captain
{
    public int CaptainId { get; set; }
    public string Name { get; set; }
}

The API leverages Entity Framework Core (EF Core) for its data persistence. This requires a class that inherits from the EF Core DbContext. My class, CaptainContext, specifies a DbSet to work from and defines a bit of seed data for the database.

Enabling a Dynamic Connection String

The startup.cs file uses ASP.NET Core’s dependency injection to configure a SQL Server provider for the CaptainContext. There is also code to read a connection string from an environment variable within the Docker container and update a password placeholder that’s less visible to prying eyes.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
    var conn = Configuration["ConnectionStrings:CaptainDB"];
    conn = conn.Replace("ENVPW", Configuration["DB_PW"]);
    services.AddDbContext<CaptainContext>(options => options.UseSqlServer(conn));
}

The VS Tools generated a Dockerfile and I only made one change to the default — adding the CaptainDB connection string ENV variable with its ENVPW placeholder:

ENV ConnectionStrings:CaptainDB "Server=db;Database=CaptainDB;User=sa;Password=ENVPW;"

ASP.NET Core can discover Docker environment variables when running in a Docker container.

Orchestrating with a docker-compose file

Finally comes the docker-compose.yml file. This sets up a service for the API image, another for the database server image and a volume for persisting the data.

version: '3.4'

services:
  dataapidocker:
    image: ${DOCKER_REGISTRY-}dataapidocker
    build:
      context: .
      dockerfile: DataAPIDocker/Dockerfile
    environment:
      - DB_PW
    depends_on:
      - db
    ports:
      - 80:80
  db:
    image: mcr.microsoft.com/mssql/server
    volumes:
      - mssql-server-julie-data:/var/opt/mssql/data
    environment:
      SA_PASSWORD: "${DB_PW}"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"

volumes:
  mssql-server-julie-data: {}

Notice that I’m declaring the DB_PW environment variable in the API’s service definition and referencing it in the db’s service definition.

There’s also an .env file in the solution where the value of DB_PW is hidden.

DB_PW=P@ssword1

Docker will read that file by default. I go into more detail about the .env file on my blog. 
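The .env format is just line-oriented KEY=VALUE pairs. As a rough illustration of the kind of parsing Compose does (a simplified sketch in Go, not Compose's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// parseEnv does simplified KEY=VALUE parsing in the spirit of a Compose
// .env file: blank lines and #-comments are skipped, and a value keeps
// everything after the first '='.
func parseEnv(contents string) map[string]string {
	vars := map[string]string{}
	for _, line := range strings.Split(contents, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) == 2 {
			vars[parts[0]] = parts[1]
		}
	}
	return vars
}

func main() {
	env := parseEnv("# database credentials\nDB_PW=P@ssword1\n")
	fmt.Println(env["DB_PW"])
	// prints: P@ssword1
}
```

Compose then makes these values available for `${DB_PW}`-style interpolation in docker-compose.yml, which is how the SA_PASSWORD setting above gets its value.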

I got this solution set up and running from within Visual Studio on my development machine. And I love that even when the debugger publishes the app to a local container, I can still debug while it’s running in that container. That’s a super-power of the tools extension.

Using the Tools to Publish to Docker Hub

Once I was happy with my progress, I wanted to get this demo running in the cloud. Although I can easily use the CLI to push and pull, I love that the Docker Tools in VS can handle this part. The Dockerfile created by the tool has instructions for a multi-stage build. When you switch Visual Studio to a Release build, the tools will build the release image described in the Dockerfile. Publishing will rebuild that release image and publish it to your destination registry.
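For reference, the multi-stage Dockerfile the tools generate for an ASP.NET Core 2.2 project looks roughly like this sketch (your image tags, project names and stage layout may differ; the ENV line is my one addition mentioned earlier):

```dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base
WORKDIR /app
EXPOSE 80
ENV ConnectionStrings:CaptainDB "Server=db;Database=CaptainDB;User=sa;Password=ENVPW;"

FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY ["DataAPIDocker/DataAPIDocker.csproj", "DataAPIDocker/"]
RUN dotnet restore "DataAPIDocker/DataAPIDocker.csproj"
COPY . .
WORKDIR /src/DataAPIDocker
RUN dotnet publish "DataAPIDocker.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "DataAPIDocker.dll"]
```

The build stage uses the full SDK image to restore and publish, while the final stage copies the published output into the much smaller runtime image; that final stage is what gets pushed to the registry.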

You can see my full solution in the screenshot below. My API project is called DataAPIDocker. Notice there is also a docker-compose project. This was created by the Docker Tools.  But it is the DataAPIDocker project that will be published first into an image and then to a repository.

To publish, right-click the DataAPIDocker project in Solution Explorer and choose Publish. This will present a Publish page where you can choose to create a New Profile. A publish profile lets you define where to publish your app and also predefine any needed credentials. Creating a profile begins with selecting from a list of targets; for publishing a Docker image, select Container Registry. That option then gives you predefined registries to choose from, such as Azure Container Registry, Docker Hub, or a custom registry, which could be an instance of Docker Trusted Registry.

I’ll choose Docker Hub and click Publish. 

The last step is to provide your Docker Hub repository name. If you don’t already have docker.config set up with your credentials, then you also need to supply your password. 

Once created, the profile is stored in the Visual Studio project.

You’ll be returned to the Publish overview page with this profile selected, where you can edit the default “latest” tag name. Click the Publish button to trigger the Docker Tools to do their job. 

A window will open up showing the progress of the docker push command run by the tools.

After the push is complete, you can open Docker Hub to see your new repository, which by default is public.

Setting up an Azure Linux VM to Host the Containers

Now that the image is hosted in the cloud, you can turn your sights to hosting a container instance for running the app. Since my Visual Studio Subscription includes credits on Azure, I’ll use those.  I will create a Linux Virtual Machine on Azure with Docker and Docker Compose, then run an instance of my new image along with a SQL Server and a data volume.

I found two interesting paths for doing this at the command line. The first is the Azure CLI, which runs on Windows, macOS or Linux and is so much easier than doing it through the Azure Portal.

I found this doc to be really helpful as I was doing this for the first time. The article walks you through installing the Azure CLI, logging into Azure, creating a Linux VM with Docker already installed and then installing Docker Compose. Keep in mind that this will create a default machine using the "Standard DS1 v2" size (1 vCPU, 3.5 GB memory). That VM size has an estimated cost of about $54 (USD) per month.

Alternatively, you can use Docker Machine, a Docker tool for installing Docker on virtual hosts and managing the hosts. This path is a little more automated but it does require that you use bash and that you start by using the Azure CLI to log into your Azure account using the command az login.

Once that's done, you can use parameters of docker-machine to tell it you're creating this in Azure, and to specify your subscription, ssh username, port and the size of the machine to create. The size parameter uses standard Azure VM size names.

I found it useful to walk through the Azure CLI workflow first, since it was educational, and then treat the docker-machine workflow as a shortcut version.

Since I was still working on my Windows machine, and don't have the Windows Subsystem for Linux installed there, I opened up Visual Studio Code and switched my terminal shell to use bash. That let me use docker-machine without issue. I also have the Azure Login extension in VS Code, so I was already logged in to Azure.

I first had to get the subscription ID of my Azure Account which I did using the CLI. Then I plugged the id into the docker-machine command:

docker-machine create -d azure \
  --azure-subscription-id [this is where I pasted my subscription id] \
  --azure-ssh-user azureuser \
  --azure-open-port 80 \
  --azure-size "Standard_DS1_v2" \
  mylinuxvm

There are more settings you can apply, such as defining the resource group and location. The output from this command will pause, providing you with details for how to authorize docker-machine with the VM by plugging a provided code into a browser window. Once that's done, the command will continue its work and the output will forge ahead.

When it's finished, you'll see the message "Docker is up and running!" (on the new VM), followed by a very important message telling you to configure your local shell to target the VM by running:

"C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env mylinuxvm

Recall that I’m doing these tasks on Windows, so docker-machine is ensuring that I know where to find the executable. After performing this task, I can see the machine up and running in the Azure Portal. This lets me inspect other default configuration choices made because I didn’t specify them in the docker-machine command.

By default, all of the needed ports are set up for access such as 80 for http and 22 for ssh.

Re-Creating Docker-Compose and .env on the VM

We only need two files on this machine: the docker-compose.yml and the .env file.

Docker-machine allows you to easily ssh into the VM so that the commands you type execute on that machine.

docker-machine ssh mylinuxvm

Then you can use a linux editor such as nano to re-create the two files.

nano docker-compose.yml

And you can paste the contents of your docker-compose file there. This is the docker-compose file in my solution for the sample app, but there are two edits you'll need to make.

The original file depends on a variable supplied by the VS Docker Tools for the registry location. Change the value of image to point to your Docker Hub image:

image: julielerman/dataapidocker:formylinuxvm

You'll also need to change the version of docker-compose specified at the top of the file to 2.0, since you're moving from hosting on Windows to hosting on Linux.

In nano, you can save the docker-compose file with ^O. Then exit nano and run it again to create the .env file using the command:

nano .env

Paste the key value pair environment variable from the app and save the .env file.

Running the Container

I still had to install docker-compose on the new machine. Docker is nice enough to feed you the command for that if you try to run docker-compose before installing it.
sudo apt install docker-compose

Then I was able to run my containers with: sudo docker-compose up

One important thing I learned: The VS Docker tooling doesn’t define port mapping for the API service in docker-compose. That’s hidden in a docker-compose.override.yml file used by the debugger. If you look at the docker-compose file listed earlier in this article, you’ll see that I added it myself. Without it, when you try to browse to the API, you will get a Connection refused error.
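For comparison, the service entry in that override file looks something like this sketch (the file Visual Studio generates may differ in detail); Compose merges it with docker-compose.yml at debug time, which is what gives the API its port mapping:

```yaml
version: '3.4'

services:
  dataapidocker:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
```

Since the override file never leaves the development machine, any setting your containers need in production — like the 80:80 mapping — has to be added to docker-compose.yml itself, as I did.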

My ASP.NET Core API is now running, and I can browse to it at the public IP address specified for the VM. The HTTP GET of my Captains controller returns a list of the captains seeded in the database.

DevOps are for Devs, Too

As a developer who is often first in line to claim “I don’t do DevOps”, I was surprised at how simple it turned out to be to deploy the app I had created. So often I have allowed my development machine to be a gate that defined the limitations of my expertise. I can build the apps and watch them work on my development machine but I’ve usually left deployment to someone else.

While I have ventured into the Azure Portal frequently, the fact that the Docker Tools and the Azure CLI made it so simple to create the assets I needed for deploying the app made me wonder why I’d waited so long to try that out. And in reality, I didn’t have to deploy the app, just an image and then a docker-compose file. That the Docker Machine made it even easier to create those cloud assets was something of a revelation. 

Part of this workflow leveraged the Docker Tools for Visual Studio on Windows. But because I spend a lot of time in Visual Studio Code on my MacBook, I now have the confidence to explore using the Docker CLI for publishing the image to Docker Hub. After that I can just repeat the Docker Machine path to create the Azure VM where I can run my containers. 

If you want to learn more, these posts and articles are a great place to start:

5 Things to Try with Docker Desktop WSL 2 Tech Preview
Video Series: Modernizing .NET Apps for Developers
EF Core in a Docker Containerized App (3-part series in MSDN Magazine)
Julie's blog posts on docker-compose and publishing Docker images to Azure

Get Started with Docker Desktop


The post Deploying Dockerized .NET Apps Without Being a DevOps Guru appeared first on Docker Blog.

See Docker Enterprise 3.0 in Action in Our Upcoming Webinar Series

Docker Enterprise 3.0 represents a significant milestone for the industry-leading enterprise container platform. It is the only end-to-end solution for Kubernetes and modern applications that spans from the desktop to the cloud.  With Docker Enterprise 3.0, organizations can build, share, and run modern applications of any language or technology stack, on their choice of infrastructure and operating system.

To showcase all of the capabilities of the platform and highlight what is new in this release, we invite you to join our 5-part webinar series to explore the technologies that make up Docker Enterprise 3.0. You'll see several demos of the platform and gain a better understanding of how Docker can help your organization deliver high-velocity innovation while providing you the choice and security you need. We designed the webinar both for those new to containers and Kubernetes, as well as those who are just here to learn more about what's new. We're excited to share what we've been working on.

Sign Up for the Series

Here’s an overview of what we’ll be covering in each session.

Part 1: Content Management

Tuesday, August 13, 2019 @ 11am PDT / 2pm EDT

This webinar will cover the important aspects of tracking the provenance of your container images and securing them.

Part 2: Security

Wednesday, August 14, 2019 – 11am PDT / 2pm EDT

Learn how Docker Enterprise uses a multi-layered approach to security in delivering a secure software supply chain. 

Part 3: Docker Applications

Thursday, August 15, 2019 @ 11am PDT / 2pm EDT

Find out how you can accelerate developer productivity with the use of application templates and Docker Applications – a new way to define multi-service applications based on the CNAB specification.

Part 4: Operations Management

Tuesday, August 20, 2019 – 11am PDT / 2pm EDT

Discover how Docker Enterprise makes Day 1 and Day 2 operations simple and repeatable.

Part 5: Docker Kubernetes Service

Wednesday, August 21, 2019 – 11am PDT / 2pm EDT

See how Docker Kubernetes Service (DKS) makes Kubernetes easy to use and more secure for the entire organization.


To learn more about Docker Enterprise 3.0:

Register for the webinar series
Test drive Docker Enterprise 3.0 with a free, hosted trial


Docker Release Party Recap

We Celebrated the Launch of Docker Enterprise 3.0 and Docker 19.03 Last Week

Last week, Docker Captain Bret Fisher hosted a 3-day Release Party for Docker 19.03 and Docker Enterprise 3.0. Captains and the Docker team demonstrated some of their favorite new features and answered live audience questions. Here are the highlights (You can check out the full release party here).

Docker Desktop Enterprise

To kick things off, Docker Product Manager Ben De St Paer-Gotch shared Docker Desktop Enterprise. Docker Desktop Enterprise ships with the Enterprise Engine and includes a number of features that make enterprise development easier and more productive. For example, version packs allow developers to switch between Docker Engine versions and Kubernetes versions, all from the desktop.

For admins, Docker Desktop Enterprise includes the ability to lock down the settings of Docker Desktop, so developers’ machines stay aligned with corporate requirements. Ben also demonstrated Docker Application Designer, a feature that allows users to create new Docker applications by using a library of templates, making it easier for developers in the enterprise to get updated app templates – or “gold standard” versions like the right environment variable settings, custom code, custom editor settings, etc. – without a dependency on central IT.

Check out the full demo and discussion here:

Docker Buildx

Docker Captain Sujay Pillai shared the power of Buildx, the next-generation image builder. Docker Buildx is a CLI plugin that extends the docker command with the features provided by the Moby BuildKit builder toolkit. It supports the features available for docker build, including the new features in Docker 19.03 such as output configuration, inline build caching and specifying a target platform. In addition, Buildx supports features not yet available for regular docker build, like building manifest lists, distributed caching, exporting build results to OCI image tarballs, creating scoped builder instances, and building against multiple nodes concurrently.

Buildx is an experimental feature, meaning Docker is providing early access to the feature for testing and feedback purposes, but it is not yet supported or production ready. Buildx is included in Docker 19.03, Docker Desktop Enterprise version 2.1.0 and Docker Desktop Edge version 2.0.4.0 or higher. (Side note: The Buildx plugin in these versions supersedes the old environment variable for Buildkit and it does not require DOCKER_BUILDKIT=1 environment variable for starting builds.)

You can download Buildx here and catch Sujay’s demo here:

Docker Cluster

Docker Director of Engineering Joe Abbey introduced and demonstrated Docker Cluster, a newly released command line tool in Enterprise 3.0 that greatly simplifies managing the lifecycle of server clusters on AWS. Docker Cluster for Azure will be released later this year. Commands include: backup, create, inspect, list all available clusters, restore, remove, update, and print the version, commit and build type. Check out the demo below:

Docker Context

Docker Captain (and co-creator of Play with Docker and Play with Kubernetes) Marcos Nils demonstrated context switching within the command line, available in 19.03. Users can now create contexts for both Docker and Kubernetes endpoints, and then easily switch between them using one command. To create contexts, you can copy the host name whenever you set up a new context or copy the context information from another context.

Docker Context removes the need to have separate scripts with environment variables to switch between environments. To find out what context you are using, go to the Docker command line. The command line will show both the default stack (i.e. Swarm or Kubernetes orchestrator) and the default context you have set up.

Try it out now using Docker 19.03 and Play with Docker, as demonstrated by Marcos in this video:

Rootless Docker

Rootless functionality allows users to run containers without having root access to the operating system. For operators, rootless Docker provides an additional layer of security by isolating containers from the OS. For developers, rootless Docker means you can run Docker on your machine even when you don’t have root access. Docker Captain Dimitris Kapanidis demonstrates how to install rootless Docker in this video:

You can find a full demo of rootless Docker on Github here.

Docker App

Docker App is based on Cloud Native Application Bundles (CNAB), the open source, cloud-agnostic specification for packaging and running distributed applications. That makes it easy to share and parameterize apps by making your Docker Stack and Compose files reusable and shareable on Docker Hub. With the 19.03 release, you now get two binaries of Docker App: 1) A command line plug-in that enables you to access Docker App from a single command and 2) The existing standalone CLI install for Docker App.

Below, Docker Captain Michael Irwin demonstrates Docker App’s ability to parameterize anything within the compose files except for the image. In other words, with Docker App you can easily define the ports you want to expose, how many replicas, what CPU memories to give to the app and more.

Want to learn more about how these all work in Docker Enterprise 3.0? Join us for our upcoming webinar series on driving High-Velocity Innovation with Docker Enterprise 3.0.

Sign up for the Webinar

Want to learn more?

Try Docker Enterprise 3.0 for Yourself
Learn More about What's New in Docker Enterprise 3.0


5 Things to Try with Docker Desktop WSL 2 Tech Preview

We are pleased to announce the availability of our Technical Preview of Docker Desktop for WSL 2! 

As a refresher, this preview makes use of the new Windows Subsystem for Linux (WSL) version that Microsoft recently made available on Windows insider fast ring. It has allowed us to provide improvements to file system sharing, boot time and access to some new features for Docker Desktop users. 

To do this, we have changed quite a bit about how we interact with the operating system compared to Docker Desktop on Windows today.

To learn more about the full feature set, have a look at our previous blogs: Get Ready for Tech Preview of Docker Desktop for WSL 2 and Docker WSL 2 – The Future of Docker Desktop for Windows.

Want to give it a go?

Get set up on a Windows machine with the latest Windows Insider build:

1. Head over to Microsoft and get set up as a Windows Insider: https://insider.windows.com/en-gb/getting-started/
2. Install the latest release branch (at least build version 18932) and enable the WSL 2 feature in Windows: https://docs.microsoft.com/en-us/windows/wsl/wsl2-install
3. Get Ubuntu 18.04 on your machine from the Microsoft Store.
4. Finally, download the Docker Desktop for WSL 2 Technical Preview.

If you are having issues or want more detailed steps, have a look at our docs here.

Things to try:

Navigate between WSL 2 and traditional Docker

Use $ docker context ls to view the different contexts available.

The daemon running in WSL 2 runs side-by-side with the “classic” Docker Desktop daemon. This is done by using a separate Docker Context. Run `docker context use wsl` to use the WSL 2 based daemon, and `docker context use default` to use the Docker Desktop classic daemon. The “default” context will target either the Moby Linux VM daemon or the Windows Docker daemon depending if you are in Linux or Windows mode. 

Access full system resources

Use $ docker info to inspect the system statistics. You should see all of your system resources (CPU & memory) available to you in the WSL 2 context. 

Linux workspaces

Source code and build scripts can live inside WSL 2 and access the same Docker Daemon as from Windows. Bind mounting files from WSL 2 is supported, and provides better I/O performance.

Visual Studio remote with WSL

You can work natively with Docker and Linux from Visual Studio Code on Windows. 

If you are a Visual Studio Code user make sure you have installed the plugin from the marketplace. You can then connect to WSL 2 and access your source in Linux, which means you can use the console in VSCode to build your containers using any existing Linux build scripts from within the Windows UI.

For full instructions have a look through Microsoft’s documentation: https://code.visualstudio.com/docs/remote/wsl

File system improvements: 

If you are a PHP Symfony user let us know your thoughts! We found that page refreshes went from ~400ms to ~15ms when we were running from a Linux Workspace.

Want to Learn More?

Read more about the Docker Desktop for WSL 2 Technical Preview
Learn more about Docker Desktop and the new Docker Desktop Enterprise
Learn more about running Windows containers in this On-Demand webinar: Docker for Windows Container Development



Write Maintainable Integration Tests with Docker

Testcontainers is an open source community focused on making integration tests easier across many languages. Gianluca Arbezzano is a Docker Captain, SRE at InfluxData and the maintainer of the Golang implementation of Testcontainers, which uses the Docker API to expose a test-friendly library that you can use in your test cases.

Photo by Markus Spiske on Unsplash.

The popularity of microservices and the use of third-party services for non-business-critical features has drastically increased the number of integrations that make up the modern application. These days, it is commonplace to use MySQL, Redis as a key value store, MongoDB, Postgres, and InfluxDB – and that is all just for the database – let alone the multiple services that make up other parts of the application.

All of these integration points require different layers of testing. Unit tests increase how fast you write code because you can mock all of your dependencies, set the expectation for your function and iterate until you get the desired transformation. But, we need more. We need to make sure that the integration with Redis, MongoDB or a microservice works as expected, not just that the mock works as we wrote it. Both are important but the difference is huge.

In this article, I will show you how to use Testcontainers to write integration tests in Go with very low overhead. So, I am not telling you to stop writing unit tests, just to be clear!

Back in the day, when I was interested in becoming a Java developer, I tried to write an integration between Zipkin, a popular open source tracer, and InfluxDB. I ultimately failed because I am not a Java developer, but I did understand how they wrote integration tests, and I became fascinated.

Getting Started: testcontainers-java

Zipkin provides a UI and an API to store and manipulate traces. It supports Cassandra, in-memory storage, ElasticSearch, MySQL and many more platforms as storage. In order to validate that all the storage systems work, they use a library called testcontainers-java that is a wrapper around the docker-api designed to be "test-friendly." Here is the Quick Start example:

public class RedisBackedCacheIntTestStep0 {
    private RedisBackedCache underTest;

    @Before
    public void setUp() {
        // Assume that we have Redis running locally?
        underTest = new RedisBackedCache("localhost", 6379);
    }

    @Test
    public void testSimplePutAndGet() {
        underTest.put("test", "example");

        String retrieved = underTest.get("test");
        assertEquals("example", retrieved);
    }
}

In setUp you can create a container (Redis in this case) and expose a port. From there, you can interact with a live Redis instance.

Every time you start a new container, there is a "sidecar" called Ryuk that keeps your Docker environment clean by removing containers, volumes and networks after a certain amount of time. You can also remove them from inside the test. The example below comes from Zipkin. They are testing the ElasticSearch integration and, as the example shows, you can programmatically configure your dependencies from inside the test case.

public class ElasticsearchStorageRule extends ExternalResource {
    static final Logger LOGGER = LoggerFactory.getLogger(ElasticsearchStorageRule.class);
    static final int ELASTICSEARCH_PORT = 9200;
    final String image;
    final String index;
    GenericContainer container;
    Closer closer = Closer.create();

    public ElasticsearchStorageRule(String image, String index) {
        this.image = image;
        this.index = index;
    }

    @Override
    protected void before() {
        try {
            LOGGER.info("Starting docker image " + image);
            container =
                new GenericContainer(image)
                    .withExposedPorts(ELASTICSEARCH_PORT)
                    .waitingFor(new HttpWaitStrategy().forPath("/"));
            container.start();
            if (Boolean.valueOf(System.getenv("ES_DEBUG"))) {
                container.followOutput(new Slf4jLogConsumer(LoggerFactory.getLogger(image)));
            }
            System.out.println("Starting docker image " + image);
        } catch (RuntimeException e) {
            LOGGER.warn("Couldn't start docker image " + image + ": " + e.getMessage(), e);
        }
    }
}

That this happens programmatically is key because you do not need to rely on something external such as docker-compose to spin up your integration tests environment. By spinning it up from inside the test itself, you have a lot more control over the orchestration and provisioning, and the test is more stable. You can even check when a container is ready before you start a test.

Since I am not a Java developer, I ported the library to Go (we are still working on porting all the features), and now it lives in the main testcontainers/testcontainers-go organization.

func TestNginxLatestReturn(t *testing.T) {
    ctx := context.Background()
    req := testcontainers.ContainerRequest{
        Image:        "nginx",
        ExposedPorts: []string{"80/tcp"},
    }
    nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    if err != nil {
        t.Error(err)
    }
    defer nginxC.Terminate(ctx)
    ip, err := nginxC.Host(ctx)
    if err != nil {
        t.Error(err)
    }
    port, err := nginxC.MappedPort(ctx, "80")
    if err != nil {
        t.Error(err)
    }
    resp, err := http.Get(fmt.Sprintf("http://%s:%s", ip, port.Port()))
    if err != nil {
        t.Error(err)
    }
    if resp.StatusCode != http.StatusOK {
        t.Errorf("Expected status code %d. Got %d.", http.StatusOK, resp.StatusCode)
    }
}

Creating the Test

This is what it looks like:

ctx := context.Background()
req := testcontainers.ContainerRequest{
    Image:        "nginx",
    ExposedPorts: []string{"80/tcp"},
}
nginxC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
    ContainerRequest: req,
    Started:          true,
})
if err != nil {
    t.Error(err)
}
defer nginxC.Terminate(ctx)

You create the nginx container and, with the defer nginxC.Terminate(ctx) command, you clean up the container when the test is over. Remember Ryuk? Terminating is not mandatory, because testcontainers-go uses Ryuk to remove the containers at some point anyway, but doing it yourself keeps your environment clean while the tests run.

Modules

The Java library has a feature called modules where you get pre-canned containers such as databases (MySQL, Postgres, Cassandra, etc.) or applications like nginx. The Go version is working on something similar, but it is still an open PR.

If you'd like to build an image for a microservice your application relies on from its upstream source, this is a great feature. Or you may want to test how your application behaves from inside a container (probably more similar to where it will run in prod). This is how it works in Java:

@Rule
public GenericContainer dslContainer = new GenericContainer(
    new ImageFromDockerfile()
        .withFileFromString("folder/someFile.txt", "hello")
        .withFileFromClasspath("test.txt", "mappable-resource/test-resource.txt")
        .withFileFromClasspath("Dockerfile", "mappable-dockerfile/Dockerfile"))

What I’m working on now

Something that I am currently working on is a new canned container that uses kind to spin up Kubernetes clusters inside a container. If your applications use the Kubernetes API, you can test it in integration:

ctx := context.Background()
k := &KubeKindContainer{}
err := k.Start(ctx)
if err != nil {
    t.Fatal(err.Error())
}
defer k.Terminate(ctx)
clientset, err := k.GetClientset()
if err != nil {
    t.Fatal(err.Error())
}
ns, err := clientset.CoreV1().Namespaces().Get("default", metav1.GetOptions{})
if err != nil {
    t.Fatal(err.Error())
}
if ns.GetName() != "default" {
    t.Fatalf("Expected default namespace got %s", ns.GetName())
}
This feature is still a work in progress as you can see from PR67.

Calling All Coders

The Java version of Testcontainers was the first one developed, and it has a lot of features not yet ported to the Go version or to the other libraries, such as the JavaScript, Rust, and .NET ones.

My suggestion is to try the one written in your language and to contribute to it. 

In Go we don't yet have a way to programmatically build images. I am thinking about embedding BuildKit or img in order to get a daemonless builder that doesn't depend on Docker. The great part about working with the Go version is that all the container-related libraries are already written in Go, so you can do a very good job of integrating with them.

This is a great chance to become part of this community! If you are passionate about testing frameworks, join us and send your pull requests, or come hang out on Slack.

Try It Out

I hope you are as excited as I am about the flavour and the power this library provides. Take a look at the testcontainers organization on GitHub to see if your language is covered and try it out! And, if your language is not covered, let's write it! If you are a Go developer and you'd like to contribute, feel free to reach out to me @gianarb, or go check it out and open an issue or pull request!

The post Write Maintainable Integration Tests with Docker appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Top 4 Tactics To Keep Node.js Rockin’ in Docker

This is a guest post from Docker Captain Bret Fisher, a long time DevOps sysadmin and speaker who teaches container skills with his popular Docker Mastery courses (including Docker Mastery for Node.js) and weekly YouTube Live shows, and who consults for companies adopting Docker.

Foxy, my Docker Mastery mascot, is a fan of Node and Docker

We’ve all got our favorite languages and frameworks, and Node.js is tops for me. I’ve run Node.js in Docker since the early days for mission-critical apps. I’m on a mission to educate everyone on how to get the most out of this framework and its tools like npm, Yarn, and nodemon with Docker.

There’s a ton of info out there on using Node.js with Docker, but so much of it is years out of date, and I’m here to help you optimize your setups for Node.js 10+ and Docker 18.09+. If you’d rather watch my DockerCon 2019 talk that covers these topics and more, check it out on YouTube.

Let’s go through 4 steps for making your Node.js containers sing! I’ll include some quick “Too Long; Didn’t Read” for those that need it.

Stick With Your Current Base Distro

TL;DR: If you’re migrating Node.js apps into containers, use the base image of the host OS you have in production today. After that, my favorite base image is the official node:slim editions rather than node:alpine, which is still good but usually more work to implement and comes with limitations.

One of the first questions anyone asks when putting a Node.js app in Docker, is “Which base image should I start my Node.js Dockerfile from?”

slim and alpine are considerably smaller than the default image

There are multiple factors that weigh into this, but don't make image size a top priority unless you're dealing with IoT or embedded devices where every MB counts. In recent years the slim image has shrunk down to 150MB, and it works the best across the widest set of scenarios. Alpine is a very minimal container distribution, with the smallest node image at only 75MB. However, the level of effort to swap package managers (apt to apk), deal with edge cases, and work around security scanning limitations causes me to hold off on recommending node:alpine for most use cases.
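As a concrete starting point, a minimal Dockerfile based on the slim variant might look like the sketch below. The Node version, port, and entry file name are placeholder assumptions, not something prescribed by this post:

```dockerfile
# Pin a specific Node major version on the slim variant,
# rather than a bare "node" or "latest" tag
FROM node:12-slim

WORKDIR /app

# Copy dependency manifests first so the install layer is cached
# across source-code changes
COPY package*.json ./
RUN npm install --only=production && npm cache clean --force

COPY . .

EXPOSE 3000
CMD ["node", "app.js"]
```

The same structure works if you later swap the FROM line for a custom CentOS or Ubuntu base image that matches your production hosts.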

When adopting container tech, like anything, you want to do what you can to reduce the change rate. So many new tools and processes come along with containers. Choosing the base image your devs and ops are most used to has many unexpected benefits, so try to stick with it when it makes sense, even if this means making a custom image for CentOS, Ubuntu, etc.

Dealing With Node Modules

TL;DR: You don't have to relocate node_modules in your containers as long as you follow a few rules for proper local development. A second option is to move node_modules up a directory in your Dockerfile and configure your container properly; it's the most flexible option, but it may not work with every npm framework.

We’re all now used to a world where we don’t write all the code we run in an app, and that means dealing with app framework dependencies. One common question is how to deal with those code dependencies in containers when they are a subdirectory of our app. Local bind-mounts for development can affect your app differently if those dependencies were designed to run on your host OS and not the container OS.

The core of this issue for Node.js is that node_modules can contain binaries compiled for your host OS, and if it's different from the container OS, you'll get errors trying to run your app when you're bind-mounting it from the host for development. Note that if you're a pure Linux developer and you develop on Linux x64 for Linux x64, this bind-mount issue isn't usually a concern.

For Node.js I offer you two approaches, which come with their own benefits and limitations:

Solution A: Keep It Simple

Don’t move node_modules. It will still sit in the default subdirectory of your app in the container, but this means that you have to prevent the node_modules created on your host from being used in the container during development.

This is my preferred method when doing pure-Docker development. It works great with a few rules you must follow for local development:

Develop only through the container. Why? Basically, you don't want to mix up the node_modules on your host with the node_modules in the container. On macOS and Windows, Docker Desktop bind-mounts your code across the OS barrier, and this can cause problems with binaries you've installed with npm for the host OS that can't be run in the container OS.

Run all your npm commands through docker-compose. This means your initial npm install for your project should now be docker-compose run <service name> npm install.
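To make the second rule concrete, here's a minimal sketch of a compose file this workflow assumes. The service name, port, and container path are illustrative, not from the original post:

```yaml
# docker-compose.yml
version: "2.4"
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      # bind-mount the source so edits on the host show up in the container
      - .:/app
```

With that in place, dependency commands run inside the container, for example docker-compose run web npm install, or docker-compose run web npm install --save express to add a package.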

Solution B: Move Container Modules and Hide Host Modules

Relocate node_modules up the file path in the Dockerfile so you can develop Node.js in and out of the container, and the dependencies won't clash when you switch between host-native development and Docker-based development.

Since Node.js is designed to run on multiple OS’s and architectures, you may not want to always develop in containers. If you want the flexibility to sometimes develop/run your Node.js app directly on the host, and then other times spin it up in a local container, then Solution B is your jam.

In this case you need a node_modules on host that is built for that OS, and a different node_modules in the container for Linux.

The basic lines you’ll need to move node_modules up the path

Rules for this solution include:

Move the node_modules up a directory in the container image. Node.js always looks for a node_modules as a subdirectory, but if it's missing, it'll walk up the directory path until it finds one. Example of doing that in a Dockerfile here.

To prevent the host node_modules subdirectory from ever being used in the container, use a workaround I call an "empty bind-mount". In your compose YAML it would look like this.

This works with most Node.js code, but some larger frameworks and projects seem to hard-code the assumption that node_modules is a subdirectory, which will rule out this solution for you.
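Here's a sketch of both pieces together, assuming the app lives in /node/app inside the image (the paths and service name are illustrative):

```dockerfile
# Dockerfile: install dependencies one directory above the app code
WORKDIR /node
COPY package*.json ./
RUN npm install && npm cache clean --force

# App code goes in a subdirectory; Node walks up and finds /node/node_modules
WORKDIR /node/app
COPY . .
```

```yaml
# compose YAML: bind-mount the source, then hide any host node_modules
# behind an anonymous "empty" volume
services:
  web:
    build: .
    volumes:
      - .:/node/app
      - /node/app/node_modules
```

The second volume entry, with no host path, masks the bind-mounted node_modules with an empty container-side directory, so the Linux modules installed at image build time are the ones Node actually resolves.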

For both of these solutions, always remember to add node_modules to your .dockerignore file (same syntax as .gitignore) so you’ll never accidentally build your images with modules from the host. You always want your builds to run an npm install inside the image build.
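A starter .dockerignore for a typical Node.js project might look like this (trim or extend it to fit your repo):

```
node_modules
npm-debug.log
.git
.dockerignore
Dockerfile
docker-compose*.yml
```

With node_modules listed here, COPY . . in your Dockerfile can never drag host-built modules into the image, and the npm install in the image build remains the only source of dependencies.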

Use The Node User, Go Least Privilege

All the official Node.js images have a Linux user added in the upstream image called node. This user is not used by default, which means your Node.js app will run as root in the container by default. This isn't the worst thing, as it's still isolated to that container, but you should enable it in all your projects where you don't need Node to run as root. Just add a new line in your Dockerfile: USER node

Here are some rules for using it:

Location in the Dockerfile matters. Add USER after apt/yum/apk commands, and usually before npm install commands.

It doesn't affect all commands, like COPY, which has its own syntax for controlling the owner of the files you copy in.

You can always switch back to USER root if you need to. In more complex Dockerfiles this will be necessary, like my multi-stage example that includes running tests and security scans during optional stages.

Permissions may get tricky during development because now you'll be doing things in the container as a non-root user by default. The way to get around this is often to tell Docker you want to run those one-off commands as root: docker-compose run -u root <service name> npm install
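Putting those rules together, here's a sketch of where USER node lands in a Dockerfile. The base image tag, the curl package, and the file names are placeholders:

```dockerfile
FROM node:12-slim

# System-level installs happen first, while we are still root
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# COPY controls file ownership with its own --chown flag
COPY --chown=node:node package*.json ./

# Drop privileges before installing app dependencies and running the app
USER node
RUN npm install && npm cache clean --force

COPY --chown=node:node . .
CMD ["node", "app.js"]
```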

Don’t Use Process Managers In Production

TL;DR: Except for local development, don't wrap your node startup commands with anything. Don't use npm, nodemon, etc. Have your Dockerfile CMD be something like ["node", "file-to-start.js"] and you'll have an easier time managing and replacing your containers.

Nodemon and other “file watchers” are necessary in development, but one big win for adopting Docker in your Node.js apps is that Docker takes over the job of what we used to use pm2, nodemon, forever, and systemd for on servers.

Docker, Swarm, and Kubernetes will do the job of running healthchecks and restarting or recreating your container if it fails. It’s also now the job of orchestrators to scale the number of replicas of our apps, which we used to use tools like pm2 and forever for. Remember, Node.js is still single-threaded in most cases, so even on a single server you’ll likely want to spin up multiple container replicas to take advantage of multiple CPU’s.
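For example, on a Swarm cluster you'd express those replicas and healthchecks declaratively in a stack file. The service name, image, replica count, and healthcheck endpoint below are all illustrative:

```yaml
version: "3.7"
services:
  api:
    image: myapp:latest
    deploy:
      replicas: 4        # roughly one replica per CPU core available
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
      interval: 30s
```

Swarm restarts containers that fail their healthcheck and keeps the replica count at four, which is exactly the job pm2 or forever used to do on a single server.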

My example repo shows you how to use node directly in your Dockerfile, and then for local development, either use a different image stage with docker build --target <stage name>, or override the CMD in your compose YAML.

Start Node Directly in Dockerfiles

TL;DR: I also don't recommend using npm to start your apps in your Dockerfile. Let me explain.

I recommend calling the node binary directly, largely due to the “PID 1 Problem” where you’ll find some confusion and misinformation online about how to deal with this in Node.js apps. To clear up confusion in the blogosphere, you don’t always need a “init” tool to sit between Docker and Node.js, and you should probably spend more time thinking about how your app stops gracefully.

Node.js accepts and forwards signals like SIGINT and SIGTERM from the OS, which is important for proper shutdown of your app. Node.js leaves it up to your app to decide how to handle those signals, which means if you don't write code or use a module to handle them, your app won't shut down gracefully. It'll ignore those signals and then be killed by Docker or Kubernetes after a timeout period (Docker defaults to 10 seconds, Kubernetes to 30 seconds). You'll care a lot more about this once you have a production HTTP app that you have to ensure doesn't just drop connections when you want to update your apps.

Using other apps to start Node.js for you, like npm for example, often breaks this signaling. npm won't pass those signals to your app, so it's best to leave it out of your Dockerfile's ENTRYPOINT and CMD. This also has the benefit of one less binary running in the container. Another bonus is that it lets you see in the Dockerfile exactly what your app will do when your container is launched, rather than also having to check package.json for the true startup command.

For those that know about init options like docker run --init or using tini in your Dockerfile, they are good backup options when you can't change your app code, but it's a much better solution to write code that handles signals properly for graceful shutdowns. Two examples are some boilerplate code I have here, and modules like stoppable.

Is That All?

Nope. These are concerns that nearly every Node.js team deals with, and there are lots of other considerations that go along with them. Topics like multi-stage builds, HTTP proxies, npm install performance, healthchecks, CVE scanning, container logging, testing during image builds, and microservice docker-compose setups are all common questions for my Node.js clients and students.

If you're wanting more info on these topics, you can watch my DockerCon 2019 session video on this topic, or check out my 8 hours of Docker for Node.js videos at https://www.bretfisher.com/node. Thanks for reading. You can reach me on Twitter, get my weekly DevOps and Docker newsletter, subscribe to my weekly YouTube videos and Live Show, and check out my other Docker resources and courses.

Keep on Dockering!


The post Top 4 Tactics To Keep Node.js Rockin’ in Docker appeared first on Docker Blog.

Accelerate Application Delivery with Application Templates in Docker Desktop Enterprise

The Application Templates interface.

Docker Enterprise 3.0, now generally available, includes several new features that make it simpler and faster for developers to build and deliver modern applications in the world of Docker containers and Kubernetes. One such feature is the new Application Templates interface that is included with Docker Desktop Enterprise.

Application Templates enable developers to build modern applications using a library of predefined and organization-approved application and service templates, without requiring prior knowledge of Docker commands. By providing re-usable “scaffolding” for developing modern container-based applications, Application Templates accelerate developer onboarding and improve productivity.

The Application Templates themselves include many of the discrete components required for developing a new application, including the Dockerfile, custom base images, common compose service YAML, and application parameters (external ports and upstream image versions). They can even include boilerplate code and code editor configs.

With Application Templates, development leads, application architects, and security and operations teams can customize and share application and service templates that align to corporate standards. As a developer, you know you’re starting from pre-approved templates that  eliminate time-consuming configuration steps and error-prone manual setup. Instead, you have the freedom to customize and experiment so you can focus on delivering innovative apps. 

Application Templates In Action: A Short Demo

The Easiest and Fastest Way to Containerize Apps

Even if you’ve never run a Docker container before, there is a new GUI-based Application Designer interface in Docker Desktop Enterprise that makes it simple to view and select Application Templates, run them on your machine, and start coding. There’s also a docker template CLI interface (currently available in Experimental mode only), which provides the same functionality if you prefer command line to a GUI.

Underneath the covers, Application Templates create a Docker Application, a new packaging format based on the Cloud Native Application Bundle specification. Docker Applications make it easy to bundle up all the container images, configuration, and parameters and share them on Docker Hub or Docker Trusted Registry. 

Docker Desktop Enterprise comes pre-loaded with a library of common templates based on Docker Hub official images, but you can also create and use templates customized to your own organization’s specifications.

After developers select their template and scaffold it locally, source code can be mounted in the local containers to speed the inner loop code and test cycles. The containers are running right on the developer’s machine so any changes to the code will be visible immediately in the running application. 

Docker Desktop Enterprise with Application Templates is generally available now! Contact Docker today to get started.


Interested in finding out more?

Learn more about Docker Desktop Enterprise
Check out Docker Enterprise 3.0, the only end-to-end platform for building, sharing and running container-based applications
Watch the full Docker Desktop Enterprise product demo
Get the detailed documentation on Application Templates

The post Accelerate Application Delivery with Application Templates in Docker Desktop Enterprise appeared first on Docker Blog.

Introducing the new Docker Technology Partner Program

We’re pleased to announce the launch of the Docker Technology Partner (DTP) program as a strong foundation for the ongoing collaboration with our ecosystem partners. Together through the new program, Docker and our partners will accelerate providing our enterprise customers with proven collaborative solutions. 

Our industry-leading container platform has become central to continuous, high-velocity innovation for more than 750 enterprises around the world. As such, we recognized the need to enhance our partner program to make it easier for customers to identify key partners from the ecosystem that will provide them with the most value. The DTP program is designed to ensure that Docker customers across a variety of company sizes and industries have access to our massive ecosystem of partners and are able to integrate Docker containers with other chosen technologies. This program provides clear insight into our formal partnerships, as well as the depth of joint product integration.

Our partners also receive due recognition for their hard work in ensuring compatibility and support with Docker Enterprise. As always, we truly do appreciate the continued support of our partners, and are proud to showcase their accomplishments in integrating and validating with the Docker platform. 

Whether you’re an existing Docker customer, or just getting started with the platform, we encourage you to learn more about our partners and some of their offerings. These products provide invaluable tools for IT operators and developers to get the Docker platform and associated applications running in production easily. Docker Enterprise works on all major cloud providers and operating systems, and supports both Kubernetes and Swarm, giving you access to the broadest range of partner solutions.  Additionally, you’ll find applications from our ISV partners to either run in your Docker Enterprise environment, or utilize when building your own applications.

There are three levels in the DTP program:

Verified – Partners who have engaged with Docker directly, and are publishing products on Docker Hub under a Verified Publisher account.
Professional – Partners who have Certified (tested & supported) their products with Docker Enterprise.
Premier – Deepest level of integration with Docker Enterprise, requiring Certified technology to undergo a category-specific technology review.

Please take a look at our growing list of Verified partners on Docker Hub, as well as our Professional partners! We will begin to invite partners individually to participate in our first round up of Premier partners in the near future. 

Learn More: 

Review the DTP program guide, and apply today!
Learn more about Docker partners or find one in our directory.
Learn more about Docker Enterprise and get a free trial today.

You can contact us with any questions.

The post Introducing the new Docker Technology Partner Program appeared first on Docker Blog.

Announcing Docker Enterprise 3.0 General Availability

Today, we’re excited to announce the general availability of Docker Enterprise 3.0 – the only desktop-to-cloud enterprise container platform enabling organizations to build and share any application and securely run them anywhere – from hybrid cloud to the edge.

Docker Enterprise 3.0 Demo

Leading up to GA, more than 2,000 people participated in the Docker Enterprise 3.0 public beta program to try it for themselves. We gathered feedback from some of these beta participants to find out what excites them most about the latest iteration of Docker Enterprise. Here are 3 things that customers are excited about and the features that support them:

Simplifying Kubernetes

Kubernetes is a powerful orchestration technology but due to its inherent complexity, many enterprises (including Docker customers) have struggled to realize the full value of Kubernetes on their own. Much of Kubernetes’ perceived complexity stems from a lack of intuitive security and manageability configurations that most enterprises expect and require for production-grade software. We’re addressing this challenge with Docker Kubernetes Service (DKS) – a Certified Kubernetes distribution that is included with Docker Enterprise 3.0. It’s the only offering that integrates Kubernetes from the developer desktop to production servers, with ‘sensible secure defaults’ out-of-the-box.

“Increasing application development velocity and digital agility are a strategic imperative for companies in all sectors today. Developer experience is the killer app,” said RedMonk co-founder, James Governor. “Docker Kubernetes Service and Docker Application aim to package and simplify developer and operator experience, making modern container based workflows more accessible to developers and operators alike.”

You can learn more about Docker Kubernetes Service here.

Automating Deployment of Containers and Kubernetes

One of the most common requests we've heard from customers has been to make it easier to deploy and manage their container environments. That's why we introduced new lifecycle automation tools for day 1 and day 2 operations, helping customers accelerate and expand the deployment of containers and Kubernetes on their choice of infrastructure. Using a simple set of CLI commands, operations teams can easily deploy, scale, back up, restore, and upgrade their Docker Enterprise clusters across hybrid and multi-cloud deployments on AWS, Azure, or VMware.

You can learn more about lifecycle automation tools here.

Building Modern Applications 

With the ever-increasing emphasis on making things easier and faster for developers, it’s no surprise that Docker Desktop Enterprise and Docker Application created a lot of excitement amongst beta participants. Docker Desktop Enterprise is a new developer tool that decreases the “time-to-Docker” – accelerating developer onboarding and improving developer productivity. Docker Application, based on the CNAB standard, is a new application format that enables developers to bundle the many distributed resources that comprise a modern application into a single object that can be easily shared, installed and run anywhere. Docker Desktop Enterprise also allows users to quickly and easily create Docker Applications leveraging pre-defined Application Templates that support any language or framework.

“The Docker Enterprise platform and its approach to simplifying how containerized applications are built, shared and run allows us to fail fearlessly. We can test new services easily and quickly and if they work, we can immediately enhance the mortgage experience for our customers,” said Don Bauer, Lead DevOps Engineer, Citizens Bank. “Docker’s investment in new capabilities like Docker Application and simplified cluster management will further improve developer productivity and lifecycle automation for us so that we can continue to bring new, differentiated services to market faster.” 

You can learn more about Docker Applications here.

How to Get Started

Try Docker Enterprise 3.0 for Yourself
Learn More about What's New in Docker Enterprise 3.0
Sign up for the upcoming webinar series: Drive High-Velocity Innovation with Docker Enterprise 3.0

The post Announcing Docker Enterprise 3.0 General Availability appeared first on Docker Blog.

Get Ready for the Tech Preview of Docker Desktop for WSL 2

Today at OSCON, Scott Hanselman, Kayla Cinnamon, and Yosef Durr of Microsoft demonstrated some of the new capabilities coming with Windows Subsystem for Linux (WSL) 2, including how it will be integrated with Docker Desktop. As part of this demonstration, we are excited to announce that users can now sign up for the end of July Docker Desktop Technical Preview of WSL 2. WSL 2 is the second generation of a compatibility layer for running Linux binary executables natively on Windows. Since it was announced at Microsoft Build, we have been working in partnership with Microsoft to deliver an improved Linux experience for Windows developers and invite everyone to sign up for the upcoming Technical Preview release.

Improving the Linux Experience on Windows

There are over half a million active users of Docker Desktop for Windows today and many of them are building Java and Node.js applications targeting Linux-based server environments. Leveraging WSL 2 will make the Docker developer experience more seamless no matter what operating system you’re running and what type of application you’re building. And the performance improvements will be immediately noticeable.

WSL 2 introduces a significant architectural change as it is a full Linux kernel built by Microsoft, allowing Linux containers to run natively without emulation. With the new WSL 2 Docker Desktop preview you will get access to Linux workspaces, removing the need to maintain both Linux and Windows build scripts. WSL 2 also supports dynamic memory and CPU allocation, and improves startup time from 40 seconds down to 2 seconds!

Preview of Docker Desktop with WSL2

Thanks to our collaboration with Microsoft, we are already hard at work on getting this into your hands ahead of the WSL 2 full availability. We have written core functionalities to deploy an integration package, run the daemon and expose it to Windows processes, with support for bind mounts and port forwarding to simplify the experience.

For more details on the engineering work involved, read this engineering blog post.

The Tech Preview will be available shortly and we look forward to hearing your feedback.

Sign-up for the Docker Desktop for WSL 2 Tech Preview notification

The post Get Ready for the Tech Preview of Docker Desktop for WSL 2 appeared first on Docker Blog.