Docker on Windows Webinar Q&A

Recently I presented Docker on Windows: from 101 to Modernizing .NET Apps, a live webinar on using Docker with Windows, and running .NET Framework apps in containers. The session was recorded and you can watch it on the Docker YouTube channel:

I start with the basics of Windows Docker containers, showing how you can run containers from public images, and write Dockerfiles to package your own apps to run in containers.
Then I move on to Dockerizing a traditional ASP.NET WebForms app, showing you how to take existing apps and run them in Docker with no code changes, and then use the Docker platform to modernize the app – breaking features out of the monolithic codebase, running them in separate containers and using Docker to connect them.
I maxed out the session time (just like Mike with his Docker for the Sysadmin webinar), so here are the answers to questions raised in the session.
Q:  We have several servers hosting our frontend, some as middle tier hosting the services and we have some for the database. Shall we have a container for each service?
A: Docker doesn’t mandate any particular design, you can architect your move to Docker in the way that works best for you. You could start by packaging your whole web app into one Docker image and your service layer into another Docker image, without having to change source code.
You can run multiple web containers from your web image, and multiple service containers from your service image. That gives you failover, scale and zero-downtime deployments for updates. You also get better density – you won’t be allocating a set of servers to be the service layer, you have a single swarm and servers can run containers from different layers to get maximum compute utilization.
Docker containers can access other services on the network, so you can continue to use your existing database. That could be the first step in a roadmap to break out monolithic apps and run features in their own containers, which means you can scale, update and manage them separately.  Check out the Modernizing Traditional Apps labs for guidance.
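The first step above – one image for the web app, one for the service layer, run as scaled services in a swarm – could be sketched in a Compose file like this (all image, service and network names here are illustrative, not from the webinar):

```yaml
version: "3.3"

services:
  web:
    image: my-org/web-app:v1         # existing web app, packaged without code changes
    ports:
      - "80:80"
    deploy:
      replicas: 4                    # multiple web containers for failover and scale
    networks:
      - app-net
  services-tier:
    image: my-org/service-layer:v1   # existing service layer, in its own image
    deploy:
      replicas: 2
    networks:
      - app-net

networks:
  app-net:
    driver: overlay
```

The database stays outside the swarm; containers on app-net can still reach it over the existing network.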
Q:  What would be the proper way to isolate groups of containers in Docker, meaning having a set of containers for DEV and another set of containers for QA, running in the same Docker host or swarm? 
A: The best grouping mechanism is a Docker stack running in swarm mode. You don’t need a cluster of machines running Docker for swarm mode; you can run a single-node swarm for your non-production environments.
You define all the services for one application in a Docker compose file and deploy it to the swarm with docker stack deploy. You can manage the whole solution as one unit, having different stacks for different environments. Running in swarm mode also lets you scale services and use rolling updates for app changes, so you can practice deployments for production.
You could also have dedicated nodes in your swarm for each environment, using node labels and service constraints – so you could have two servers for integration, three for QA etc. but manage them all in the same swarm. Or run separate swarms and physically isolate your environments – you can run a single server in swarm mode.
The other option is to segment workloads by running them in separate Docker networks, if you’re running outside of swarm mode. That doesn’t work so well with Windows where there’s a single default NAT network.
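As a sketch, the stack-per-environment and node-label approaches above look like this (stack, node and label names are illustrative):

```shell
# Same Compose file, one stack per environment, in one swarm:
docker stack deploy -c docker-compose.yml app-dev
docker stack deploy -c docker-compose.yml app-qa

# Pin an environment to dedicated nodes with labels...
docker node update --label-add environment=qa qa-server-01

# ...and constrain services to those nodes in the Compose file:
#   deploy:
#     placement:
#       constraints:
#         - node.labels.environment == qa
```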
Q:  Can you host .NET NT services in a Docker container? My .NET NT services also write to the Windows Event Viewer – how would this work between the host and the service, assuming I can run .NET NT services in a container?
A: Yes, with Windows Server Core as the base image you can run Windows Services. In your Dockerfile you would deploy the Windows Service with a script, like installing an MSI with msiexec. Then in your startup command you can use Start-Service to make sure the Windows Service is running when the container starts.
Docker doesn’t integrate directly with the Event Log in containers, but you can use a startup command which polls the Event Log and makes the entries visible from Docker – this is what Microsoft do with the Dockerfile for SQL Server.
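A minimal sketch of that pattern – the service name, MSI and Event Log source below are all hypothetical:

```dockerfile
FROM microsoft/windowsservercore

# Install the Windows Service from an MSI:
COPY MyService.msi C:/install/
RUN msiexec /i C:/install/MyService.msi /qn

# Start the service, then poll the Event Log and echo entries so they
# surface in 'docker container logs'. This is a naive poll; a real start
# script would track the last entry seen, as the SQL Server image does.
CMD powershell -Command \
    "Start-Service MyService; \
    while (1) { \
      Get-EventLog -LogName Application -Source MyService -Newest 5 | ForEach-Object { $_.Message }; \
      Start-Sleep -Seconds 10 }"
```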
Q: How is licensing handled for Windows in containers?
A: Licensing for containers is done at the host level, so if you have a Windows Server VM running 100 containers, you only pay one server licence for the VM.
Docker licensing is separate. Docker for Windows is a free Docker Community Edition, and runs on Windows 10 and Windows Server 2016. In production you get support with Docker Enterprise Edition, and the Windows Server 2016 licence includes Docker EE Basic so you can raise support tickets with Microsoft and have them escalated to Docker, Inc.
Q:  If I bring up 3 containers on a host, can I use host IP address or DNS name to access these, or do I have to use container IP address to access them?
A: You can publish container ports to the host when you run them, e.g. docker container run -d -p 80:80 microsoft/iis will run the IIS container with port 80 mapped to the host. Any external traffic to the host gets directed into the container. When you’re working locally on the Docker host, you need to use the container IP addresses.
Inside the container it’s simpler. Service discovery is built into Docker. The platform has its own DNS server, so containers can reach other by container (or service) name. If you run SQL Server in a container called db then you can run an ASP.NET app in another container and use db as the server name in the database connection string. Docker resolves the container name to the IP address of the container transparently, whether the container is running on the same server, or a different server in a Docker swarm.
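For example (image names are illustrative; the real SQL Server image also needs environment variables for the EULA and sa password):

```shell
# Run the database in a container named 'db':
docker container run -d --name db microsoft/mssql-server-windows-express

# Run the web app in another container. Inside it, 'db' resolves to the
# database container's IP address, so the connection string can simply be:
#   Server=db;Database=atsea;User Id=gordonuser;Password=...;
docker container run -d -p 80:80 my-org/atsea-web
```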
Q:  Can we change the password of the “User Manager\ContainerAdministrator” account? Can we use this account to run our application service in the container?
A: No. ContainerAdministrator is a special virtual account; it doesn’t have a password. If you need to use an administrative account with a password, you can create one in the Dockerfile with the net user command, and add it to the admin group with net localgroup.
ContainerAdministrator is the default account when you run a container – so if your CMD instruction starts a console app, that app will run as ContainerAdministrator. If your app runs in the background as a Windows Service, then the account will be the service account, so ASP.NET apps run under application pool accounts.
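A hedged sketch of creating and using such an account in a Windows Dockerfile (account name and password are illustrative):

```dockerfile
FROM microsoft/windowsservercore

# Create an administrative account with a known password:
RUN net user appadmin "P@ssw0rd123!" /add && net localgroup administrators appadmin /add

# Run the startup command as that account instead of ContainerAdministrator:
USER appadmin
CMD ["cmd", "/C", "whoami"]
```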
Q:  Will the sample code be available after this?
A: Yes. The demos from the webinar were all based on the .NET Newsletter sample app on GitHub – dockersamples/newsletter-signup. It’s an ASP.NET WebForms app that’s been modernized using Docker, splitting features out from the original monolith into small, self-contained components.
Learn more about Docker on Windows:

Watch Docker for .NET Developers from DockerCon 2017
Try Image2Docker, a tool which extracts ASP.NET apps into Dockerfiles for deployment to Docker Enterprise Edition
Watch Escape from Your VMs, the DockerCon 2017 session on using Image2Docker
Come and join us in Copenhagen for DockerCon Europe 2017

 


The post Docker on Windows Webinar Q&A appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

The best way to learn Docker for Free: Play-With-Docker (PWD)

Last year at the Distributed System Summit in Berlin, Docker captains Marcos Nils and Jonathan Leibiusky started hacking on an in-browser solution to help people learn Docker. A few days later, Play-with-docker (PWD) was born. 
PWD is a Docker playground which allows users to run Docker commands in a matter of seconds. It gives the experience of having a free Alpine Linux virtual machine in the browser, where you can build and run Docker containers and even create clusters in Docker Swarm Mode. Under the hood, Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs. In addition to the playground, PWD also includes a training site composed of a large set of Docker labs and quizzes, from beginner to advanced level, available at training.play-with-docker.com.
In case you missed it, Marcos and Jonathan presented PWD during the last DockerCon Moby Cool Hack session. Watch the video below for a deep dive into the infrastructure and roadmaps.

Over the past few months, the Docker team has been working closely with Marcos, Jonathan and other active members of the Docker community to add new features to the project and Docker labs to the training section.

PWD: the Playground
Here is a quick recap of what’s new with the Docker playground:
 
1.     PWD Docker Machine driver and SSH
As PWD’s popularity grew, the community started to ask if they could use PWD to run their own Docker workshops and training sessions. So one of the first improvements made to the project was the creation of the PWD Docker Machine driver, which allows users to create and manage their PWD hosts easily through their favorite terminal, including the option to use SSH-related commands. Here is how it works:

 
2.     Adding support for file upload
Another cool feature brought to you by Marcos and Jonathan is the ability to upload your Dockerfile directly into your PWD window: simply drag and drop the file onto your PWD instance.

 
3.     Templated session
In addition to file upload, PWD also has a feature which lets you spin up a five-node swarm in a matter of seconds using predefined templates.

 
4.      Showcasing your applications with Docker in a single click
Two more cool features come with PWD: an embeddable button that you can use on your sites to set up a PWD environment and deploy a Compose stack right away, and a Chrome extension that adds a “Try in PWD” button to the most popular images on Docker Hub. Here’s a short demo of the extension in action:

PWD: the Training Site
A number of new labs are available on training.play-with-docker.com. Some notable highlights include two labs that were originally hands-on labs from DockerCon in Austin, and a couple that highlight new features that are stable in Docker 17.06 CE:

Docker Networking Hands-on Lab
Docker Orchestration Hands-on Lab
Multi-stage builds
Docker swarm config files

All in all, there are now 36 labs, with more being added all the time. If you want to contribute a lab, check out the GitHub repo and get started.

PWD: the Use Cases
With the traffic to the site and the feedback we’ve received, it’s fair to say that PWD has a lot of traction right now. Here are some of the most common use-cases:

Try new features quickly, as PWD is updated with the latest development versions.
Set up clusters in no time and launch replicated services.
Learn through its interactive tutorials: training.play-with-docker.com.
Give presentations at conferences and meetups.
Run advanced workshops that would usually require complex setups, such as Jérôme’s advanced Docker Orchestration workshop.
Collaborate with community members to diagnose and detect issues.

Get involved with PWD:

Contribute to PWD by submitting PRs
Contribute to the PWD training site



How Watson Health Cloud Deploys Applications with Kubernetes

Today’s post is by Sandhya Kapoor, Senior Technologist, Watson Platform for Health, IBM.

For more than a year, Watson Platform for Health at IBM deployed healthcare applications in virtual machines on our cloud platform. Because virtual machines had been a costly, heavyweight solution for us, we were interested in evaluating Kubernetes for our deployments. Our design was to set up the application and data containers in the same namespace, along with the required agents running as sidecars, to meet security and compliance requirements in the healthcare industry. I was able to run more processes on a single physical server than I could using a virtual machine, and running our applications in containers ensured optimal usage of system resources.

To orchestrate container deployment, we are using Armada infrastructure, a Kubernetes implementation by IBM for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure. With Kubernetes, our developers can rapidly develop highly available applications by leveraging the power and flexibility of containers, and with an integrated and secure volume service, we can store persistent data, share data between Kubernetes pods, and restore data when needed.

Here is a snapshot of Watson Care Manager, running inside a Kubernetes cluster:

Before deploying an app, a user must create a worker node cluster. I can create a cluster using the kubectl CLI commands or create it from a Bluemix dashboard. Our clusters consist of one or more physical or virtual machines, also known as worker nodes, that are loosely coupled, extensible, and centrally monitored and managed by the Kubernetes master. When we deploy a containerized app, the Kubernetes master decides where to deploy the app, taking into consideration the deployment requirements and available capacity in the cluster.

A user makes a request to Kubernetes to deploy the containers, specifying the number of replicas required for high availability. The Kubernetes scheduler decides where the pods (groups of one or more containers) will be scheduled and which worker nodes they will be deployed on, storing this information internally in Kubernetes and etcd. The deployment of pods in worker nodes is updated based on load at runtime, optimizing the placement of pods in the cluster. The kubelet running in each worker node regularly polls the kube API server; if there is new work to do, the kubelet pulls the configuration information and takes action, for example, spinning off a new pod.

Process Flow:
UCD – IBM UrbanCode Deploy is a tool for automating application deployments through your environments.
WH Cluster – Kubernetes worker node.

Usage of GitLab in the Process Flow: We stored all our artifacts in GitLab, which includes the Docker files that are required for creating the image, the YAML files needed to create a pod, and the configuration files to make the healthcare application run.

GitLab and Jenkins interaction in the Process Flow: We use Jenkins for continuous integration and build automation to create/pull/retag the Docker image and push the image to a Docker registry in the cloud. Basically, we have a Jenkins job configured to interact with the GitLab project to get the latest artifacts and, based on requirements, it will either create a new Docker image from scratch by pulling the needed intermediate images from the Docker/Bluemix repository or update the Docker image. After the image is created/updated, the Jenkins job pushes the image to a Bluemix repository to save the latest image to be pulled by the UrbanCode Deploy (UCD) component.

Jenkins and UCD interaction in the Process Flow: The Jenkins job is configured to use the UCD component and its respective application, application process, and the UCD environment to deploy the application. The Docker image version files that will be used by the UCD component are also passed via the Jenkins job to the UCD component.

Usage of UCD in the Process Flow: UCD is used for deployment, and the end-to-end deployment process is automated here. The UCD component process involves the following steps:

Download the required artifacts for deployment from GitLab.
Log in to Bluemix and set the KUBECONFIG based on the Kubernetes cluster used for creating the pods.
Create the application pod in the cluster using the kubectl create command.
If needed, run a rolling update to update the existing pod.

Deploying the application in Armada: Provision a cluster in Armada with <x> worker nodes. Create Kubernetes controllers for deploying the containers in worker nodes; the Armada infrastructure pulls the Docker images from the IBM Bluemix Docker registry to create containers. We tried deploying an application container and running a logmet agent (see Reading and displaying logs using logmet container, below) inside the containers that forwards the application logs to an IBM cloud logging service. As part of the process, YAML files are used to create a controller resource for UrbanCode Deploy (UCD). The UCD agent is deployed as a DaemonSet controller, which is used to connect to the UCD server. The whole process of deploying the application happens in UCD. To support the application for public access, we created a service resource to interact between pods and access container services. For storage support, we created persistent volume claims and mounted the volume for the containers.

UCD: IBM UrbanCode Deploy is a tool for automating application deployments through your environments.
Armada: Kubernetes implementation of IBM.
WH Docker Registry: Docker private image registry.
Common agent containers: We expect to configure our services to use the WHC mandatory agents. We deployed all of these agents in containers.

Reading and displaying logs using logmet container: Logmet is a cloud logging service that helps to collect, store, and analyze an application’s log data. It also aggregates application and environment logs for consolidated application or environment insights and forwards them. Metrics are transmitted with collectd. We chose a model that runs a logmet agent process inside the container. The agent takes care of forwarding the logs to the cloud logging service configured in containers. The application pod mounts the application logging directory to the storage space, which is created by a persistent volume claim, and stores the logs, which are not lost even when the pod dies. Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster.

Exposing services with Ingress: Ingress controllers are reverse proxies that expose services outside the cluster through URLs. They act as an external HTTP load balancer that uses a unique public entry point to route requests to the application. To expose our services outside the cluster, we used Ingress. In Armada, if we create a paid cluster, an Ingress controller is automatically installed for us to use. We were able to access services through Ingress by creating a YAML resource file that specifies the service path.

–Sandhya Kapoor, Senior Technologist, Watson Platform for Health, IBM

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for latest updates
Connect with the community on Slack
Get involved with the Kubernetes project on GitHub
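As a sketch, the replica-based deployment with sidecar agents described above corresponds to a Kubernetes manifest along these lines (names, images and the apiVersion are illustrative; IBM's actual YAML isn't shown in this post):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: watson-care-manager
  namespace: watson-health            # app and data containers share a namespace
spec:
  replicas: 3                         # replica count requested for high availability
  selector:
    matchLabels:
      app: watson-care-manager
  template:
    metadata:
      labels:
        app: watson-care-manager
    spec:
      containers:
        - name: app
          image: registry.example.com/watson/care-manager:1.0
        - name: logmet-agent          # sidecar forwarding logs to the logging service
          image: registry.example.com/watson/logmet-agent:1.0
```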
Quelle: kubernetes

Moby Summit alongside Open Source Summit North America

Since the Moby Project was introduced at DockerCon 2017 in Austin last April, the Moby community has been hard at work to further define the Moby Project, improve its components (runC, containerd, LinuxKit, InfraKit, SwarmKit, Libnetwork and Notary), and refine its processes and establish clear communication channels.
All project maintainers are developing these aspects in the open with the support of the community. Contributors are getting involved on GitHub, giving feedback on the Moby Project Discourse forum and asking questions on Slack. Special Interest Groups (SIGs) for the Moby Project components have been formed based on the Kubernetes model for Open Source collaboration. These SIGs ensure a high level of transparency and synchronization between project maintainers and a community of heterogeneous contributors.
In addition to these online channels and meetings, the Moby community hosts regular meetups and summits. Check out the videos and slides from the last DockerCon Moby May Summit and June Moby Summit to catch up on the latest project updates. The Moby Summit page on the Moby website contains the agenda and registration link for the next Moby Summit, as well as recaps of previous summits.
The next Moby Summit will take place on September 14, 2017 in Los Angeles as part of the Open Source Summit North America. Following the success of the previous editions, we’ll keep the same format which consists of short technical talks / demos in the morning and Birds-of-a-Feather in the afternoon. We’re actively looking for people who can talk about their Moby Project use cases. Don’t hesitate to reach out to community@mobyproject.org if you’d like to give a talk or would like to cover a specific topic during the BoF sessions, or contribute to the agenda by sending a pull request to the Moby website repository.
 
Register for Moby Summit LA
 
  Learn more about the Moby Project:

Visit www.mobyproject.org
Join the #Moby-project channel on Slack
Check out the upcoming events in the Moby Community Calendar
Join the conversation on GitHub and Discourse

  



Securing the AtSea App with Docker Secrets

Passing application configuration information as environment variables was once considered best practice in 12-factor applications. However, this practice can expose information in logs, makes it difficult to track how and when information is exposed, and allows third-party applications to access that information. Instead of environment variables, Docker implements secrets to manage configuration and confidential information.
Secrets are a way to keep information such as passwords and credentials secure in Docker CE or EE with swarm mode. Docker manages secrets and securely transmits them to only those nodes in the swarm that need access to them. Secrets are encrypted in transit and at rest in a Docker swarm. A secret is only accessible to those services which have been granted explicit access to it, and only while those service tasks are running.
The AtSea Shop is an example storefront application that can be deployed on different operating systems and can be customized to both your enterprise development and operational environments. The previous post showed how to use multi-stage builds to create small and efficient images. In this post, I’ll demonstrate how secrets are implemented in the application.
Creating Secrets
Secrets can be created using the command line or with a Compose file. The AtSea application uses nginx as a reverse proxy secured with HTTPS. To accomplish this, I created a self-signed x509 certificate.
mkdir certs
openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
I then created secrets using the domain key and certificate for nginx.
docker secret create revprox_cert certs/domain.crt
docker secret create revprox_key certs/domain.key
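Outside of a Compose file, the same secrets can also be attached when creating a service directly (service and image names here are illustrative):

```shell
docker service create --name reverse-proxy \
  --secret revprox_cert \
  --secret revprox_key \
  -p 443:443 \
  my-org/nginx-proxy
```

Each secret is then available inside the service’s containers under /run/secrets/.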
I also used secrets to hold the PostgreSQL database password and a token for the payment gateway by making files that contained the password and token. For example, the postgres_password file contains the password ‘gordonpass’. In the compose file, I added the secrets:
secrets:
  postgres_password:
    file: ./devsecrets/postgres_password
  payment_token:
    file: ./devsecrets/payment_token
I then set the database password secret:
  database:
    build:
      context: ./database
    image: atsea_db
    environment:
      POSTGRES_USER: gordonuser
      POSTGRES_DB_PASSWORD_FILE: /run/secrets/postgres_password
      POSTGRES_DB: atsea
    ports:
      - "5432:5432"
    networks:
      - back-tier
    secrets:
      - postgres_password
and I make the postgres_password secret available to the application server.
  appserver:
    build:
      context: .
      dockerfile: app/Dockerfile
    image: atsea_app
    ports:
      - "8080:8080"
      - "5005:5005"
    networks:
      - front-tier
      - back-tier
    secrets:
      - postgres_password
As you can see, you can set secrets at the command line and programmatically in a Compose file.
Docker Enterprise Edition (formerly known as Docker Datacenter) fully incorporates secrets management through the creation, update and removal of secrets. In addition, Docker EE supports authorization, rotation and auditing of secrets. Creating a secret in Docker Enterprise Edition is accomplished by clicking on the Resources tab and then the Secrets menu item.

Create the secret by entering the name and the value and clicking Create. In this example, I’m using the secret for the PostgreSQL password in the AtSea application.
 

Using Secrets
In order to use the Secret containing the certificate for nginx, I configured the nginx.conf file to point at the secret in the nginx container.
server {
  listen 443;
       ssl on;
       ssl_certificate /run/secrets/revprox_cert;
       ssl_certificate_key /run/secrets/revprox_key;
       server_name atseashop.com;
       access_log /dev/stdout;
       error_log /dev/stderr;

       location / {
           proxy_pass http://appserver:8080;
       }
   }

The AtSea application uses the postgres_password secret to connect to the database. This is done by reading the secret from the container and setting it to Spring-Boot’s DataSourceProperties class in the JpaConfiguration.java file.
// Set password to connect to postgres using Docker secrets.
try (BufferedReader br = new BufferedReader(new FileReader("/run/secrets/postgres_password"))) {
    StringBuilder sb = new StringBuilder();
    String line = br.readLine();

    while (line != null) {
        sb.append(line);
        sb.append(System.lineSeparator());
        line = br.readLine();
    }
    dataSourceProperties.setDataPassword(sb.toString());
} catch (IOException e) {
    System.err.println("Could not successfully load DB password file");
}
return dataSourceProperties;
}

Learn more about Docker secrets:

Documentation
Command line
Docker Enterprise Edition Secrets
Play with Docker Secrets hands-on lab
Docker Captain Alex Ellis’ Docker Secrets in Action
Why you shouldn’t use ENV variables for secret data



Docker for the SysAdmin Webinar Q&A

On June 27th I presented a webinar on “Docker for the SysAdmin”.  The webinar was driven by a common scenario I’m seeing: A sysadmin is sitting at her desk minding her own business when a developer walks in and says “here’s the new app, it’s in a Docker image. Please deploy it ASAP”. This session is designed to help provide some guidance on how sysadmins should think about managing Dockerized applications in production.
In any case, I was a bit long-winded (as usual), and didn’t have time to get to all the Q&A (and there was a lot).

So, as promised, here are all the questions from that session, along with my answers.  If you need more info, hit me up on Twitter: @mikegcoleman
————
Q: I am planning an application deployment and want to use Docker. What cloud would you recommend at the moment? I have GCP, Azure, AWS under my belt. 1) TCO 2) Performance ?
A: Answering that would require me to understand your application on a pretty deep level, so I can’t really provide a specific response. I will say that if you choose one cloud provider today, and realize that you’d like to change course down the road, Docker makes that much simpler since your Dockerized workloads will move easily between different cloud providers. So, figure out what your technical and business drivers are, choose the best provider based on those, and if you need to adjust later you’ll be in good shape.
Q: What’s the max size of a container?
A: There is no maximum size per se. Containers can use all the resources of a given node (physical or virtual) if you want them to. However, if you don’t, you can set both minimum and maximum values for CPU and memory.
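For example, using the standard resource flags (the image here is arbitrary):

```shell
# Cap a container at 1.5 CPUs and 2 GB of memory, reserving 1 GB as a minimum:
docker container run -d --cpus="1.5" --memory="2g" --memory-reservation="1g" nginx
```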
Q: Is it possible to run an Ubuntu container in a Windows host running Docker Engine?
A: Natively, no. You can always run a Linux VM on a Windows host to run Linux-based containers. At DockerCon Microsoft announced that they will be bringing native Linux containers to Windows in the future, so stay tuned for more information on that.
Q: Can DDC now run both Linux & Windows workloads? If not yet, then is this in the roadmap of the tool?
A: Docker swarm mode can manage Linux and Windows workloads in the same cluster today. This functionality will be coming to Docker Enterprise Edition / Docker Data Center in the very near future.
Q: Does Docker have a tool for scanning images similar to Black Duck?
A: Yes. Docker Enterprise Edition Advanced includes Docker Security Scanning. This feature allows you to instruct Docker Trusted Registry to scan images for known vulnerabilities and exploits.
Q: Is a hypervisor still recommended to allow the hosts to be clustered, or is that not truly needed? Can I cluster using something more native to Docker (Swarm perhaps)?
A: Whether or not you want to run containers on bare metal or in a VM is a decision you should make based on several factors. There is no cut and dried answer. You need to look at factors such as costs, performance, leveraging existing skillsets, disaster recovery, etc – and then decide what makes the most sense.  Regardless, you can build swarm mode clusters that include both physical and virtual machines.
Q: Is the secure communication between the hosts TLS 1.2?
A: Yes, TLS 1.2.
Q: I have to start testing DDC. Is there a test version? Do Docker for Azure / AWS use DDC under the hood?
A: Yes, you can get a 30 day trial of Docker Enterprise Edition from the Docker Store. Docker for Azure and Docker for AWS can deploy DDC (it’s not really under the hood as DDC is installed onto the AWS or Azure infrastructure).
Q: Is the Visualizer part of Docker Datacenter?
A: No, it’s a demo app that you can grab from our Docker Samples GitHub. 
Q: When a node stops and a workload is moved, does the storage move with it?
A: At this time volumes do not follow containers when they are migrated. However, there are a number of 3rd party plug-ins that can help with this scenario.
Q: Is there way to update the base image, which is used to build the application?
A: You would need to rebuild those applications once the base image is updated.
Q: If the client wants the setup in their data center to have no connectivity, how should DDC be set up? How does DTR get the updates for the images? And how do we install DDC?  
A: For an air gapped installation, follow these instructions. Additionally, you can load the security scanning database for Docker Trusted Registry from a file.
Q: How do you use Chef/Puppet with Docker to manage the images?
A: I would actually advocate for integrating Dockerfiles into your existing source code management practices vs. trying to use any config management tool to manage images.
Helpful links to get started

Learn more about Docker Enterprise Edition
Play with Docker or try a self-paced Hands-on Lab
Watch this session or other DockerCon 2017 sessions
Try Docker Enterprise Edition for free



What’s new in Docker 17.06 Community Edition (CE)

Docker 17.06 CE (Community Edition) is the first version of Docker built entirely on the Moby Project. New features include Multi-Stage Build, new Networking features, a new metrics endpoint and more! In this Online Meetup, Sophia Parafina, Docker Developer Relations Engineer, demo’d and reviewed these new features. Check out the recording below and slides.

Learn More about Docker 17.06 CE
Check out the announcement blog post or watch the video summary below.

To find out more about these features and more:

Download the latest version of Docker CE
Check out the Docker Documentation
Play with these features on Play with Docker
Ask questions in our forums and in the Docker Community Slack

 


The post What’s new in Docker 17.06 Community Edition (CE) appeared first on Docker Blog.

Multi-Stage Builds

This is part of a series of articles describing how the AtSea Shop application was built using enterprise development tools and Docker. In the previous post, I introduced the AtSea application and how I developed a REST application with the Eclipse IDE and Docker. Multi-stage builds, a Docker feature introduced in Docker 17.06 CE, let you orchestrate a complex build in a single Dockerfile. Before multi-stage builds, Docker users would use a script to compile applications on the host machine, then use Dockerfiles to build the images. The AtSea application is the perfect use case for a multi-stage build because:

it uses node.js to compile the ReactJs app into storefront
it uses Spring Boot and Maven to make a standalone jar file
it is deployed to a standalone JDK container
the storefront is then included in the jar

Let’s look at the Dockerfile.
The react-app is an extension of create-react-app. From within the react-app directory we run AtSea’s frontend in local development mode.
The first stage of the build uses a Node base image to create a production-ready frontend build directory consisting of static JavaScript and CSS files. A Docker best practice is to name stages, e.g. "FROM node:latest AS storefront".
This step first sets our image’s working directory to /usr/src/atsea/app/react-app. We copy the contents of the react-app directory, which includes the ReactJs source and package.json file, to the root of the image’s working directory. Then we use npm to install all of the react-app’s node dependencies. Finally, npm run build bundles the ReactJs source and its node dependencies into a build directory at the root.
FROM node:latest AS storefront
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
RUN npm install
RUN npm run build
Once this build stage is complete, the builder has an intermediate image named storefront. This temporary image will not show up in your list of images from docker image ls, yet the builder can access and choose artifacts from this stage in later stages of the build.
To compile the AtSea REST application, we use a maven image and copy the pom.xml file, which maven uses to install the dependencies. We copy the source files to the image and run maven again to build the AtSea jar file using the package command. This creates another intermediate image called appserver.
FROM maven:latest AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
RUN mvn -B -f pom.xml -s /usr/share/maven/ref/settings-docker.xml dependency:resolve
COPY . .
RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package -DskipTests
Putting it all together, we use a java image to build the final Docker image. The build directory in storefront, created during the first build stage, is copied to the /static directory, defined as an external directory in the AtSea REST application. We are choosing to leave behind all those node modules :).
We copy the AtSea jar file to the java image and set ENTRYPOINT to start the application and set the profile to use a PostgreSQL database. The final image is compact since it only contains the compiled applications in the JDK base image.
FROM java:8-jdk-alpine
WORKDIR /static
COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
WORKDIR /app
COPY --from=appserver /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
CMD ["--spring.profiles.active=postgres"]
This step uses the COPY --from instruction to copy files from the intermediate images. Multi-stage builds can also use stage offsets instead of named stages, e.g. "COPY --from=0 /usr/src/atsea/app/react-app/build/ ."
Multi-stage builds facilitate the creation of small and significantly more efficient images, since the final image can be free of any build tools. External scripts are no longer needed to orchestrate a build. Instead, an application image is built and started using docker-compose up --build, and a stack is deployed using docker stack deploy -c docker-stack.yml.
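For reference, a docker-stack.yml for a two-service app like this might contain something on the order of the following; the service and image names here are illustrative, not the actual AtSea file:

```yaml
version: "3.1"
services:
  appserver:
    image: atsea_app
    ports:
      - "8080:8080"
  database:
    image: postgres:9.6
```

Running docker stack deploy -c docker-stack.yml atsea would then schedule both services on the swarm.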
Multi-Stage Builds in Docker Cloud
Docker Cloud now supports multi-stage builds for automated builds. Linking the Github repository to Docker Cloud ensures that your images will always be current. To enable automated builds, tag and push your image to your Docker Cloud repository.
docker tag atsea_app <your username>/atsea_app
docker push <your username>/atsea_app
Log into your Docker Cloud account.

Next, connect your Github account to give Docker Cloud access to the source code. Click on Cloud Settings, then click on Sources and the plug icon. Follow the directions to connect your Github account.

After your Github account is connected, click on Repositories on the side menu and then click your atsea_app repository.

Click on Builds, then click on Configure Automated Builds on the following screen.

In the Build Configurations form:

set the Source Repository to the Github account and repository
set the Build Location; we’ll use Docker Cloud with a medium node
set the Docker Version to Edge 17.05 CE, which supports multi-stage builds
leave Autotest off
create a Build Rule that specifies the dockerfile in the app directory of the repository

Click on Save and Build to build the image.

Docker Cloud will notify you if the build was successful.

For more information on multi-stage builds, read the documentation and Docker Captain Alex Ellis’ Builder pattern vs. Multi-stage builds in Docker. To build compact and efficient images, watch Abby Fuller’s DockerCon 2017 presentation, Creating Effective Images, and check out her slides.
Interested in more? Check out these developer resources and videos from DockerCon 2017.

AtSea Shop demo
Docker Reference Architecture: Development Pipeline Best Practices Using Docker EE
Automated Builds in Docker Cloud
Docker Labs

Developer Tools
Java development using docker

DockerCon videos

Docker for Java Developers
The Rise of Cloud Development with Docker & Eclipse Che
All the New Goodness of Docker Compose
Docker for Devs


The post Multi-Stage Builds appeared first on Docker Blog.

Kubernetes 1.7: Security Hardening, Stateful Application Updates and Extensibility

Today we’re announcing Kubernetes 1.7, a milestone release that adds security, storage and extensibility features motivated by widespread production use of Kubernetes in the most demanding enterprise environments. At a glance, security enhancements in this release include encrypted secrets, network policy for pod-to-pod communication, a node authorizer to limit kubelet access, and client/server TLS certificate rotation. For those of you running scale-out databases on Kubernetes, this release has a major feature that adds automated updates to StatefulSets and enhances updates for DaemonSets. We are also announcing alpha support for local storage and a burst mode for scaling StatefulSets faster. Also, for power users, API aggregation in this release allows user-provided apiservers to be served along with the rest of the Kubernetes API at runtime. Additional highlights include support for extensible admission controllers, pluggable cloud providers, and Container Runtime Interface (CRI) enhancements.

What’s New

Security:

The Network Policy API is promoted to stable. Network policy, implemented through a network plug-in, allows users to set and enforce rules governing which pods can communicate with each other.
Node authorizer and an admission control plugin are new additions that restrict the kubelet’s access to secrets, pods and other objects based on its node.
Encryption for Secrets, and other resources in etcd, is now available as alpha.
Kubelet TLS bootstrapping now supports client and server certificate rotation.
Audit logs stored by the API server are now more customizable and extensible, with support for event filtering and webhooks. They also provide richer data for system audit.

Stateful workloads:

StatefulSet Updates is a new beta feature in 1.7, allowing automated updates of stateful applications such as Kafka, ZooKeeper and etcd, using a range of update strategies including rolling updates.
StatefulSets also now support faster scaling and startup for applications that do not require ordering, through Pod Management Policy. This can be a major performance improvement.
Local Storage (alpha) was one of the most frequently requested features for stateful applications. Users can now access local storage volumes through the standard PVC/PV interface and via StorageClasses in StatefulSets.
DaemonSets, which create one pod per node, already have an update feature, and in 1.7 gain smart rollback and history capability.
A new StorageOS volume plugin provides highly-available, cluster-wide persistent volumes from local or attached node storage.

Extensibility:

API aggregation at runtime is the most powerful extensibility feature in this release, allowing power users to add Kubernetes-style pre-built, 3rd-party or user-created APIs to their cluster.
The Container Runtime Interface (CRI) has been enhanced with new RPC calls to retrieve container metrics from the runtime. Validation tests for the CRI have been published, and alpha integration with containerd 1.0, which supports basic pod lifecycle and image management, is now available. Read our previous in-depth post introducing CRI.

Additional Features:

Alpha support for external admission controllers is introduced, providing two options for adding custom business logic to the API server for modifying objects as they are created and validating policy.
Policy-based Federated Resource Placement is introduced as alpha, providing placement policies for federated clusters based on custom requirements such as regulation, pricing or performance.

Deprecation:

Third Party Resource (TPR) has been replaced with Custom Resource Definitions (CRD), which provide a cleaner API and resolve issues and corner cases that were raised during the beta period of TPR. If you use the TPR beta feature, you are encouraged to migrate, as it is slated for removal by the community in Kubernetes 1.9.

The above are a subset of the feature highlights in Kubernetes 1.7. For a complete list please visit the release notes.

Adoption

This release is possible thanks to our vast and open community. Together, we’ve already pushed more than 50,000 commits in just three years, and that’s only in the main Kubernetes repo. Additional extensions to Kubernetes are contributed in associated repos, bringing overall stability to the project. This velocity makes Kubernetes one of the fastest growing open source projects ever. Kubernetes adoption has been coming from every sector across the world. Recent user stories from the community include:

GolfNow, a member of the NBC Sports Group, migrated their application to Kubernetes, giving them better resource utilization and slashing their infrastructure costs in half.
Bitmovin, a provider of video infrastructure solutions, showed us how they’re using Kubernetes to do multi-stage canary deployments in the cloud and on-prem.
Ocado, the world’s largest online supermarket, uses Kubernetes to create a distributed data center for their smart warehouses. Read about their full setup here.

Is Kubernetes helping your team? Share your story with the community. See our growing resource of user case studies and learn from great companies like Box that have adopted Kubernetes in their organization. Huge kudos and thanks go out to the Kubernetes 1.7 release team, led by Dawn Chen of Google.

Availability

Kubernetes 1.7 is available for download on GitHub. To get started with Kubernetes, try one of these interactive tutorials.

Get Involved

Join the community at CloudNativeCon + KubeCon in Austin Dec. 6-8 for the largest Kubernetes gathering ever. Speaking submissions are open till August 21 and discounted registration ends October 6. The simplest way to get involved is joining one of the many Special Interest Groups (SIGs) that align with your interests. Have something you’d like to broadcast to the Kubernetes community? Share your voice at our weekly community meeting, and through these channels:

Post questions (or answer questions) on Stack Overflow
Join the community portal for advocates on K8sPort
Follow us on Twitter @Kubernetesio for the latest updates
Connect with the community on Slack
Share your Kubernetes story

Many thanks to our vast community of contributors and supporters for making this and all releases possible.

– Aparna Sinha, Group Product Manager, Kubernetes, Google and Ihor Dvoretskyi, Program Manager, Kubernetes, Mirantis
Source: kubernetes

Announcing Docker 17.06 Community Edition (CE)

Today we released Docker CE 17.06 with new features, improvements, and bug fixes. Docker CE 17.06 is the first Docker version built entirely on the Moby Project, which we announced in April at DockerCon. You can see the complete list of changes in the changelog, but let’s take a look at some of the new features.
We also created a video version of this post here:

Multi-stage builds
The biggest feature in 17.06 CE is that multi-stage builds, announced in April at DockerCon, have come to the stable release. Multi-stage builds allow you to build cleaner, smaller Docker images using a single Dockerfile.
Multi-stage builds work by building intermediate images that produce an output. That way you can compile code in an intermediate image and use only the output in the final image. So for instance, Java developers commonly use Apache Maven to compile their apps, but Maven isn’t required to run their app. Multi-stage builds can result in substantial image size savings:

REPOSITORY    TAG             IMAGE ID       CREATED        SIZE
maven         latest          66091267e43d   2 weeks ago    620MB
java          8-jdk-alpine    3fd9dd82815c   3 months ago   145MB
Let’s take a look at our AtSea sample app which creates a sample storefront application.

AtSea uses a multi-stage build with two intermediate stages: a node.js base image to build a ReactJS app and a Maven base image to compile a Spring Boot app, combined into a single final image.
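The full Dockerfile is walked through stage by stage in the Multi-Stage Builds post above; condensed, it looks like this:

```dockerfile
# Stage 1: build the ReactJS storefront with node
FROM node:latest AS storefront
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
RUN npm install
RUN npm run build

# Stage 2: compile the Spring Boot jar with Maven
FROM maven:latest AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
RUN mvn -B -f pom.xml -s /usr/share/maven/ref/settings-docker.xml dependency:resolve
COPY . .
RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package -DskipTests

# Final stage: only the build outputs land in the JDK image
FROM java:8-jdk-alpine
WORKDIR /static
COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
WORKDIR /app
COPY --from=appserver /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
CMD ["--spring.profiles.active=postgres"]
```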

The final image is only 209MB, and doesn’t have Maven or node.js.
There are other builder improvements as well, including the --build-arg flag on docker build, which lets you set build-time variables. The ARG instruction lets Dockerfile authors define values that users can set at build time using the --build-arg flag.
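As a sketch of how the two fit together (the variable name APP_VERSION is made up for illustration):

```dockerfile
FROM alpine:3.6
# Default value, overridable at build time with --build-arg
ARG APP_VERSION=dev
# Bake the chosen value into the image metadata
LABEL version=${APP_VERSION}
```

Building with docker build --build-arg APP_VERSION=1.2.3 . replaces the dev default for that build.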
Logs and Metrics
 
Metrics
We currently support metrics through an API endpoint in the daemon. You can now expose docker’s /metrics endpoint to plugins.

$ docker plugin install --grant-all-permissions cpuguy83/docker-metrics-plugin-test:latest

$ curl http://127.0.0.1:19393/metrics

This plugin is for example only. It runs a reverse proxy on the host’s network which forwards requests to the local metrics socket in the plugin. In real scenarios you would likely either push the collected metrics to an external service or make the metrics available for collection by a service such as Prometheus.
Note that while metrics plugins are available on non-experimental daemons, the metric labels are still considered experimental and may change in future versions of Docker.
 
Log Driver Plugins
We have added support for log driver plugins.
 
Service logs
Docker service logs has moved out of the Edge release and into Stable, so you can easily get consolidated logs for an entire service running on a Swarm. We’ve added an endpoint for logs from individual tasks within a service as well.
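For example, assuming a service named web (the name is illustrative):

```shell
# Consolidated logs from every task in the service
docker service logs web

# Logs for a single task, using an ID from `docker service ps web`
docker service logs <task-id>
```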

Networking
 
Node-local network support for Services
Docker supports a variety of networking options. With Docker 17.06 CE, you can now attach services to node-local networks. This includes networks like Host, Macvlan, IPVlan, Bridge, and local-scope plugins. For instance, for a Macvlan network you can create node-specific network configurations on the worker nodes and then create a network on a manager node that brings in those configurations:
[Wrk-node1]$ docker network create --config-only --subnet=10.1.0.0/16 local-config

[Wrk-node2]$ docker network create --config-only --subnet=10.2.0.0/16 local-config

[Mgr-node2]$ docker network create --scope=swarm --config-from=local-config -d macvlan mynet

[Mgr-node2]$ docker service create --network=mynet my_new_service

Swarm mode
We have a number of new features in swarm mode. Here’s just a few of them:

Configuration Objects
We’ve created a new configuration object for swarm mode that allows you to securely pass along configuration information in the same way you pass along secrets.
$ echo "This is a config" | docker config create test_config -

$ docker service create --name=my-srv --config=test_config …

$ docker exec -it 37d7cfdff6d5 cat test_config

This is a config

Certificate Rotation Improvements
The swarm mode public key infrastructure (PKI) system built into Docker makes it simple to securely deploy a container orchestration system. The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications between themselves and other nodes in the swarm. Since this relies on certificates, it’s important to rotate those frequently. Since swarm mode launched with Docker 1.12, you’ve been able to schedule certificate rotation as frequently as every hour. With Docker CE 17.06 we’ve added the ability to immediately force certificate rotation on a one-time basis.
docker swarm ca --rotate
Swarm Mode Events
You can use docker events to get real-time event information from Docker. This is really useful when writing automation and monitoring applications that work with Docker. But until Docker CE 17.06 there was no support for events from swarm mode. Now docker events will return information on services, nodes, networks, and secrets.

Dedicated Datapath
The new --datapath-addr flag on docker swarm init allows you to isolate swarm mode management traffic from the data passed around by the application. That helps protect the cluster from IO-greedy applications. For instance, if you initiate your cluster:
docker swarm init --advertise-addr=eth0 --datapath-addr=eth1
Cluster management traffic (Raft, grpc & gossip) will travel over eth0 and services will communicate with each other over eth1.

Desktop Editions
We’ve got three new features in Docker for Mac and Windows.

GUI option to reset Docker data without losing all settings
Now you can reset your data without resetting your settings.

Add an experimental DNS name for the host
If you’re running containers on Docker for Mac or Docker for Windows, and you want to access other containers’ published ports, you can use a new experimental DNS name for the host: docker.for.mac.localhost and docker.for.win.localhost. For instance:
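A minimal sketch, assuming something on the host is listening on port 8000 (the port is arbitrary):

```shell
# From inside a container on Docker for Mac, reach the host's port 8000
curl http://docker.for.mac.localhost:8000
# On Docker for Windows the equivalent would be:
# curl http://docker.for.win.localhost:8000
```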

Login certificates for authenticating registry access
You can now add certificates to Docker for Mac and Docker for Windows that allow you to access registries, not just your username and password. This will make accessing Docker Trusted Registry, as well as the open source Registry and any other registry application fast and easy.

Cloud Editions
 
Our Cloudstor volume plugin is available on both Docker for AWS and Docker for Azure. In Docker for AWS, support for persistent volumes (both global EFS-based and attachable EBS-based) is now available in Stable, and we support EBS volumes across Availability Zones.
For Docker for Azure, we now support deploying to Azure Gov. Support for persistent volumes through Cloudstor backed by Azure File Storage is now available in Stable for both Azure Public and Azure Gov.
 
Deprecated
 
In the dockerd command line, we long ago deprecated the --api-enable-cors flag in favor of --api-cors-header. We’re now removing --api-enable-cors entirely.
Ubuntu 12.04 “precise pangolin” has reached end of life, so it is no longer a supported OS for Docker. Later versions of Ubuntu are still supported.
 
What’s next
 
To find out more about these features and more:

Download the latest version of Docker CE
Check out the Docker Documentation
Play with these features on Play with Docker
Ask questions in our forums and in the Docker Community Slack
RSVP for the CE 17.06 Online Meetup on June 28th

The post Announcing Docker 17.06 Community Edition (CE) appeared first on Docker Blog.