Kubernetes 1.4: Making it easy to run on Kubernetes anywhere

Today we’re happy to announce the release of Kubernetes 1.4.

Since the release to general availability just over 15 months ago, Kubernetes has continued to grow and achieve broad adoption across the industry. From brand new startups to large-scale businesses, users have described how big a difference Kubernetes has made in building, deploying and managing distributed applications. However, one of our top user requests has been making Kubernetes itself easier to install and use. We’ve taken that feedback to heart, and 1.4 has several major improvements.

These setup and usability enhancements are the result of concerted, coordinated work across the community – more than 20 contributors from SIG-Cluster-Lifecycle came together to greatly simplify the Kubernetes user experience, covering improvements to installation, startup, certificate generation, discovery, networking, and application deployment.

Additional product highlights in this release include simplified cluster deployment on any cloud, easy installation of stateful apps, and greatly expanded Cluster Federation capabilities, enabling a straightforward deployment across multiple clusters, and multiple clouds.

What’s new:

Cluster creation with two commands – To get started with Kubernetes a user must provision nodes, install Kubernetes and bootstrap the cluster. A common request from users is to have an easy, portable way to do this on any cloud (public, private, or bare metal). Kubernetes 1.4 introduces ‘kubeadm’ which reduces bootstrapping to two commands, with no complex scripts involved. Cluster setup is now transparent, reliable and customizable without having to reverse engineer opaque pre-built solutions. Installation is also streamlined by packaging Kubernetes with its dependencies, for most major Linux distributions including Red Hat and Ubuntu Xenial.
This means users can now install Kubernetes using familiar tools such as apt-get and yum. Add-on deployments, such as for an overlay network, can be reduced to one command by using a DaemonSet. Enabling this simplicity is a new certificates API and its use for kubelet TLS bootstrap, as well as a new discovery API.

Expanded stateful application support – While cloud-native applications are built to run in containers, many existing applications need additional features to make it easy to adopt containers. Most commonly, these include stateful applications such as batch processing, databases and key-value stores. In Kubernetes 1.4, we have introduced a number of features simplifying the deployment of such applications, including:

ScheduledJob is introduced as Alpha so users can run batch jobs at regular intervals.
Init-containers are Beta, addressing the need to run one or more containers before starting the main application, for example to sequence dependencies when starting a database or multi-tier app.
Dynamic PVC Provisioning moved to Beta. This feature now enables cluster administrators to expose multiple storage provisioners and allows users to select them using a new Storage Class API object.
Curated and pre-tested Helm charts for common stateful applications such as MariaDB, MySQL and Jenkins will be available for one-command launches using version 2 of the Helm Package Manager.

Cluster federation API additions – One of the most requested capabilities from our global customers has been the ability to build applications with clusters that span regions and clouds.

Federated Replica Sets Beta – replicas can now span some or all clusters enabling cross region or cross cloud replication.
The total federated replica count and relative cluster weights / replica counts are continually reconciled by a federated replica-set controller to ensure you have the pods you need in each region / cloud.
Federated Services are now Beta, and secrets, events and namespaces have also been added to the federation API.
Federated Ingress Alpha – starting with Google Cloud Platform (GCP), users can create a single L7 globally load balanced VIP that spans services deployed across a federation of clusters within GCP. With Federated Ingress in GCP, external clients point to a single IP address and are sent to the closest cluster with usable capacity in any region or zone of the federation.

Container security support – Administrators of multi-tenant clusters require the ability to provide varying sets of permissions among tenants, infrastructure components, and end users of the system.

Pod Security Policy is a new object that enables cluster administrators to control the creation and validation of security contexts for pods and containers. Admins can associate service accounts, groups, and users with a set of constraints to define a security context.
AppArmor support is added, enabling admins to run a more secure deployment, and provide better auditing and monitoring of their systems. Users can configure a container to run in an AppArmor profile by setting a single field.

Infrastructure enhancements – We continue adding to the scheduler, storage and client capabilities in Kubernetes based on user and ecosystem needs.

Scheduler – introducing inter-pod affinity and anti-affinity Alpha for users who want to customize how Kubernetes co-locates or spreads their pods.
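As an illustration, the alpha inter-pod affinity API in 1.4 was expressed through an annotation on the pod rather than first-class fields. A sketch of an anti-affinity rule that spreads replicas of an app across nodes might look like the following (the annotation key and JSON structure follow the alpha API of that era; labels and names here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
  annotations:
    # Alpha syntax: affinity is passed as JSON in an annotation
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "podAntiAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": [{
            "labelSelector": {
              "matchLabels": { "app": "web" }
            },
            "topologyKey": "kubernetes.io/hostname"
          }]
        }
      }
spec:
  containers:
  - name: web
    image: nginx
```

The topologyKey controls the spreading domain: kubernetes.io/hostname spreads pods across nodes, while a zone label would spread them across failure zones.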
Also priority scheduling capability for cluster add-ons such as DNS, Heapster, and the Kube Dashboard.

Disruption SLOs – Pod Disruption Budget is introduced to limit the impact of pods deleted by cluster management operations (such as node upgrade) at any one time.
Storage – New volume plugins for Quobyte and Azure Data Disk have been added.
Clients – Swagger 2.0 support is added, enabling non-Go clients.
Kubernetes Dashboard UI – lastly, a great-looking Kubernetes Dashboard UI with 90% CLI parity for at-a-glance management.

For a complete list of updates see the release notes on GitHub. Apart from features, the most impressive aspect of Kubernetes development is the community of contributors. This is particularly true of the 1.4 release, the full breadth of which will unfold in upcoming weeks.

Availability

Kubernetes 1.4 is available for download at get.k8s.io and via the open source repository hosted on GitHub. To get started with Kubernetes, try the Hello World app. To get involved with the project, join the weekly community meeting or start contributing to the project here (marked help).

Users and Case Studies

Over the past fifteen months since the Kubernetes 1.0 GA release, the adoption and enthusiasm for this project has surpassed everyone’s imagination. Kubernetes runs in production at hundreds of organizations, and thousands more are in development. Here are a few unique highlights of companies running Kubernetes:

Box — accelerated their time to launch a service from six months to less than a week. Read more on how Box runs mission critical production services on Kubernetes.
Pearson — minimized complexity and increased their engineer productivity. Read how Pearson is using Kubernetes to reinvent the world’s largest educational company.
OpenAI — a non-profit artificial intelligence research company, built infrastructure for deep learning with Kubernetes to maximize productivity for researchers, allowing them to focus on the science.

We’re very grateful to our community of over 900 contributors who contributed more than 5,000 commits to make this release possible. To get a closer look at how the community is using Kubernetes, join us at the user conference KubeCon to hear directly from users and contributors.

Connect

Post questions (or answer questions) on Stack Overflow
Connect with the community on Slack
Follow us on Twitter @Kubernetesio for latest updates

Thank you for your support!

— Aparna Sinha, Product Manager, Google
Quelle: kubernetes

Build and run your first Docker Windows Server container

Today, Microsoft announced the general availability of Server 2016, and with it, Docker engine running containers natively on Windows. This blog post describes how to get set up to run Docker Windows Containers on Windows 10 or using a Windows Server 2016 VM. Check out the companion blog posts on the technical improvements that have made Docker containers on Windows possible and the post announcing the Docker Inc. and Microsoft partnership.
Before getting started, it’s important to understand that Windows Containers run Windows executables compiled for the Windows Server kernel and userland (either windowsservercore or nanoserver). To build and run Windows containers, you have to have a Windows system with container support.
Windows 10 with Anniversary Update
For developers, Windows 10 is a great place to run Docker Windows containers, and containerization support was added to the Windows 10 kernel with the Anniversary Update (note that container images can only be based on Windows Server Core and Nanoserver, not Windows 10). All that’s missing is the Windows-native Docker Engine and some image base layers.
The simplest way to get a Windows Docker Engine is by installing the Docker for Windows public beta (direct download link). Docker for Windows used to only setup a Linux-based Docker development environment (slightly confusing, we know), but the public beta version now sets up both Linux and Windows Docker development environments, and we’re working on improving Windows container support and Linux/Windows container interoperability.
With the public beta installed, the Docker for Windows tray icon has an option to switch between Linux and Windows container development. For details on this new feature, check out Stefan Scherer’s blog post.
Switch to Windows containers and skip the next section.

Windows Server 2016
Windows Server 2016 is where Docker Windows containers should be deployed for production. For developers planning to do lots of Docker Windows container development, it may be worth setting up a Windows Server 2016 dev system (in a VM, for example), at least until Windows 10 and Docker for Windows support for Windows containers matures.
For Microsoft Ignite 2016 conference attendees, USB flash drives with Windows Server 2016 preloaded are available at the expo. Not at Ignite? Download a free evaluation version and install it on bare metal or in a VM running on Hyper-V, VirtualBox or similar. Running a VM with Windows Server 2016 is also a great way to do Docker Windows container development on macOS and older Windows versions.
Once Windows Server 2016 is running, log in and install the Windows-native Docker Engine directly (that is, not using “Docker for Windows”). Run the following in an Administrative PowerShell prompt:
# Add the containers feature and restart
Install-WindowsFeature containers
Restart-Computer -Force

# Download, install and configure Docker Engine
Invoke-WebRequest "https://download.docker.com/components/engine/windows-server/cs-1.12/docker.zip" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles

# For quick use, does not require shell to be restarted.
$env:path += ";C:\Program Files\Docker"

# For persistent use, will apply even after a reboot.
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

# You have to start a new PowerShell prompt at this point
dockerd --register-service
Start-Service docker
Docker Engine is now running as a Windows service, listening on the default Docker named pipe. For development VMs running (for example) in a Hyper-V VM on Windows 10, it might be advantageous to make the Docker Engine running in the Windows Server 2016 VM available to the Windows 10 host:
# Open firewall port 2375
netsh advfirewall firewall add rule name="docker engine" dir=in action=allow protocol=TCP localport=2375

# Configure the Docker daemon to listen on both the named pipe and TCP (replaces the dockerd --register-service invocation above)
dockerd.exe -H npipe:////./pipe/docker_engine -H 0.0.0.0:2375 --register-service
The Windows Server 2016 Docker engine can now be used from the VM host by setting DOCKER_HOST:
$env:DOCKER_HOST = "<ip-address-of-vm>:2375"
See the Microsoft documentation for more comprehensive instructions.
Running Windows containers
First, make sure the Docker installation is working:
> docker version
Client:
Version:      1.12.1
API version:  1.24
Go version:   go1.6.3
Git commit:   23cf638
Built:        Thu Aug 18 17:32:24 2016
OS/Arch:      windows/amd64
Experimental: true

Server:
Version:      1.12.2-cs2-ws-beta-rc1
API version:  1.25
Go version:   go1.7.1
Git commit:   62d9ff9
Built:        Fri Sep 23 20:50:29 2016
OS/Arch:      windows/amd64
Next, pull a base image that’s compatible with the evaluation build, re-tag it and do a test run:
docker pull microsoft/windowsservercore:10.0.14393.206
docker tag microsoft/windowsservercore:10.0.14393.206 microsoft/windowsservercore
docker run microsoft/windowsservercore hostname
69c7de26ea48
Building and pushing Windows container images
Pushing images to Docker Cloud requires a free Docker ID. Storing images on Docker Cloud is a great way to save build artifacts for later use, to share base images with co-workers or to create build pipelines that move apps from development to production with Docker.
Docker images are typically built with docker build from a Dockerfile recipe, but for this example, we’re going to just create an image on the fly in PowerShell.
"FROM microsoft/windowsservercore `n CMD echo Hello World!" | docker build -t <docker-id>/windows-test-image -
Test the image:
docker run <docker-id>/windows-test-image
Hello World!
Log in with docker login and then push the image:
docker push <docker-id>/windows-test-image
Images stored on Docker Cloud are available in the web interface, and public images can be pulled by other Docker users.
Using docker-compose on Windows
Docker Compose is a great way to develop complex multi-container applications consisting of databases, queues and web frontends. Compose support for Windows is still a little patchy and only works on Windows Server 2016 at the time of writing (i.e. not on Windows 10).
To try out Compose on Windows, you can clone a variant of the ASP.NET Core MVC MusicStore app, backed by a SQL Server Express 2016 database. If running this sample on Windows Server 2016 directly, first grab a Compose executable and make sure it is in your path. A correctly tagged microsoft/windowsservercore image is required before starting. Also note that building the SQL Server image will take a while.
git clone https://github.com/friism/Musicstore

cd Musicstore
docker build -t sqlserver:2016 -f .\docker\mssql-server-2016-express\Dockerfile .\docker\mssql-server-2016-express\.

docker-compose -f .\src\MusicStore\docker-compose.yml up

Start a browser and open http://<ip-of-vm-running-docker>:5000/ to see the running app.
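For orientation, a docker-compose.yml for an app like this ties the web frontend to the database with just a couple of service definitions. The sketch below is illustrative only (service names, images and ports are assumptions, not the exact contents of the MusicStore repo):

```yaml
version: '2'
services:
  db:
    # The SQL Server Express image built in the step above
    image: sqlserver:2016
  web:
    # The ASP.NET Core app, built from the repo's Dockerfile
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db
```

With a file like this, docker-compose up builds and starts both containers and wires the web service to the database over the Compose-managed network.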
Summary
This post described how to get set up to build and run native Docker Windows containers on both Windows 10 and using the recently published Windows Server 2016 evaluation release. To see more example Windows Dockerfiles, check out the Golang, MongoDB and Python Docker Library images.
Please share any Windows Dockerfiles or Docker Compose examples you build with @docker on Twitter using the tag windows. And don’t hesitate to reach out on the Docker Forums if you have questions.
More Resources:

Sign up to be notified of GA and the Docker Datacenter for Windows Beta
Register for a webinar: Docker for Windows Server
Learn more about the Docker and Microsoft partnership

The post Build and run your first Docker Windows Server container appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Introducing Docker for Windows Server 2016

Today, Microsoft is announcing general availability of Windows Server 2016 at the Ignite conference in Atlanta. For Windows developers and IT-pros, the most exciting new Windows feature is containers, and containers on Windows Server 2016 are powered by Docker.
The first version of Docker was released in 2013, and in the 3 years since launch, Docker has completely transformed how Linux developers and ops build, ship and run apps. With Docker Engine and containers now available natively on Windows, developers and IT-pros can begin the same transformation for Windows-based apps and infrastructure and start reaping the same benefits: better security, more agility, and improved portability and freedom to move on-prem apps to the cloud.
For developers and IT-pros that build and maintain heterogenous deployments with both Linux and Windows infrastructure, Docker on Windows holds even greater significance: The Docker platform now represents a single set of tools, APIs and image formats for managing both Linux and Windows apps. As Linux and Windows apps and servers are dockerized, developers and IT-pros can bridge the operating system divide with shared Docker terminology and interfaces for managing and evolving complex microservices deployments both on-prem and in the cloud.

Running Containers on Windows Server
Docker running containers on Windows is the result of a two-year collaboration between Docker and Microsoft: the Windows kernel grew containerization primitives, Docker and Microsoft ported the Docker Engine and CLI to Windows to take advantage of those new primitives, and Docker added multi-arch image support to Docker Hub.
The result is that the awesome power of docker run to quickly start a fresh and fully isolated container is now available natively on Windows:
PS C:\> docker run -ti microsoft/windowsservercore powershell
Windows PowerShell

Copyright (C) 2016 Microsoft Corporation. All rights reserved.

PS C:\>
The kernel containerization features are available in all versions of Windows Server 2016, and are also on Windows 10 systems with the Anniversary Update, and the Windows-native Docker daemon runs on both Windows Server 2016 and Windows 10 (although only containers based on Windows Server build and run on Windows 10).
docker run on Windows comes with the same semantics as on Linux: Full process isolation and sandboxed filesystem (and Windows Registry!) with support for layering changes. Each container sees a clean Windows system and cannot interfere with other processes (containerized or not) on the system.
For example, two dockerized apps using different Internet Information Services (IIS) versions and different .NET frameworks can co-exist merrily on the same system. They can even write to their respective filesystems and registries without affecting each other.
With containerization, Windows IT-pros get most of the isolation and release-artifact-stability benefits of VMs, without the resource overhead and lost agility inherent in hardware virtualization.
Similar to how containers on Linux can run with different security profiles, containers on Windows run in one of two isolation modes:

Windows Server Containers use the same shared-kernel process-isolation paradigm known from Linux. Since containers run as normal (but isolated) processes, startup is fast and resource overhead is minimal.
With Hyper-V isolation, container processes run inside a very minimal hypervisor created during container start. This yields potentially better isolation at the cost of slower startup and some resource overhead.

Isolation can be set with a simple switch passed to docker run:
docker run --isolation=hyperv microsoft/nanoserver
As long as the underlying host supports the requested isolation mode, any Windows container image can be run as either a hyper-v or server container and a container host can run both side by side. Container processes are oblivious to the isolation mode they run in, and the Docker control API is the same for both modes.
This makes isolation mode not generally a developer concern and developers should use the default or what’s convenient on their system. Isolation mode does give IT-pros options when choosing how to deploy containerized apps in production.
Also note that, while Hyper-V is the runtime technology powering hyper-v isolation, hyper-v isolated containers are not Hyper-V VMs and cannot be managed with classic Hyper-V tools.
For readers interested in details of how containers are implemented on Windows, John Starks’ black belt session at DockerCon ‘16 is a great introduction.

Building Windows Container Images
Thanks to layering improvements to the Windows Registry and filesystem, docker build and Dockerfiles are fully supported for creating Windows Docker images. Below is an example Windows Dockerfile that Stefan Scherer has proposed for the Node.js official Docker library image. It can be built on Windows with docker build:
FROM microsoft/windowsservercore
ENV NPM_CONFIG_LOGLEVEL info
ENV NODE_VERSION 4.5.0
ENV NODE_SHA256 16aab15b29e79746d1bae708f6a5dbed8ef3c87426a9408f7261163d0cda0f56
RUN powershell -Command \
    $ErrorActionPreference = 'Stop' ; \
    (New-Object System.Net.WebClient).DownloadFile('https://nodejs.org/dist/v%NODE_VERSION%/node-v%NODE_VERSION%-win-x64.zip', 'node.zip') ; \
    if ((Get-FileHash node.zip -Algorithm sha256).Hash -ne $env:NODE_SHA256) {exit 1} ; \
    Expand-Archive node.zip -DestinationPath C:\ ; \
    Rename-Item 'C:\node-v%NODE_VERSION%-win-x64' 'C:\nodejs' ; \
    New-Item '%APPDATA%\npm' ; \
    $env:PATH = 'C:\nodejs;%APPDATA%\npm;' + $env:PATH ; \
    [Environment]::SetEnvironmentVariable('PATH', $env:PATH, [EnvironmentVariableTarget]::Machine) ; \
    Remove-Item -Path node.zip
CMD [ "node.exe" ]
Note how PowerShell is used to install and set up zip files and exes: Windows containers run Windows executables compiled for Windows APIs. To build and run a Windows container, a Windows system is required. While the Docker tools, control APIs and image formats are the same on Windows and Linux, a Docker Windows container won’t run on a Linux system and vice versa.
Also note that the starting layer is microsoft/windowsservercore. Starting FROM scratch is not an option when creating Windows container images. Instead, images are based on either microsoft/windowsservercore or microsoft/nanoserver.
The Windows Server Core image comes with a mostly complete userland with the processes and DLLs found on a standard Windows Server Core install. With the exception of GUI apps and apps requiring Windows Remote Desktop, most apps that run on Windows Server can be dockerized to run in an image based on microsoft/windowsservercore with minimal effort. Examples include Microsoft SQL Server, Apache, Internet Information Services (IIS) and the full .NET framework.
This flexibility comes at the cost of some bulk: The microsoft/windowsservercore image takes up 10GB. Thanks to Docker’s highly efficient image layering, this is not a big problem in practice. Any given Docker host only needs to pull the base layer once, and any images pulled or built on that system simply reuse the base layer.
The other base layer option is Nano Server, a new and very minimal Windows version with a pared-down Windows API. Lots of software already runs on Nano Server, including IIS, the new .NET Core framework, Node.js and Go. And the Nano Server base image is an order of magnitude smaller than Windows Server Core, meaning it has fewer dependencies and less surface area to keep updated. Nano Server is an exciting development, not only as a base for minimal containers that build and boot quickly, but also as a minimalist operating system that makes for a great container host OS running just the Docker daemon and containers, and nothing else.
With the choice of Windows Server Core and Nano Server, developers and IT-pros can opt to lift-and-shift existing Windows-based apps into Server Core containers or adopt Nano Server for greenfield development or incrementally as part of breaking monolithic apps into microservices components.
Docker is working with Microsoft and the community to build container images based on both Windows Server Core and Nano Server. Golang, Python and Mongo are available as official Docker images (more are on their way), and Microsoft also maintains a set of very popular sample images.
Summary
Today’s announcement of Docker Engine building, running and managing containers on Windows is the fruit of years of labor by teams at both Microsoft and Docker and by the Docker community. We’re incredibly proud of the work we’ve done with Microsoft to bring the benefits of containerization to Windows developers and IT-pros, and we’re excited about the prospect of getting Windows and Linux technologists building, shipping and running apps together with a common set of tools and APIs.
Here are some resources to help you get started

Sign up to be notified of GA and the Docker Datacenter for Windows Beta
Register for a webinar: Docker for Windows Server
Learn more about the Docker and Microsoft partnership

The post Introducing Docker for Windows Server 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Announces Commercial Partnership with Microsoft to Double Container Market by Extending Docker Engine to Windows Server

With industry analysts crediting Windows Server with more than 60% of the x86 server market, and citing Microsoft Azure as the fastest-growing public cloud, it comes as no surprise that Microsoft, even at its current scale, is further extending its leadership as a strategic, trusted partner to enterprise IT.
It is this industry leadership that catalyzed our technical collaboration in the Docker open source project back in October 2014, to jointly bring the agility, portability, and security benefits of the Docker platform to Windows Server.  After two years of joint engineering, we are excited to unveil a new, commercial partnership to extend these benefits for both Windows developers targeting Windows Server and enterprise IT professionals.
Specifically, the commercial partnership entails:

The Commercially Supported Docker Engine aka “CS Docker Engine”, Docker, Inc.’s tested, validated, and supported package of Docker Engine, will be available to Windows Server 2016 customers at no additional cost
Microsoft will provide Windows Server 2016 customers enterprise support for CS Docker Engine, backed by Docker, Inc
Docker and Microsoft will jointly promote Docker Datacenter to enable IT Pros to secure the Windows Server software supply chain and manage containerized Windows Server workloads, whether on-prem, in the cloud, or hybrid.

The fruits of this partnership, for the first time, offer enterprise IT professionals a single platform for both Windows and Linux applications on any infrastructure, whether bare metal, virtualized, or cloud.
 

CS Docker Engine and Docker Datacenter for Windows Server represent a major addition that complements several other joint initiatives as depicted above.
 

For developers, the integration of Visual Studio Tools for Docker and Docker for Windows provides complete desktop development environments for building Dockerized Windows apps;
To jumpstart app development, Microsoft has contributed Windows Server container base images and apps to Docker Hub;
For IT pros, Docker Datacenter will be designed to manage Windows Server environments in addition to the Linux environments Datacenter already manages.

Availability
Windows Server 2016 will be generally available in October, when Windows Server 2016 customers will be able to download the first version of the CS Docker Engine. Docker Datacenter for managing Windows Server containerized workloads will be available shortly thereafter.  While the announcements today represent another significant milestone in our partnership, Docker and Microsoft are just getting started.
Check out the links below to learn more!

Learn more about Docker on Windows Server
Sign up to be notified of GA and the Docker Datacenter for Windows Beta
Register for a webinar: Docker for Windows Server
Learn more about the Docker and Microsoft partnership

The post Docker Announces Commercial Partnership with Microsoft to Double Container Market by Extending Docker Engine to Windows Server appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Docker Weekly Roundup | September 18, 2016

 

It’s time for your weekly roundup! Get caught up on the top news, including how to maintain dev environments for Java web apps, scale with Swarm, and make your CI/CD pipeline work for you. As we begin a new week, let’s recap our top five most-read stories of the week of September 18, 2016:

Debugging Java in Docker – a tutorial for building a Java web application using containers and three popular Java IDEs: Eclipse, IntelliJ IDEA and NetBeans, by Sophia Parafina.

Orchestration Workshop – an instructional workshop on scaling out an app, operating it, and going deeper with Docker Swarm, by Jerome Petazzoni.

PiZero Swarm with OTG – a detailed explanation of how to build your own PiZero Swarm with OTG networking, by Docker Captain Alex Ellis.

Docker-based Workflow – learn how to make your CI/CD pipeline work with these four tips for an effective Docker-based workflow, by Docker Captain Adrian Mouat.

Service Discovery in the Swarm Cluster – an overview of how Docker Engine now acts as a Swarm manager, Swarm worker, and service registry, by Docker Captain Viktor Farcic.


The post Docker Weekly Roundup | September 18, 2016 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Walk, Jog, Run: Getting Smarter About Docker

 

I’ve spent most of the summer traveling to and speaking at a lot of different trade shows: EMC World, Cisco Live!, VMworld, HP Discover, Dockercon, and LinuxCon (as well as some meetups and smaller gatherings). A lot of the time, I’m speaking to people who are just getting familiar with Docker. They may have read an article or have had someone walk into their office and say “This Docker thing, so hot right now. Go figure it out”.
Certainly there are a number of companies running Docker in production, but there are still many who are asking fundamental questions about what Docker is, and how it can benefit their organization. To help folks out in that regard, I wrote an eBook.
After someone gets a grasp on what Docker is, they tend to want to dive in and start exploring, but oftentimes they aren’t sure how to get started.
My advice (based on the approach I took when I joined Docker last year) is to walk, jog, and then run:
Walk: Decide where you want to run Docker, and install it. This could be Docker for Mac, Docker for Windows, or just installing Docker on Linux. Pick whatever makes the most sense for your environment and load it up.
From there run your first Docker app: hello-world
$ docker run hello-world
After you know you have Docker running, work your way through one of our labs. I’d suggest starting with this beginner lab we have on GitHub.
Jog: You’ve got Docker installed, and you have played with the command line a bit. You’ve probably run a few containers as well. The next step is to try and replicate a commonly deployed application in Docker.
When I started working with Docker the first application I tried to deploy was WordPress. It is a good example because it has a few moving parts (the web front end and a database backend), and there are about a thousand blogs on how to do it. Read a bunch of blogs, and then give it a go.
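To make the WordPress exercise concrete, a minimal docker-compose.yml for it needs only two services. The sketch below is one common shape for it (image tags and passwords are placeholders you would change):

```yaml
version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      # Placeholder credentials; use real secrets in practice
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    ports:
      # Publish the web front end on the host
      - "8080:80"
    environment:
      # The official wordpress image reads these variables
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: example
    depends_on:
      - db
```

Running docker-compose up with a file like this brings up both containers, and WordPress finds the database by the service name db on the Compose network.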
After WordPress I decided to build a very simple CI pipeline using Jenkins. CI is a great use case for Docker. Docker vastly simplifies your build environments, and, because containers deploy so quickly, customers can see some real time savings as well.
Run: So you’ve been able to stand up someone else’s application; now it’s time to take the next step. Deploy one of your own. Take an existing application, and move it into a container.
This will allow you to build your own Dockerfile from scratch, and maybe even a Docker Compose file as well. Try to choose an app with only a few moving parts, and be mindful of things like static configs and port mappings, which can add additional complexity.
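As a sketch of what that first Dockerfile can look like, here is a minimal one for a hypothetical Python web app (the base image, file names and port are placeholders for whatever your app actually uses):

```dockerfile
# Base image providing the app's runtime
FROM python:2.7

# Copy the application into the image
WORKDIR /app
COPY . /app

# Install dependencies at build time, not at run time
RUN pip install -r requirements.txt

# Static config, such as the listening port, is declared here
EXPOSE 5000

CMD ["python", "app.py"]
```

Build it with docker build -t my-app . and run it with docker run -p 5000:5000 my-app, mapping the container port to the host.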
So download the eBook and get started walking. If you get stuck, ask for help from the community, or attend a meetup. There are a growing number of Docker experts out there who are always more than willing to lend a hand.

Here are some helpful links:

Download Docker onto your favorite environment
Check out all the hands on tutorials for Docker and product documentation
Learn more about our latest Docker 1.12 release
Join our community, or attend a meetup or ask a Docker expert for help

The post Walk, Jog, Run: Getting Smarter About Docker appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

Announcing the new Docs Repo on GitHub!

By John Mulhausen
The documentation team at Docker is excited to announce that we are consolidating all of our documentation into a single GitHub Pages-based repository.
When is this happening?

The new repo is public now at https://github.com/docker/docker.github.io.
During the week of Monday, September 26th, any existing docs PRs need to be migrated over or merged.
We’ll do one last “pull” from the various docs repos on Wednesday, September 28th, at which time the docs/ folders in the various repos will be emptied.
Between the 28th and full cutover, the docs team will be testing the new repo and making sure all is well across every page.
Full cutover (production is drawing from the new repo, new docs work is pointed at the new repo, dissolution of old docs/ folders) is complete on Monday, October 3rd.

The problem with the status quo

Up to now, the docs have all been inside the various project repos, inside folders named “docs/”, and seeing the docs running on your local machine was a pain.
The docs were built around Hugo, which is not natively supported by GitHub, took minutes to build, and took even longer for us to deploy.
Even worse than all that, having the docs siloed by product meant that cross-product documentation was rarely worked on, and things like reusable partials (includes) weren’t being taken advantage of. It was difficult to have visibility into what constituted “docs activity” when pull requests pertained to both code and docs alike.

Why this solution will get us to a much better place

All of the documentation for all of Docker’s projects will now be open source!
It will be easier than ever to contribute to and stage the docs. You can use GitHub Pages’ *.github.io spaces, install Jekyll and run our docs, or just run a Docker command:
git clone https://github.com/docker/docker.github.io.git docs
cd docs
docker run -ti --rm -v "$PWD":/docs -p 4000:4000 docs/docstage
Doc releases can be done with milestone tags and branches that are super easy to reference, instead of cherry-picked pull requests (PRs) from several repos. If you want to use a particular version of the docs, in perpetuity, it will be easier than ever to retrieve them, and we can offer far more granularity.
Any workflows that require users to use multiple products can be modeled and authored easily, as authors will only have to deal with a single point of reference.
The ability to have “includes” (such as reusable instructions, widgets that enable docs functionality, etc) will be possible for the first time.

What does this mean for open source contributors?
Open source contributors will need to create both a code PR and a docs PR, instead of having all of the work live in one PR. We’re going to work to mitigate any inconvenience:

Continuous integration tests will eventually be able to spot when a code PR is missing docs and provide in-context, useful instructions at the right time that guide contributors on how to spin up a docs PR and link it to the code PR.
We are not going to enforce that a docs PR has to be merged before a code PR is merged, just that a docs PR exists. That means we should be able to merge your code PR just as quickly as, if not more quickly than, in the past.
We will leave README instructions in the repos under their respective docs/ folders that point people to the correct docs repo.
We are adding “edit this page” buttons to every page on the docs so it will be easier than ever to locate what needs to be updated and fix it, right in the browser on GitHub.

We welcome contributors to get their feet wet, start looking at our new repo, and propose changes. We’re making it easier than ever to edit our documentation!

5 Minutes with the Docker Captains

Captain is a distinction that Docker awards to select members of the community who are both experts in their field and passionate about sharing their Docker knowledge with others.

This week we are highlighting 3 of our outstanding Captains who are making September a month filled with Docker learning and events. Read on to learn more about how they got started, what they love most about Docker, and why Docker.
Alex Ellis
Alex is a Principal Application Developer with expertise in the full Microsoft .NET stack, Node.js and Ruby. He enjoys making robots and IoT-connected projects with Linux and the Raspberry Pi microcomputer. He is a writer for Linux User and Developer magazine and also produces tutorials on Docker, coding and IoT for his tech blog at alexellis.io.

As a Docker Captain, how do you share that learning with the community?
I started out by sharing tutorials and code on my blog alexellis.io and on Github. More recently I’ve attended local meet-up groups, conferences and tech events to speak and tell a story about Docker and cool hacks. I joined Twitter in March and it’s definitely a must-have for reaching people.
Why do you like Docker?
Docker makes the complex seem simple and forces you to automate your workflow. I have a background in software engineering and automation is everything for delivering reliable, repeatable and testable systems.
What’s a common tech question you’re asked and the high-level explanation?
The questions vary and often surprise me – I like to be able to connect people to the right Captains or Docker folks. Opening an issue about a technical problem on Github is really valuable for the community and the Docker project. Please give feedback.
What’s your favorite thing about the Docker community?
The community is vibrant and full of life – people are working on solutions for problems that you may have and are generous with their knowledge.
Who are you when you’re not online?
I love film photography – everything from buying vintage cameras to developing and printing my own images. I balance my time at the screen with road cycling – cruising down country lanes, or just spending time away from the screen in the great outdoors.
Marcos Lilljedahl
Marcos Lilljedahl is an OS evangelist and Golang lover with a strong background in distributed systems and app architecture. Marcos is currently working at Mantica, a machine learning startup that brings the latest research to industry. Mantica runs machine learning apps in a fully Dockerized environment, mainly using Compose, Machine and Engine.

How has Docker impacted what you do on a daily basis?
Although I run pretty much everything in containers (even games like Counter Strike / Quake3 / etc), the biggest benefit comes from the fact that it really has helped to reduce friction when working with different teams and platforms. It’s a fundamental tool for everyone to speak the same “app” language and then translate that directly into production.
As a Docker Captain, how do you share that learning with the community?
I’m not the “blog post” kind of person; I usually like to deep dive into code and understand the core principles of Docker. I usually contribute by helping people resolve GitHub issues or by responding on the Slack community channel when there are specific questions or unexpected issues. Also, whenever I find some time, I like to hack on stuff like our two hackathon winner projects, CMT and Whaleprint.
Why do you like Docker?
What I like the most is its fundamental purpose (help people with great ideas to make things possible) and the community behind it.
Who are you when you’re not online?
I like to do all kind of sports like sailing, swimming, playing soccer, running, snowboarding, roller hockey and crossfit. I also enjoy spending weekends with my girlfriend and family cooking asado.
If you could meet anyone in the world dead or alive who would it be and why?
I would have loved to meet young Steve Jobs. I believe he transmitted the energy to make anyone do the impossible.
Sreenivas Makam
Sreenivas Makam is currently working as a senior engineering manager at Cisco Systems, Bangalore. His interests include SDN, NFV, Network Automation, DevOps, and cloud technologies, and he likes to try out and follow open source projects in these areas. His blog can be found at sreeninet.wordpress.com and his hacky code at github.com/smakam. Sreenivas wrote the book Mastering CoreOS, which was published in February 2016. He has done technical reviewing for the Mastering Ansible book (Packt Publishing), the Ansible Networking Report (O’Reilly) and the Network Programmability and Automation book (O’Reilly). He has given presentations at Docker and other meetups in Bangalore.

How has Docker impacted what you do on a daily basis?
I come from a networking background and used to approach problems from an infrastructure point of view. Docker has given me the insight to approach problems from a developer or an operator perspective.
As a Docker Captain, how do you share that learning with the community?
I enjoy sharing my learning and knowledge through my blogs. Other than this, I give presentations in Docker meetups and other meetups in Bangalore. The best part about being a Docker captain is the direct access to Docker developers and other Docker captains and there is always something new to learn from them.
How did you first get involved with Docker?
I was fascinated with cloud adoption and was trying out related technologies like AWS, Google Cloud, OpenStack and SDN. I dabbled in Docker as part of this. I was initially impressed with how fast I could build, deploy and destroy a Docker container. I got involved with Docker in October 2014, and the first version I used was Docker 1.3.
Why do you like Docker?
There are many reasons, but the biggest is perhaps the simplicity. There has been a lot of effort put into making complex topics like orchestration and security very easy to use for both developers and operations teams.
What’s your favorite thing about the Docker community?
The Docker community is super-active, encourages new members and supports diversity.
Follow all of the Docker Captains on Twitter, or get started using Docker with Alex Ellis’ tutorial.
Docker Captains
Captains are Docker ambassadors (not Docker employees) and their genuine love of all things Docker has a huge impact on the Docker community – whether they are blogging, writing books, speaking, running workshops, creating tutorials and classes, offering support in forums, or organizing and contributing to local events – they make Docker’s mission of democratizing technology possible. Whether you are new to Docker or have been a part of the community for a while, please don’t hesitate to reach out to Docker Captains with your challenges, questions, speaking requests and more.
While Docker does not accept applications for the Captains program, we are always on the lookout to add additional leaders that inspire and educate the Docker community. If you are interested in becoming a Docker Captain, we need to know how you are giving back. Sign up for community.docker.com, share your activities on social media with @Docker, get involved in a local meetup as a speaker or organizer, and continue to share your knowledge of Docker in your community.

Visit Docker @ Microsoft Ignite – Booth #758


Next week Microsoft will host over 20,000 IT executives, architects, engineers, partners and thought-leaders from around the world at Microsoft Ignite, September 25th-30th at the Georgia World Congress Center in Atlanta, Georgia.
Visit the Docker booth to learn how developers and IT pros can build, ship, and run any application, anywhere, across both Windows and Linux operating systems with Docker. By transforming modern application architectures for Linux and Windows applications, Docker allows business to benefit from a more agile development environment with a single journey for all their applications.
Don’t miss out! Docker experts will be on hand for in-booth demos to help you:

      Deploy your first Docker Windows container
      Learn about Docker containers on Windows Server 2016
      Manage your container environment with Docker Datacenter on Windows

Calling all Microsoft MVPs!

Attend our daily theater session, “Docker Containers for Linux and Windows,” with Docker evangelist Mike Coleman in the Docker booth @ 2PM every day. Session attendees will receive exclusive Docker and Microsoft swag.
To learn more about how Docker powers Windows containers, add these key Docker sessions to your Ignite agenda:
GS05: Reinvent IT infrastructure for business agility
Microsoft’s strategy centers on empowering you – the IT professionals – to generate business value within your organizations. With Microsoft Azure and Azure Stack, you can leverage the power of cloud to drive business agility and developer productivity. With the launch of Windows Server 2016 and Microsoft System Center 2016, you can accomplish more than ever before in your existing datacenters. And with Operations Management Suite, you can securely manage all of your on-premises and cloud infrastructure from one place. Microsoft Corporate VP Jason Zander discusses in-depth the latest technology innovations across all of these areas that help you reinvent your IT infrastructure and be a hero within your organization.
Speaker: Jason Zander, Microsoft
 
BRK3146: Dive into the new world of Windows Server and Hyper-V Containers
Applications need to be always available, globally accessible, scalable and secure in today’s 24/7 economy. Businesses must be able to deploy rapid updates and revisions at a lower cost with fewer resources than ever before to be competitive. Containers are an amazingly powerful technology for building, deploying and hosting applications that have been proven to reduce costs, improve efficiency and reduce deployment times – making it a hot new feature in Windows Server 2016. We dive into the architecture features of the new container technology, talk about development and deployment experiences and best practices, along with some of the new Windows innovations such as Hyper-V Containers and Active Directory backed container identity.
Speakers: Taylor Brown, Microsoft & Patrick Lang, Microsoft
Thursday, September 29, 9:00am – 10:15am, Room A1
BRK3147: Accelerate application delivery with Docker Containers and Windows Server 2016
Applications are changing and Docker is driving the containerization movement to deliver new microservices applications or provide a new construct to package legacy applications. Attend this session to learn how the combination of Docker, Linux, Microsoft Windows Server and Microsoft Azure technologies together deliver an application platform for hybrid cloud apps. Accelerate your app delivery and gain freedom to use any stack across a secure software supply chain.
Speakers: Mike Coleman, Docker & Taylor Brown, Microsoft
Thursday, September 29, 12:30pm – 1:45pm, Room A411 – A412
BRK3319: The Path to Containerization – transforming workloads into containers
Containers, microservices and Docker are all the rage, but what workloads are they used for? And how can you take advantage of these transformative new technologies? In this session you will hear from a user that has succeeded in taking their existing .NET application and migrating it into Windows containers, providing them the agility and flexibility to further transform the application. But where do you start with containers? We will further cover concepts and best practices for identifying and migrating applications from existing deployments into containers, and how to start down the path to microservice architectures.
Speakers: Taylor Brown, Microsoft & Matthew Roberts, Microsoft
To get ready for Ignite and to learn more about Docker, read the eBook Containers for the Virtualization Admin by Docker Technical Evangelist Mike Coleman.
More resources

Learn more about Docker for the Enterprise
Read the white paper: Docker for the Virtualization Admin
See all the integrations between Docker and Microsoft
Learn more about Docker Datacenter


High performance network policies in Kubernetes clusters

Editor’s note: today’s post is by Juergen Brendel, Pritesh Kothari and Chris Marino, co-founders of Pani Networks, the sponsor of the Romana project, the network policy software used for these benchmark tests.

Network Policies

Since the release of Kubernetes 1.3 back in July, users have been able to define and enforce network policies in their clusters. These policies are firewall rules that specify permissible types of traffic to, from and between pods. If requested, Kubernetes blocks all traffic that is not explicitly allowed. Policies are applied to groups of pods identified by common labels. Labels can then be used to mimic traditional segmented networks often used to isolate layers in a multi-tier application: you might identify your front-end and back-end pods by a specific “segment” label, for example. Policies control traffic between those segments and even traffic to or from external sources.

Segmenting traffic

What does this mean for the application developer? At last, Kubernetes has gained the necessary capabilities to provide “defence in depth”. Traffic can be segmented, and different parts of your application can be secured independently. For example, you can very easily protect each of your services via specific network policies: all the pods identified by a Replication Controller behind a service are already identified by a specific label. Therefore, you can use this same label to apply a policy to those pods.

Defense in depth has long been recommended as best practice. This kind of isolation between different parts or layers of an application is easily achieved on AWS and OpenStack by applying security groups to VMs. However, prior to network policies, this kind of isolation for containers was not possible. VXLAN overlays can provide simple network isolation, but application developers need more fine-grained control over the traffic accessing pods.
As you can see in this simple example, Kubernetes network policies can manage traffic based on source, protocol and port.

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: pol1
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: tcp
      port: 80

Not all network backends support policies

Network policies are an exciting feature, which the Kubernetes community has worked on for a long time. However, they require a networking backend that is capable of applying the policies. By themselves, simple routed networks or the commonly used flannel network driver, for example, cannot apply network policy.

There are only a few policy-capable networking backends available for Kubernetes today: Romana, Calico, and Canal, with Weave indicating support in the near future. Red Hat’s OpenShift includes network policy features as well.

We chose Romana as the backend for these tests because it configures pods to use natively routable IP addresses in a full L3 configuration. Network policies, therefore, can be applied directly by the host in the Linux kernel using iptables rules. This results in a high-performance, easy-to-manage network.

Testing performance impact of network policies

After network policies have been applied, network packets need to be checked against those policies to verify that this type of traffic is permissible. But what is the performance penalty for applying a network policy to every packet? Can we use all the great policy features without impacting application performance? We decided to find out by running some tests.

Before we dive deeper into these tests, it is worth mentioning that ‘performance’ is a tricky thing to measure, network performance especially so. Throughput (i.e. data transfer speed, measured in Gbps) and latency (time to complete a request) are common measures of network performance.
The performance impact of running an overlay network on throughput and latency has been examined previously here and here. What we learned from these tests is that Kubernetes networks are generally pretty fast, and servers have no trouble saturating a 1G link, with or without an overlay. It’s only when you have 10G networks that you need to start thinking about the overhead of encapsulation. This is because during a typical network performance benchmark, there’s no application logic for the host CPU to perform, leaving it available for whatever network processing is required.

For this reason we ran our tests in an operating range that did not saturate the link or the CPU. This has the effect of isolating the impact of processing network policy rules on the host. For these tests we decided to measure latency, defined as the average time required to complete an HTTP request, across a range of response sizes.

Test setup

Hardware: Two servers with Intel Core i5-5250U CPUs (2 cores, 2 threads per core) running at 1.60GHz, 16GB RAM and 512GB SSD. NIC: Intel Ethernet Connection I218-V (rev 03)
Ubuntu 14.04.5
Kubernetes 1.3 for data collection (verified samples on v1.4.0-beta.5)
Romana v0.9.3.1
Client and server load test software

For the tests we had a client pod send 2,000 HTTP requests to a server pod. HTTP requests were sent by the client pod at a rate that ensured that neither the server nor the network ever saturated. We also made sure each request started a new TCP session by disabling persistent connections (i.e. HTTP keep-alive). We ran each test with different response sizes and measured the average request duration (how long it takes to complete a request of that size). Finally, we repeated each set of measurements with different policy configurations.

Romana detects Kubernetes network policies when they’re created, translates them to Romana’s own policy format, and then applies them on all hosts.
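A minimal sketch of this measurement approach might look like the Python below. This is not the authors’ actual load-test software; it runs against a throwaway local server rather than a pod, and the response size and request count are scaled down for illustration:

```python
# Sketch: average HTTP request latency without keep-alive, against a local
# stand-in server (the article used 2,000 requests between two pods).
import http.server
import threading
import time
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"x" * 512  # ~0.5k response, like the smallest test size
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

durations = []
for _ in range(50):
    start = time.perf_counter()
    # urllib.request sends "Connection: close", so each request is a new
    # TCP session - the same no-keep-alive condition the tests used.
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
        resp.read()
    durations.append(time.perf_counter() - start)

avg_ms = 1000 * sum(durations) / len(durations)
print(f"average request duration: {avg_ms:.3f} ms")
```

The real tests repeated this for each response size and each policy configuration.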
Currently, Kubernetes network policies only apply to ingress traffic. This means that outgoing traffic is not affected.

First, we conducted the test without any policies to establish a baseline. We then ran the test again with increasing numbers of policies for the test’s network segment. The policies were of the common “allow traffic for a given protocol and port” format. To ensure packets had to traverse all the policies, we created a number of policies that did not match the packet, and finally a policy that would result in acceptance of the packet.

The table below shows the results: the average request duration in milliseconds for different response sizes (columns) and numbers of policies (rows):

Policies | .5k   | 1k    | 10k   | 100k  | 1M
0        | 0.732 | 0.738 | 1.077 | 2.532 | 10.487
10       | 0.744 | 0.742 | 1.084 | 2.570 | 10.556
50       | 0.745 | 0.755 | 1.086 | 2.580 | 10.566
100      | 0.762 | 0.770 | 1.104 | 2.640 | 10.597
200      | 0.783 | 0.783 | 1.147 | 2.652 | 10.677

What we see here is that, as the number of policies increases, processing network policies introduces a very small delay, never more than 0.2ms, even after applying 200 policies. For all practical purposes, no meaningful delay is introduced when network policy is applied. Also worth noting is that doubling the response size from 0.5k to 1.0k had virtually no effect. This is because for very small responses, the fixed overhead of creating a new connection dominates the overall response time (i.e. the same number of packets are transferred).

Note: the .5k and 1k lines overlap at ~0.8ms in the accompanying chart.

Even as a percentage of baseline performance, the impact is still very small. The table below shows that for the smallest response sizes, the worst-case delay remains at 7% or less, up to 200 policies. For the larger response sizes the delay drops to about 1%.
Policies | .5k   | 1k    | 10k   | 100k  | 1M
0        | 0.0%  | 0.0%  | 0.0%  | 0.0%  | 0.0%
10       | -1.6% | -0.5% | -0.6% | -1.5% | -0.7%
50       | -1.8% | -2.3% | -0.8% | -1.9% | -0.8%
100      | -4.1% | -4.3% | -2.5% | -4.3% | -1.0%
200      | -7.0% | -6.1% | -6.5% | -4.7% | -1.8%

(Negative values indicate added delay relative to the zero-policy baseline.)

What is also interesting in these results is that as the number of policies increases, larger requests experience a smaller relative (i.e. percentage) performance degradation.

This is because when Romana installs iptables rules, it ensures that packets belonging to an established connection are evaluated first. The full list of policies only needs to be traversed for the first packets of a connection. After that, the connection is considered ‘established’ and the connection’s state is stored in a fast lookup table. For larger requests, therefore, most packets of the connection are processed with a quick lookup in the ‘established’ table, rather than a full traversal of all rules. This iptables optimization results in performance that is largely independent of the number of network policies.

Such ‘flow tables’ are common optimizations in network equipment, and it seems that iptables uses the same technique quite effectively.

It’s also worth noting that in practice, a reasonably complex application may configure a few dozen rules per segment. It is also true that common network optimization techniques like WebSockets and persistent connections will improve the performance of network policies even further (especially for small request sizes), since connections are held open longer and can therefore benefit from the established connection optimization.

These tests were performed using Romana as the backend policy provider, and other network policy implementations may yield different results. However, what these tests show is that for almost every application deployment scenario, network policies can be applied using Romana as a network backend without any negative impact on performance.

If you wish to try it for yourself, we invite you to check out Romana.
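The established-connection optimization described above corresponds to a familiar iptables pattern. The rules below are a hypothetical illustration of that ordering, not Romana’s actual generated rules; the chain, subnet and port are made up for the example:

```shell
# Hypothetical illustration of the rule ordering, not Romana's output.
# Packets of established connections match the first rule via conntrack
# and never traverse the per-policy rules below it.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# Per-policy rules are only consulted for a connection's first packets:
iptables -A FORWARD -p tcp --dport 80 -s 10.1.0.0/16 -j ACCEPT
iptables -A FORWARD -j DROP
```

Because only a connection’s opening packets walk the full list, the cost of a long policy list is amortized over the whole connection.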
In our GitHub repo you can find an easy-to-use installer, which works with AWS, Vagrant VMs or any other servers. You can use it to quickly get started with a Romana-powered Kubernetes or OpenStack cluster.
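As a sanity check, the percentage table can be derived directly from the raw latency numbers. The snippet below (values transcribed from the tables in this post) computes the relative slowdown for each policy count:

```python
# Average request latencies (ms) from the benchmark, keyed by policy count;
# columns are the response sizes .5k, 1k, 10k, 100k and 1M.
latencies = {
    0:   [0.732, 0.738, 1.077, 2.532, 10.487],
    10:  [0.744, 0.742, 1.084, 2.570, 10.556],
    50:  [0.745, 0.755, 1.086, 2.580, 10.566],
    100: [0.762, 0.770, 1.104, 2.640, 10.597],
    200: [0.783, 0.783, 1.147, 2.652, 10.677],
}

baseline = latencies[0]

def overhead(policies):
    """Percent slowdown vs. the zero-policy baseline, one value per response size."""
    return [round(100 * (v - b) / b, 1)
            for v, b in zip(latencies[policies], baseline)]

print(overhead(200))  # -> [7.0, 6.1, 6.5, 4.7, 1.8]
```

Even at 200 policies, the worst case is a 7% slowdown on the smallest responses, falling below 2% for 1M responses, matching the percentage table above.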
Quelle: kubernetes