Announcing Federal Security and Compliance Controls for Docker Datacenter

Security and compliance are top of mind for IT organizations. In a technology-first era rife with cyber threats, it is important for enterprises to be able to deploy applications on a platform that adheres to stringent security baselines. This is especially applicable to U.S. Federal Government entities, whose wide-ranging missions, from public safety and national security to enforcing financial regulations, are critical to keeping policy in order.

Federal agencies and many non-government organizations are dependent on various standards and security assessments to ensure their systems are operating in controlled environments. One such standard is NIST Special Publication 800-53, which provides a library of security controls to which technology systems should adhere. NIST 800-53 defines three security baselines: low, moderate, and high. The number of security controls that need to be met increases from the low to high baselines, and agencies will elect to meet a specific baseline depending on the requirements of their systems.
Another assessment process, known as the Federal Risk and Authorization Management Program, or FedRAMP for short, further expands upon the NIST 800-53 controls by including additional security requirements at each baseline. FedRAMP is a program that ensures cloud providers meet stringent Federal government security requirements.
When an agency elects to deploy a system like Docker Datacenter for production use, they must complete a security assessment and grant the system an Authorization to Operate (ATO). The FedRAMP program already includes provisional ATOs at specific security baselines for a number of cloud providers, including AWS and Azure, with scope for on-demand compute services (e.g. Virtual Machines, Networking, etc). Since many cloud providers have already met the requirements defined by FedRAMP, an agency that leverages the provider’s services must only authorize the components of its own system that it deploys and manages at the chosen security baseline.
A goal of Docker is to make it easier for organizations to build compliant enterprise container environments. As such, to help expedite the agency ATO process, we're excited to release NIST 800-53 Revision 4 security and privacy control guidance for Docker Datacenter at the FedRAMP Moderate baseline.
The security content is available in two forms:

An open source project where the community can collaborate on the compliance documentation itself
A System Security Plan (SSP) template for Azure Government

First, we've made the guidance available as an open source project, available here. The documentation in the repository is developed using a format known as OpenControl, an open source, "compliance-as-code" schema and toolkit that helps software vendors and organizations build compliance documentation. We chose OpenControl for this project because we're big fans of this kind of tooling at Docker, and it fits our development principles quite nicely. OpenControl also includes schema definitions for other standards, including the Payment Card Industry Data Security Standard (PCI DSS). This helps to address compliance needs for organizations outside of the public sector. We're also licensing this project under the CC0 1.0 Universal Public Domain Dedication. To accelerate compliance for container platforms, Docker is making this project public domain and inviting folks to contribute to the documentation to help enhance the container compliance story.
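To give a flavor of the format, here is a minimal sketch of an OpenControl component file. The control selection and narrative text below are invented for illustration and are not taken from the actual Docker guidance:

```yaml
# Illustrative OpenControl component.yaml (schema v3).
# The narrative and control choice here are hypothetical examples.
name: Docker Datacenter
schema_version: 3.0.0
satisfies:
  - standard_key: NIST-800-53
    control_key: AC-2
    covered_by: []
    narrative:
      - text: >
          Docker Datacenter integrates with LDAP/AD directories and
          enforces role-based access control for user accounts.
          (Illustrative text only.)
```

OpenControl tooling such as Compliance Masonry can then assemble a set of files like this into human-readable compliance documentation or an SSP.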
Second, we're including this documentation in the form of a System Security Plan (SSP) template for running Docker Datacenter on Microsoft Azure Government. The template can be used to lessen the time it takes for an agency to certify Docker Datacenter for use. To obtain these templates, please contact compliance@docker.com.
We’ve also started to experiment with natural language processing which you’ll find in the project’s repository on GitHub. By using Microsoft’s Cognitive Services Text Analytics API, we put together a simple tool that vets the integrity of the actual security narratives and ensures that what’s written holds true to the NIST 800-53 control definitions. You can think of this as a form of automated proofreading. We’re hoping that this helps to open the door to new and exciting ways to develop content!


More resources for you:

See What’s New and Learn more about Docker Datacenter
Sign up for a free 30 day trial of Docker Datacenter
Learn more about Docker in the public sector

The post Announcing Federal Security and Compliance Controls for Docker Datacenter appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

containerd – a core container runtime project for the industry

Today Docker is spinning out its core runtime functionality into a standalone component, incorporating it into a separate project called containerd, and will be donating it to a neutral foundation early next year. This is the latest chapter in a multi-year effort to break up the Docker platform into a more modular architecture of loosely coupled components.
Over the past 3 years, as Docker adoption skyrocketed, it grew into a complete platform to build, ship and run distributed applications, covering many functional areas from infrastructure to orchestration, the core container runtime being just a piece of it. For millions of developers and IT pros, a complete platform is exactly what they need. But many platform builders and operators are looking for “boring infrastructure”: a basic component that provides the robust primitives for running containers on their system, bundled in a stable interface, and nothing else. A component that they can customize, extend and swap out as needed, without unnecessary abstraction getting in their way. containerd is built to provide exactly that.

What Docker does best is provide developers and operators with great tools that make them more productive. Those tools come from integrating many different components into a cohesive whole. Most of those components are invented by others, but along the way we find ourselves developing some of them from scratch. Over time we spin out these components as independent projects that anyone can reuse and contribute back to. containerd is the latest of those components.

containerd has been deployed on millions of machines since April 2016, when it was included in Docker 1.11. Today we are announcing a roadmap to extend containerd, with input from the largest cloud providers (Alibaba Cloud, AWS, Google, IBM, and Microsoft) and other active members of the container ecosystem. We will add more Docker Engine functionality to containerd so that containerd 1.0 will provide all the core primitives you need to manage containers, with parity on Linux and Windows hosts:

Container execution and supervision
Image distribution
Network interface management
Local storage
Native plumbing-level API
Full OCI support, including the extended OCI image specification

When containerd 1.0 implements that scope, in Q2 2017, Docker and other leading container systems, from AWS ECS to Microsoft ACS, Kubernetes, Mesos, and Cloud Foundry, will be able to use it as their core container runtime. containerd will use the OCI standard and be fully OCI compliant.

Over the past 3 years, the adoption of containers with Docker has triggered an unprecedented wave of innovation in our industry. We think containerd will unlock a whole new phase of innovation and growth across the entire container ecosystem, which in turn will benefit every Docker developer and customer.
You can find the up-to-date roadmap, architecture and API definitions in the GitHub repository, and more details about the project in our engineering team's blog post. We plan to hold a summit at the end of February to bring in more contributors; stay tuned for more details in the next few weeks.
Thank you to Arnaud Porterie, Michael Crosby, Mickaël Laventure, Stephen Day, Patrick Chanezon and Mike Goelzer from the Docker team, and all the maintainers and contributors of the Docker project for making this project a reality.



More details about containerd, Docker’s core container runtime component

Today we announced that Docker is extracting a key component of its platform, a part of the engine plumbing, a core container runtime, and committing to donate it to an open foundation. containerd is designed to be less coupled and easier to integrate with other tool sets. And it is being written and designed to address the requirements of the major cloud providers and container orchestration systems.
Because we know a lot of Docker fans want to know how the internals work, we thought we would share the current state of containerd and what we plan for version 1.0. Before that, it’s a good idea to look at what Docker has become over the last three and a half years.
The Docker platform isn't just a container runtime. It is in fact a set of integrated tools that allow you to build, ship and run distributed applications. That means Docker handles networking, infrastructure, build, orchestration, authorization, security, and a variety of other services that cover the complete distributed application lifecycle.

The core container runtime, which is containerd, is a small but vital part of the platform. We started breaking out containerd from the rest of the engine in Docker 1.11, planning for this eventual release.
This is a look at Docker Engine 1.12 as it currently is, and how containerd fits in.

You can see that containerd has just the APIs currently necessary to run a container. A GRPC API is called by the Docker Engine, which triggers an execution process. That spins up a supervisor and an executor which is charged with monitoring and running containers. The container is run (i.e. executed) by runC, which is another plumbing project that we open sourced as a reference implementation of the Open Container Initiative runtime standard.
When containerd reaches 1.0, we plan for it to include a number of other features from Docker Engine as well.

The planned feature set and scope of containerd is:

A distribution component that will handle pushing to a registry, without a preference toward a particular vendor
Networking primitives for the creation of system interfaces and APIs to manage a container's network namespace
Host level storage for image and container filesystems
A GRPC API
A new metrics API in the Prometheus format for internal and container level metrics
Full support of the OCI image spec and runC reference implementation

A more detailed architecture overview is available in the project’s GitHub repository.
This is a look at a future version of Docker Engine leveraging containerd 1.0.

containerd is designed to be embedded into a larger system, rather than being used directly by developers or end users; in fact, this evolution of Docker plumbing will go unnoticed by end users. It has a CLI, ctr, designed for debugging and experimentation, and a GRPC API designed for embedding. As a plumbing component, it is meant to be integrated into other projects that can benefit from the lessons we've learned running containers.
We are at containerd version 0.2.4, so a lot of work remains to be done. We've invited the container ecosystem to participate in this project and are pleased to have support from Alibaba, AWS, Google, IBM and Microsoft, who are providing contributors to help develop containerd. You can find the up-to-date roadmap, architecture and API definitions in the GitHub repo, and learn more at the containerd livestream meetup on Friday, December 16th at 10am PST. We also plan to organize a summit at the end of February to bring contributors together.



Kubernetes 1.5: Supporting Production Workloads

Today we're announcing the release of Kubernetes 1.5. This release follows close on the heels of KubeCon/CloudNativeCon, where users gathered to share how they're running their applications on Kubernetes. Many of you expressed interest in running stateful applications in containers, with the eventual goal of running all applications on Kubernetes. If you have been waiting to try running a distributed database on Kubernetes, or for ways to guarantee application disruption SLOs for stateful and stateless apps, this release has solutions for you. StatefulSet and PodDisruptionBudget are moving to beta. Together these features provide an easier way to deploy and scale stateful applications, and make it possible to perform cluster operations like node upgrades without violating application disruption SLOs.

You will also find usability improvements throughout the release, starting with the kubectl command line interface you use so often. For those who have found it hard to set up a multi-cluster federation, a new command line tool called 'kubefed' is here to help. And a much requested multi-zone Highly Available (HA) master setup script has been added to kube-up.

Did you know the Kubernetes community is working to support Windows containers? If you have .NET developers, take a look at the work on Windows containers in this release. This work is in early-stage alpha and we would love your feedback.

Lastly, for those interested in the internals of Kubernetes, 1.5 introduces the Container Runtime Interface (CRI), which provides an internal API abstracting the container runtime from the kubelet. This decoupling of the runtime gives users choice in selecting a runtime that best suits their needs.
This release also introduces containerized node conformance tests that verify that the node software meets the minimum requirements to join a Kubernetes cluster.

What's New

StatefulSet beta (formerly known as PetSet) allows workloads that require persistent identity or per-instance storage to be created, scaled, deleted and repaired on Kubernetes. You can use StatefulSets to ease the deployment of any stateful service, and tutorial examples are available in the repository. In order to ensure that there are never two pods with the same identity, the Kubernetes node controller no longer force deletes pods on unresponsive nodes. Instead, it waits until the old pod is confirmed dead in one of several ways: automatically when the kubelet reports back and confirms the old pod is terminated; automatically when a cluster admin deletes the node; or when a database admin confirms it is safe to proceed by force deleting the old pod. Users are now warned if they try to force delete pods via the CLI. For users who will be migrating from PetSets to StatefulSets, please follow the upgrade guide.

PodDisruptionBudget beta is an API object that specifies the minimum number or minimum percentage of replicas of a collection of pods that must be up at any time. With PodDisruptionBudget, an application deployer can ensure that cluster operations that voluntarily evict pods will never take down so many simultaneously as to cause data loss, an outage, or an unacceptable service degradation. In Kubernetes 1.5 the "kubectl drain" command supports PodDisruptionBudget, allowing safe draining of nodes for maintenance activities, and it will soon also be used by node upgrades and the cluster autoscaler (when removing nodes).
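As a sketch of what a PodDisruptionBudget looks like (using the policy/v1beta1 API available in 1.5; the app name, label and numbers are illustrative):

```yaml
# Hypothetical budget for a 3-replica quorum app: voluntary evictions
# (e.g. kubectl drain) must always leave at least 2 matching pods running.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper
```

With this object in place, kubectl drain will refuse to evict pods that would take the matched set below two running replicas.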
This can be useful for a quorum-based application to ensure the number of replicas running never falls below the number needed for quorum, or for a web front end to ensure the number of replicas serving load never falls below a certain percentage.

Kubefed alpha is a new command line tool to help you manage federated clusters, making it easy to deploy new federation control planes and add or remove clusters from existing federations. Also new in cluster federation is the addition of ConfigMaps beta, DaemonSets alpha and Deployments alpha to the federation API, allowing you to create, update and delete these objects across multiple clusters from a single endpoint.

HA Masters alpha provides the ability to create and delete clusters with highly available (replicated) masters on GCE using the kube-up/kube-down scripts. It allows setup of zone-distributed HA masters, with at least one etcd replica per zone, at least one API server per zone, and master-elected components like the scheduler and controller-manager distributed across zones.

Windows Server Containers alpha provides initial support for Windows Server 2016 nodes and scheduling Windows Server Containers.

Container Runtime Interface (CRI) alpha introduces the v1 CRI API to allow pluggable container runtimes; an experimental docker-CRI integration is ready for testing and feedback.

Node conformance test beta is a containerized test framework that provides system verification and functionality tests for nodes. The test validates whether the node meets the minimum requirements for Kubernetes; a node that passes the tests is qualified to join a Kubernetes cluster. The node conformance test is available at gcr.io/google_containers/node-test:0.2 for users to verify node setup.

These are just some of the highlights in our last release for the year. For a complete list please visit the release notes.

Availability

Kubernetes 1.5 is available for download here on GitHub and via get.k8s.io.
To get started with Kubernetes, try one of the new interactive tutorials. Don't forget to take 1.5 for a spin before the holidays!

User Adoption

It's been a year-and-a-half since GA, and the rate of Kubernetes user adoption continues to surpass estimates. Organizations running production workloads on Kubernetes include the world's largest companies, young startups, and everything in between. Since Kubernetes is open and runs anywhere, we've seen adoption on a diverse set of platforms: Pokémon Go (Google Cloud), Ticketmaster (AWS), SAP (OpenStack), Box (bare metal), and hybrid environments that mix and match the above. Here are a few user highlights:

Yahoo! JAPAN — built an automated tool chain making it easy to go from code push to deployment, all while running OpenStack on Kubernetes.
Walmart — will use Kubernetes with OneOps to manage its incredible distribution centers, helping its team with speed of delivery, systems uptime and asset utilization.
Monzo — a European startup building a mobile-first bank, is using Kubernetes to power its core platform, which can handle extreme performance and consistency requirements.

Kubernetes Ecosystem

The Kubernetes ecosystem is growing rapidly, including Microsoft's support for Kubernetes in Azure Container Service, VMware's integration of Kubernetes in its Photon Platform, and Canonical's commercial support for Kubernetes. This is in addition to the thirty-plus Technology & Service Partners that already provide commercial services for Kubernetes users. The CNCF recently announced the Kubernetes Managed Service Provider (KMSP) program, a pre-qualified tier of service providers with experience helping enterprises successfully adopt Kubernetes.
Furthering the knowledge and awareness of Kubernetes, The Linux Foundation, in partnership with CNCF, will develop and operate the Kubernetes training and certification program — the first course designed is Kubernetes Fundamentals.

Community Velocity

In the past three months we've seen more than a hundred new contributors join the project, with some 5,000 commits pushed, reaching new milestones by bringing the total for the core project to 1,000+ contributors and 40,000+ commits. This incredible momentum is only possible by having an open design, being open to new ideas, and empowering an open community to be welcoming to new and senior contributors alike. A big thanks goes out to the release team for 1.5 — Saad Ali of Google, Davanum Srinivas of Mirantis, and Caleb Miles of CoreOS — for their work bringing the 1.5 release to light.

Offline, the community can be found at one of the many Kubernetes-related meetups around the world. The strength and scale of the community was visible in the crowded halls of CloudNativeCon/KubeCon Seattle (the recorded user talks are here). The next CloudNativeCon + KubeCon is in Berlin, March 29-30, 2017; be sure to get your ticket and submit your talk before the CFP deadline of Dec 16th.

Ready to start contributing?

Share your voice at our weekly community meeting
Post questions (or answer questions) on Stack Overflow
Follow us on Twitter @Kubernetesio for the latest updates
Connect with the community on Slack

Thank you for your contributions and support!

– Aparna Sinha, Senior Product Manager, Google
Source: kubernetes

Convert ASP.NET Web Servers to Docker with Image2Docker

A major update to Image2Docker was released last week, which adds ASP.NET support to the tool. Now you can take a virtualized web server in Hyper-V and extract a Docker image for each website in the VM, including ASP.NET WebForms, MVC and WebApi apps.

Image2Docker is a PowerShell module which extracts applications from a Windows Virtual Machine image into a Dockerfile. You can use it as a first pass to take workloads from existing servers and move them to Docker containers on Windows.
The tool was first released in September 2016, and we've had some great work on it from PowerShell gurus like Docker Captain Trevor Sullivan and Microsoft MVP Ryan Yates. The latest version has enhanced functionality for inspecting IIS: you can now extract ASP.NET websites straight into Dockerfiles.
In Brief
If you have a Virtual Machine disk image (VHD, VHDX or WIM), you can extract all the IIS websites from it by installing Image2Docker and running ConvertTo-Dockerfile like this:
Install-Module Image2Docker
Import-Module Image2Docker
ConvertTo-Dockerfile -ImagePath C:\win-2016-iis.vhd -Artifact IIS -OutputPath c:\i2d2\iis
That will produce a Dockerfile which you can build into a Windows container image, using docker build.
How It Works
The Image2Docker tool (also called "I2D2") works offline; you don't need a running VM to connect to. It inspects a Virtual Machine disk image, in Hyper-V VHD, VHDX format, or Windows Imaging WIM format. It looks at the disk for known artifacts, compiles a list of all the artifacts installed on the VM, and generates a Dockerfile to package the artifacts.
The Dockerfile uses the microsoft/windowsservercore base image and installs all the artifacts the tool found on the VM disk. The artifacts which Image2Docker scans for are:

IIS & ASP.NET apps
MSMQ
DNS
DHCP
Apache
SQL Server

Some artifacts are more feature-complete than others. Right now (as of version 1.7.1) the IIS artifact is the most complete, so you can use Image2Docker to extract Docker images from your Hyper-V web servers.
Installation
I2D2 is on the PowerShell Gallery, so to use the latest stable version just install and import the module:
Install-Module Image2Docker
Import-Module Image2Docker
If you don't have the prerequisites to install from the gallery, PowerShell will prompt you to install them.
Alternatively, if you want to use the latest source code (and hopefully contribute to the project), then you need to install the dependencies:
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201
Install-Module -Name Pester,PSScriptAnalyzer,PowerShellGet
Then you can clone the repo and import the module from local source:
mkdir docker
cd docker
git clone https://github.com/sixeyed/communitytools-image2docker-win.git
cd communitytools-image2docker-win
Import-Module .\Image2Docker.psm1
Running Image2Docker
The module contains one cmdlet that does the extraction: ConvertTo-Dockerfile. The help text gives you all the details about the parameters, but here are the main ones:

ImagePath – path to the VHD | VHDX | WIM file to use as the source
Artifact – specify one artifact to inspect; otherwise all known artifacts are used
ArtifactParam – supply a parameter to the artifact inspector, e.g. for IIS you can specify a single website
OutputPath – location to store the generated Dockerfile and associated artifacts

You can also run in Verbose mode to have Image2Docker tell you what it finds, and how it's building the Dockerfile.
Walkthrough: Extracting All IIS Websites
This is a Windows Server 2016 VM with five websites configured in IIS, all using different ports:

Image2Docker also supports Windows Server 2012, with support for 2008 and 2003 on its way. The websites on this VM are a mixture of technologies: ASP.NET WebForms, ASP.NET MVC and ASP.NET WebApi, together with a static HTML website.
I took a copy of the VHD, and ran Image2Docker to generate a Dockerfile for all the IIS websites:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -Verbose -OutputPath c:\i2d2\iis
In verbose mode there's a whole lot of output, but here are some of the key lines, where Image2Docker has found IIS and ASP.NET and is extracting website details:
VERBOSE: IIS service is present on the system
VERBOSE: ASP.NET is present on the system
VERBOSE: Finished discovering IIS artifact
VERBOSE: Generating Dockerfile based on discovered artifacts in
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount
VERBOSE: Generating result for IIS component
VERBOSE: Copying IIS configuration files
VERBOSE: Writing instruction to install IIS
VERBOSE: Writing instruction to install ASP.NET
VERBOSE: Copying website files from
C:\Users\elton\AppData\Local\Temp\865115-6dbb-40e8-b88a-c0142922d954-mount\websites\aspnet-mvc to
C:\i2d2\iis
VERBOSE: Writing instruction to copy files for aspnet-mvc site
VERBOSE: Writing instruction to create site aspnet-mvc
VERBOSE: Writing instruction to expose port for site aspnet-mvc
When it completes, the cmdlet generates a Dockerfile which turns that web server into a Docker image. The Dockerfile has instructions to install IIS and ASP.NET, copy in the website content, and create the sites in IIS.
Here's a snippet of the Dockerfile. If you're not familiar with Dockerfile syntax but you know some PowerShell, it should be pretty clear what's happening:
# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication…

# Set up website: aspnet-mvc
COPY aspnet-mvc /websites/aspnet-mvc
RUN New-Website -Name 'aspnet-mvc' -PhysicalPath "C:\websites\aspnet-mvc" -Port 8081 -Force
EXPOSE 8081
# Set up website: aspnet-webapi
COPY aspnet-webapi /websites/aspnet-webapi
RUN New-Website -Name 'aspnet-webapi' -PhysicalPath "C:\websites\aspnet-webapi" -Port 8082 -Force
EXPOSE 8082
You can build that Dockerfile into a Docker image, run a container from the image, and you'll have all five websites running in a Docker container on Windows. But that's not the best use of Docker.
When you run applications in containers, each container should have a single responsibility: that makes it easier to deploy, manage, scale and upgrade your applications independently. Image2Docker supports that approach too.
Walkthrough: Extracting a Single IIS Website
The IIS artifact in Image2Docker uses the ArtifactParam flag to specify a single IIS website to extract into a Dockerfile. That gives us a much better way to extract a workload from a VM into a Docker Image:
ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam aspnet-webforms -Verbose -OutputPath c:\i2d2\aspnet-webforms
That produces a much neater Dockerfile, with instructions to set up a single website:
# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Wait-Service is a tool from Microsoft for monitoring a Windows Service
ADD https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/live/windows-server-container-tools/Wait-Service/Wait-Service.ps1 /

# Install Windows features for IIS
RUN Add-WindowsFeature Web-server, NET-Framework-45-ASPNET, Web-Asp-Net45
RUN Enable-WindowsOptionalFeature -Online -FeatureName IIS-ApplicationDevelopment,IIS-ASPNET45,IIS-BasicAuthentication,IIS-CommonHttpFeatures,IIS-DefaultDocument,IIS-DirectoryBrowsing

# Set up website: aspnet-webforms
COPY aspnet-webforms /websites/aspnet-webforms
RUN New-Website -Name 'aspnet-webforms' -PhysicalPath "C:\websites\aspnet-webforms" -Port 8083 -Force
EXPOSE 8083

CMD /Wait-Service.ps1 -ServiceName W3SVC -AllowServiceRestart
Note: I2D2 checks which optional IIS features are installed on the VM and includes them all in the generated Dockerfile. You can use the Dockerfile as-is to build an image, or you can review it and remove any features your site doesn't need, which may have been installed in the VM but aren't used.
To build that Dockerfile into an image, run:
docker build -t i2d2/aspnet-webforms .
When the build completes, I can run a container to start my ASP.NET WebForms site. I know the site uses a non-standard port, but I don't need to hunt through the app documentation to find out which one; it's right there in the Dockerfile: EXPOSE 8083.
This command runs a container in the background, exposes the app port, and stores the ID of the container:
$id = docker run -d -p 8083:8083 i2d2/aspnet-webforms
When the site starts, you'll see in the container logs that the IIS Service (W3SVC) is running:
> docker logs $id
The Service 'W3SVC' is in the 'Running' state.
Now you can browse to the site running in IIS in the container. But because published ports on Windows containers don't do loopback yet, if you're on the machine running the Docker container, you need to use the container's IP address:
$ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' $id
start "http://$($ip):8083"
That will launch your browser, and you'll see your ASP.NET WebForms application running in IIS, in Windows Server Core, in a Docker container:

Converting Each Website to Docker
You can extract all the websites from a VM into their own Dockerfiles and build images for them all by following the same process, or scripting it, using the website name as the ArtifactParam:
$websites = @("aspnet-mvc", "aspnet-webapi", "aspnet-webforms", "static")
foreach ($website in $websites) {
    ConvertTo-Dockerfile -ImagePath C:\i2d2\win-2016-iis.vhd -Artifact IIS -ArtifactParam $website -Verbose -OutputPath "c:\i2d2\$website" -Force
    cd "c:\i2d2\$website"
    docker build -t "i2d2/$website" .
}
Note: the Force parameter tells Image2Docker to overwrite the contents of the output path if the directory already exists.
If you run that script, you'll see that from the second image onwards the docker build commands run much more quickly. That's because of how Docker images are built from layers. Each Dockerfile starts with the same instructions to install IIS and ASP.NET, so once those instructions are built into image layers, the layers get cached and reused.
When the builds finish, I have four i2d2 Docker images:
> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED              SIZE
i2d2/static                                   latest              cd014b51da19        7 seconds ago        9.93 GB
i2d2/aspnet-webapi                            latest              1215366cc47d        About a minute ago   9.94 GB
i2d2/aspnet-mvc                               latest              0f886c27c93d        3 minutes ago        9.94 GB
i2d2/aspnet-webforms                          latest              bd691e57a537        47 minutes ago       9.94 GB
microsoft/windowsservercore                   latest              f49a4ea104f1        5 weeks ago          9.2 GB
Each of my images has a size of about 10GB, but that's the virtual image size, which doesn't account for shared layers. The microsoft/windowsservercore image is 9.2GB, and the i2d2 images all share the layers which install IIS and ASP.NET (which you can see by checking the image with docker history).
The physical storage for all five images (four websites and the Windows base image) is actually around 10.5GB. The original VM was 14GB. If you split each website into its own VM, you&8217;d be looking at over 50GB of storage, with disk files which take a long time to ship.
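The savings come from layer sharing, and the arithmetic can be sketched in a few lines. The 9.2GB base figure comes from the docker images output above; the per-site layer size is an assumption for illustration:

```python
# Virtual vs. physical image size: virtual size counts the shared base
# layer once per image, physical storage counts it only once.
BASE_GB = 9.2        # microsoft/windowsservercore (from `docker images` above)
APP_LAYER_GB = 0.3   # assumed size of the IIS/ASP.NET + website layers per image
SITES = 4

# What `docker images` reports, summed naively across the four site images:
virtual_total = SITES * (BASE_GB + APP_LAYER_GB)
# What the disk actually holds, since the base layer is stored once:
physical_total = BASE_GB + SITES * APP_LAYER_GB

print(round(virtual_total, 1))   # 38.0
print(round(physical_total, 1))  # 10.4
```

The physical total lands right around the 10.5GB the article measures, against roughly 38GB of reported virtual size.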
The Benefits of Dockerized IIS Applications
With our Dockerized websites we get increased isolation at a much lower storage cost. But that's not the main attraction – what we have here is a set of deployable packages that each encapsulate a single workload.
You can run a container on a Docker host from one of those images, and the website will start up and be ready to serve requests in seconds. You could have a Docker Swarm with several Windows hosts, and create a service from a website image which you can scale up or down across many nodes in seconds.
And you have different web applications which all have the same shape, so you can manage them in the same way. You can build new versions of the apps into images which you can store in a Windows registry, so you can run an instance of any version of any app. And when Docker Datacenter comes to Windows, you'll be able to secure the management of those web applications and any other Dockerized apps with role-based access control and content trust.
Next Steps
Image2Docker is a new tool with a lot of potential. So far the work has been focused on IIS and ASP.NET, and the current version does a good job of extracting websites from VM disks to Docker images. For many deployments, I2D2 will give you a working Dockerfile that you can use to build an image and start working with Docker on Windows straight away.
We'd love to get your feedback on the tool – submit an issue on GitHub if you find a problem, or if you have ideas for enhancements. And of course it's open source so you can contribute too.
Additional Resources

Image2Docker: A New Tool For Prototyping Windows VM Conversions
Containerize Windows Workloads With Image2Docker
Run IIS + ASP.NET on Windows 10 with Docker
Awesome Docker – Where to Start on Windows


The post Convert ASP.NET Web Servers to Docker with Image2Docker appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

DockerCon 2017: Registration And CFP Now Open!

2017 tickets are now available! Take advantage of our lowest pricing today – tickets are limited and Early Bird will sell out fast! We have extended DockerCon to a three-day conference with repeat sessions, hands-on labs and summits taking place on Thursday.
 
Register for DockerCon
 
The DockerCon 2017 Call for Proposals is open! Before you submit your cool hack or session proposals, take a look at our tips for getting selected below. We have narrowed the scope of sessions we’re looking for this year down to Cool Hacks and Use Cases. The deadline for submissions is January 14, 2017 at 11:59 PST.
Submit a talk

Proposal Dos:

Submitting a Cool Hack:
Be novel
Show us your cool hacks and wow us with the interesting ways you can push the boundaries of the Docker stack. Check out past audience favorites like Serverless Docker, In-the-air update of a drone with Docker and Resin.io and building a UI for container management with Minecraft for inspiration.
Be clear
You do not have to have your hack ready by the submission deadline, rather, plan to clearly explain your hack, what makes it cool and the technologies you will use.
 
All Sessions:
To illustrate the tips below, check out the sample proposals with comments on why they stand out.
Clarify your message
The best talks leave the audience transformed: They come into the session thinking or doing things one way, and they leave armed to think about or solve a problem differently. This means that your session must have solid take-aways that the audience can apply to their use case. We ask for your three key take-aways in the CFP. Make sure to be specific about your audience transformation, i.e. instead of listing “the talk covers orchestration,” instead write, “the talk will go through a step-by-step process for setting up swarm mode, providing the audience with an live example of how easy it is to use.” This is also a great place to highlight what you will leave them with, i.e. “Attendees will have full unrestricted access to all the code I’m going to write and open-source for the talk.”
Keep in line with the theme of the conference
Conferences are organized around a narrative and DockerCon is a user conference. That means we're looking for proposals that will inform and delight attendees on the following topics:
Using Docker
Has Docker technology made you better at what you do? Is Docker an integral part of your company’s tech stack? Do you use Docker to do big things? Infuse your proposal with concrete, first-hand examples about your Docker usage, challenges and what you learned along the way, and inspire us on how to use Docker to accomplish real tasks.
Deep Dives
Propose code and demo heavy deep-dive sessions on what you have been able to transform with your use of the Docker stack. Entice your audience by going deeply technical and teach them how to do something they haven’t done.
Get specific
While you should submit a topic that is broad enough to cover a range of interests, sessions are a maximum of 40 minutes, so don’t try to boil the ocean. Stay focused on content that support your take-aways so you can deliver a clear and compelling story.
Inspire us
Expand the conversation beyond technical details and inspire attendees to explore new uses. Past examples include Dockerizing CS50: From Cluster to Cloud to Appliance to Container, Shipping Manifests, Bill of Lading and Docker – Metadata for Containers and Stop Being Lazy and Test Your Software.
Be open
Has your company built tools used in production and/or testing? Remember the buzz around Netflix's Chaos Monkey and the excitement when it was released? If you have such a tool, revealing the recipe for your secret sauce is a great way to get your talk on the radar of DockerCon 2017 attendees.
Show that you are engaging
Having a great topic and talk is important, but equally important is execution and delivery. In the CFP, you have the opportunity to provide as much information as you can about presentations you have given. Videos, reviews, and slide decks will add to your credibility as an entertaining speaker.
 
Proposal Don'ts
These items are surefire ways of not getting past the initial review.
Sales pitches
No, just don't. It's acceptable to mention your company's product during a presentation, but it should never be the focus of your talk.
Bulk submissions
If your proposal reads as a generic talk that has been submitted to a number of conferences, it will not pass the initial review. A talk can certainly be a polished version of an earlier talk, but the proposal should be tailored for DockerCon 2017.
Jargon
If the proposal contains jargon, it's very likely that the presentation will also contain jargon. Although DockerCon 2017 is a technology conference, we value the ability to explain and make your points with clear, easy-to-follow language.
So, what happens next?
After a proposal is submitted, it will be reviewed initially for content and format. Once past the initial review, a committee of reviewers from Docker and the industry will read the proposals and select the best ones. There are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.
The deadline for proposal submission is January 14, 2017 at 11:59 PST.
We're looking forward to reading your proposals!
Submit a talk


The post DockerCon 2017: Registration And CFP Now Open! appeared first on Docker Blog.

Docker for Azure Public Beta

Last week Docker for AWS went public beta, and today Docker for Azure reached the same milestone and is ready for public testing. Docker for Azure is a great way for ops to set up and maintain secure and scalable Docker deployments on Azure.

With Docker for Azure, IT ops teams can:

Deploy a standard Docker platform to ensure teams can seamlessly move apps from developer laptops to Dockerized staging and production environments, without risk of incompatibilities or lock-in.
Integrate deeply with underlying infrastructure to ensure Docker takes advantage of the host environment’s native capabilities and exposes a familiar interface to administrators.
Deploy the platform to all the places where you want to run Dockerized apps, simply and efficiently
Make sure the latest and greatest Docker versions are available for the hardware, OSs, and infrastructure you love, and provide solid upgrade paths from one Docker version to the next.

To try the latest Docker for Azure beta based on the latest Docker Engine betas, click the button below or get more details on the beta site:

Installation takes a few minutes, and will give you a fully functioning swarm, ready to deploy and scale Dockerized apps.
We first unveiled the Docker for Azure private beta on stage at DockerCon 2016 back in June, and we are excited to be opening up the beta to the public. We received lots of great feedback from private beta testers (thanks!) and incorporated as much of it as possible. Enhancements added during the private beta include:

All container logs are stored in an Azure storage account for later retrieval and inspection. That means you no longer have to rummage around on hosts to find the error you’re looking for or worry that logs are lost if a worker is replaced.
Built-in diagnose tool lets you submit a swarm-wide diagnostic dump to Docker so that we can help diagnose and troubleshoot a misbehaving Docker for Azure swarm.
Improved upgrade stability so that you can confidently upgrade your Docker for Azure to the latest version

We’re particularly proud of the progress we’ve made on diagnostics and upgradability. These are features that set a true production system apart from simple fire-and-forget templates that just spin up resources without thought for debugging or future upgrades.
The improvements added during the private beta complement the initial features Docker for Azure launched with earlier this year:

Simple access and management using SSH
Quick and secure deployment of websites thanks to auto-provisioned and auto-configured load balancers
Secure and easy-to-manage Azure network and instance configuration

With today’s public beta announcement, we hope to get even more users interested in running Docker on Azure and testing the beta. Check out the detailed docs and sign up on beta.docker.com to be notified of updates and new beta versions.
Docker for AWS and Azure currently only support Linux-based swarms of managers and workers. Windows Server worker support will come as Docker on Windows Server matures. If you have questions or feedback, send an email or post to the Docker for AWS or the Docker for Azure forums.
Additional Resources

Take a short survey to provide feedback on your experience
Check out Docker for Windows Server 2016
Learn more: Docker and Microsoft solutions


The post Docker for Azure Public Beta appeared first on Docker Blog.

Tips for Troubleshooting Apps in Production with Docker Datacenter

If you have been using Docker for some time, after the initial phases of building Dockerfiles and running a container here and there, the real work begins in building, deploying and operating multi-container applications in a production environment. Are you operationally ready to take your application to production? Docker Datacenter provides an integrated management framework for your Dockerized environment and applications, and when coupled with clear strategies for approaching and resolving anomalies, IT ops teams can be confident in successfully operationalizing Docker.
Let’s use a sports metaphor to approach troubleshooting:

Pre-Game will cover the planning phase for your applications
Game Time will cover troubleshooting tools available in Docker Datacenter
Post-Game will discuss complementary tools to aid in ongoing insights

Pre-Game
Whether or not you are a sports fan, you can appreciate the importance of planning out any task. This is no different from what you would do for your applications. Health checks are a great way to provide a deeper level of insight into how your application is performing. Since Docker 1.12 there is a new HEALTHCHECK directive. We can use this directive to signal to the Docker Engine whether or not the application is healthy.
There are two ways to implement the HEALTHCHECK directive. The first is to use the directive in the Dockerfile. I prefer this method since the app and the health check are coupled together. Here is an example of a Dockerfile with the HEALTHCHECK directive:
FROM alpine
RUN apk -U upgrade && apk add python curl && \
    apk add py-pip && \
    pip install --upgrade pip && \
    pip install flask redis pymongo && \
    rm -rf /var/cache/apk/*
WORKDIR /code
ADD . /code
EXPOSE 5000
HEALTHCHECK CMD curl -f http://localhost:5000/ || exit 1
CMD ["python", "app.py"]
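For context, the curl in the HEALTHCHECK expects the app to answer HTTP 200 on port 5000. A minimal stand-in for app.py (using only the Python 3 standard library rather than Flask, purely for illustration) might look like:

```python
# Minimal stand-in for app.py: answers 200 OK on /, which is all the
# HEALTHCHECK's "curl -f http://localhost:5000/" needs to succeed.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real app would check its dependencies (Redis, Mongo) here and
        # return a 5xx if they are unreachable, failing the health check.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):
        pass  # keep container logs quiet for this example

def serve(port=5000):
    """In a real app.py you would call serve() at the bottom of the file."""
    HTTPServer(("0.0.0.0", port), AppHandler).serve_forever()
```

The point is simply that the health check and the app agree on a contract: any non-2xx response (or no response) makes curl exit non-zero and the container report unhealthy.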

The second way is to pass health check flags to the docker run command (documented at https://docs.docker.com/engine/reference/run/#/healthcheck). Here is an example:
docker run --name=test -d --health-cmd='stat /etc/passwd || exit 1' --health-interval=2s busybox sleep 1d
This method can actually supersede the Dockerfile method. Both methods are very useful. Here is an example of docker ps output with the health status:
clemenko13:orientation clemenko$ docker ps
CONTAINER ID   IMAGE                   COMMAND           CREATED          STATUS                    PORTS                    NAMES
4cbabba22d31   clemenko/orient_flask   "python app.py"   15 minutes ago   Up 15 minutes (healthy)   0.0.0.0:5000->5000/tcp   orientation_app_1

You can even write wrappers to report this back.
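A sketch of such a wrapper: docker inspect emits a JSON array with health data under State.Health when a HEALTHCHECK is defined. The sample blob below is illustrative, not captured output:

```python
import json

def health_status(inspect_json):
    """Pull the health status out of `docker inspect <container>` output.

    docker inspect returns a JSON array with one object per container;
    health data lives under State.Health when a HEALTHCHECK is defined.
    """
    containers = json.loads(inspect_json)
    report = {}
    for c in containers:
        health = c.get("State", {}).get("Health")
        report[c["Name"].lstrip("/")] = health["Status"] if health else "none"
    return report

# Illustrative sample of what `docker inspect test` might return:
sample = '[{"Name": "/test", "State": {"Health": {"Status": "healthy", "FailingStreak": 0}}}]'
print(health_status(sample))  # {'test': 'healthy'}
```

A wrapper like this could feed a dashboard or alerting system by polling docker inspect on a schedule.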
And in the image below, you can see that Docker Datacenter has a visual for the HEALTHCHECK within the container info itself.

Game Time
Whether or not you have a health check endpoint set up, Docker Datacenter has some great troubleshooting tools built right in.
Let’s start with the container details.
In Docker Datacenter, you can click on a container to bring up its details. There is a ton of good detail here, like Status, Healthcheck, Node, Networks and Ports. Here is an example with an active Healthcheck.

Moving on to logging.
With Docker Datacenter you have a few choices with logging. You can let Docker Datacenter handle it for you, or send all the logs remotely for the whole engine. In the Docker Datacenter UI, you are able to drill into each container’s logs.

If you want to use something like the syslog driver for remote logging you can modify the logging configuration from the Docker Datacenter admin settings. More info on the log drivers can be found here.

Next we can dive into the container itself.
Docker Datacenter has the ability to attach to a console session of the container remotely. You can use the console to dive into the running container. This is very useful if you need to check files, processes, settings or even ports. The trick with the console UI is that you need to have a shell inside your image; most images will have sh or bash as part of their base image. Similar to viewing the container logs, you will see a "Console" tab on the container's info page. Notice it will try to use sh by default:

NOTE: By using the RBAC feature in Docker Datacenter you can configure access in many ways. For example you can give developers access through the GUI but not through SSH.
Next we need to talk about networking.
Issues can arise once your application is live and running. If you run into networking-related issues, there are two good ways to troubleshoot: through the container's console, or the sidekick method. The console method should be the first step: simply console into the container and curl/ping around. What if you don't have curl in your image? Simply docker run a base image attached to the same overlay network. With that base image you can add any binaries needed for troubleshooting the network. You can even pre-build one for use within your infrastructure.
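If the image lacks curl but has Python, a quick reachability probe can be run from the console with nothing but the standard library (the host and port here are placeholders for whatever service you are trying to reach):

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Substitute the container/service name and port on your overlay network:
print(can_reach("localhost", 5000))
```

This only tests TCP reachability, not application health, but it quickly separates "the network path is broken" from "the app is misbehaving".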
It is worth noting that the same container info page also has stats. The stats tab only displays the current CPU, memory, and network stats, but this can be useful for seeing whether there are any bottlenecks.

Post Game and Wrap-up.
Start with the HEALTHCHECK endpoint. Check the logs, either remotely or locally. Then move on to the console to introspect the running container. Aggregating your logs can give you insight into all your apps and hosts at the same time; remote logging to external systems like ELK or Splunk can give you that aggregate view. Stats are also good candidates for aggregation: cAdvisor or Sysdig containers can be plumbed in for combined historical metrics.
Hopefully you now have a much better understanding of how to troubleshoot your running Dockerized applications.
More resources for you:

See What’s New and Learn more about Docker Datacenter
Sign up for a free 30 day trial
Check out the Docker knowledge base for more tips


The post Tips for Troubleshooting Apps in Production with Docker Datacenter appeared first on Docker Blog.

Learn Docker with More Hands-On Labs

Docker Labs is a rich resource for technical folks from any background to learn Docker. Since the last update on the Docker Blog, three new labs have been published covering Ruby, SQL Server and running a Registry on Windows. The self-paced, hands-on labs are a popular way for people to learn how to use Docker for specific scenarios, and it's a resource which is growing with the help of the community.

New Labs

Ruby FAQ. You can Dockerize Ruby and Ruby on Rails apps, but there are considerations around versioning, dependency management and the server runtimes. The Ruby FAQ walks through some of the challenges in moving Ruby apps to Docker and proposes solutions. This lab is just beginning; we would love to have your contributions.
SQL Server Lab. Microsoft maintains a SQL Server Express image on Docker Hub that runs in a Windows container. That image lets you attach an existing database to the container, but this lab walks you through a full development and deployment process, building a Docker image that packages your own database schema.
Registry Windows Lab. Docker Registry is an open-source registry server for storing Docker images, which you can run in your own network. There's already an official registry image for Linux, and this lab shows how to build and run a registry server in a Docker container on Windows.

Highlights
Some of the existing labs are worth calling out for the amount of information they provide. There are hours of learning here:

Docker Networking. Walks through a reference architecture for container networks, covering all the major networking concepts in detail, together with tutorials that demonstrate the concepts in action.
Swarm Mode. A beginner tutorial for native clustering which came in Docker 1.12. Explains how to run services, how Docker load-balances with the Routing Mesh, how to scale up and down, and how to safely remove nodes from the swarm.

Fun Facts
In November, the labs repo on GitHub was viewed over 35,000 times. The most popular lab right now is Windows Containers.
The repo contains 244 commits, has been forked 296 times and starred by 1,388 GitHub users. The labs are the work of 35 contributors so far – including members of the community, Docker Captains and folks at Docker, Inc.
Among the labs there are 14 Dockerfiles and 102 pages of documentation, totalling over 77,000 words of Docker learning. It would take around 10 hours to read aloud all the labs!
How to Contribute
If you want to join the contributors, we'd love to add your work to the hands-on labs. Contributing is super easy. The documentation is written in GitHub-flavored Markdown and there's no mandated structure; just make your lab easy to follow and learn from.
Whether you want to add a new lab or update an existing one, the process is the same:

fork the docker/labs repo on GitHub;
clone your forked repo onto your machine;
add your awesome lab, or change an existing lab to make it even more awesome;
commit your changes (and make sure to sign your work);
submit a pull request – the labs maintainers will review, give feedback and publish!


The post Learn Docker with More Hands-On Labs appeared first on Docker Blog.

From Network Policies to Security Policies

Editor’s note: Today’s post is by Bernard Van De Walle, Kubernetes Lead Engineer at Aporeto, showing how they took a new approach to Kubernetes network policy enforcement.

Kubernetes Network Policies

Kubernetes supports a new API for network policies that provides a sophisticated model for isolating applications and reducing their attack surface. This feature, which came out of the SIG-Network group, makes it very easy and elegant to define network policies by using the built-in labels and selectors Kubernetes constructs.

Kubernetes has left it up to third parties to implement these network policies and does not provide a default implementation.

We want to introduce a new way to think about “security” and “network policies”. We want to show that security and reachability are two different problems, and that security policies defined using endpoints (pod labels, for example) do not necessarily need to be implemented using network primitives.

Most of us at Aporeto come from a network/SDN background, and we knew how to implement those policies by using traditional networking and firewalling: translating the pod identities and policy definitions to network constraints, such as IP addresses, subnets, and so forth.

However, we also knew from past experience that using an external control plane introduces a whole new set of challenges: the distribution of ACLs requires very tight synchronization between Kubernetes workers, and every time a new pod is instantiated, ACLs need to be updated on all other pods that have some policy related to the new pod. Very tight synchronization is fundamentally a quadratic state problem and, while shared-state mechanisms can work at a smaller scale, they often have convergence, security, and eventual-consistency issues in large-scale clusters.

From Network Policies to Security Policies

At Aporeto, we took a different approach to network policy enforcement, by decoupling the network from the policy.
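For reference, the label-and-selector policy definitions that the NetworkPolicy API supports look roughly like this (a hypothetical manifest using the current networking.k8s.io/v1 API version; the names and labels are illustrative):

```yaml
# Allow pods labeled role=frontend to reach pods labeled role=backend;
# all other ingress to the backend pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      role: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
```

Note that the policy is expressed entirely in terms of pod labels; nothing in it mentions IP addresses or subnets, which is exactly what leaves the enforcement mechanism open.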
We open sourced our solution as Trireme, which translates the network policy to an authorization policy and implements a transparent authentication and authorization function for any communication between pods. Instead of using IP addresses to identify pods, it defines a cryptographically signed identity for each pod as the set of its associated labels. Instead of using ACLs or packet filters to enforce policy, it uses an authorization function where a container can only receive traffic from containers with an identity that matches the policy requirements.

The authentication and authorization function in Trireme is overlaid on the TCP negotiation sequence. Identity (i.e., the set of labels) is captured as a JSON Web Token (JWT), signed by local keys, and exchanged during the Syn/SynAck negotiation. The receiving worker validates that the JWTs are signed by a trusted authority (the authentication step) and validates against a cached copy of the policy that the connection can be accepted. Once the connection is accepted, the rest of the traffic flows through the Linux kernel and all of the protections it can potentially offer (including conntrack capabilities if needed). The current implementation uses a simple user-space process that captures the initial negotiation packets and attaches the authorization information as payload. The JWTs include nonces that are validated during the Ack packet and can defend against man-in-the-middle or replay attacks.

The Trireme implementation talks directly to the Kubernetes master without an external controller and receives notifications on policy updates and pod instantiations, so that it can maintain a local cache of the policy and update the authorization rules as needed. There is no requirement for any shared state between Trireme components that needs to be synchronized. Trireme can be deployed either as a standalone process on every worker or by using Daemon Sets.
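The signed-identity idea can be illustrated in a few lines of Python. This is a toy sketch using HMAC rather than Trireme's actual JWT implementation, and the labels, key, and policy shape are all made up for the example:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"cluster-signing-key"  # stand-in for the trusted authority's key

def sign_identity(labels):
    """Encode a pod's label set as a signed token (toy version of the JWT idea)."""
    payload = base64.urlsafe_b64encode(json.dumps(labels, sort_keys=True).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_and_authorize(token, required_labels):
    """Authenticate the signature, then authorize the labels against a policy."""
    payload, _, sig = token.partition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # authentication failed: not signed by the trusted key
    labels = json.loads(base64.urlsafe_b64decode(payload))
    # authorization: the sender's labels must satisfy the policy requirements
    return all(labels.get(k) == v for k, v in required_labels.items())

token = sign_identity({"app": "frontend", "env": "prod"})
print(verify_and_authorize(token, {"app": "frontend"}))  # True
print(verify_and_authorize(token, {"app": "db"}))        # False
```

The receiving side never consults an IP-based ACL: the decision rests entirely on the signed label set carried with the connection attempt, which is the core of the decoupling described above.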
In the latter case, Kubernetes takes ownership of the lifecycle of the Trireme pods.

Trireme's simplicity is derived from the separation of security policy from network transport. Policy enforcement is linked directly to the labels present on the connection, irrespective of the networking scheme used to make the pods communicate. This identity linkage gives operators tremendous flexibility to use any networking scheme they like without tying security policy enforcement to network implementation details. It also makes the implementation of security policy across federated clusters simple and viable.

Kubernetes and Trireme deployment

Kubernetes is unique in its ability to scale and provide extensible security support for the deployment of containers and microservices. Trireme provides a simple, secure, and scalable mechanism for enforcing these policies. You can deploy and try Trireme on top of Kubernetes by using the provided Daemon Set. You'll need to modify some of the YAML parameters based on your cluster architecture. All the steps are described in detail in the deployment GitHub folder. The same folder contains an example 3-tier policy that you can use to test the traffic pattern.

To learn more, download the code, and engage with the project, visit:

Trireme on GitHub
Trireme for Kubernetes by Aporeto on GitHub

–Bernard Van De Walle, Kubernetes Lead Engineer, Aporeto
Quelle: kubernetes