Docker’s Contribution to Authentication for Windows Containers in Kubernetes

When Docker Enterprise added support for Windows containers running on Swarm with the release of Windows Server 2016, we had to tackle challenges that are less pervasive in pure Linux environments. Chief among these was Active Directory authentication for container-based services using Group Managed Service Accounts, or gMSAs. With nearly 3 years of experience deploying and running Windows container applications in production, Docker has solved for a number of complexities that come with managing gMSAs in a container-based world. We are pleased to have contributed that work to upstream Kubernetes.

Challenges with gMSA in Containerized Environments

Aside from enabling authentication across multiple instances, gMSAs solve two additional problems: 

Containers cannot join the domain; and
When you start a container, you never really know which host in your cluster it’s going to run on. You might have three replicas running across hosts A, B, and C today, and tomorrow four replicas running across hosts Q, R, S, and T. 

One way to solve for this transience is to place the gMSA credential specifications for your service on each and every host where the containers for that service might run, and then repeat that for every Windows service you run in containers. It only takes a few combinations of servers and services to realize this solution just doesn’t scale. You could also place the credential specs inside the image itself, but then you have issues with flexibility if you later change the credspec the service uses.

Figure 1: Managing the matrix of containers, credspecs, and hosts doesn’t scale

With Docker Enterprise 3.0 we created a new way to manage gMSA credspecs in Swarm environments. Rather than manually creating and copying credspecs to every potential host, you can instead create a service configuration:

docker config create credspec… 

which is a cluster-wide resource and can be used as a parameter when you create a Windows container service:

docker service create --credential-spec="config://credspec" …

Swarm then automatically provides the credential spec to the appropriate containers at runtime. Much like a secret, the config is only provided to containers that require it; and unlike a typical docker config, the credspec is not mounted as a file in the container’s filesystem.
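Putting the two commands together, an end-to-end sketch might look like the following. The service name, image, and credspec file here are illustrative; the credspec JSON itself is generated ahead of time on a domain-joined Windows host (for example, with Microsoft’s CredentialSpec PowerShell module):

```shell
# Store the gMSA credential spec as a cluster-wide Swarm config
docker config create credspec webapp1-credspec.json

# Create a Windows service that authenticates with the gMSA;
# Swarm delivers the credspec only to this service's containers
docker service create \
  --credential-spec="config://credspec" \
  --name web \
  mcr.microsoft.com/windows/servercore/iis
```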

Figure 2: Swarm and Kubernetes orchestrators provide gMSA credspecs only when & where needed

Bringing gMSA credspecs to Kubernetes

Now that Kubernetes 1.14 has added support for Windows, the number of Windows container applications is likely to increase substantially and this same gMSA support will be important to anybody trying to run production Windows apps in their Kubernetes environment. The Docker team has been supporting this effort within the Kubernetes project with help from the SIG-Windows community. gMSA support is in the Alpha release phase in Kubernetes 1.14. 

gMSAs in Kubernetes work in a similar fashion to the config in Swarm: you create a credspec for the gMSA, use Kubernetes RBAC to control which pods can access the credspec, and then your pods can access the appropriate gMSA as needed. Again, this is still in Alpha right now so if you want to try it out you will have to enable the feature first.
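As a rough sketch of that flow (resource names are illustrative, and because the feature is Alpha the API surface may change), the credspec lives in a cluster-scoped custom resource that a Windows pod then references. In the 1.14 Alpha, the reference is made through pod annotations rather than a first-class field:

```yaml
apiVersion: windows.k8s.io/v1alpha1
kind: GMSACredentialSpec
metadata:
  name: webapp1-credspec
credspec:
  # JSON credential spec contents, generated on a domain-joined host
---
apiVersion: v1
kind: Pod
metadata:
  name: iis-gmsa
  annotations:
    # Alpha-stage annotation key; may differ in later releases
    pod.alpha.windows.kubernetes.io/gmsa-credential-spec-name: webapp1-credspec
spec:
  containers:
  - name: iis
    image: mcr.microsoft.com/windows/servercore/iis
  nodeSelector:
    kubernetes.io/os: windows
```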

We are contributing additional work upstream beyond gMSA support, such as CSI support for Windows workloads, and we’ll share more about those efforts in the weeks ahead as they reach their alpha release stages. 


Call to Action 

If you’re attending OSCON, check out the “Deploying Windows apps with Draft, Helm, and Kubernetes” session by Jessica Deen
Test out the new gMSA config specs, coming soon in Docker Enterprise 3.0
Review and contribute to the Kubernetes Windows gMSA SIG or other enhancement proposals
Learn more about Microsoft Group Managed Service Accounts

The post Docker’s Contribution to Authentication for Windows Containers in Kubernetes appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

10 Reasons Developers Love Docker

Developers ranked Docker as the #1 most wanted platform, #2 most loved platform, and #3 most broadly used platform in the 2019 Stack Overflow Developer Survey. Nearly 90,000 developers from around the world responded to the survey. So we asked the community why they love Docker, and here are 10 of the reasons they shared:
ROSIE the Robot at DockerCon. Her software runs on Docker containers.
 

It works on everyone’s machine. Docker eliminates the “but it worked on my laptop” problem.

“I love docker because it takes environment specific issues out of the equation – making the developer’s life easier and improving productivity by reducing time wasted debugging issues that ultimately don’t add value to the application.” @pamstr_

Takes the pain out of CI/CD. If there is one thing developers hate, it is doing the same thing over and over.

“Docker completely changed my life as a developer! I can spin up my project dependencies like databases for my application in a second in a clean state on any machine on our team! I can‘t not imagine the whole ci/cd-approach without docker. Automate all the stuff? Dockerize it!” @Dennis65560555 

Boosts your career. According to a recent Indeed report, in the last year, job postings listing Docker as a preferred skill have increased almost 50%. And the share of job searches per million including Docker has increased 9,538% since 2014.

Makes cool tech accessible. Whether you’re building your robot, experimenting with AI, or programming a Raspberry Pi, Docker makes it easy to work with interesting new technologies.

“I really find Docker an amazing piece of open platform which lets me to convert my Raspberry Pi’s into CCTV camera using Docker containers, pushing the live streaming data to Amazon Rekognition Service for Deep Learning & Facial Recognition using a single Docker Compose file.” @ajeetsraina (Docker Captain)

Raises productivity. It’s easier to ramp up quickly and there’s less busy-work with Docker.

“With containerized environments, my time from zero to contribution is almost non-existent. Same thing applies when switching to another project with completely different requirements. I can finally spend more time writing code and less time getting to the point of writing code. Oh! And I know it’ll work the same way in my build pipelines and prod!” @mikesir87 (Docker Captain)

Standardizes development and deployments. Containers drive repeatability across processes, making it easier for both dev and ops, and ultimately driving business value. 

“Docker enables us to standardize our application deployment and development across On-Prem and Cloud platforms. We can now bring more value to our customers faster and standardized.” @idomyowntricks (Docker Captain)

Makes cloud migration easy. Docker runs on all the major cloud providers and operating systems, so apps containerized with Docker are portable across datacenters and clouds.

“Currently Docker is a key piece for migration to the cloud, for that reason is the most wanted and loved platform for the architects and developers ” @herrera_luis10 

Application upgrades are a lot easier. That’s true even for complex applications.

“I switched from Oracle 11g to 12c to 18c using Docker containers a few days ago. It worked absolutely painless on my Windows 10 workstation for testing purposes. I love working this way so much! Thanks Docker!” @dthater 

And, if an app breaks, it’s easy to fix. Rolling back to a known good state is simple with Docker.

“Love it because I broke my local PostgreSQL installation and decided that was the reason to switch to using a Docker compose file. Was up and running again in an hour, haven’t looked back” @J_Kreutzbender 

It’s easy to try out new apps. Testing new applications is much easier when you don’t have to build out infrastructure each time.

“I like @Docker because it allows for lightweight test drives of applications and services.” @burgwyn 
Docker Captain Don Bauer said it best:
“Docker allows us to fail fearlessly. We can test new things easily and quickly and if they work, awesome. But if they don’t, we didn’t spend weeks or months on it. We might have spent a couple of hours or days.”


To find out more:

Get started with Docker Hub – a simple way for individual developers to start exploring Docker developer tools for common dev/test scenarios.
Learn more about Docker for Developers.
Read the Stack Overflow survey results.

The post 10 Reasons Developers Love Docker appeared first on Docker Blog.

Intro Guide to Dockerfile Best Practices

There are over one million Dockerfiles on GitHub today, but not all Dockerfiles are created equally. Efficiency is critical, and this blog series will cover five areas for Dockerfile best practices to help you write better Dockerfiles: incremental build time, image size, maintainability, security and repeatability. If you’re just beginning with Docker, this first blog post is for you! The next posts in the series will be more advanced.
Important note: the tips below follow the journey of ever-improving Dockerfiles for an example Java project based on Maven. The last Dockerfile is thus the recommended Dockerfile, while all intermediate ones are there only to illustrate specific best practices.
Incremental build time
In a development cycle, when building a Docker image, making code changes, and then rebuilding, it is important to leverage caching. Caching helps you avoid rerunning build steps when they don’t need to run.
Tip #1: Order matters for caching

The order of the build steps (Dockerfile instructions) matters, because when a step’s cache is invalidated by changed files or modified lines in the Dockerfile, the cache of every subsequent step is invalidated too. Order your steps from least to most frequently changing to optimize caching.
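For instance (file and package names are illustrative), placing rarely-changing dependency installation before frequently-changing application code keeps the expensive layers cached across most rebuilds:

```dockerfile
FROM debian:9
# Changes rarely -> stays cached across most rebuilds
RUN apt-get update && apt-get install -y openjdk-8-jdk
# Changes often -> placed last so only this layer is rebuilt
COPY target/app.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]
```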
Tip #2: More specific COPY to limit cache busts

Only copy what’s needed. If possible, avoid “COPY . .” When copying files into your image, make sure you are very specific about what you want to copy, because any change to the files being copied will break the cache. If, say, only the pre-built jar application is needed inside the image, copy only that, so that unrelated file changes will not affect the cache.
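A minimal illustration (paths are hypothetical):

```dockerfile
# Too broad: any change anywhere in the build context busts the cache
# COPY . /app
# Specific: only a change to the jar itself invalidates this layer
COPY target/app.jar /app/app.jar
```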
Tip #3: Identify cacheable units such as apt-get update & install

Each RUN instruction can be seen as a cacheable unit of execution. Too many of them can be unnecessary, while chaining all commands into one RUN instruction can bust the cache easily, hurting the development cycle. When installing packages from package managers, you always want to update the index and install packages in the same RUN: they form together one cacheable unit. Otherwise you risk installing outdated packages.
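Sketched with apt (package name illustrative), the index update and the install belong in one RUN:

```dockerfile
# One cacheable unit: splitting these into two RUN instructions risks
# installing outdated packages from a stale, cached index
RUN apt-get update \
    && apt-get install -y openjdk-8-jdk
```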
Reduce Image size
Image size can be important because smaller images equal faster deployments and a smaller attack surface.
Tip #4: Remove unnecessary dependencies

Remove unnecessary dependencies and do not install debugging tools. If needed, debugging tools can always be installed later. Certain package managers, such as apt, automatically install packages that are recommended alongside the user-specified package, unnecessarily increasing the footprint. Apt has the --no-install-recommends flag, which ensures that dependencies that were not actually needed are not installed. If they are needed, add them explicitly.
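With apt, that looks like the following (package name illustrative):

```dockerfile
# Skip "recommended" extras; add any you actually need explicitly
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-8-jre
```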
Tip #5: Remove package manager cache

Package managers maintain their own cache which may end up in the image. One way to deal with it is to remove the cache in the same RUN instruction that installed packages. Removing it in another RUN instruction would not reduce the image size.
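For apt, the index cache lives under /var/lib/apt/lists and should be deleted in the same RUN that populated it:

```dockerfile
# Removing the cache in a later RUN would NOT shrink the image,
# because the earlier layer would still contain it
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-8-jre \
    && rm -rf /var/lib/apt/lists/*
```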
There are further ways to reduce image size such as multi-stage builds which will be covered at the end of this blog post. The next set of best practices will look at how we can optimize for maintainability, security, and repeatability of the Dockerfile.
Maintainability
Tip #6: Use official images when possible

Official images can save a lot of time spent on maintenance because all the installation steps are done for you and best practices are applied. If you have multiple projects, they can share those layers because they use exactly the same base image.
Tip #7: Use more specific tags

Do not use the latest tag. It has the convenience of always being available for official images on Docker Hub but there can be breaking changes over time. Depending on how far apart in time you rebuild the Dockerfile without cache, you may have failing builds.
Instead, use more specific tags for your base images. In this case, we’re using openjdk. There are a lot more tags available so check out the Docker Hub documentation for that image which lists all the existing variants.
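For example:

```dockerfile
# Avoid a moving target:
#   FROM openjdk:latest
# Pin to a specific major version instead:
FROM openjdk:8
```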
Tip #8: Look for minimal flavors

Some of those tags have minimal flavors which means they are even smaller images. The slim variant is based on a stripped down Debian, while the alpine variant is based on the even smaller Alpine Linux distribution image. A notable difference is that debian still uses GNU libc while alpine uses musl libc which, although much smaller, may in some cases cause compatibility issues. In the case of openjdk, the jre flavor only contains the java runtime, not the sdk; this also drastically reduces the image size.
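For this Java example, a JRE-only Alpine variant gives the smallest runtime base:

```dockerfile
# Runtime only, on Alpine: much smaller than the full JDK image
FROM openjdk:8-jre-alpine
```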
Reproducibility
So far the Dockerfiles above have assumed that your jar artifact was built on the host. This is not ideal, because you lose the benefits of the consistent environment provided by containers. For instance, if your Java application depends on specific libraries, it may introduce unwelcome inconsistencies depending on which computer the application is built on.
Tip #9: Build from source in a consistent environment
The source code is the source of truth from which you want to build a Docker image. The Dockerfile is simply the blueprint.

You should start by identifying all that’s needed to build your application. Our simple Java application requires Maven and the JDK, so let’s base our Dockerfile off of a specific minimal official maven image from Docker Hub, that includes the JDK. If you needed to install more dependencies, you could do so in a RUN step.
The pom.xml and src folders are copied in as they are needed for the final RUN step that produces the app.jar application with mvn package. (The -e flag is to show errors and -B to run in non-interactive aka “batch” mode).
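A sketch of such a Dockerfile (the Maven image tag and paths are illustrative):

```dockerfile
FROM maven:3.6-jdk-8-alpine
WORKDIR /app
COPY pom.xml .
COPY src ./src
# -e shows errors, -B runs Maven in non-interactive ("batch") mode
RUN mvn -e -B package
CMD ["java", "-jar", "target/app.jar"]
```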
We solved the inconsistent environment problem, but introduced another one: every time the code is changed, all the dependencies described in pom.xml are fetched. Hence the next tip.
Tip #10: Fetch dependencies in a separate step

By again thinking in terms of cacheable units of execution, we can decide that fetching dependencies is a separate cacheable unit that only needs to depend on changes to pom.xml and not the source code. The RUN step between the two COPY steps tells Maven to only fetch the dependencies.
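Splitting the two COPY steps around a dependency-only Maven step might look like this (mvn dependency:resolve is one way to pre-fetch dependencies; the exact goal can vary by project):

```dockerfile
FROM maven:3.6-jdk-8-alpine
WORKDIR /app
# Dependency layer: rebuilt only when pom.xml changes
COPY pom.xml .
RUN mvn -e -B dependency:resolve
# Source layer: code changes reuse the cached dependency layer above
COPY src ./src
RUN mvn -e -B package
```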
There is one more problem that got introduced by building in consistent environments: our image is way bigger than before because it includes all the build-time dependencies that are not needed at runtime.
Tip #11: Use multi-stage builds to remove build dependencies (recommended Dockerfile)

Multi-stage builds are recognizable by the multiple FROM statements. Each FROM starts a new stage. They can be named with the AS keyword which we use to name our first stage “builder” to be referenced later. It will include all our build dependencies in a consistent environment.
The second stage is our final stage, which produces the final image. It includes only what is strictly necessary at runtime, in this case a minimal JRE (Java Runtime Environment) based on Alpine. The intermediary builder stage will be cached but not present in the final image. To get build artifacts into the final image, use COPY --from=STAGE_NAME. In this case, STAGE_NAME is builder.
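Putting it together, a multi-stage sketch (tags and paths illustrative):

```dockerfile
# Stage 1, named "builder": full JDK + Maven, a consistent build env
FROM maven:3.6-jdk-8-alpine AS builder
WORKDIR /app
COPY pom.xml .
RUN mvn -e -B dependency:resolve
COPY src ./src
RUN mvn -e -B package

# Stage 2, the final image: minimal JRE; the builder stage is
# cached locally but never shipped
FROM openjdk:8-jre-alpine
COPY --from=builder /app/target/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]
```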

Multi-stage builds are the go-to solution for removing build-time dependencies.
We went from building bloated images inconsistently to building minimal images in a consistent environment while being cache-friendly. In the next blog post, we will dive more into other uses of multi-stage builds.


Additional Resources:  

DockerCon Talk Recording: Dockerfile Best Practices
DockerCon Talk Slides: Dockerfile Best Practices
Docker Docs: Dockerfile References

The post Intro Guide to Dockerfile Best Practices appeared first on Docker Blog.

A Secure Content Workflow from Docker Hub to DTR

Docker Hub is home to the world’s largest library of container images. Millions of individual developers rely on Docker Hub for official and certified container images provided by independent software vendors (ISV) and the countless contributions shared by community developers and open source projects. Large enterprises can benefit from the curated content in Docker Hub by building on top of previous innovations, but these organizations often require greater control over what images are used and where they ultimately live (typically behind a firewall in a data center or cloud-based infrastructure). For these companies, building a secure content engine between Docker Hub and Docker Trusted Registry (DTR) provides the best of both worlds – an automated way to access and “download” fresh, approved content to a trusted registry that they control.
Ultimately, the Hub-to-DTR workflow gives developers a fresh source of validated and secure content to support a diverse set of application stacks and infrastructures; all while staying compliant with corporate standards. Here is an example of how this is executed in Docker Enterprise 3.0:

Image Mirroring
DTR allows customers to set up a mirror to grab content from a Hub repository by constantly polling it and pulling new image tags as they are pushed. This ensures that fresh images are replicated across any number of registries in multiple clusters, putting the latest content right where it’s needed while avoiding network bottlenecks.
 

Access Controls
Advanced access controls let organizations set permissions in DTR at a very granular level – down to the API. Images from Docker Hub can be mirrored into a restricted repository in DTR with access given only to qualified content administrators. The role of the content administrator is to ensure that the images meet the company’s policies.
Image Scanning
Once in the restricted repository, content administrators can set up automated vulnerability scanning, which gives organizations fine-grained visibility and control over the software and libraries that are being used. These binary-level scans compare the images and applications against the NIST CVE database to identify exposure to known security threats, providing organizations a chance to review and approve images before making them available to developers.
Policy-Based Image Promotion
With DTR, content administrators can set up rules-based image promotion pipelines that automate the flow of approved images to developer repositories. (E.g. “Promote Image to Target if Vulnerability Scan shows Zero Major Vulnerabilities.”) This streamlines the development and delivery pipeline while enforcing security controls that automatically gate images, ensuring only approved content gets used by developers.

Image Signing
Digital signatures are used to verify both the contents and publisher of images, ensuring their integrity. Customers can also take this a step further by requiring signatures from specific users before images are deployed, providing an additional layer of trust. This allows content administrators to validate that they have approved images in the developer repositories. Developers and CI tools can apply signatures as well.
End-to-End Automation
The entire workflow outlined above can be automated within Docker Enterprise 3.0 – from image mirroring, to vulnerability scans that are triggered based on new content, to promotion policies and even the CI workflows that add digital signatures. This end-to-end automation enables enterprise developers to innovate on top of the vast content available in Docker Hub, while adhering to secure corporate standards and practices.


 
To learn more about Docker Enterprise 3.0:

Register for the Upcoming Docker Enterprise 3.0 Virtual Event
Try Docker Enterprise for yourself trial.docker.com 
Learn more about Docker Trusted Registry

The post A Secure Content Workflow from Docker Hub to DTR appeared first on Docker Blog.

Build, Share and Run Multi-Service Applications with Docker Enterprise 3.0

Modern applications can come in many flavors, consisting of different technology stacks and architectures, from n-tier to microservices and everything in between. Regardless of the application architecture, the focus is shifting from individual containers to a new unit of measurement which defines a set of containers working together – the Docker Application. We first introduced Docker Application packages a few months ago. In this blog post, we look at what’s driving the need for these higher-level objects and how Docker Enterprise 3.0 begins to shift the focus to applications.
Scaling for Multiple Services and Microservices
Since our founding in 2013, Docker – and the ecosystem that has thrived around it – has been built around the core workflow of a Dockerfile that creates a container image that in turn becomes a running container. Docker containers, in turn, helped to drive the growth and popularity of microservices architectures by allowing independent parts of an application to be turned on and off rapidly and scaled independently and efficiently. The challenge is that as microservices adoption grows, a single application is no longer based on a handful of machines but dozens of containers that can be divided amongst different development teams. Organizations are no longer managing a few containers, but thousands of them. A new canonical object around applications is needed to help companies scale operations and provide clear working models for how multiple teams collaborate on microservices.
At the same time, organizations are seeing different configuration formats emerge including Helm charts, Kubernetes YAML and Docker Compose files. It is common for organizations to have a mix of these as technology evolves, so not only are applications becoming more segmented, they are embracing multiple configuration formats.
Docker Applications are a way to build, share and run multi-service applications across multiple configuration formats. It allows you to bundle together application descriptions, components and parameters into a single atomic unit (either a file or directory) – building in essence a “container of containers”.

The application description provides a manifest of the application metadata, including the name, version and a description.
The application component consists of one or more service configuration files and can be a mix of Docker Compose, Kubernetes YAML and Helm chart files.
Finally, the application parameters define the application settings and make it possible to take the same application package to different infrastructure environments with adjustable fields.

Docker Applications are an implementation of the Cloud-Native Application Bundle (CNAB) specification – an open source standard originally co-developed by Docker, Microsoft, HashiCorp, Bitnami, and Codefresh, with more companies on board today.
Docker Applications in Docker Enterprise 3.0
In Docker Enterprise 3.0, we begin to lay the groundwork for Docker Applications Services. You will be able to begin testing the ‘docker app’ CLI plugin with Docker Desktop Enterprise which provides a way to define applications. These are then pushed to either Docker Hub or Docker Trusted Registry for secure collaboration and integration to the CI/CD toolchain. With the latter, you can also perform a binary-level scan of the package against the NIST CVE database. Finally, the parameterized environment variables make it easy for operators to deploy these multi-service applications to different environments, making it possible to adjust things like ports used during deployment.
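As a rough sketch of that workflow (the plugin is experimental, so the command shapes, names, and flags shown here are illustrative and may differ between releases):

```shell
# Scaffold a new application package (creates hello.dockerapp)
docker app init hello

# Preview the rendered configuration with a parameter override
docker app render hello.dockerapp --set port=8080

# Share it through Docker Hub or DTR, then deploy with parameters
docker app push --tag myorg/hello.dockerapp:0.1.0
docker app install myorg/hello.dockerapp:0.1.0 --set port=8080
```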

With Docker Enterprise 3.0, organizations can continue to operate individual containers, but also have the ability to shift the conversation to Docker Applications to scale more effectively.


To learn more about Docker Enterprise 3.0:

Watch the on-demand webinar: What’s New in Docker Enterprise 3.0
The Docker Enterprise 3.0 public beta is just about to conclude, but you can learn more about Docker Application at https://github.com/docker/app.
Learn more about Docker Enterprise

The post Build, Share and Run Multi-Service Applications with Docker Enterprise 3.0 appeared first on Docker Blog.

Docker Tools for Modernizing Traditional Applications

Over the past two years Docker has worked closely with customers to modernize portfolios of traditional applications with Docker container technology and Docker Enterprise, the industry-leading container platform. Such applications are typically monolithic in nature, run atop older operating systems such as Windows Server 2008 or Windows Server 2003, and are difficult to transition from on-premises data centers to the public cloud.

The Docker platform alleviates each of these pain points by decoupling an application from a particular operating system, enabling microservice architecture patterns, and fostering portability across on-premises, cloud, and hybrid environments.
As the Modernizing Traditional Applications (MTA) program has matured, Docker has invested in tooling and methodologies that accelerate the transition to containers and decrease the time necessary to experience value from the Docker Enterprise platform. From the initial application assessment process to running containerized applications on a cluster, Docker is committed to improving the experience for customers on the MTA journey.
Application Discovery & Assessment
Enterprises develop and maintain extensive portfolios of applications. Such apps come in a myriad of languages, frameworks, and architectures, developed by both first- and third-party development teams. The first step in the containerization journey is to determine which applications are strong initial candidates, and where to begin the process.
A natural instinct is to choose the most complex, sophisticated application in a portfolio to begin containerization; the rationale being that if it works for the toughest app, it will work for less complex applications. For an organization new to the Docker ecosystem this approach can be fraught with challenges. Beginning containerization with an application that is less complex, yet still representative of the overall portfolio and aligned with organizational goals, will foster experience and skill with containers before encountering tougher applications.
Docker has developed a series of archetypes that help to “bucket” similar applications together based on architectural characteristics and estimated level of effort for containerization:

Evaluating a portfolio to place applications within each archetype can help estimate level of effort for a given portfolio of applications and aid in determining good initial candidates for a containerization project. There are a variety of methods for executing such evaluations, including:

Manual discovery and assessment involves humans examining each application within a portfolio. For smaller numbers of apps this approach is often manageable; however, it is difficult to scale to hundreds or thousands of applications.
Configuration Management Databases (CMDBs), when used within an organization, provide existing and detailed information about a given environment. Introspecting such data can aid in establishing application characteristics and related archetypes.
Automated tooling from vendors such as RISC Networks, Movere, BMC Helix Discovery, and others provide detailed assessments of data center environments by monitoring servers for a period of time and then generating reports. Such reports may be used in containerization initiatives and are helpful in understanding interdependencies between workloads.
Systems Integrators may be engaged to undergo a formal portfolio evaluation. Such integrators often have mature methodologies and proprietary tooling to aid in the assessment of applications.

Automated Containerization
Building a container for a traditional application can present several challenges. The original developers of an application are often long gone, making it difficult to understand how the application logic was constructed. Formal source code is often unavailable, with applications instead running on virtual machines without assets living in a source control system. Scaling containerization efforts across dozens or hundreds of applications is time intensive and complicated.
These pain points are alleviated with the use of a conversion tool developed by Docker. Part of the Docker Enterprise platform, this tool was developed to automate the generation of Dockerfiles for applications running on virtual machines or bare metal servers. A server is scanned to determine how the operating system is configured, how web servers are setup, and how application code is running. The data is then assembled into a Dockerfile and the application code pulled into a directory, ready for a Docker Build on a modern operating system. For example, a Windows Server 2003 environment can be scanned to generate Dockerfiles for IIS-based .NET applications running in disparate IIS Application Pools. This automation shifts the user from an author to an editor of a Dockerfile, significantly decreasing the time and effort involved in containerizing traditional applications.

Cluster Management
Running containers on a single server may be sufficient for a single developer, but a cluster of servers working together is needed to operationalize container-based workloads. Historically, the creation and management of such clusters was often fully controlled by a public cloud provider, tying the user to a particular infrastructure.
A new Docker CLI Plugin, called “Docker Cluster”, is included in the Docker Enterprise 3.0 platform. Docker Cluster streamlines the initial creation of a Docker Enterprise cluster by consuming a declarative YAML file to automatically provision and configure infrastructure resources. Cluster may be used across a variety of infrastructure vendors, including Azure, AWS, and VMware, to stand up identical container platforms across each of the major infrastructure targets. This added flexibility decreases the need to lock into a single provider, enables consistency for multi-cloud and hybrid environments, and provides the option of deploying containers via either the Kubernetes or Swarm orchestrators.
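A hypothetical session might look like the following (the flag names and the contents of the YAML file are illustrative, not authoritative):

```shell
# cluster.yml declares the provider, node counts, and versions
docker cluster create --file cluster.yml --name prod-cluster

# Inspect and tear down clusters from the same CLI
docker cluster ls
docker cluster rm prod-cluster
```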

Beyond the automation tooling, Docker also offers detailed, infrastructure-specific Reference Architectures for Certified Infrastructure partners that catalogue best-practices for various providers. These documents offer exhaustive guidance on implementing Docker Enterprise in addition to the automated CLI tooling. Additional guidance on integrating Docker Enterprise with common container ecosystem solutions can be found in Docker’s library of Solution Briefs.
Provisioning and managing a Docker Enterprise cluster has been significantly simplified with the introduction of Docker Cluster, Solution Briefs, and Reference Architectures. These tools allow you to focus on containerizing legacy applications rather than investing additional time into the setup of a container cluster.


Call to Action

Watch the video 
Learn more about Docker Enterprise
Find out more about Docker containers

The post Docker Tools for Modernizing Traditional Applications appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

At A Glance: The Mid-Atlantic + Government Docker Summit

Last week, Docker hosted our 4th annual Mid-Atlantic and Government Docker Summit, a one-day technology conference held on Wednesday, May 29 near Washington, DC. Over 425 attendees from the public and private sectors came together to share and learn about the trends driving change in IT, from containers and cloud to DevOps. Specifically, the presenters shared content on topics including Docker Enterprise, our industry-leading container platform, Docker's Kubernetes Service, container security and more.

Attendees were a mix of technology users and IT decision makers: everyone from developers, systems admins and architects to Sr. leaders and CTOs.
Summit Recap by the Numbers:

428 Registrations
16 sessions
7 sponsors
3 Tracks (Tech, Business and Workshops)

Keynotes
Highlights included a keynote by Docker's EVP of Customer Success, Iain Gray, and a fireside chat between the former US CTO and Insight Ventures partner, Nick Sinai, and the current US Federal CIO, Suzette Kent.

The fireside chat highlighted top-of-mind issues for Kent and how they align with the White House IT Modernization Report; specifically, modernization of current federal IT infrastructure and preparing and scaling the workforce. Kent remarked, “The magic of IT modernization is marrying the technology with the people and the mission.” And that's exactly what her efforts support: “What's being done inside the federal government and serving the entire population of the American people is the largest technology transformation, with the most valuable data, in the world.”
Session Content
This year we had three tracks – business, technical and workshops – with presentations from Docker, ecosystem partners, customers and Docker captains to drive discussions, technology deep dives and hands on tutorials. Attendees loved having varying options to dive into from business to technical, beginner to advanced. 
Whether you could make it or not, dive into the program content to learn more!
For more information:

Learn more about Docker Enterprise and get a free trial today
Check out Docker for Government


The post At A Glance: The Mid-Atlantic + Government Docker Summit appeared first on Docker Blog.

A First Look at Docker Desktop Enterprise

Delivered as part of Docker Enterprise 3.0, Docker Desktop Enterprise is a new developer tool that extends the Docker Enterprise Platform to developers’ desktops, improving developer productivity while accelerating time-to-market for new applications.
It is the only enterprise-ready desktop platform that enables IT organizations to automate the delivery of legacy and modern applications using an agile operating model with integrated security. With work performed locally, developers can leverage a rapid feedback loop before pushing code or Docker images to shared servers or continuous integration infrastructure.

Imagine you are a developer and your organization has a production-ready environment running Docker Enterprise. To ensure that you don't use any APIs or incompatible features that will break when you push an application to production, you want your working environment to exactly match what's running in Docker Enterprise production systems. This is where Docker Enterprise 3.0 and Docker Desktop Enterprise come in: a cohesive extension of the Docker Enterprise container platform that runs right on developers' systems. Developers code and test locally using the same tools they use today, and Docker Desktop Enterprise helps them quickly iterate and then produce a containerized service that is ready for their production Docker Enterprise clusters.
The Enterprise-Ready Solution for Dev & Ops
Docker Desktop Enterprise is a natural development environment for enterprise developers. It allows them to select from their favorite frameworks, languages, and IDEs, which in turn helps organizations target every platform. Your organization can provide application templates that include production-approved application configurations, and developers can take those templates, quickly replicate them right from their desktops, and begin coding. With the Docker Desktop Enterprise graphical user interface (GUI), developers are no longer required to know lower-level Docker commands and can auto-generate Docker artifacts.

With Docker Desktop Enterprise, IT organizations can easily distribute and manage it across teams of developers using their current third-party endpoint management solution.
A Flawless Integration with 3rd Party Developer Tools

Docker Desktop Enterprise is designed to integrate with existing development environments (IDEs) such as Visual Studio and IntelliJ. And with support for defined application templates, Docker Desktop Enterprise allows organizations to specify the look and feel of their applications.
Exclusive features of Docker Desktop Enterprise

Let us walk through the main features of Docker Desktop Enterprise 2.0:

Version selection: Configurable version packs ensure the local instance of Docker Desktop Enterprise is a precise copy of the production environment where applications are deployed, and developers can switch between versions of Docker and Kubernetes with a single click.

Docker and Kubernetes versions match UCP cluster versions.
Administrator command line tool simplifies version pack installation.

Application Designer: Application Designer provides a library of application and service templates to help developers quickly create new container-based applications. Application templates allow you to choose a technology stack and focus on the business logic and code, and require only minimal Docker syntax knowledge.

Template support includes .NET, Java Spring, and more.
Single service and multi-services applications are supported.
Deployable to Kubernetes or Swarm orchestrators.
Supports Docker App format for multi-environment, parameterized application deployments and application bundling.
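Because the templates support the Docker App format, a scaffolded service can be parameterized per environment. The sketch below assumes the docker-app convention of a Compose file plus a parameters file; the service name and image are hypothetical:

```yaml
# docker-compose.yml inside a .dockerapp bundle (illustrative)
version: "3.6"
services:
  web:
    image: myorg/mywebapp:1.0   # hypothetical application image
    ports:
      - "${port}:80"            # ${port} is resolved from parameters.yml
# parameters.yml supplies the default, e.g.:
#   port: 8080
# and can be overridden per environment at deploy time
```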

Device management:

The Docker Desktop Enterprise installer is available as standard MSI (Win) and PKG (Mac) downloads, which allows administrators to script an installation across many developer workstations.

Administrative control:

IT organizations can specify and lock configuration parameters for creation of a standardized development environment, including disabling drive sharing and limiting version pack installations. Developers can then run commands using the command line without worrying about configuration settings.

In this blog post, we will look at two of the most promising features of Docker Desktop Enterprise 2.0:

Application Designer 
Version packs

Installing Docker Desktop Enterprise
Docker Desktop Enterprise is available for both Microsoft Windows and macOS. It can be downloaded via the links below:

Windows
Mac

The above installer includes:

Docker Engine,
Docker CLI client, and
Docker Compose

Please note that you will have to uninstall Docker Desktop Community Edition before you install the Enterprise edition. The Enterprise edition also requires a separate license key, which must be purchased from Docker, Inc.
To install Docker Desktop Enterprise, double-click the .msi or .pkg file and initiate the Setup wizard:

Click “Next” to proceed further and accept the End-User license agreement as shown below:

Click “Next” to proceed with the installation.

Once installed, you will see Docker Desktop icon on the Windows Desktop as shown below:

License file
As stated earlier, to use Docker Desktop Enterprise, you must purchase Docker Desktop Enterprise license file from Docker, Inc.
The license file must be installed and placed at the following location: C:\Users\<username>\AppData\Roaming\Docker\docker_subscription.lic
If the license file is missing, you will be asked to provide it when you try to run Docker Desktop Enterprise. Once the license file is supplied, Docker Desktop Enterprise should come up flawlessly.

What’s New in Docker Desktop UI?
Docker Desktop Enterprise provides additional features compared to the Community edition. Right-click the whale icon in the taskbar and select “About Docker Desktop” to bring up the window below.

You can open PowerShell to verify that the expected Docker version is up and running. Click the “Settings” option to see various sections such as shared drives, advanced settings, network, proxies, the Docker daemon, and Kubernetes.

One of the new features introduced with Docker Desktop Enterprise allows Docker Desktop to start automatically whenever you log in. It can be enabled by selecting “Start Desktop when you login” under the General tab, where you can also enable automatic checks for updates.
Docker Desktop Enterprise gives you the flexibility to pre-select the resource limits made available to the Docker Engine, as shown below. Based on your system configuration and the type of application you plan to host, you can increase or decrease these limits.

Docker Desktop Enterprise includes a standalone Kubernetes server that runs on your Windows laptop, so that you can test deploying your Docker workloads on Kubernetes.

kubectl is a command-line interface for running commands against Kubernetes clusters. It comes with Docker Desktop by default, and you can verify the installation by running kubectl version:

Running Your First Web Application
Let us try running a custom-built web application using the command below:

Open up the browser to verify that web page is up and running as shown below:

Application Designer

Application Designer provides a library of application and service templates to help Docker developers quickly create new Docker applications. Application templates allow you to choose a technology stack and focus on the business logic and code, and require only minimal Docker syntax knowledge.
Building a Linux-based Application Using Application Designer
In this section, I will show you how to get started with the Application Designer, a feature introduced for the first time in this release.

Right click on whale-icon in the Taskbar and choose “Design New Application”. Once you click on it, it will open the below window:

Let us first try one of the preconfigured applications by clicking on “Choose a template”.

Let us test drive a Linux-based application. Click on the “Linux” option and proceed. This opens up a variety of ready-made templates as shown below:

A Spring template is also included as part of Docker Desktop Enterprise; it is a sample Java application built on the Spring framework with a Postgres database, as shown below:

Let us go ahead and try out a sample Python/Flask application with an Nginx proxy and a MySQL database. Select the desired application template, then choose the Python version, the MySQL version, and the port on which the Nginx proxy is exposed. For this example, I chose Python 3.6, MySQL 5.7, and an Nginx proxy exposed on port 80.
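Behind the scenes, a template like this scaffolds a multi-service stack. The Compose-style sketch below is an approximation of what is generated; the service names, build paths, and credentials are illustrative, and the actual generated files may differ:

```yaml
version: "3.6"
services:
  proxy:
    image: nginx:alpine              # Nginx reverse proxy exposed on port 80
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: ./app                     # Python 3.6 / Flask service behind the proxy
    depends_on:
      - db
  db:
    image: mysql:5.7                 # MySQL 5.7 backing database
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential for local use only
```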

Click on “Continue” to scaffold the application stack. Once it is built, click on “Run Application” to bring up your web application stack, and you will see the output right there on the screen as shown below:

As shown above, you can open the code repository in Visual Studio Code or Windows Explorer, and you get options to start, stop, and restart your application stack.

To verify its functionality, let us try to open up the web application as shown below:

Cool, isn’t it?
Building Windows-based Application using Application Designer
In this section, we will see how to build a Windows-based application using the same Application Designer tool.
Before you proceed, choose “Switch to Windows containers” as shown below to allow Windows-based containers to run on your desktop.

Right click on whale-icon in the Taskbar and choose “Design New Application”. Once you click on it, it will open the below window:

Click on “Choose a template” and select Windows this time as shown below:

Once you click on Windows, it will open up a sample ASP.NET & MS-SQL application.

Once clicked, it will show the frontend and backend, with the option to set the desired port for your application.

I will go ahead and choose port 82 for this example. Click on “Continue” and supply your desired application name; I named it “mywinapp2”. Next, click on “Scaffold” to build up your application stack.

While the application stack is coming up, you can open Visual Studio to view files such as the Docker Compose file and the Dockerfile, as shown below:

You can view logs to see what's going on in the backend. Under Application Designer, select the “Debug” option and then “View Logs” to see real-time logs.

By now, you should be able to access your application via web browser.
Version Packs
Docker Desktop Enterprise 2.0 is bundled with default version pack Enterprise 2.1 which includes Docker Engine 18.09 and Kubernetes 1.11.5. You can download it via this link.

If you want to use a different version of Docker Engine and Kubernetes for development work, install version pack Enterprise 2.0, which you can download via this link.
Version packs are installed manually or, for administrators, by using the command line tool. Once installed, version packs can be selected for use in the Docker Desktop Enterprise menu.
Installing Additional Version Packs
When you install Docker Desktop Enterprise, the tool is installed under C:\Program Files\Docker\Desktop. Version packs can be installed by double-clicking a .ddvp file; ensure that Docker Desktop is stopped before installing a version pack. The easiest way to add a version pack, however, is through the CLI.
Open Windows PowerShell via “Run as Administrator” and run the command below:
& 'C:\Program Files\Docker\Desktop\dockerdesktop-admin.exe' -InstallVersionPack='C:\Program Files\Docker\Docker\enterprise-2.0.ddvp'
Uninstalling Version Packs
Uninstalling a version pack is a matter of a single command, as shown below:
& 'C:\Program Files\Docker\Desktop\dockerdesktop-admin.exe' -UninstallVersionPack <VersionPack>
In my next blog post, I will show you how to leverage Application Designer tool to build custom application.
References:

https://goto.docker.com/Docker-Desktop-Enterprise.html

https://blog.docker.com/2018/12/introducing-desktop-enterprise/

The post A First Look at Docker Desktop Enterprise appeared first on Docker Blog.

Kubernetes Lifecycle Management with Docker Kubernetes Service (DKS)

There are many tutorials and guides available for getting started with Kubernetes. Typically, these detail the key concepts and  outline the steps for deploying your first Kubernetes cluster. However, when organizations want to roll out Kubernetes at scale or in production, the deployment is much more complex and there are a new set of requirements around both the initial setup and configuration and the ongoing management – often referred to as “Day 1 and Day 2 operations.”

Docker Enterprise 3.0, the leading container platform, includes Docker Kubernetes Service (DKS) – a seamless Kubernetes experience from developers’ desktops to production servers. DKS makes it simple for enterprises to secure and manage their Kubernetes environment by abstracting away many of these complexities. With Docker Enterprise, operations teams can easily deploy, scale, backup and restore, and upgrade a certified Kubernetes environment using a set of simple CLI commands. In this blog post, we’ll highlight some of these new features.
A Declarative Kubernetes Cluster Model
A real Kubernetes cluster deployment will typically involve design and planning to ensure that the environment integrates with an organization’s preferred infrastructure, storage and networking stacks. The design process usually requires cross-functional expertise to determine the instance size, disk space, the load balancer design, and many other factors that will be custom to your particular needs.
To help simplify the deployment, Docker has created a CLI plugin for simplified Docker cluster operations. It’s based on Docker Certified Infrastructure that was launched last year for AWS, Azure, and vSphere environments. It’s now an automated tool using a declarative model so you can “source control” your cluster configurations with a cluster YAML file with the following structure:
variable:
        <name>:
provider:
        <name>:
                <parameter>:
cluster:
        <component>:
                <parameter>:
resource:
        <type>:
                <name>:
                        <parameter>:
The file defines your configuration settings, including the instance types, Docker Enterprise versions which reflect different Kubernetes versions, the OS used, networking setup and more. Once defined, this file can be used with the new ‘docker cluster’ CLI commands:
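Filled in, a file following that structure might look like the sketch below; the layout mirrors the schema above, while the specific values are illustrative assumptions:

```yaml
variable:
  region: us-east-1
provider:
  aws:
    region: ${region}              # reuses the variable defined above
cluster:
  engine:
    version: "ee-stable-19.03"     # engine version, which implies a Kubernetes version
  ucp:
    version: "3.2.0"               # Universal Control Plane version
resource:
  aws_instance:
    managers:
      quantity: 3                  # an odd number of managers preserves quorum
    workers:
      quantity: 5
```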

Create & Inspect
Once a cluster YAML is defined, it can be used to create and clone environments with the same desired configurations. This makes it simple to set up identical staging and production environments and to move between them using the new context-switching features of Docker Enterprise. With Docker Enterprise, the Kubernetes managers and workers are automatically installed with all of the necessary components, and we also include a built-in, “batteries included” CNI plugin, Calico:

You can also inspect a cluster to view the settings with which it was deployed.
Simple Day 2 Operations
One of the more challenging facets of managing your own Kubernetes infrastructure is upgrades and backups. In a manual deployment, each of the components would need to be upgraded on its own and scripts would be necessary to help automate this. With Docker Enterprise, these are incredibly simple.
Update
Changes to your environment are simple with ‘docker cluster update’. Using the declarative model, you can now change, for example, a version number in your configuration file. The CLI plugin will identify the change and implement a safe upgrade of that particular component. This helps with upgrading the engine, Universal Control Plane, and Docker Trusted Registry using a single command by utilizing a simple configuration file.
`docker cluster` also takes advantage of a new Docker Enterprise 3.0 enhancement which supports upgrading the cluster without any downtime using a blue-green deployment model for worker nodes. Instead of upgrading worker node engines in-place, a new set of worker nodes may also be joined to the cluster with the latest engine to upgrade worker nodes in a blue-green fashion. This allows you to migrate an application from older “green” nodes to newer “blue” nodes that have joined the cluster without downtime.
Backup & Restore
The ‘docker cluster backup’ command stores your cluster environment as a single tarball file that can be stored in your desired location. You can optionally encrypt that back up and then easily restore a cluster from that backup.


To learn more about Docker Kubernetes Service in Docker Enterprise 3.0:

Watch the DockerCon talk: Lifecycle Management of Docker Clusters
Watch the OnDemand webinar: How Docker Simplifies Kubernetes for the Masses
Get access to the Docker Enterprise 3.0 beta

The post Kubernetes Lifecycle Management with Docker Kubernetes Service (DKS) appeared first on Docker Blog.

3 Customer Perspectives on Using Docker Enterprise with Kubernetes

We’ve talked a lot about how Docker Enterprise supports and simplifies Kubernetes. But how are organizations actually running Kubernetes on Docker Enterprise? What have they learned from their experiences?

Here are three of their stories:
McKesson Corporation
When you visit the doctor’s office or hospital, there’s a very good chance McKesson’s solutions and systems are helping make quality healthcare possible. The company ranks number 6 in the Fortune 100 with $208 billion in revenue, and provides information systems, medical equipment and supplies to healthcare providers.
The technology team built the McKesson Kubernetes Platform (MKP) on Docker Enterprise to give its developers a consistent ecosystem to build, share and run software in a secure and resilient fashion. The multi-tenant, multi-cloud platform runs across Microsoft Azure, Google Cloud Platform and on-premise systems supporting several use cases:

Monolithic applications. The team is containerizing an existing SAP e-commerce application that supports over 400,000 customers. The application platform needs to be scalable, support multi-tenancy and meet U.S. and Canadian compliance standards, including HIPAA, PCI and PIPEDA.
Microservices. Pharmaceutical analytics teams are doing a POC of blockchain applications on the platform.
CI/CD. Developer teams are containerizing the entire software pipeline based on Atlassian Bamboo.
Batch jobs. Other teams are moving ETL (extract, transform, load) data functions to the new platform.

You can learn more about McKesson’s story in their recorded DockerCon 2019 talk.
Fortune 500 Financial Services Firm
One of the most recognizable financial services brands is using Docker Enterprise to build an on-premise Kubernetes deployment to support machine learning projects.
Docker Enterprise provides a secure runtime environment for Kubernetes. The team selected Kubernetes as their orchestration solution for several reasons:

Support for machine learning solutions such as Kubeflow, Tensorflow and Jupyter.
GPU support for compute-intensive workloads.
Extensive container storage interface (CSI) options give them flexibility to provide persistent storage to data-intensive applications.

The company has internal expertise in Kubernetes, but they already use Docker Enterprise to support their e-commerce applications, and the team recognized the benefits of running Kubernetes within Docker Enterprise.
Citizens Bank
The mortgage division at Citizens Bank has been running Docker Enterprise in production since February 2017. The team used Swarm for orchestration from the start, and quickly scaled to supporting 1,000 containers in a single cluster in just over a year.
The deployment grew, and by the fall of 2018 they were running 2,000 containers. When Docker added support for Kubernetes, the team evaluated migrating some applications — particularly stateful services like Kafka and Zookeeper — to take advantage of native automatic scaling and application deployment in Kubernetes.
But for Citizens Bank, the complexity/simplicity trade-off led them to reconsider and keep all workloads on Swarm. As Mike Noe, one of their DevOps engineers put it:
“Swarm and Kubernetes were built with two different philosophies in mind. Swarm aims to offer high impact solutions while prioritizing simplicity. Kubernetes aims to offer a solution for literally every conceivable problem.”
Their advice – pick the right orchestrator for your situation. You can learn more about the Citizens story in their DockerCon 2019 session recording.
The Best of Both Worlds
The Docker platform includes a secure and fully-conformant Kubernetes environment for developers and operators of all skill levels, providing out-of-the-box integrations for common enterprise requirements while still enabling complete flexibility for expert users. That means you can run Kubernetes interchangeably with Swarm orchestration in the same cluster for ultimate flexibility at runtime.


For more information

Register for a webinar in the Kubernetes webinar series.
Learn about Docker Kubernetes Service
Find out about Docker Enterprise

The post 3 Customer Perspectives on Using Docker Enterprise with Kubernetes appeared first on Docker Blog.