Introducing Docker Datacenter on 1.13 with Secrets, Security Scanning, Content Cache and more

It’s another exciting day with a new release of Docker Datacenter (DDC) on 1.13. This release includes loads of new features around app services, security, image distribution and usability.  
Check out the upcoming webinar on Feb 16th for a demo of all the latest features.
Let’s dig into some of the new features:
Integrated Secrets Management
This release of Docker Datacenter includes integrated support for secrets management from development all the way to production.

This feature allows users to store confidential data (e.g. passwords, certificates) securely on the cluster and inject these secrets into a service. Developers can reference the secrets needed by different services in the familiar Compose file format and hand off to IT for deployment in production. Check out the blog post on Docker secrets management for more details on implementation. DDC integrates secrets and adds several enterprise-grade enhancements, including lifecycle management and deployment of secrets in the UI, label-based granular access control for enhanced security, and auditing of users' access to secrets via syslog.
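For a concrete sense of the workflow, here is a minimal CLI sketch, assuming a swarm mode cluster; the secret value, service name and image are illustrative:

# Store a password securely in the cluster; the value is read from stdin
echo "s3cr3t" | docker secret create db_password -
# Launch a service that can read the secret at /run/secrets/db_password
docker service create --name api --secret db_password myorg/api:latest

In a version 3.1 Compose file, the same secret is declared under a top-level secrets: key and referenced from the service, which is what makes the developer-to-IT handoff work.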
Image Security Scanning and Vulnerability Monitoring
Another element of delivering safer apps is ensuring trusted delivery of the code that makes up the app. In addition to Docker Content Trust (already available in DDC), we are excited to add Docker Security Scanning to enable binary-level scanning of images and their layers. Docker Security Scanning creates a bill of materials (BOM) of your image and checks packages and versions against a number of CVE databases. The BOM is stored and checked regularly against the CVE databases, so if a new vulnerability is reported against an existing package, any user can be notified of it. Additionally, system admins can integrate their CI and build systems with the scanning service using the new registry webhooks.

 


HTTP Routing Mesh (HRM)
Previously available as an experimental feature, the HTTP (hostname-based) routing mesh is now production-ready in this release. HRM extends the existing swarm-mode routing mesh by letting you route requests to your services based on HTTP hostnames.

New features in this release include the ability to manage HRM for a service via the UI, HTTPS pass-through support via the SNI protocol, the use of multiple HRM networks for application isolation, and sticky sessions integration. See the screenshot below for how HRM can be easily configured within the DDC admin UI.
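From the CLI, attaching a service to the routing mesh is label-driven. A minimal sketch, assuming the UCP-provided ucp-hrm network and the label syntax documented for this release (both worth verifying against your UCP version; the hostname is illustrative):

# Route http://web.example.com through HRM to this service's port 80
docker service create --name web \
  --network ucp-hrm \
  --label com.docker.ucp.mesh.http.80=external_route=http://web.example.com \
  nginx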

Compose for Services
This release of DDC increases support for managing complex distributed applications in the form of stacks: groups of services, networks, and volumes. DDC allows users to create stacks via Compose files (version 3.1 yml) and deploy them through both the UI and CLI. Developers can specify the stack in the familiar Compose file format; for a seamless handoff, IT can take that same Compose file and deploy the services into production.
 
Once deployed, DDC users are able to manage stacks directly through the UI and click into individual services, tasks, networks, and volumes to manage their lifecycle operations.
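As a minimal sketch of that flow (service names and images are illustrative), a developer could hand IT a Compose file like the one below, which deploys as a stack with a single command:

# docker-compose.yml written inline for brevity
cat > docker-compose.yml <<'EOF'
version: "3.1"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF
# Deploy the stack, then list its services
docker stack deploy --compose-file docker-compose.yml myapp
docker stack services myapp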

Content Cache
For companies with app teams distributed across a number of locations that want to maintain centralized control of images, developer performance is top of mind. Having developers connect to repositories thousands of miles away may not always make sense when considering latency and bandwidth. New for this release is the ability to set up satellite registry caches for faster pulls of Docker images. Caches can be assigned to individual users or configured by each user based on their current location. The registry caches can be deployed in a variety of scenarios, including high availability and complex cache-chaining scenarios for the most stringent datacenter environments.
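DDC's satellite caches are configured through DTR itself, but the underlying idea can be sketched with the open source registry image acting as a pull-through cache (an analogue for illustration, not the DTR feature itself):

# Run a local pull-through cache that proxies Docker Hub
docker run -d --name local-cache -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2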

Registry Webhooks
To better integrate with external systems, DDC now includes webhooks to notify external systems of registry events. These events include pushes and pulls in individual repositories, security scanning events, creation or deletion of repositories, and system events like garbage collection. With this full set of integration points, you can fully automate your continuous integration environment and Docker image build process.
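As a rough sketch of wiring this into CI (the endpoint path and payload schema here are assumptions; check the DTR API documentation for your version), a webhook subscription might be created like so:

# Subscribe a CI endpoint to tag-push events on a repository
curl -u admin:$TOKEN -X POST https://dtr.example.com/api/v0/webhooks \
  -H "Content-Type: application/json" \
  -d '{"type": "TAG_PUSH", "key": "eng/webapp", "endpoint": "https://ci.example.com/hooks/dtr"}'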
Usability Improvements
As always, we have added a number of features to refine and continuously improve the system usability for both developers and IT admins.

Cluster- and node-level metrics on CPU, memory, and disk usage. Sort nodes by usage to quickly troubleshoot issues; metrics are also rolled up into the dashboard for a bird's-eye view of resource usage in the cluster.
Smoother application update process with support for rollback during rolling updates, and status notifications for service updates.
Easier installation and configuration with the ability to copy a Docker Trusted Registry install command directly from the Universal Control Plane UI
Additional LDAP/AD configuration options in the Universal Control Plane UI
Cloud templates on AWS and Azure to deploy DDC in a few clicks

These new features and more are showcased in a Docker Datacenter demo video series.

Get started with Docker Datacenter
These are just the latest set of features to join the Docker Datacenter platform.

Learn More about Docker Secrets Management
Get the FREE 30 day trial 
Register for an upcoming webinar


Announcing Federal Security and Compliance Controls for Docker Datacenter

Security and compliance are top of mind for IT organizations. In a technology-first era rife with cyber threats, it is important for enterprises to be able to deploy applications on a platform that adheres to stringent security baselines. This is especially applicable to U.S. Federal Government entities, whose wide-ranging missions, from public safety and national security to enforcing financial regulations, are critical to keeping policy in order.

Federal agencies and many non-government organizations are dependent on various standards and security assessments to ensure their systems are operating in controlled environments. One such standard is NIST Special Publication 800-53, which provides a library of security controls to which technology systems should adhere. NIST 800-53 defines three security baselines: low, moderate, and high. The number of security controls that need to be met increases from the low to high baselines, and agencies will elect to meet a specific baseline depending on the requirements of their systems.
Another assessment process, known as the Federal Risk and Authorization Management Program, or FedRAMP for short, further expands upon the NIST 800-53 controls by including additional security requirements at each baseline. FedRAMP is a program that ensures cloud providers meet stringent Federal government security requirements.
When an agency elects to deploy a system like Docker Datacenter for production use, they must complete a security assessment and grant the system an Authorization to Operate (ATO). The FedRAMP program already includes provisional ATOs at specific security baselines for a number of cloud providers, including AWS and Azure, with scope for on-demand compute services (e.g. virtual machines, networking, etc.). Since many cloud providers have already met the requirements defined by FedRAMP, an agency that leverages the provider's services must only authorize the components of its own system that it deploys and manages at the chosen security baseline.
A goal of Docker is to help make it easier for organizations to build compliant enterprise container environments. As such, to help expedite the agency ATO process, we're excited to release NIST 800-53 Revision 4 security and privacy control guidance for Docker Datacenter at the FedRAMP Moderate baseline.
The security content is available in two forms:

An open source project where the community can collaborate on the compliance documentation itself
A System Security Plan (SSP) template for Azure Government

 

 
First, we've made the guidance available as part of a project available here. The documentation in the repository is developed using a format known as OpenControl, an open source, "compliance-as-code" schema and toolkit that helps software vendors and organizations build compliance documentation. We chose OpenControl for this project because we're big fans of tools at Docker, and it fits our development principles quite nicely. OpenControl also includes schema definitions for other standards, including the Payment Card Industry Data Security Standard (PCI DSS), which helps address compliance needs for organizations outside of the public sector. We're also licensing this project under CC0 Universal Public Domain. To accelerate compliance for container platforms, Docker is making this project public domain and inviting folks to contribute to the documentation to help enhance the container compliance story.
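To give a flavor of the format, here is an abridged, illustrative OpenControl component entry; the narrative text is a placeholder, not the actual published guidance:

# component.yaml (abridged, illustrative)
cat > component.yaml <<'EOF'
name: Docker Datacenter
schema_version: 3.0.0
satisfies:
  - standard_key: NIST-800-53
    control_key: AC-2
    narrative:
      - text: >
          Placeholder narrative describing how the component
          addresses account management requirements.
EOF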
 
Second, we're including this documentation in the form of a System Security Plan (SSP) template for running Docker Datacenter on Microsoft Azure Government. The template can be used to help lessen the time it takes for an agency to certify Docker Datacenter for use. To obtain these templates, please contact compliance@docker.com.
We’ve also started to experiment with natural language processing which you’ll find in the project’s repository on GitHub. By using Microsoft’s Cognitive Services Text Analytics API, we put together a simple tool that vets the integrity of the actual security narratives and ensures that what’s written holds true to the NIST 800-53 control definitions. You can think of this as a form of automated proofreading. We’re hoping that this helps to open the door to new and exciting ways to develop content!


More resources for you:

See What’s New and Learn more about Docker Datacenter
Sign up for a free 30 day trial of Docker Datacenter
Learn more about Docker in the public sector.


Global Mentor Week: Thank you Docker Community!

Danke, рақмет сізге, tak, धन्यवाद, cảm ơn bạn, شكرا, mulțumesc, Gracias, merci, asante, ευχαριστώ, thank you community for an incredible Docker Global Mentor Week! From Tokyo to Sao Paulo, Kisumu to Copenhagen and Ottawa to Manila, it was so awesome to see the energy from the community coming together to celebrate and learn about Docker!

Over 7,500 people registered to attend one of the 110 mentor week events across 5 continents! A huge thank you to all the Docker meetup organizers who worked hard to make these special events happen and offer Docker beginners and intermediate users an opportunity to participate in Docker courses.
None of this would have been possible without the support (and expertise!) of the 500+ advanced Docker users who signed up as mentors to help newcomers.
Whether it was mentors helping attendees, newcomers pushing their first image to Docker Hub or attendees mingling and having a good time, everyone came together to make mentor week a success as you can see on social media and the Facebook photo album.
Here are some of our favorite tweets from the meetups:
 

@Docker LearnDocker at Grenoble France 17Nov2016 @HPE_FR pic.twitter.com/8RSxXUWa4k
— Stephane Bureau (@SBUCloud) November 18, 2016

Awesome turnout at tonight's @DockerNYC learndocker event! We will be hosting more of these - keep tabs on meetup: https://t.co/dT99EOs4C9 pic.twitter.com/9lZocCjMPb
— Luisa M. Morales (@luisamariethm) November 18, 2016

And finally… "Tada" Docker Mentor Week learndocker pic.twitter.com/6kzedIoGyB
— Károly Kass (@karolykassjr) November 17, 2016

 
Learn Docker
In case you weren’t able to attend a local event, the five courses are now available to everyone online here: https://training.docker.com/instructor-led-training
Docker for Developers Courses
Developer - Beginner Linux Containers
This tutorial will guide you through the steps involved in setting up your computer, running your first containers, deploying a web application with Docker and running a multi-container voting app with Docker Compose.
Developer - Beginner Windows Containers
This tutorial will walk you through setting up your environment, running basic containers and creating a Docker Compose multi-container application using Windows containers.
Developer - Intermediate (both Linux and Windows)
This tutorial teaches you how to network your containers, how you can manage data inside and between your containers and how to use Docker Cloud to build your image from source and use developer tools and programming languages with Docker.
Docker for Operations courses
These courses are step-by-step guides where you will build your own Docker cluster and use it to deploy a sample application. We have two options for creating your own cluster.

Using play-with-docker

Play With Docker is a Docker playground that was built by two amazing Docker captains: Marcos Nils and Jonathan Leibiusky during the Docker Distributed Systems Summit in Berlin last October.
Play with Docker (aka PWD) gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode.
Under the hood, Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs.
To get started, go to http://play-with-docker.com/ and click on ADD NEW INSTANCE five times. You will get five "docker-in-docker" containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to "SSH on node X", just go to the tab corresponding to that node.
The nodes are not directly reachable from outside, so when the slides tell you to "connect to the IP address of your node on port XYZ" you will have to use a different method.
We suggest using "supergrok", a container offering an NGINX+ngrok combo to expose your services. To use it, just start the jpetazzo/supergrok image on any of your nodes. The image will output further instructions:
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
The logs of the container will give you a tunnel address and explain how to connect to exposed services. That's all you need to do!
You can also view this excellent video by Docker Brussels Meetup organizer Nils de Moor who walks you through the steps to build a Docker Swarm cluster in a matter of seconds through the new play-with-docker tool.

 
Note that the instances provided by Play-With-Docker have a short lifespan (a few hours only), so if you want to do the workshop over multiple sessions, you will have to start over each time… or create your own cluster with the option below.

Using Docker Machine to create your own cluster

This method requires a bit more work to get started, but you get a permanent cluster, with less limitations.
You will need Docker Machine (if you have Docker for Mac, Docker for Windows, or the Docker Toolbox, you're all set already). You will also need:

credentials for a cloud provider (e.g. API keys or tokens),
or a local install of VirtualBox or VMware (or anything supported by Docker Machine).

Full instructions are in the prepare-machine subdirectory.
Once you have decided which option to use to create your swarm cluster, you are ready to get started with one of the operations courses below:
Operations - Beginner
The beginner part of the Ops tutorial will teach you how to set up a swarm, how to use it to host your own registry, how to build your app container images and how to deploy and scale a distributed application called Dockercoins.
Operations - Intermediate
From global container scheduling and overlay network troubleshooting to stateful services and node management, this tutorial will show you how to operate your swarm cluster at scale and take you on a swarm mode deep dive.



Three Considerations for Planning your Docker Datacenter Deployment

Congratulations! You've decided to make the change to your application environment with Docker Datacenter. You're now on your way to greater agility, portability and control within your environment. But what do you need to get started? In this blog, we will cover the things you need to consider (strategy, infrastructure, migration) to ensure a smooth POC and migration to production.
1. Strategy
Strategy involves doing a little work up front to get everyone on the same page. This stage is critical to align expectations and set clear success criteria for exiting the project. The key focus areas are determining your objective, planning how to achieve it, and knowing who should be involved.
Set the objective - This is a critical step as it helps to set clear expectations, define a use case and outline the success criteria for exiting a POC. A common objective is to enable developer productivity by implementing a Continuous Integration environment with Docker Datacenter.
Plan how to achieve it - With a clear use case and outcome identified, the next step is to look at what is required to complete this project. For a CI pipeline, Docker is able to standardize the development environment, provide isolation of the applications and their dependencies and eliminate any "works on my machine" issues to facilitate the CI automation. When outlining the plan, make sure to select the pilot application. The work involved will vary depending on whether it is a legacy application refactoring or new application development.
Integration between source control and CI allows Docker image builds to be automatically triggered from a standard Git workflow.  This will drive the automated building of Docker images. After Docker images are built they are shipped to the secure Docker registry to store them (Docker Trusted Registry) and role based access controls enable secure collaboration. Images can then be pulled and deployed across a secure cluster as running applications via the management layer of Docker Datacenter (Universal Control Plane).
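A minimal sketch of such a pipeline, assuming hypothetical registry hostnames, credentials and image names, might look like this in a CI job:

# Triggered by a Git push: build, tag and ship the image to DTR
docker build -t dtr.example.com/dev/myapp:$GIT_COMMIT .
docker login -u ci-bot -p "$CI_TOKEN" dtr.example.com
docker push dtr.example.com/dev/myapp:$GIT_COMMIT
# UCP can then run the pushed image as a service
docker service create --name myapp dtr.example.com/dev/myapp:$GIT_COMMIT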
Know who should be involved - The solution will involve multiple teams, and it is important to include the correct people early to avoid any potential barriers later on. Depending on the initial project, these can include the development, middleware, security, architecture, networking, database, and operations teams. Understand their requirements, address them early, and gain consensus through collaboration.
PRO TIP - Most first successes tend to be web applications with some sort of data tier that can either utilize traditional databases or be containerized with persistent data being stored in volumes.
 
2. Infrastructure
Now that you understand the basics of building a strategy for your deployment, it's time to think about infrastructure. In order to install Docker Datacenter (DDC) in a highly available (HA) deployment, the minimum base infrastructure is six nodes. This will allow for the installation of three UCP managers and three DTR replicas on worker nodes, in addition to the worker nodes where the workloads will be deployed. An HA setup is not required for an evaluation, but we recommend a minimum of three managers and three replicas for production deployments so your system can handle failures.
PRO TIP - A best practice is to not deploy and run any container workloads on the UCP managers and DTR replicas. These nodes perform critical functions within DDC, and it is best if they run only the UCP or DTR services.
Nodes are defined as cloud, virtual or physical servers with Commercially Supported (CS) Docker Engine installed as a base configuration.
Each node should consist of a minimum of:

4GB of RAM
16GB storage space
For RHEL/CentOS with devicemapper: separate block device OR additional free space on the root volume group should be available for Docker storage.
Unrestricted network connectivity between nodes
OPTIONAL Internet access to Docker Hub to ease the initial downloads of the UCP/DTR and base content images
A Docker-supported operating system installed
Sudo access credentials to each node

Other nodes may be required for related CI tooling. For a POC built around DDC in a HA deployment with CI/CD, ten nodes are recommended. For a POC built around DDC in a non-HA deployment with CI/CD, five nodes are recommended.
Below are specific requirements for the individual components of the DDC platform:
Universal Control Plane

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for UCP in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for UCP and may be used but additional configuration is required).
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

Docker Trusted Registry

Commercially Supported (CS) Docker Engine must be used in conjunction with DDC.
TCP Load balancer should be available for DTR in an HA configuration.
A valid DNS entry should be created for the load balancer VIP.
Image Storage options include a clustered filesystem for HA or blob storage (AWS S3, Azure, S3 compatible storage, or OpenStack Swift)
SSL certificate from a trusted root CA should be created (a self-signed certificate is created for DTR and may be used but additional configuration is required).
LDAP/AD is available for authentication; managed built-in authentication can also be used but requires additional configuration
DDC License for 30 day trial or annual subscription must be obtained or purchased for the POC.

The POC design phase is the ideal time to assess how Docker Datacenter will integrate into your existing IT infrastructure, from CI/CD, networking/load balancing, volumes for persistent data, configuration management, monitoring, and logging systems. During this phase, understand how the existing tools fit and discover any gaps in your tooling. With the strategy and infrastructure prepared, begin the POC installation and testing. Installation docs can be found here.
 
3. Moving from POC Into Production
Once you have built out your POC environment, how do you know if it's ready for production use? Here are some suggested methods to handle the migration.

Perform the switchover from the non-Dockerized apps to Docker Datacenter in pre-production environments first. If you have Dev, Test, and Prod environments, switch over Dev and/or Test and run through a set burn-in cycle to allow for proper testing of the environment and to look for any unexpected or missing functionality. Once the non-production environments are stable, switch over the production environment.

Start integrating Docker Datacenter alongside your existing application deployments. This method requires that the application can run with multiple instances at the same time. For example, if your application is fronted by a load balancer, add the Dockerized application to the existing load balancer pool and begin sending traffic to the application running in Docker Datacenter. Should issues arise, remove the Dockerized application from the load balancer pool until they can be resolved.

Completely cut over to a Dockerized environment in one go. As additional applications begin to utilize Docker Datacenter, continue to use a tested pattern that works best for you to provide a standard path to production for your applications.

We hope these tips, learned from firsthand experience with our customers, help you in planning your deployment. By standardizing your application environment while simultaneously adding more flexibility for your application teams, Docker Datacenter gives you a foundation to build, ship and run containerized applications anywhere.


Enjoy your Docker Datacenter POC

Get started with your Docker Datacenter POC
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial


Get to Know the Docker Datacenter Networking Updates

The latest release of Docker Datacenter (DDC) on Docker Engine 1.12 brings many new networking features that were designed with service discovery and high availability in mind. As organizations continue their journey towards modernizing legacy apps and microservices architectures, these new features were created to address modern day infrastructure demands. DDC builds on and extends the built-in orchestration capabilities including declarative services, scheduling, networking and security features of Engine 1.12. In addition to these new features, we published a new Reference Architecture to help guide you in designing and implementing this for your unique application requirements.

Among the new features in DDC are:

DNS for service discovery
Automatic internal service load balancing
Cluster-wide transport-layer (L4) load balancing
Cluster-wide application-layer (L7) load balancing using the new HTTP Routing Mesh (HRM) experimental feature

 
When creating a microservice architecture, where services are often decoupled and communicate using APIs, there is an intrinsic need for many of these services to know how to communicate with each other. If a new service is created, how will it know where to find the other services it needs to communicate with? As a service needs to be scaled, what mechanism can be used to add the additional containers to a load balancer pool? DDC ships with tools that tackle these challenges and enable engineers to deliver software at the pace of ever-shifting business needs.
As services are created in DDC, each service name is registered in a DNS resolver for every Docker network the service is attached to, and the service can be reached from other applications on the same network by its name. DNS works well for service discovery; it requires minimal configuration and can integrate with existing systems, since the model has existed for decades.
It's also important for services to remain highly available after they discover each other. What good is a newly discovered service if you can't reach the API that developers labored over for weeks? I think we all know the answer to that, and it's a line in an Edwin Starr song (hint: absolutely nothing). There are a few new load balancing features introduced in DDC that are designed to always keep your services accessible. When services register in DNS, they are automatically assigned a Virtual IP (VIP). Internal requests pass through the VIP and are then load balanced. Docker handles the distribution of traffic among each healthy service task.
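A minimal sketch of both mechanisms in action (the network, service and image names are illustrative):

# Create an overlay network and attach two services to it
docker network create --driver overlay appnet
docker service create --name db --network appnet postgres:9.6
docker service create --name web --network appnet --replicas 3 myorg/web
# From any "web" task, the database is reachable simply as "db": the
# embedded DNS resolver returns the service VIP, and connections are
# load balanced across healthy "db" tasks.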
 
There are two new ways to load balance applications externally into a DDC managed cluster: the Swarm Mode Routing Mesh and the experimental HTTP Routing Mesh (HRM).

The Swarm Mode Routing Mesh works at the transport layer (L4): the admin assigns a port to a service (8080 in the example below), and when external web traffic arrives on that port on any host, the routing mesh routes it to a host that is running a container for that service. With the routing mesh, the host that accepts the incoming traffic does not need to have the service running on it.
The HTTP Routing Mesh works at the application layer (L7): the admin assigns a label to the service that corresponds to the host address. The external load balancer routes the hostnames to the nodes, and the routing mesh sends the traffic across the nodes in the cluster to the correct containers for the service.

Together, these offer multiple options to load balance and keep your application highly available.
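A quick sketch of the L4 routing mesh (the image and port are illustrative):

# Publish port 8080 on every node in the swarm
docker service create --name web --publish 8080:80 --replicas 2 nginx
# A request to port 8080 on ANY node is routed to a healthy "web"
# task, even if that node is not running one.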

Finally, while it's important to keep your services highly available, it's also important for the management of your cluster to be highly available. We improved the API health checks for Docker Trusted Registry (DTR) so that a load balancer can easily be placed in front of all replicas in order to route traffic to healthy instances. The new health check API endpoint is /health, and you can set an HTTPS check from your load balancer to the new endpoint to ensure high availability of DTR.
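Checking a replica by hand is as simple as the following (-k is shown in case the replica uses a self-signed certificate; the hostname is illustrative):

# A 200 response indicates a healthy DTR replica
curl -k https://dtr-replica-1.example.com/health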
 

There is a new Reference Architecture available with more detailed information on load balancing with Docker Datacenter and Engine 1.12. Additionally, because DDC is backwards compatible with applications built on previous versions of Docker Engine (1.11 and 1.10 using Docker Swarm 1.2), both the new routing mesh and Interlock-based load balancing and service discovery are supported in parallel on the same DDC-managed cluster. For your applications built with previous versions of Engine, a Reference Architecture for Load Balancing and Service Discovery with DDC + Docker Swarm 1.2 is also available.


More Resources:

Read the latest RA: Docker UCP Service Discovery and Load Balancing
See What’s New in Docker Datacenter
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial


Introducing Image Signing Policy in Docker Datacenter

My colleague Ying Li and I recently blogged about Securing the Software Supply Chain and drew the analogy between traditional physical supply chains and the creation, building, and deployment involved in a software supply chain. We believe that a software pipeline that can be verified at every stage is an important step in raising the security bar for all software, and we didn't stop at simply presenting the idea.

Integrated Content Trust and Image Signing Policy
In the recent release of Docker Datacenter, we announced a new feature that starts to bring these security capabilities together along the software supply chain. Built on Notary, a signing infrastructure based on The Update Framework (TUF), along with Docker Content Trust (DCT), an integration of the Notary toolchain into the Docker client, DDC now allows administrators to set up signing policies that prevent untrusted content from being deployed.
In this release of DDC, the Docker Trusted Registry (DTR) also ships with integrated Notary services. This means you're ready to start using DCT and the new signing policy features out of the box! There is no separate server and database to install, configure and connect to the registry.

Bringing it all together
Image signing is important for image creators to provide a proof of origin and verification through a digital signature of that image. Because an image is built in layers and passes through many different stages and is touched by different systems and teams, the ability to tie this together with a central policy ensures a greater level of application security.
In the web UI under settings, the admin can enable Content Trust to enforce that only signed images can be deployed to the DDC managed cluster. As part of that configuration, the admin can also select which signatures are required in order for that image to be deployed.

The configuration screen prompts the admin to select any number of teams from which a signature is required. A team in DDC can be defined as automated systems (Build / CI) or people in your organization.
The diagram below shows a sample workflow where the Content Trust settings require signatures from both CI and QA.

Stage 1: A developer checks in code and kicks off an integration test. The code passes CI, which automatically triggers a new image build, signature and push to Docker Trusted Registry (DTR).
Stage 2: The QA team pulls the image from DTR, performs additional testing and, once it passes, signs and pushes the image to DTR.
Stage 3: Release engineering goes to deploy the image to the production cluster. Since the Content Trust setting requires a signature from both CI and QA, DDC checks the image for both signatures and, since they exist (in our example), deploys the container.
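At the command line, a signed push like the one in stage 1 reduces to enabling content trust before pushing; a minimal sketch with an illustrative repository:

# With content trust enabled, the push is signed automatically
export DOCKER_CONTENT_TRUST=1
docker push dtr.example.com/eng/webapp:1.0.1
# An image missing a required team signature is rejected at deploy
# time once the UCP signing policy is enforced.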

We are excited to introduce this feature to our enterprise users to increase the security of their software supply chain and add a level of automated enforcement of policies that can be set up centrally.  As applications scale and teams grow, these features help provide assurances with proof of content origin, safe transport and that the approval gates have been met before deploying to production.
Download the free 30 day evaluation of Docker Datacenter to get started today.


Learn More

Save your seat: Demo webinar - Tomorrow, Wed Nov. 16th
Learn more by visiting the Docker Datacenter webpage
See What’s New in Docker Datacenter
Read the blog about the Secure Software Supply Chain
Sign up for a free 30 day trial license


Docker Datacenter adds enterprise orchestration, security policy and refreshed UI

Today we are excited to introduce new additions to Docker Datacenter, our Container as a Service (CaaS) platform for enterprise IT and application teams. Docker Datacenter provides an integrated platform for developers and IT operations teams to collaborate securely on the application lifecycle. Built on the foundation of Docker Engine, Docker Datacenter (DDC) also provides integrated orchestration, management and security around managing resources like access, images, applications, networks and more across the cluster.

This latest release of Docker Datacenter includes a number of new features and improvements focused in the following areas:

Enterprise orchestration and operations to make running and operating multi-container applications simple, secure and scalable
Integrated end-to-end security to cover all of the components and people that interact with the application pipeline
User experience and performance improvements ensure that even the most complex operations are handled efficiently

Let’s dig into some of the new features.
Enterprise orchestration with backward compatibility
This release of Docker Datacenter not only integrates the built-in orchestration capabilities of Docker Engine 1.12, utilizing swarm mode and services, but also provides backwards compatibility for standalone containers using the docker run command. To help enterprise application teams migrate, it is important for us to provide this continuity and time for applications to be updated to services, while still supporting environments that may contain both new Docker services and individual Docker containers. We do this by simultaneously enabling swarm mode and running standalone containers across the same cluster of nodes. This is completely transparent to the user; it's all handled as part of the DDC installation, and there is nothing for the admin to configure. Applications built with Docker Compose (version 2) files on Docker Engine 1.10 and 1.11 will continue to operate when deployed to the 1.12 cluster running DDC.
Docker Services, Load Balancing and Service Discovery
We've talked about Docker Services before with 1.12: every Docker service can easily scale out to add additional instances by declaring a desired state. This enables you to create a replicated, distributed, load balanced process on a swarm, which includes a virtual IP (VIP) and internal load balancing using IPVS. This can all be driven through Docker Datacenter as well, through both the CLI and the refreshed GUI, which walks you through the process of creating and managing services, which is especially helpful if you're new to the concept. You can also optionally add HTTP hostname-based routing using an experimental feature called the HTTP Routing Mesh.
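A brief sketch of declaring and changing that desired state (the image and service name are illustrative):

# Ask for two replicas, then raise the desired state to five; swarm
# mode reconciles the running tasks to match
docker service create --name web --replicas 2 -p 8080:80 nginx:alpine
docker service scale web=5
docker service ps web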
 
 
 
Integrated Image Signing and Policy Enforcement
To enable a secure software supply chain requires building security directly into the platform and making it a natural part of any admin task. In this release of Docker Datacenter, we advance content security with an integration of Docker Content Trust, providing both a seamless installation experience and the ability to enforce deployment policy in the cluster based on image signatures. Stay tuned, as our security team has a detailed blog on this later this week.
 
Refreshed User Interface and New Features
Providing an intuitive UI that is robust and easy to use is paramount to operating applications at scale, especially applications that can comprise tens or even hundreds of different containers that are rapidly changing. With this release we took the opportunity to refresh the GUI as we added more resources to manage and new configuration screens.
 
Integrating orchestration into Docker Datacenter also means exposing many of these new capabilities directly in the GUI. One example is the ability to deploy services directly from the DDC UI. You simply enter all of the parameters, like the service name, image name, number of replicas and permissions for the service.
 
In addition to deploying services, new capabilities have been added to the web UI like:

Node Management: The ability to add, remove and pause nodes and drain containers from a node. You can also manage labels and the SAN (Subject Alternative Name) for certificates assigned to each node.
Tag Metadata: Within the image repository, DDC now displays additional metadata for each tag that’s pushed to the repository, to provide greater visibility to what’s happening and who’s pushing changes with each image.
Container Health Checks: Health checks, introduced in the Docker Engine 1.12 command line, are now surfaced in the Docker Datacenter UI as part of the container details page (see the example after this list).
Access Control for Networks: Now networks can be assigned labels for granular levels of access control, just like services and containers.
DTR Installer: The commands to deploy the Trusted Registry are now available from inside the UI so it’s easier than ever to get working as quickly as possible.
Expanded Storage Support for images: We've added and enhanced support for image storage, including new support for Google Cloud Storage and S3-compatible object storage (e.g. IBM Cleversafe), and enhanced configuration for NFS.
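For reference, a health check can be attached when a container is started, and its status then appears in the container details page. A minimal sketch with an illustrative check command:

# Mark the container unhealthy if the HTTP check fails three times
docker run -d --name web \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 30s --health-retries 3 \
  nginx:alpine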

This is a jam-packed release of big and small features, all designed to bring more agility and control to the enterprise application pipeline. Our goal is to make it easy for application teams to build and operate Dockerized workloads in the infrastructure they already have. Don't miss the demo webinar on Wednesday to check out the new features in real time.
Learn More

Save your seat: Demo webinar on Wed Nov. 16th
Learn more by visiting the Docker Datacenter webpage
Sign up for a free 30 day trial license



Docker at Tech Field Day 12

Docker will be presenting at Tech Field Day 12, and you can sit in on the sessions, at least virtually.
Tech Field Day is an opportunity for IT practitioners to hear from some of the leading technology companies, and Docker is excited to be participating again. Many thanks to Stephen Foskett and Tom Hollingsworth for cultivating a vibrant community of technical leaders and evangelists and inviting us to participate. Looking forward to meeting more of the delegates.
Our session will be Wednesday, November 16th, from 4:30 to 6:30pm Pacific. We have a full slate of topics including:

Docker Datacenter: What is Docker Datacenter and how can it help organizations implement their own Container as a Service platform.
Docker for Windows Server: An overview of the integration of Docker containers and Windows Server 2016.
Docker for AWS and Docker for Azure: Learn about the easiest way to deploy and manage clusters of Docker hosts on both Azure and AWS.
Docker Security: We’ll discuss how to implement a secure software supply chain with Docker.
Docker Networking: A conversation on how Docker allows developers to define container centric networks that run on top of your existing infrastructure.

Not at the event? You will be able to watch live streams of all these presentations here.
Finally, if you'd like to check out videos of presentations from previous Tech Field Day events, visit our page on the Tech Field Day site.
See you online!
More Resources:

Watch live: All the presentations
View On Demand: Sessions from previous events
Learn More about Docker
Try Docker Datacenter free for 30 days

