Introducing Moby Project: a new open-source project to advance the software containerization movement

Since Docker democratized software containers four years ago, a whole ecosystem has grown around containerization, and in this compressed time period it has gone through two distinct phases of growth. In each of these two phases, the model for producing container systems evolved to adapt to the size and needs of the user community as well as the project and the growing contributor ecosystem.
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas.
Let’s review how we got where we are today. In 2013-2014, pioneers started to use containers, collaborating in a monolithic open-source codebase (Docker and a few other projects) to help the tools mature.

Then in 2015-2016, containers were massively adopted in production for cloud-native applications. In this phase, the user community grew to support tens of thousands of deployments that were backed by hundreds of ecosystem projects and thousands of contributors. It was during this phase that Docker evolved its production model to an open, component-based approach, allowing us to increase both the surface area of innovation and collaboration.
What sprang up were new independent Docker component projects that helped spur growth in the partner ecosystem and the user community. During that period, we extracted and rapidly innovated on components out of the Docker codebase so that systems makers could reuse them independently as they were building their own container systems: runc, HyperKit, VPNKit, SwarmKit, InfraKit, containerd, etc.

Being at the forefront of the container wave, one trend we see emerging in 2017 is containers going mainstream, spreading to every category of computing: server, data center, cloud, desktop, Internet of Things and mobile; every industry and vertical market: finance, healthcare, government, travel, manufacturing; and every use case: modern web applications, traditional server applications, machine learning, industrial control systems, robotics. What many new entrants in the container ecosystem have in common is that they build specialized systems, targeted at a particular infrastructure, industry or use case.
As a company Docker uses open source as our innovation lab, in collaboration with a whole ecosystem. Docker’s success is tied to the success of the container ecosystem: if the ecosystem succeeds, we succeed. Hence we have been planning for the next phase of the container ecosystem growth: what production model will help us scale the container ecosystem to fulfill the promise of making containers mainstream?
Last year, our customers started to ask for Docker on many platforms beyond Linux: Mac and Windows desktop, Windows Server, and cloud platforms like Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform, and we created a dozen Docker editions specialized for these platforms. In order to build and ship these specialized editions in a relatively short time, with small teams, in a scalable way, without having to reinvent the wheel, it was clear we needed a new approach. We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry, where assemblies of components are reused to build completely different cars.

We think the best way to scale the container ecosystem to the next level, and to take containers mainstream, is to collaborate on assemblies at the ecosystem level.

In order to enable this new level of collaboration, today we are announcing the Moby Project, a new open-source project to advance the software containerization movement. It provides a “Lego set” of dozens of components, a framework for assembling them into custom container-based systems, and a place for all container enthusiasts to experiment and exchange ideas. Think of Moby as the “Lego Club” of container systems.
Moby comprises:

A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects.

Moby is designed for system builders who want to build their own container-based systems, not for application developers, who can use Docker or other container platforms. Participants in the Moby Project can choose from the library of components derived from Docker, or they can elect to “bring your own components” (BYOC), packaged as containers, with the option to mix and match among all of the components to create a customized container system.
Docker uses the Moby Project as an open R&D lab, to experiment, develop new components, and collaborate with the ecosystem on the future of container technology. All our open source collaboration will move to the Moby project.
Please join us in helping take software containers mainstream, and grow our ecosystem and our user community to the next level by collaborating on components and assemblies.

Learn more about the Moby Project:

https://mobyproject.org/

Join us for the DockerCon 2017 Online Meetup Recap
Read the Announcement


Enterprise Ready Software from Docker Store

Docker Store is the place to discover and procure trusted, enterprise-ready containerized software: free, open source and commercial.
Docker Store is the evolution of Docker Hub, the world’s largest container registry, catering to millions of users. As of March 1, 2017, we crossed 11 billion pulls from the public registry! Docker Store leverages the public registry’s massive user base and ensures our customers (developers, operators and enterprise Docker users) get what they ask for. The Official Images program was developed to create a set of curated and trusted content that developers could use as a foundation for building containerized software. Building on those lessons learned and best practices, Docker recently launched a certification program that enables ISVs around the world to take advantage of Store to offer great software, packaged to operate optimally on the Docker platform.

The Docker Store is designed to bring Docker users and ecosystem partners together with:

Certified content: ISV apps that have been validated against Docker Enterprise Edition and come with cooperative support from Docker and the ISV.
Enhanced search and discovery capabilities for containers, including filtering support for platforms, categories and OS.
Self-service publisher workflow and interface to facilitate a scalable marketplace.
Support for a range of licensing models for published content.

Publishers with certified content on Docker Store include: AVI Networks, Cisco, Bleemeo, BlobCity DB, Blockbridge, CodeCov, CoScale, Datadog, Dynatrace, Gitlab, Hedvig, HPE, Hypergrid, Kaazing, Koekiebox, Microsoft, NetApp, Nexenta, Nimble, Nutanix, Polyverse, Portworx, Sysdig, and Weaveworks.
The simplest way to get started is to go check out Docker Store!

Using Docker Store
For developers and IT teams building Docker apps, the Docker Store is the best place to get the components they need, available as containers. Containerization technology has emerged as a strong solution for developers, devops and IT; enterprises especially need assurances that software packages are trusted and “just work” when deployed. The Docker Certification program takes containers through an end-to-end testing process and provides collaborative support for any potential issues. Read more about the certification program here!

Enhanced Discovery: Easily search a wide range of solutions from Docker, ISV containers or plugins. Use filters and categories to narrow results to specific characteristics.
Software Trials: Where available, free trials of commercial software (including Docker) are available from the Docker Store.
Community Content: Developers can continue to browse and download Docker Hub public repos from the Docker Store. The Docker community is vibrant and active, and community images remain accessible from the Store.
Notifications: Alerts and updates are available to manage subscriptions to Docker Store listings, including patches, fixes and new versions.
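
For day-to-day use, Docker Store content is consumed through the familiar Docker CLI. Here is a minimal sketch; the listing name below is a hypothetical example, not a real Store image:

# Log in with your Docker ID, then pull a Store listing like any other image.
docker login
docker pull store/examplevendor/exampleapp:1.0   # hypothetical listing name
docker run -d store/examplevendor/exampleapp:1.0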

Publish Content to Docker Store
From large ISVs with hundreds of products to small startups building new tools, Docker Store provides a marketplace to package and distribute software and plugins in containers, ready for use on the Docker platform. It makes these tools more accessible to the community of millions of Docker users and accelerates time to value with these partner solutions.
In addition, Publishers gain the following benefits from the Docker Store:

Access to a globally scalable container distribution service.
Path to certification for software and plugin content to differentiate the solution from the rest of the ecosystem and to signal additional value to end users.
Visibility and analytics including managing subscribers and sales reports.
Flexible fulfillment and billing support with “Paid via Docker” and BYOL (Bring your own License) models. You focus on creating great software and we take care of the rest.
Reputation management via Ratings and Reviews.

Getting started as a publisher on Docker Store is as simple as 1-2-3!

Tips for becoming a publisher:

Create Great Containerized Content (you have probably already done this!)
Follow best practices

https://success.docker.com/store

https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/

Use an official image as your base image.
github.com/docker/docker-bench-security
We will keep adding more best practices and tools to make your content robust (a minimal Dockerfile sketch follows this list).

Go to https://store.docker.com and click on “Publish”.
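
As a loose illustration of the best practices above, here is a minimal sketch of building publishable content from an official base image; the file and image names are hypothetical:

# Write a small Dockerfile that starts from an official base image.
cat > Dockerfile <<'EOF'
FROM alpine:3.5
# Install only what the app needs, and keep layers small.
RUN apk add --no-cache curl
COPY run.sh /run.sh
CMD ["/bin/sh", "/run.sh"]
EOF
# Build and tag the image, then audit the host with docker-bench-security.
docker build -t examplevendor/exampleapp:1.0 .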

More Resources

Learn more about certification. 
Sign up for a Docker Store Workshop at DockerCon
Learn More about Docker Enterprise Edition 



Global Mentor Week: Thank you Docker Community!

Danke, рақмет сізге, tak, धन्यवाद, cảm ơn bạn, شكرا, mulțumesc, Gracias, merci, asante, ευχαριστώ, thank you community for an incredible Docker Global Mentor Week! From Tokyo to Sao Paulo, Kisumu to Copenhagen and Ottawa to Manila, it was so awesome to see the energy from the community coming together to celebrate and learn about Docker!

Over 7,500 people registered to attend one of the 110 mentor week events across 5 continents! A huge thank you to all the Docker meetup organizers who worked hard to make these special events happen and offer Docker beginners and intermediate users an opportunity to participate in Docker courses.
None of this would have been possible without the support (and expertise!) of the 500+ advanced Docker users who signed up as mentors to help newcomers.
Whether it was mentors helping attendees, newcomers pushing their first image to Docker Hub or attendees mingling and having a good time, everyone came together to make mentor week a success as you can see on social media and the Facebook photo album.
Here are some of our favorite tweets from the meetups:
 

@Docker LearnDocker at Grenoble France 17Nov2016 @HPE_FR pic.twitter.com/8RSxXUWa4k
— Stephane Bureau (@SBUCloud) November 18, 2016

Awesome turnout at tonight’s @DockerNYC #learndocker event! We will be hosting more of these… Keep tabs on meetup: https://t.co/dT99EOs4C9 pic.twitter.com/9lZocCjMPb
— Luisa M. Morales (@luisamariethm) November 18, 2016

And finally… “Tada!” Docker Mentor Week #learndocker pic.twitter.com/6kzedIoGyB
— Károly Kass (@karolykassjr) November 17, 2016

 
Learn Docker
In case you weren’t able to attend a local event, the five courses are now available to everyone online here: https://training.docker.com/instructor-led-training
Docker for Developers Courses
Developer – Beginner Linux Containers
This tutorial will guide you through the steps involved in setting up your computer, running your first containers, deploying a web application with Docker and running a multi-container voting app with Docker Compose.
Developer – Beginner Windows Containers
This tutorial will walk you through setting up your environment, running basic containers and creating a Docker Compose multi-container application using Windows containers.
Developer – Intermediate (both Linux and Windows)
This tutorial teaches you how to network your containers, how you can manage data inside and between your containers and how to use Docker Cloud to build your image from source and use developer tools and programming languages with Docker.
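As a taste of that intermediate material, container networks and volumes are driven entirely from the CLI. A minimal sketch (the names are illustrative):

# Create a user-defined network and a named volume.
docker network create mynet
docker volume create mydata
# Run a container attached to the network, with the volume mounted for its data.
docker run -d --name web --network mynet -v mydata:/var/lib/data nginx
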
Docker for Operations courses
These courses are step-by-step guides in which you will build your own Docker cluster and use it to deploy a sample application. We have two options for you to create your own cluster.

Using play-with-docker

Play With Docker is a Docker playground that was built by two amazing Docker captains, Marcos Nils and Jonathan Leibiusky, during the Docker Distributed Systems Summit in Berlin last October.
Play with Docker (aka PWD) gives you the experience of having a free Alpine Linux Virtual Machine in the cloud where you can build and run Docker containers and even create clusters with Docker features like Swarm Mode.
Under the hood, DinD (Docker-in-Docker) is used to give the effect of multiple VMs/PCs.
To get started, go to http://play-with-docker.com/ and click on ADD NEW INSTANCE five times. You will get five “docker-in-docker” containers, all on a private network. These are your five nodes for the workshop!
When the instructions in the slides tell you to “SSH on node X”, just go to the tab corresponding to that node.
The nodes are not directly reachable from outside; so when the slides tell you to “connect to the IP address of your node on port XYZ” you will have to use a different method.
We suggest using “supergrok”, a container offering an NGINX+ngrok combo to expose your services. To use it, just start (on any of your nodes) the jpetazzo/supergrok image. The image will output further instructions:
# Start supergrok in the background, then follow its logs for instructions.
docker run --name supergrok -d jpetazzo/supergrok
docker logs --follow supergrok
The logs of the container will give you a tunnel address and explain how to connect to the exposed services. That’s all you need to do!
You can also view this excellent video by Docker Brussels Meetup organizer Nils de Moor who walks you through the steps to build a Docker Swarm cluster in a matter of seconds through the new play-with-docker tool.

 
Note that the instances provided by Play-With-Docker have a short lifespan (a few hours only), so if you want to do the workshop over multiple sessions, you will have to start over each time… or create your own cluster with the option below.

Using Docker Machine to create your own cluster

This method requires a bit more work to get started, but you get a permanent cluster, with less limitations.
You will need Docker Machine (if you have Docker for Mac, Docker for Windows, or the Docker Toolbox, you’re all set already). You will also need:

credentials for a cloud provider (e.g. API keys or tokens),
or a local install of VirtualBox or VMware (or anything supported by Docker Machine).

Full instructions are in the prepare-machine subdirectory.
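As a rough sketch of what that setup looks like with the local VirtualBox driver (the node names are just an example):

# Create five local VirtualBox VMs to act as cluster nodes.
for N in 1 2 3 4 5; do
  docker-machine create --driver virtualbox node$N
done
# Point your shell at the first node.
eval $(docker-machine env node1)
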
Once you have decided which option to use to create your swarm cluster, you are ready to get started with one of the operations courses below:
Operations – Beginner
The beginner part of the Ops tutorial will teach you how to set up a swarm, how to use it to host your own registry, how to build your app container images and how to deploy and scale a distributed application called Dockercoins.
Operations – Intermediate
From global container scheduling, overlay networks troubleshooting, dealing with stateful services and node management, this tutorial will show you how to operate your swarm cluster at scale and take you on a swarm mode deep dive.
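
For reference, once your five nodes exist (via either option above), bootstrapping swarm mode itself takes only a couple of commands; the address below is a placeholder:

# On the first node: initialize swarm mode; this prints a join token.
docker swarm init --advertise-addr 192.168.99.101
# On each remaining node: join the swarm using the token printed above.
docker swarm join --token <WORKER-TOKEN> 192.168.99.101:2377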



Docker Presents at Inaugural Cloud Field Day

Thanks to everyone who joined us last Thursday. We were really excited to participate in the first Cloud Field Day event and to host it at Docker HQ in San Francisco. Watching the trend to cloud and the changing dynamics of application development, the Tech Field Day organizers, Stephen Foskett and Tom Hollingsworth, started Cloud Field Day to create a forum for companies to share and for the delegates to discuss. The delegates came from backgrounds in software development, networking, virtualization, storage, data and, of course, cloud. As always, the delegates asked a lot of questions, kicked off some great discussions and even had some spirited debates, both in the room and online, always with the end user in mind. We are looking forward to doing this again.

ICYMI: The videos and event details are now available online, and you can also follow the conversation with the delegates on Twitter.

containers are really about applications, not infrastructure @docker https://t.co/BAabGfwKIm pic.twitter.com/S8YrLDLd92
— Karen Lopez (@datachick) September 15, 2016

 

It’s staggering how far apart many traditional IT departments are from where the leading edge currently is… #CFD1
— Jason Nash (@TheJasonNash) September 15, 2016

 

There is NO way to run @docker swarm mode insecurely! TLS built in! Gotta like that… #CFD1
— Nigel Poulton (@nigelpoulton) September 15, 2016

The three livestreamed sessions have been recorded and are now available to view.
Session 1: What is Docker? Featuring product manager Vivek Saraswat
In this session, Vivek explains container architecture, how containers differ from VMs and how they can be applied to application environments. Bonus demo featuring an app with rotating cat gifs.

Session 2: Docker Orchestration featuring architect Andrea Luzzardi
With Docker 1.12, orchestration is built directly into the Engine. As an optional feature, orchestration includes node clustering, container scheduling, a notion of application-level services, container-aware networking, security and much more.
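
To give a flavor of that built-in orchestration, here is a minimal sketch from the CLI (the service name and image are illustrative):

# Create a replicated service; the Engine schedules its tasks across the swarm.
docker service create --name web --replicas 3 -p 80:80 nginx
# Scale it; the scheduler reconciles the cluster to the new desired state.
docker service scale web=5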

Session 3: Docker and Microsoft featuring product manager Michael Friis
Enterprises have a mix of Linux and Windows application workloads. In this session, Michael explains how Docker and Windows Server deliver Windows containers and other integrations to the native Microsoft developer and IT pro toolset.

And we are not finished yet! The Docker Team will be participating in the upcoming Tech Field Day 12 in Silicon Valley on November 15-16th. Check back on the Tech Field Day site to get updated times and a link to view the live stream.
See you online soon!
More resources:

Learn more about Docker for the Enterprise
Read the white paper: Docker for the Virtualization Admin
Docker 1.12 with built in orchestration
Learn more about Docker Datacenter


Securing the Enterprise Software Supply Chain Using Docker

At Docker we have spent a lot of time discussing runtime security and isolation as a core part of the container architecture. However, that is just one aspect of the total software pipeline. Instead of a one-time flag or setting, we need to approach security as something that occurs at every stage of the application lifecycle. Organizations must apply security as a core part of the software supply chain, where people, code and infrastructure are constantly moving, changing and interacting with each other.
If you consider a physical product like a phone, it’s not enough to think about the security of the end product. Beyond deciding what kind of theft-resistant packaging to use, you might want to know where the materials are sourced from and how they are assembled, packaged and transported. Additionally, it is important to ensure that the phone is not tampered with or stolen along the way.

The software supply chain maps almost identically to the supply chain for a physical product. You have to be able to identify and trust the raw materials (code, dependencies, packages), assemble them together, ship them by sea, land, or air (network) to a store (repository) so the item (application) can be sold (deployed) to the end customer.
Securing the software supply chain is quite similar. You have to:

Identify all the stuff in your pipeline, from people, code and dependencies to infrastructure
Ensure a consistent and quality build process
Protect the product while in storage and transit
Guarantee and validate the final product at delivery against a bill of materials

In this post we will explain how Docker’s security features can be used to provide active and continuous security for a software supply chain.
Identity
The foundation of the entire pipeline is built on identity and access. You fundamentally need to know who has access to which assets and who can run processes against them. The Docker architecture has a distinct identity concept that underpins the strategy for securing your software supply chain: cryptographic keys allow the publisher to sign images, ensuring proof-of-origin, authenticity and provenance for Docker images.
Consistent Builds: Good Input = Good Output
Establishing consistent builds allows you to create a repeatable process and get control of your application dependencies and components, making it easier to test for defects and vulnerabilities. When you have a clear understanding of your components, it becomes easier to identify the things that break or are anomalous.

To get consistent builds, you have to ensure you are adding good components:

Evaluate the quality of the dependency, make sure it is the most recent/compatible version and test it with your software
Authenticate that the component comes from a source you expect and was not corrupted or altered in transit
Pin the dependency, ensuring subsequent rebuilds are consistent, so it is easier to uncover whether a defect is caused by a change in code or a dependency (see the sketch after this list)
Build your image from a trusted, signed base image using Docker Content Trust
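
As a concrete sketch of the pinning advice, every image has a content-addressable digest you can pin to (the digest below is a placeholder):

# Pull a base image and note its digest.
docker pull alpine:3.5
docker images --digests alpine
# Pin the Dockerfile to that exact digest so rebuilds are repeatable, e.g.:
#   FROM alpine@sha256:<digest-from-the-output-above>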

Application Signing Seals Your Build
Application signing is the step that effectively “seals” the artifact from the build. By signing the images, you ensure that whoever verifies the signature on the receiving side (docker pull) establishes a secure chain with you (the publisher). This relationship assures that the images were not altered, added to, or deleted from while stored in a registry or during transit. Additionally, signing indicates that the publisher “approves” the image you have pulled as good.

Enabling Docker Content Trust on both build machines and the runtime environment sets a policy so that only signed images can be pulled and run on those Docker hosts.  Signed images signal to others in the organization that the publisher (builder) declares the image to be good.
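
In practice, this policy is a single switch on each host; a minimal sketch (the image name is hypothetical):

# With content trust enabled, pushes are signed and unsigned pulls are refused.
export DOCKER_CONTENT_TRUST=1
docker push examplevendor/exampleapp:1.0
docker pull examplevendor/exampleapp:1.0   # fails unless the tag has a valid signature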
Security Scanning and Gating
Your CI system and developers verify that your build artifact works with the enumerated dependencies and that operations on your application behave as expected in both the success and failure paths. But have they vetted the dependencies for vulnerabilities? Have they vetted subcomponents of the dependencies, or bundled system libraries, for vulnerabilities? Do they know the licenses of their dependencies? This kind of vetting is almost never done on a regular basis, if at all, since it is a huge overhead on top of already delivering bugfixes and features.

Docker Security Scanning assists in automating the vetting process by scanning the image layers. Because this happens as the image is pushed to the repo, it acts as a last check or final gate before images are deployed into production. Currently available in Docker Cloud and coming soon to Docker Datacenter, Security Scanning creates a Bill of Materials of all of the image’s layers, including packages and versions. This Bill of Materials is used to continuously monitor against a variety of CVE databases. This ensures that scanning happens more than once and notifies the system admin or application developer when a new vulnerability is reported for an application package that is in use.
Threshold Signing – Tying it all Together
One of the strongest security guarantees that comes from signing with Docker Content Trust is the ability to have multiple signers participate in the signing process for a container. To understand this, imagine a simple CI process that moves a container image through the following steps:

Automated CI
Docker Security Scanning
Promotion to Staging
Promotion to Production

This simple 4-step process can add a signature after each stage has been completed and verify that every stage of the CI/CD process has been followed.

Image passes CI? Add a signature!
Docker Security Scanning says the image is free of vulnerabilities? Add a signature!
Build successfully works in staging? Add a signature!
Verify the image against all 3 signatures and deploy to production

Now before a build can be deployed to the production cluster, it can be cryptographically verified that each stage of the CI/CD process has signed off on an image.
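
One way to approximate this flow with content trust as it exists today is to push the image, with signing enabled, to a stage-specific repository after each gate, so production can verify that every stage signed off. A rough sketch with hypothetical repository names:

export DOCKER_CONTENT_TRUST=1
# Each gate re-tags and pushes the image into its own signed repository.
docker tag exampleapp:build example/exampleapp-ci-passed:42      # signature 1: CI
docker push example/exampleapp-ci-passed:42
docker tag exampleapp:build example/exampleapp-scanned:42        # signature 2: scanning
docker push example/exampleapp-scanned:42
docker tag exampleapp:build example/exampleapp-staging-ok:42     # signature 3: staging
docker push example/exampleapp-staging-ok:42
# Production deploys only if all three signed tags resolve to the same image.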
Conclusion
The Docker platform provides enterprises the ability to layer in security at each step of the software lifecycle. From establishing trust with their users to securing the infrastructure and code, our model gives both freedom and control to developer and IT teams. From building secure base images to scanning every image and signing every layer, each feature allows IT to build a level of trust and guarantee into the application. As applications move through their lifecycle, their security profile is actively managed, updated and gated before deployment.


More Resources:

Read the Container Isolation White Paper
ADP hardens enterprise containers with Docker Datacenter
Try Docker Datacenter free for 30 days
Watch this talk from DockerCon 2016
