Be a Budget Hero with Docker Enterprise Edition

We recently started a multi-part learning series for SysAdmins and IT professionals called IT Starts With Docker. We started with the basics, covering container technology and Docker solutions for the enterprise. Now, we shift to the important question: Is it worth your time and your company’s investment to further explore Docker Enterprise Edition (EE)?
The resounding answer to that question is YES. IT teams that have adopted Docker EE are finding it faster and easier to deploy and maintain their applications, and they drive better infrastructure utilization, all without touching the underlying code. Developer teams are realizing productivity gains of their own: onboarding new developers faster, shortening the cycle from development to production, and eliminating the burdensome “it worked on my machine” problems.

Try the simple ROI calculator for yourself. It takes just a couple of minutes and lets you estimate your own savings with Docker EE and see how you can become the budget hero of your department. Then, register for our live webinar on Tuesday, August 15th, The Business Value of Docker, where we will outline how organizations like yours are saving 50% on their total costs with Docker EE.
The calculator is based on real data from Docker EE customers across a broad range of industries. In just the first 5 days of a Proof-of-Concept through the Modernize Traditional Apps program, customers are seeing how they can:

cut application deployment times from days or even weeks to hours or minutes
streamline application maintenance, cutting time spent on upgrades by as much as 99%
consolidate and maximize infrastructure utilization (yes, it works with your existing VMs)
and enable development teams and IT operations teams to work efficiently together to accelerate application delivery

Where IT Begins
If you’re new to IT Starts With Docker, learn the basics of Docker and containers with our online hands-on learning environment. Within a few minutes of following the labs, you will see why so many organizations are now running containers in production. These labs may trigger your own ideas of how Docker might help you in your job: smoother deployment and maintenance of applications, securing apps throughout your software supply chain, and managing apps across disparate infrastructure. Docker can help with all of these and more.
From the first application that gets containerized with Docker, you can show positive returns. We invite you to sign up for updates on this special learning series and see how Docker can make you a budget hero.
To learn more about Docker EE:

Read more about Docker Enterprise Edition 
Try the ROI calculator and download the whitepaper
Visit IT Starts with Docker and sign up for updates
Explore and register for other upcoming webinars or join a local Meetup
Learn everything you need to know about Docker at DockerCon Europe


The post Be a Budget Hero with Docker Enterprise Edition appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/

What is containerd?

We have done a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way.  Containerd is used by Docker, Kubernetes CRI, and a few other projects but this is a post for people who may not know what containerd actually does within these platforms.  I would like to do more posts on the featureset and design of containerd in the future but for now, we will start with the basics.
I think the container ecosystem can be confusing at times. Especially with the terminology that we use. What’s this? A runtime. And this? A runtime…  containerd, as the name implies (not “contain nerd,” as some would like to troll me with), is a container daemon.  It was originally built as an integration point for OCI runtimes like runc, but over the past six months it has added a lot of functionality to bring it up to par with the needs of modern container platforms like Docker and Kubernetes.

Since there is no such thing as Linux containers in the kernelspace (containers are various kernel features tied together), when you are building a large platform or distributed system you want an abstraction layer between your management code and the syscalls and duct tape of features needed to run a container.  That is where containerd lives.  It provides a client layer of types that platforms can build on top of without ever having to drop down to the kernel level.  It’s so much nicer to work with Container, Task, and Snapshot types than it is to manage calls to clone() or mount().
Containerd was designed to be used by Docker and Kubernetes as well as any other container platform that wants to abstract away syscalls or OS-specific functionality to run containers on Linux, Windows, Solaris, or other OSes.  With these users in mind, we wanted to make sure that containerd has only what they need and nothing that they don’t.  Realistically this is impossible, but at least that is what we try for.  Things like networking are out of scope for containerd.  The reason for this is that when you are building a distributed system, networking is a very central aspect.  With SDN and service discovery today, networking is way more platform-specific than abstracting away netlink calls on Linux.  Most of the new overlay networks are route based and require routing tables to be updated each time a new container is created or deleted.  Service discovery, DNS, etc. all have to be notified of these changes as well.  It would have been a large chunk of code to support all the different network interfaces, hooks, and integration points if we had added networking to containerd.  What we did instead was opt for a robust events system inside containerd so that multiple consumers can subscribe to the events that they care about.  We also expose a task API that lets users create a running task, add interfaces to the network namespace of the container, and then start the container’s process without the need for complex hooks at various points of a container’s lifecycle.
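The events-over-hooks design is easy to picture as code. Here is an illustrative sketch, not the containerd API: the `EventBus` type and the topic names are invented for this example, just to show the shape of a design where the daemon emits lifecycle events and any number of consumers (say an SDN controller and a DNS service) subscribe independently:

```python
from collections import defaultdict


class EventBus:
    """Toy model of the design choice described above: instead of calling
    fixed networking hooks at set lifecycle points, emit events and let
    any number of consumers subscribe to the topics they care about."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()
seen = []
# An SDN controller and a DNS service each subscribe independently;
# the daemon never needs to know who is listening.
bus.subscribe("container.create", lambda e: seen.append(("sdn", e["id"])))
bus.subscribe("container.create", lambda e: seen.append(("dns", e["id"])))
bus.publish("container.create", {"id": "web-1"})
```

Both consumers see the create event without the publisher carrying any network-specific code, which is the point: networking stays out of the daemon.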
Another area that has been added to containerd over the past few months is a complete storage and distribution system that supports both OCI and Docker image formats.  You have a complete content-addressed storage system across the containerd API that works not only for images but also for metadata, checkpoints, and arbitrary data attached to containers.
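As a rough illustration of what content addressing means, here is a toy store invented for this example (it is not the containerd API): every blob, whether image layer, metadata, or checkpoint, is keyed by the digest of its bytes, so identical content is stored once and every read can verify integrity.

```python
import hashlib


class ContentStore:
    """Minimal model of a content-addressed store: blobs are keyed by
    the SHA-256 digest of their bytes."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        self._blobs[digest] = data
        return digest

    def get(self, digest: str) -> bytes:
        data = self._blobs[digest]
        # Re-hash on read: a corrupted blob can never satisfy its digest.
        assert "sha256:" + hashlib.sha256(data).hexdigest() == digest
        return data


store = ContentStore()
d = store.put(b'{"layers": ["rootfs"]}')
assert store.get(d) == b'{"layers": ["rootfs"]}'
```

Because the key is derived from the content, storing the same bytes twice yields the same digest, which is what makes deduplication across images and metadata fall out for free.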
We also took the time to rethink how “graphdrivers” work.  These are the overlay or block-level filesystems that allow images to have layers and let you perform efficient builds.  Graphdrivers were initially written by Solomon and me when we added support for devicemapper.  Docker only supported AUFS at the time, so we modeled the graphdrivers after the overlay filesystem.  However, making a block-level filesystem such as devicemapper/lvm act like an overlay filesystem proved to be much harder to do in the long run.  The interfaces had to expand over time to support different features than what we originally thought would be needed.  With containerd, we took a different approach: make overlay filesystems act like snapshotters instead of vice versa.  This was much easier to do, as overlay filesystems provide much more flexibility than snapshotting filesystems like BTRFS, ZFS, and devicemapper because they don’t have a strict parent/child relationship.  This helped us build a smaller interface for the snapshotters while still fulfilling the requirements of things like the builder, as well as reduce the amount of code needed, making it much easier to maintain in the long run.
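The parent/child snapshot model can be sketched in a few lines. This `Snapshotter` is a toy invented for the example (containerd's real snapshotter is a Go interface over actual filesystems): each snapshot records its parent, and reading a path walks the chain from the newest layer back to the oldest, which is the essence of an overlay mount.

```python
class Snapshotter:
    """Toy model of layered snapshots: each layer knows its parent,
    and lookups fall through to the parent chain, like an overlay."""

    def __init__(self):
        self._layers = {}  # name -> (parent name or None, {path: content})

    def prepare(self, name, parent=None):
        self._layers[name] = (parent, {})

    def write(self, name, path, content):
        self._layers[name][1][path] = content

    def read(self, name, path):
        # Walk from the newest layer toward the root until the path is found.
        while name is not None:
            parent, files = self._layers[name]
            if path in files:
                return files[path]
            name = parent
        raise FileNotFoundError(path)


s = Snapshotter()
s.prepare("base")
s.write("base", "/etc/os-release", "alpine")
s.prepare("app", parent="base")
s.write("app", "/app/run.sh", "echo hi")
assert s.read("app", "/etc/os-release") == "alpine"  # falls through to parent
```

Modeling block-level filesystems this way is awkward (they want strict parent/child clones), which is why flipping the abstraction, overlay-as-snapshotter rather than block-device-as-overlay, gave a smaller interface.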
So what do you actually get using containerd?  You get push and pull functionality as well as image management.  You get container lifecycle APIs to create, execute, and manage containers and their tasks. An entire API dedicated to snapshot management.  Basically everything that you need to build a container platform without having to deal with the underlying OS details.  I think the most important part of containerd is having a versioned and stable API that will have bug fixes and security patches backported.


Learn more about containerd:

Check out the containerd GitHub Repo
Join the containerd Slack channel
Register for the Moby Summit LA alongside Open Source Summit North America
Register for DockerCon Europe and DockerCon Moby Summit


Docker 101: Introduction to Docker webinar recap

Docker is standardizing the way to package applications, making it easier for developers to code and build apps on their laptops or workstations, and for IT to manage, secure and deploy them into a variety of infrastructure platforms.
In last week’s webinar, Docker 101: An Introduction to Docker, we went from describing what a container is, all the way to what a production deployment of Docker looks like, including how large enterprise organizations and world-class universities are leveraging Docker Enterprise Edition (EE)  to modernize their legacy applications and accelerate public cloud adoption.
If you missed the webinar, you can watch the recording here:

We ran out of time to go through everyone’s questions, so here are some of the top questions from the webinar:
Q: How does Docker get access to platform resources, such as I/O, networking, etc.? Is it a type of hypervisor?
A: Docker EE is not a type of hypervisor. Hypervisors create virtual hardware: they make one server appear to be many servers, but generally know little or nothing about the applications running inside them. Containers are the opposite: they make one OS or one application server appear to be many isolated instances. Containers explicitly must know the OS and application stack, but the hardware underneath is less important to the container. On Linux operating systems, the Docker Engine is a daemon installed directly on the host operating system that isolates and segregates the processes of the different containers running on that operating system. The platform resources are accessed by the host operating system, and each container gets isolated access to these resources through segregated namespaces and control groups (cgroups). cgroups allow Docker to share available hardware resources among containers and optionally enforce limits and constraints. You can read more about this here.
Q: Are containers secure, since they run on the same OS?
A: Yes. cgroups, namespaces, seccomp profiles and the “secure by default” approach of Docker all contribute to the security of containers. Separate namespaces protect processes running within a container, meaning a container cannot see, much less affect, processes running in another container or in the host system. cgroups help ensure that each container gets its fair share of memory, CPU, and disk I/O and, more importantly, that a single container cannot bring the system down by exhausting one of those resources. And Docker is designed to limit root access of containers themselves by default, meaning that even if an intruder manages to escalate to root within a container, it will be much harder to do serious damage or to escalate to the host. These are just some of the many ways Docker is designed to be secure by default. Read more about Docker security and security features here.
Docker Enterprise Edition includes additional advanced security options including role-based access control (RBAC), image signing to validate image integrity, secrets management, and image scanning to protect images from known vulnerabilities. These advanced capabilities provide an additional layer of security across the entire software supply chain, from developer’s laptop to production.
Q: Can a Docker image created under one OS (e.g. Windows) be run on a different operating system (e.g. Red Hat 7.x)?
A: Unlike VMs, Docker containers share the OS kernel of the underlying host, so containers can move from one Linux OS to another, but not from Windows to Linux. So you cannot run a .NET app natively on a Linux machine, but you can run a RHEL-based container on a SUSE-based host because they both leverage the same OS kernel.
Q: Is there another advantage other than DevOps for implementing Docker in enterprise IT infrastructure?
A: Yes! Docker addresses many different IT challenges and aligns well with major IT initiatives including hybrid/multi-cloud, data center and app modernization. Legacy applications are difficult and expensive to maintain. They can be fragile and insecure due to neglect over time while maintaining them consumes a large portion of the overall IT budget. By containerizing these traditional applications, IT organizations save time and money and make these applications more nimble. For example:

Cloud portability: By containerizing applications, they can be easily deployed across different certified platforms without requiring code changes.
Easier application deployment and maintenance: Containers are based on images which are defined in Dockerfiles. This simplifies the dependencies of an application, making them easier to move between dev, test, QA, and production environments and also easier to update and maintain when needed. 62% of customers with Docker EE see a reduction in their mean time to resolution (MTTR).
Cost savings: Moving to containers provides overall increased utilization of available resources, which means that customers often see up to 75% improved consolidation of virtual machines or CPU utilization. That frees up more budget to spend on innovation.

To learn more about how IT can benefit from modernizing traditional applications with Docker, check out www.docker.com/MTA.
Q: Can you explain more about how Docker EE can be used to convert apps to microservices?
A: Replacing an existing application with a microservices architecture is often a large undertaking that requires significant investment in application development. Sometimes it is impossible as it requires systems of record that cannot be replaced. What we see many companies do is containerize an entire traditional application as a starting point. They then peel away pieces of the application and convert those to microservices rather than taking on the whole application. This allows the organization to modernize components like the web interface without complete re-architecture, allowing the application to have a modern interface while still accessing legacy data.
Q: Are there any tools that will help us manage private/corporate images? Can we host our own image repository in-house vs. using the cloud?
A: Yes! Docker Trusted Registry (DTR) is a private registry included in Docker Enterprise Edition Standard and Advanced. DTR also provides advanced capabilities around security (e.g. image signing, image scanning) and access control (e.g. LDAP/AD integration, RBAC). It is intended to be a private registry for you to install either in your data center or in your virtual private cloud environment.
Q: Is there any way to access the host OS file system(s)? I want to put my security scan software in a Docker container but scan the host file system.
A: The best way to do this is to mount the host directory as a volume in the container with “-v /:/root_fs” so that the file system and directory are shared and visible in both places. More information around storage volumes, mounting shared volumes, backup and more are here.


Next Steps:

If you’re an IT professional, join our multi-part learning series: IT Starts with Docker
If you’re a developer, check out the Docker Playground 
Learn more about Docker Enterprise Edition or try the new hosted demo environment
Explore and register for other upcoming webinars or join a local Meetup


Announcing the first DockerCon Europe 2017 Speakers

Summer is flying by and DockerCon Europe 2017 (October 16-19) will be here before we know it! The DockerCon team is heads down reviewing all of the proposals submitted and we are almost ready to release a full agenda. With that, we are thrilled to share with you the DockerCon Europe 2017 Website including the first confirmed speakers and sessions.

Abby Fuller, AWS
Adrian Mouat, Container Solutions
Arun Gupta, AWS
Bret Fisher, Independent Consultant
Elton Stoneman, Docker
Nandhini Santhanam, Docker
Mike Coleman, Docker
Tycho Andersen, Docker

Learn more about DockerCon: 

Register for DockerCon Europe
Sign up to receive DockerCon News



IT Starts with Docker

Happy SysAdmin Day! Cheers to all of you who keep your organizations running, keep our data secure, respond at a moment’s notice and bring servers and apps back to life after a crash. Today we say, “Thank You!”

Anniversaries are a great time to reflect on the accomplishments of the last year: the projects you’ve completed, the occasions you’ve saved your company money or time, the new technology you’ve learned. In a role like IT, so much can change each year as technology progresses, and it becomes ever more challenging to stay ahead of that curve. So this SysAdmin Day, we at Docker want to congratulate you on your past successes and prepare you for the year to come.
Containers are not just for developers anymore, and Docker is the standard for packaging all kinds of applications: Windows, Linux, traditional, and microservices. Over the next few months, we’ll be covering how SysAdmins like you are enabling their organizations to innovate faster while saving their companies money by embracing containers with Docker Enterprise Edition.
Sign up here to start your journey and learn how IT Starts with Docker. 

This multi-part series will include:

How Docker Enterprise Edition is helping IT organizations free up money for new initiatives by changing the way applications are deployed and maintained, and how customers are seeing 50-75% infrastructure savings when running containers in production.
Hands-on learning around container management and security to see how organizations are using containers across a broad spectrum of applications and infrastructure platforms.
How IT can lead the containerization of traditional applications and gain application portability, security, and efficiency in just 5 days. We’ll provide a closer look at the Modernize Traditional Applications (MTA) program that is co-delivered by Docker and our strategic partners, and how you can leverage it to start your organization’s modernization efforts.
Close examination and customer stories of the key use cases for Docker Enterprise Edition to help you apply this new knowledge to your own upcoming IT projects.

Sign up today and we’ll make sure that by next year’s SysAdmin Day, you’ll be able to reflect on how Docker has helped you accomplish even more in your organization.
To get started:

Visit the Docker page for IT professionals
Begin your learning with the basics
Sign up to receive newsletter updates on this series



Happy Second Birthday: A Kubernetes Retrospective

As we do every July, we’re excited to celebrate Kubernetes’ 2nd birthday! In the two years since 1.0 launched as an open source project, Kubernetes (abbreviated as K8s) has grown to become the highest-velocity cloud-related project. With more than 2,611 diverse contributors, from independents to leading global companies, the project has had 50,685 commits in the last 12 months. Of the 54 million projects on GitHub, Kubernetes is in the top 5 for number of unique developers contributing code. It also has more pull requests and issue comments than any other project on GitHub.

Figure 1: Kubernetes Rankings

At the center of the community are Special Interest Groups (SIGs), with members from different companies and organizations, all with a common interest in a specific topic. Given how fast Kubernetes is growing, SIGs help nurture and distribute leadership while advancing new proposals, designs and release updates. Here’s a look at the SIG building blocks supporting Kubernetes.

Kubernetes has also earned the trust of many Fortune 500 companies, with deployments at Box, Comcast, Pearson, GolfNow, eBay and Ancestry.com, and contributions from CoreOS, Fujitsu, Google, Huawei, Mirantis, Red Hat, Weaveworks, ZTE Company and others.
Today, on the second anniversary of the Kubernetes 1.0 launch, we take a look back at some of the major accomplishments of the last year:

July 2016
Kubernauts celebrated the first anniversary of the Kubernetes 1.0 launch with 20 #k8sbday parties hosted worldwide
Kubernetes v1.3 release

September 2016
Kubernetes v1.4 release
Launch of kubeadm, a tool that makes Kubernetes dramatically easier to install
Pokemon Go – one of the largest installs of Kubernetes ever

October 2016
Introduced the Kubernetes service partners program and a redesigned partners page

November 2016
CloudNativeCon/KubeCon Seattle
The Cloud Native Computing Foundation partnered with The Linux Foundation to launch a new Kubernetes certification, training and managed service provider program

December 2016
Kubernetes v1.5 release

January 2017
Survey from CloudNativeCon + KubeCon Seattle showcases the maturation of Kubernetes deployment

March 2017
CloudNativeCon/KubeCon Europe
Kubernetes v1.6 release

April 2017
The Battery Open Source Software (BOSS) Index lists Kubernetes as #33 in the top 100 popular open-source software projects

May 2017
Four Kubernetes projects accepted to the Google Summer of Code (GSoC) 2017 program
Shutterstock and Kubernetes appear in The Wall Street Journal: “On average we [Shutterstock] deploy 45 different releases into production a day using that framework. We use Docker, Kubernetes and Jenkins [to build and run containers and automate development],” said CTO Marty Brodbeck on the company’s IT overhaul and adoption of containerization.

June 2017
Kubernetes v1.7 release
Survey from CloudNativeCon + KubeCon Europe shows Kubernetes leading as the orchestration platform of choice
Kubernetes ranked #4 in the 30 highest-velocity open source projects

Figure 2: The 30 highest velocity open source projects. Source: https://github.com/cncf/velocity

July 2017
Kubernauts celebrate the second anniversary of the Kubernetes 1.0 launch with #k8sbday parties worldwide!

At the one-year anniversary of the Kubernetes 1.0 launch, there were 130 Kubernetes-related Meetup groups. Today, there are more than 322 Meetup groups with 104,195 members. Local Meetups around the world joined the #k8sbday celebration! Take a look at some of the pictures from their celebrations. We hope you’ll join us at CloudNativeCon + KubeCon, December 6-8 in Austin, TX.

Celebrating at the K8s birthday party in San Francisco
Celebrating in RTP, NC with a presentation from Jason McGee, VP and CTO, IBM Cloud Platform. Photo courtesy of @FranklyBriana
The Kubernetes Singapore meetup celebrating with an intro to GKE. Photo courtesy of @hunternield
New York celebrated with mini K8s cupcakes and a presentation on the history of cloud native from CNCF Executive Director Dan Kohn. Photo courtesy of @arieljatib and @coreos
Quebec City had custom K8s cupcakes too! Photo courtesy of @zig_max
Beijing celebrated with custom K8s lollipops. Photo courtesy of @maxwell9215

– Sarah Novotny, Program Manager, Kubernetes Community
Quelle: kubernetes

Modernize Traditional Applications by Docker Webinar Recap

IT organizations continue to spend 80% of their budget simply maintaining their existing applications, while only spending 20% on innovation. That ratio has not changed over the last 10 years, and yet there’s no shortage of pressure to innovate. Whether it comes directly from your customers asking for new features or from your management chain, the story is the same: you have to do more with less.

Thankfully, there is the Modernize Traditional Applications (MTA) program from Docker, where you can take your existing legacy applications, the same ones that underpin your business, and make them 70% more efficient, more secure and, best of all, portable across any infrastructure. And you can do all of that without touching a single line of the underlying application code. Sounds too good to be true, right? Well, watch the recording below and you’ll see that it’s absolutely possible.


Learn more about the Modernize Traditional Apps program:

Visit docker.com/mta to find out more information about getting involved
Contact Sales to see about getting your own MTA engagement
Take a look at the Docker ROI Calculator and see how much you can save


Docker Community Spotlight: Adina-Valentina Radulescu

Adina-Valentina Radulescu, a DevOps/Integration Engineer for Pentalog Romania, has been organizing meetups for not one but two meetup groups.
In February of last year, Adina founded Docker Brasov and Docker Timisoara, and has since done an amazing job creating and fostering a sense of belonging in her community. This month, we’re happy to shine the community spotlight on Adina to learn more about her Docker story.
Tell us about your first experience with Docker.
The first time I heard about Docker was back in 2014. I played around with Docker and I was impressed with the simplicity of integration so I wanted to learn more. I was able to attend DockerCon EU in 2015 in Barcelona where I completed some labs and attended the talks to learn as much as I could about Docker. It was a powerful feeling.
Why did you start Docker Brasov and Docker Timisoara?
I wanted to keep up the Docker experience-sharing when I got back to Romania. I relocated from Timisoara to a beautiful mountain city, Brasov. In Timisoara, I knew people and companies. In Brasov, I knew almost no one. This is why I decided to start the two groups: so I could share what I had been learning, allow others to share their experience, and so we could stay up-to-date on Docker.

 
What do you love about the community and specifically the Docker community in Romania? What makes your community unique?
For both cities, what I love about the community is the people. Romanian people are extraordinary. In both groups, there are many different types of people with different backgrounds and experience. There are developers, QA managers, ops, students, freelancers, etc. If someone can’t come, or has to cancel, they let me know. Our members respect each other and everyone is very warm and welcoming.
Now that you use Docker, how do you use it, and what do you use it for?
I use Docker both professionally and in my personal projects. At work, I help run all the tools that our development team needs in Docker. When IP and network changes were made, QA was affected first, so they noticed communication issues that we could fix before release delivery. Since I brought Docker into the mix, the developers and QAs can actually focus on implementing and testing the application functionalities instead of spending 50% of their time on system setup. It felt great that I was able to help them.
What are some aspects you love about organizing Docker meetups? 
I’m a people person, so I really enjoy bringing the community together, meeting new people and sharing my experiences. I love meeting people who are talking about Docker, and what I especially like about the Docker meetups is how much I’m constantly learning. I enjoy planning the content and finding speakers on topics the groups would like to hear about.
What advice would you give to a new Docker organizer?
Passion is the key! Don’t be worried about your experience level. Use your passion for sharing and learning to organize great events. Docker puts together really great trainings and resources to get you started.
Try to identify passionate people in your community to help you out. Your contact for the venue can help with promotion, or someone might know a company who can sponsor food. I personally couldn’t have done it without the help of Ovidiu-Florin Bogdan who is very passionate about using Docker and who has been involved in the meetups from the beginning.
Adapt the meetup duration according to the participants’ needs: either try to fit a set schedule (Timisoara) or don’t impose time limits (Brasov).

What do you do when you are not organizing meetup events?
I really like to stay active and travel. I particularly like Zumba and swimming. Every so often, I’m in my home city Resita organizing Coder Dojo meetups and mentoring kids.
Motto or personal mantra?

The best time is now.
If you are afraid of something you have to do it!  

Learn more about the Docker Community

Join the Docker Community Slack
Join your local Docker Meetup group 
Join the Docker Online Meetup group
Check out the Docker Playground for free hands-on labs
Attend the next DockerCon Europe in Copenhagen



Demystifying the Open Container Initiative (OCI) Specifications

The Open Container Initiative (OCI) announced the completion of the first versions of the container runtime and image specifications this week. The OCI is an effort under the auspices of the Linux Foundation to develop specifications and standards to support container solutions. A lot of effort has gone into building these specifications over the past two years. With that in mind, let’s take a look at some of the myths that have arisen along the way.

Myth #1: The OCI is a replacement for Docker
Standards are important, but they are far from a complete production platform. Take, for example, the World Wide Web. It has evolved over the last 25 years and was built on core dependable standards like TCP/IP, HTTP and HTML. Using TCP/IP as an example: when enterprises coalesced around TCP/IP as a common protocol, it fueled the growth of routers and, in particular, Cisco. However, Cisco became a leader in its market by focusing on differentiated features on top of its routing platform. We believe the parallel exists with the OCI specifications and Docker.
Docker is a complete production platform for developing, distributing, securing and orchestrating container-based solutions. The OCI specification is used by Docker, but it represents only about five percent of our code and a small part of the Docker platform concerned with the runtime behavior of a container and the layout of a container image. 
Myth #2: Products and projects already are certified to the OCI specifications
The runtime and image specifications were just released as 1.0 this week. However, the OCI certification program is still in development, so companies cannot claim compliance, conformance or compatibility until certification is formally rolled out later this year.
The OCI certification working group is currently defining the standard so that products and open source projects can demonstrate conformance to the specifications. Standards and specifications are important for engineers implementing solutions, but formal certification is the only way to reassure customers that the technology they are working with is truly conformant to the standard.
Myth #3: Docker doesn’t support the OCI specifications work
Docker has a long history of contributing to the OCI. We developed and donated a majority of the OCI code and have been instrumental in defining the OCI runtime and image specifications as maintainers of the project. When the Docker runtime and image format quickly became the de facto standards after being released as open source in 2013, we thought it would be beneficial to donate the code to a neutral governance body to avoid fragmentation and encourage innovation. The goal was to provide a dependable and standardized specification, so Docker contributed runc, a simple container runtime, as the basis of the runtime specification work, and later contributed the Docker V2 image specification as the basis for the OCI image specification work.
Docker developers like Michael Crosby and Stephen Day have been key contributors from the beginning of this work, ensuring Docker’s experience hosting and running billions of container images carries through to the OCI. When the certification working group completes its work, Docker will bring its products through the OCI certification process to demonstrate OCI conformance.
Myth #4: The OCI specifications are about Linux containers 
There is a misperception that the OCI is only applicable to Linux container technologies because it is under the aegis of the Linux Foundation. The reality is that although Docker technology started in the Linux world, Docker has been collaborating with Microsoft to bring our container technology, platform and tooling to the world of Windows Server. Additionally, the underlying technology that Docker has donated to the OCI is broadly applicable to multi-architecture environments including Linux, Windows and Solaris and covers x86, ARM and IBM zSeries.
Myth #5: Docker was just one of many contributors to the OCI
The OCI as an organization has a lot of supporting members representing the breadth of the container industry. That said, it has been a small but dedicated group of individual technologists who have contributed the time and technology that produced the initial specifications. Docker was a founding member of the OCI, contributing the initial code base that would form the basis of the runtime specification and later the reference implementation itself. Likewise, Docker contributed the Docker V2 Image specification to act as the basis of the OCI image specification.
Myth #6: CRI-O is an OCI project
CRI-O is an open source project in the Kubernetes incubator in the Cloud Native Computing Foundation (CNCF); it is not an OCI project. It is based on an earlier version of the Docker architecture. containerd, by contrast, is a direct CNCF project: a larger container runtime that includes the runc reference implementation. containerd is responsible for image transfer and storage, container execution and supervision, and low-level functions to support storage and network attachments. Docker donated containerd to the CNCF with the support of the five largest cloud providers: Alibaba Cloud, AWS, Google Cloud Platform, IBM SoftLayer and Microsoft Azure, with a charter of being a core container runtime for multiple container platforms and orchestration systems.
Myth #7: The OCI specifications are now complete 
While the release of the runtime and image format specifications is an important milestone, there’s still work to be done. The initial scope of the OCI was to define a narrow specification on which developers could depend for the runtime behavior of a container, preventing fragmentation in the industry, and still allowing innovation in the evolving container domain. This was later expanded to include a container image specification.
As the working groups complete the first stable specifications for runtime behavior and image format, new work is under consideration. Ideas for future work include distribution and signing. The next most important work for the OCI, however, is delivering on a certification process backed by a test suite now that the first specifications are stable.
Learn more about OCI and Open Source at Docker:

Read the blog post about the OCI Release of v1.0 Runtime and Image Format Specifications
Visit the Open Container Initiative website
Visit the Moby Project website
Attend DockerCon Europe 2017
Attend the Moby Summit LA alongside OSS NA


The post Demystifying the Open Container Initiative (OCI) Specifications appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Docker Leads OCI Release of v1.0 Runtime and Image Format Specifications

Today marks an important milestone for the Open Container Initiative (OCI) with the release of the OCI v1.0 runtime and image specifications – a journey that Docker has been central in driving and navigating over the last two years. It has been our goal to provide low-level standards as building blocks for the community, customers and the broader industry. To understand the significance of this milestone, let’s take a look at the history of Docker’s growth and progress in developing industry-standard container technologies.
The History of Docker Runtime and Image Donations to the OCI
Docker’s image format and container runtime quickly emerged as the de facto standard following its release as an open source project in 2013. We recognized the importance of turning it over to a neutral governance body to fuel innovation and prevent fragmentation in the industry. Working together with a broad group of container technologists and industry leaders, the Open Container Project was formed to create a set of container standards and was launched under the auspices of the Linux Foundation in June 2015 at DockerCon. It became the Open Container Initiative (OCI) as the project evolved that Summer.
Docker contributed runc, a reference implementation for the container runtime software that had grown out of Docker employee Michael Crosby’s libcontainer project. runc is the basis for the runtime specification describing the life-cycle of a container and the behavior of a container runtime. runc is used in production across tens of millions of nodes, which is an order of magnitude more than any other code base. runc became the reference implementation for the runtime specification project itself, and continued to evolve with the project.  
Almost a year after work began on the runtime specification, a new working group formed to specify a container image format. Docker donated the Docker V2 Image Format to the OCI as the basis for the image specification. With this donation, the OCI defines the data structures — the primitives — that make up a container image. Defining the container image format is an important step for adoption, but it takes a platform like Docker to activate its value by defining and providing tooling on how to build images, manage them and ship them around. For example, things such as the Dockerfile are not included in the OCI specifications.
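As an illustration of those primitives, an OCI image manifest is a small JSON document that points at a config object and an ordered list of layers via content-addressed descriptors. The sketch below builds one in Python: the media types come from the OCI image specification, while the digests and sizes are made-up placeholder values, not real content hashes.

```python
import json

def image_manifest(config_digest, config_size, layers):
    """Assemble an OCI image manifest as a Python dict.

    Media types follow the OCI image specification; the digest and
    size arguments here are placeholders, not real content hashes.
    """
    return {
        "schemaVersion": 2,
        "config": {
            "mediaType": "application/vnd.oci.image.config.v1+json",
            "digest": config_digest,
            "size": config_size,
        },
        # Layers are applied in order to produce the container's root filesystem.
        "layers": [
            {
                "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
                "digest": digest,
                "size": size,
            }
            for digest, size in layers
        ],
    }

# Placeholder descriptors: one config object and a single gzipped tar layer.
manifest = image_manifest(
    "sha256:" + "0" * 64, 1234,
    [("sha256:" + "1" * 64, 56789)],
)
print(json.dumps(manifest, indent=2))
```

Note what is absent: there is no build recipe here. How these descriptors get produced — for example, from a Dockerfile — is platform tooling outside the specification.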
Docker’s History of Contribution to the OCI

The Journey to Open Container Standards
The specifications have continued to evolve for two years now. As the code has been refactored, smaller projects have been spun out of the runc reference implementation, along with supporting test tools that will become the test suite for certification.
See the timeline above for details about Docker’s involvement in shaping OCI, which includes: creating runc, iterating on the runtime specification with the community, creating containerd to integrate runc in Docker 1.11, donating the Docker V2 Image Format to OCI as a base for the image format specification, implementing that specification in containerd so that this core container runtime covers both the runtime and image format standards, and finally donating containerd to the Cloud Native Computing Foundation (CNCF) and iterating on it towards a 1.0 alpha release this month.
Maintainers Michael Crosby and Stephen Day have led the development of these specifications and have been instrumental in bringing v1.0 to fruition, alongside contributions from Alexander Morozov, Josh Hawn, Derek McGowan and Aaron Lehmann, as well as Stephen Walli participating in the certification working group.
Docker remains committed to driving container standards, building a strong base at the layers where everyone agrees so that the industry can innovate at the layers that are still very differentiated.
Open Standards are Only a Piece of the Puzzle
Docker is a complete platform for creating, managing, securing, and orchestrating containers and container images. The vision has always been a base of industry standard specifications that support open source components, the plumbing of a container solution. The Docker platform sits above this layer, providing users and customers with a secure container management solution from development through production.
The OCI runtime and image specifications become the dependable standards base that allows and encourages the greatest number of container solutions while neither restricting product innovation nor shutting out major contributors. To draw a comparison, TCP/IP, HTTP and HTML became the dependable standards base upon which the World Wide Web was built over the past 25 years. Companies continue to innovate with new tools, technologies and browsers on these standards. The OCI specifications provide a similar foundation for container solutions going forward.
Open source projects also play a role in providing components for product development. The OCI runc reference implementation is used by the containerd project, a larger container runtime responsible for image transfer and storage, container execution and supervision, and low-level functions to support storage and network attachments. The containerd project was contributed by Docker to the CNCF and sits alongside other important projects to support cloud native computing solutions.
Docker uses containerd and more of its own core open source infrastructure elements like the LinuxKit, InfraKit and Notary projects to build and secure container solutions that become the Docker Community Edition tools. Users and organizations looking for complete container platforms that are holistic and provide container management, security, orchestration, networking and more can look to Docker Enterprise Edition.

This diagram highlights that the OCI specifications provide a layer of standards, implemented by a container runtime: containerd and runc. To assemble a full container platform such as Docker, with a full container lifecycle workflow, many other components are brought together to manage infrastructure (InfraKit), provide an operating system (LinuxKit), deliver orchestration (SwarmKit), and ensure security (Notary).

What’s Next for the OCI
We should celebrate the efforts of the developers as the runtime and image specifications are published. The next critical work to be done by the Open Container Initiative is to deliver a certification program to validate claims from implementers that their products and projects do indeed conform to the runtime and image specifications. The Certification Working Group has been putting together a program that, in conjunction with a developing suite of test tools for both specifications, will show how implementations fare against the standards.
At the same time, the developers of the current specifications are considering the next most important areas of container technology to specify. Work is underway in the Cloud Native Computing Foundation on a common networking interface for containers, while support for signing and distribution are areas under consideration for the OCI.
Alongside the OCI and its members, Docker remains committed to standardizing container technology. The OCI’s mission is to give users and companies the baseline on which they can innovate in the areas of developer tooling, image distribution, container orchestration, security, monitoring and management. Docker will continue to lead the charge in innovation – not only with tooling that increases productivity and increases efficiencies, but also by empowering users, partners and customers to innovate as well.
Learn more about OCI and Open Source at Docker:

Read about the OCI specifications Myths
Visit the Open Container Initiative website
Visit the Moby Project website
Attend DockerCon Europe 2017
Attend the Moby Summit LA alongside OSS NA

