Convince your manager to send you to DockerCon

Has it sunk in yet that DockerCon is in roughly 2 months? That’s right, this year we gather in April as a community and ecosystem in Austin, Texas for 3 days of deep learning and networking (with a side serving of Docker fun). DockerCon is the annual community and industry event for makers and operators of next generation distributed apps built with containers. If Docker is important to your daily workflow or your business, you and your team (reach out for group discounts) should attend this conference to stay up to date on the latest progress with the Docker platform and ecosystem.
Do you really want to go to DockerCon, but are having a hard time convincing your manager to pull the trigger and send you? Have you already explained that the sessions, training and hands-on exercises are well worth the financial investment and the time away from your desk?
Well, fear not! We’ve put together a few more resources and reasons to help convince your manager that DockerCon 2017, on April 17-20, is an invaluable experience you need to attend.
Something for everyone
DockerCon is the best place to learn from and share your experiences with the industry’s greatest minds, with the guarantee that you will bring valuable learnings back to your team. We will have plenty of learning materials and networking opportunities specific to our 3 main audiences:
1. Developers
From programming language specific workshops such as Docker for Java developers or modernizing monolithic ASP.NET applications with Docker, to sessions on building effective Docker images or using Docker for Mac, Docker for Windows and Docker Cloud, the DockerCon agenda will showcase plenty of hands-on content for developers.

2. IT Pros
The review committee is also making sure to include lots of learning materials for IT pros. Register now if you want to attend the orchestration and networking workshops, as they will sell out soon. Here is the list of Ops-centric topics that will be covered in the breakout sessions: tracing, container and cluster monitoring, container orchestration, securing the software supply chain, and the Docker for AWS and Docker for Azure editions.

3. Enterprise
Every DockerCon attendee will also have the opportunity to learn how Docker offers an integrated Container-as-a-Service platform for developers and IT operations to collaborate in the enterprise software supply chain. Companies with a lot of experience running Docker in production will go over their reference architecture and explain how they brought security, policy and controls to their application lifecycle without sacrificing any agility or application portability. Use case sessions will be heavy on technical detail and implementation advice.
The proof is in the numbers

According to surveyed DockerCon attendees, 91% would recommend investing in DockerCon again, and DockerCon 2016 scored an improved NPS of 61.
DockerCon continues to grow due to high demand: attendance has increased 900% since 2014 and 25% in just the last year. We hope to continue to welcome more people to DockerCon and the community each year while preserving the intimacy of the conference.
87% of surveyed attendees said the content and subject matter were relevant to their professional endeavors.

Take part in priceless learning opportunities
At the heart of DockerCon are amazing learning opportunities from not just the Docker team but the entire community. This year we will provide even more tools and resources to facilitate professional networking, learning and mentorship opportunities based on areas of expertise and interest. No matter your expertise level, DockerCon is the one place where you can not only learn and ask, but also teach and help. To us, this is what makes DockerCon unlike any other conference.
Leave motivated to create something amazing
The core theme of every DockerCon is to celebrate the makers within us all. Through the robust content and pure energy of the community, we are confident that you will leave DockerCon inspired to return to work and apply your new knowledge and best practices. Don’t believe us? Just check out our closing session of 2016, which featured cool hacks created by community and Docker team members.
DockerCon Schedule
We have extended the conference this year to 3 days, with instructor-led workshops beginning on Monday afternoon. General sessions, breakout sessions and the ecosystem expo will take place Tuesday and Wednesday. We’ve added the extra day to relieve your over-crammed agendas, with repeats of top sessions, hands-on labs and mini summits taking place on Thursday, April 20.

Plan Your Trip

Sending an employee to a conference is an investment and can be a big expense. Below you will find a budget template to help you plan for your trip. Ready to send an email to your boss about DockerCon? Here is a sample letter you can use as a starting point.
We invite you to join us and the community at DockerCon 2017, and we hope you find this packet of event information useful, including a helpful letter you can send to your manager to justify your trip and build a budget estimate. We are confident there’s something at DockerCon for everyone, so feel free to share it within your company and networks.


Docker Storage and Infinit FAQ

Last December, Docker acquired a company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker.

During the last Docker Online Meetup, Julien Quintard, member of Docker’s technical staff and former CEO at Infinit, went through the design principles behind their product and demonstrated how the platform can be used to deploy a storage infrastructure through Docker containers in a few command lines.
Providing state to applications in Docker requires a backend storage component that is both scalable and resilient in order to cope with a variety of use cases and failure scenarios. The Infinit Storage Platform has been designed to provide Docker applications with a set of interfaces (block, file and object) allowing for different tradeoffs.
Check out the following slidedeck to learn more about the internals of their platform:

Unfortunately, the video recording from the meetup is not available this time around but you can watch the following presentation and demo of Infinit from its CTO Quentin Hocquet at the Docker Distributed Systems Summit:

Docker and Infinit FAQ
1. Do you consider NFS/GPFS and other HPC cluster distributed storage as traditional? So far, volumes are working well for our evaluations; why would we need Infinit in an HPC use case?
Infinit has not been designed for HPC use cases. More specifically, it has been designed with scalability and resilience in mind. As such, if you are looking for high performance, there are a number of HPC-specific solutions, but those are likely to be limited one way or another when it comes to security, scalability, flexibility, programmability, etc.
Infinit may end up being an OK solution for HPC deployments but those are not the scenarios we have been targeting so far.
2. Does it work like P2P torrent?
Infinit and BitTorrent (and torrent solutions more generally) share a number of concepts, such as the way data is retrieved by leveraging the upload bandwidth of a number of nodes to fill up a client’s bandwidth, also known as multi-sourcing. Both solutions also rely on a distributed hash table (DHT).
However, BitTorrent is all about retrieval speed while Infinit is about scalability, resilience and security. In other words, BitTorrent’s algorithms are based on the popularity of a piece of data: the more nodes that have a piece, the faster many concurrent clients can retrieve it. The drawback is that if a piece of information is unpopular, it will eventually be forgotten. Infinit, providing a storage solution to enterprises, cannot allow that and must therefore favor reliability and durability.
3. Does Infinit honor sync writes and what is the performance impact? Is there a reliability trade-off? (eventually consistent)
Yes indeed, there is always a tradeoff between reliability and performance. There is no magic: reliability can only be achieved through redundancy, be it replication, erasure coding or something else. And since such algorithms “enhance” the original information to make it unlikely to be forgotten should part of it be lost, it takes longer to write and to read.
I couldn’t possibly quantify the performance impact because it depends on many factors, from your computing and networking resources, to the redundancy algorithm and factor you use, to the data flow that will be written to and read from the storage layer.
In terms of consistency, Infinit has been designed to be strongly consistent, meaning that a system call completing indicates that the data has been redundantly written. However, given that we provide several logics on top of our key-value store (block, object and file) along with a set of interfaces (NFS, iSCSI, Amazon S3 etc.), we could emulate eventual consistency on top of our strongly consistent consensus algorithm.
4. For existing storage plugin owners, is this a replacement, or does it mean we can adapt our plugins to work with the Infinit architecture?
It is not Docker’s philosophy to impose on its community or customers a single solution. Docker has always described itself as a plumbing platform for mass innovation. Even though Infinit will very likely solve storage-related challenges in Docker’s products, it will always be possible to switch from the default for another storage solution per our batteries included but swappable philosophy.
As such, Docker’s objective with the acquisition of Infinit is not to replace all the other storage solutions but rather to provide a reasonable default to the community. Also keep in mind that a storage solution solving all use cases will likely never exist. The user must be able to pick the solution that best fits her needs.
5. Can you run the Infinit tools in a container or do they need to be part of the host OS?
You can definitely run Infinit within a container if you want. Just note that if you intend to access the Infinit storage platform through an interface that relies on a kernel module, your container will need elevated privileges to use that kernel module, e.g. FUSE.
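As a rough sketch of what that looks like in practice (the image name infinit/infinit below is hypothetical), a FUSE-based container either needs the fine-grained capability and device flags shown here, or can simply be started with --privileged:

```bash
# Minimal sketch: run a containerized FUSE-based client with the extra
# privileges FUSE requires. The image name "infinit/infinit" is hypothetical;
# substitute whatever image you actually use.
# --cap-add SYS_ADMIN grants the mount capability FUSE needs, and
# --device /dev/fuse exposes the FUSE device inside the container.
docker run -it --cap-add SYS_ADMIN --device /dev/fuse infinit/infinit

# Alternatively, the blunter option mentioned above:
docker run -it --privileged infinit/infinit
```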
6. Can you share the commands used during the demo?
The demo is very similar to what the Get Started guide demonstrates. I therefore invite you to follow that guide.
7. Would Infinit provide object & block storage?
Yes, that is absolutely the plan. We’ve started with a file system logic and a FUSE interface, but we already have an object store logic in the pipeline as well as an Amazon S3 interface. However, the next logic you will likely see Infinit providing is block storage with a network block device (NBD) interface.
8. It seems like this technology has use cases beyond Docker and containers, such as a modern storage infrastructure to use in place of RAID style systems. How do you expect that to play out with the Docker acquisition?
You are right, Infinit can be used in many use cases. Unfortunately it is a bit early to say how Infinit and Docker will integrate. As you are all aware, Docker is moving extremely fast. We are still working on figuring out where, when and how Infinit is going to contribute to Docker’s ecosystem.
So far, Infinit remains a standalone software-defined storage solution. As such, anyone can use it outside of Docker. It may remain like that in the future or it may become completely integrated in Docker. In any case, note that should Infinit be fully embedded in Docker, the reason would be to further simplify its deployment.
9. What are the next steps for Infinit now?
The next steps are quite simple. At the Docker level, we need to ease the process of deploying Infinit on a cluster of nodes so that developers and operators alike can benefit from a storage platform that is as easy to set up as an application cluster.
At the Infinit level, we are working on improving the scalability and resilience of the key-value store. Even though Infinit has been conceived with these properties in mind, we have not had enough time so far to stress Infinit through various scenarios.
We have also started working on more logics/interfaces: object storage with Amazon S3 and block storage with NBD. You can follow Infinit’s roadmap on the website.
Finally, we’ve been working on open sourcing the three main Infinit components, namely the core libraries, key-value store and storage platform. For more information, you can check our Open Source webpage.
10. Good stuff! How do we get hold of the bits to play with?
Everything is available on Infinit’s website, from tutorials, example deployments and documentation on the underlying technology to the FAQ, roadmap, change log and, soon, the sources.
Still hungry for more info?

Check out this play with Docker and Infinit blog post
Join the docker-storage slack channel


containerd livestream recap

In case you missed it last month, we announced that Docker is extracting a key component of its platform, a part of the engine plumbing called containerd (a core container runtime), and committed to donating it to an open foundation.
You can find up-to-date roadmap, architecture and API definitions in the Github repository, and more details about the project in our engineering team’s blog post.

You can also watch the following video recording of the containerd online meetup, for a summary and Q&A with Arnaud Porterie, Michael Crosby, Stephen Day, Patrick Chanezon and Solomon Hykes from the Docker team:

Here is the list of top questions we got following this announcement:
Q. Are you planning to run Docker without runC?
A. Although runC is the default runtime, as of Docker 1.12 it can be replaced by any other OCI-compliant implementation. Docker will be compliant with the OCI Runtime Specification.
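As a rough illustration of that swap (the runtime name and binary path below are placeholders, not a real runtime), an alternative OCI-compliant runtime can be registered with the daemon and then selected per container:

```bash
# Register an additional OCI-compliant runtime with the Docker daemon
# (the name "my-oci-runtime" and its binary path are placeholders).
dockerd --add-runtime my-oci-runtime=/usr/local/bin/my-oci-runtime

# Containers use runC by default...
docker run --rm alpine echo "running on runC"

# ...but any registered OCI runtime can be selected with --runtime.
docker run --rm --runtime=my-oci-runtime alpine echo "running on an alternative OCI runtime"
```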
Q. What major changes are on the roadmap for SwarmKit to run on containerd, if any?
A. SwarmKit is using Docker Engine to orchestrate tasks, and Docker Engine is already using containerd for container execution. So technically, you are already using containerd when using SwarmKit. There is no plan currently to have SwarmKit directly orchestrate containerd containers though.
Q. Mind sharing why you went with GRPC for the API?
A. containerd is a component designed to be embedded in a higher level system, and serve a host local API over a socket. GRPC enables us to focus on designing RPC calls and data structures instead of having to deal with JSON serialization and HTTP error codes. This improves iteration speed when designing the API and data structures. For higher level systems that embed containerd, such as Docker or Kubernetes, a JSON/HTTP API makes more sense, allowing easier integration. The Docker API will not change, and will continue to be based on JSON/HTTP.
Q. How do you expect to see others leverage containerd outside of Docker?
A. Cloud managed container services such as Amazon ECS, Microsoft ACS, Google Container Engine, or orchestration tools such as Kubernetes or Mesos can leverage containerd as their core container runtime. containerd has been designed to be embedded for that purpose.
Q. How did you decide which features should get into containerd? How did you come up with the scope of the future containerd?
A. We’re trying to capture in containerd the features that any container-centric platform would need, and for which there’s reasonable consensus on the way it should be implemented. Aspects which are either not widely agreed on or that can trivially be built one layer up were left out.
Q. How will containerd integrate with CNI and CNM?
A. Phase 3 of the containerd roadmap involves porting the network drivers from libnetwork and finding a good middle ground between the CNM abstraction of libnetwork and the CNI spec.
Additional Resources:

Contribute to containerd
Join the containerd slack channel
Read the engineering team’s blog post.


containerd – a core container runtime project for the industry

Today Docker is spinning out its core runtime functionality into a standalone component, incorporating it into a separate project called containerd, and will be donating it to a neutral foundation early next year. This is the latest chapter in a multi-year effort to break up the Docker platform into a more modular architecture of loosely coupled components.
Over the past 3 years, as Docker adoption skyrocketed, it grew into a complete platform to build, ship and run distributed applications, covering many functional areas from infrastructure to orchestration, the core container runtime being just a piece of it. For millions of developers and IT pros, a complete platform is exactly what they need. But many platform builders and operators are looking for “boring infrastructure”: a basic component that provides the robust primitives for running containers on their system, bundled in a stable interface, and nothing else. A component that they can customize, extend and swap out as needed, without unnecessary abstraction getting in their way. containerd is built to provide exactly that.

What Docker does best is provide developers and operators with great tools which make them more productive. Those tools come from integrating many different components into a cohesive whole. Most of those components are invented by others, but along the way we find ourselves developing some of them from scratch. Over time we spin out these components as independent projects which anyone can reuse and contribute back to. containerd is the latest of those components.

containerd has already been deployed on millions of machines since April 2016, when it was included in Docker 1.11. Today we are announcing a roadmap to extend containerd, with input from the largest cloud providers, Alibaba Cloud, AWS, Google, IBM, Microsoft, and other active members of the container ecosystem. We will add more Docker Engine functionality to containerd so that containerd 1.0 will provide all the core primitives you need to manage containers with parity on Linux and Windows hosts:

Container execution and supervision
Image distribution
Network Interfaces Management
Local storage
Native plumbing level API
Full OCI support, including the extended OCI image specification

When containerd 1.0 implements that scope, in Q2 2017, Docker and other leading container systems, from AWS ECS to Microsoft ACS, Kubernetes, Mesos or Cloud Foundry will be able to use it as their core container runtime. containerd will use the OCI standard and be fully OCI compliant.

Over the past 3 years, the adoption of containers with Docker has triggered an unprecedented wave of innovation in our industry. We think containerd will unlock a whole new phase of innovation and growth across the entire container ecosystem, which in turn will benefit every Docker developer and customer.
You can find up-to-date roadmap, architecture and API definitions in the Github repository, and more details about the project in our engineering team’s blog post. We plan to have a summit at the end of February to bring in more contributors, stay tuned for more details about that in the next few weeks.
Thank you to Arnaud Porterie, Michael Crosby, Mickaël Laventure, Stephen Day, Patrick Chanezon and Mike Goelzer from the Docker team, and all the maintainers and contributors of the Docker project for making this project a reality.


More details about containerd, Docker’s core container runtime component

Today we announced that Docker is extracting a key component of its platform, a part of the engine plumbing (a core container runtime), and commits to donating it to an open foundation. containerd is designed to be less coupled and easier to integrate with other tool sets. And it is being written and designed to address the requirements of the major cloud providers and container orchestration systems.
Because we know a lot of Docker fans want to know how the internals work, we thought we would share the current state of containerd and what we plan for version 1.0. Before that, it’s a good idea to look at what Docker has become over the last three and a half years.
The Docker platform isn’t a container runtime. It is in fact a set of integrated tools that allow you to build, ship and run distributed applications. That means Docker handles networking, infrastructure, build, orchestration, authorization, security, and a variety of other services that cover the complete distributed application lifecycle.

The core container runtime, which is containerd, is a small but vital part of the platform. We started breaking out containerd from the rest of the engine in Docker 1.11, planning for this eventual release.
This is a look at Docker Engine 1.12 as it currently is, and how containerd fits in.

You can see that containerd has just the APIs currently necessary to run a container. A GRPC API is called by the Docker Engine, which triggers an execution process. That spins up a supervisor and an executor, which are charged with monitoring and running containers. The container is run (i.e. executed) by runC, which is another plumbing project that we open sourced as a reference implementation of the Open Container Initiative runtime standard.
When containerd reaches 1.0, we plan to have a number of other features from Docker Engine as well.

That feature set and scope of containerd is:

A distribution component that will handle pushing to a registry, without a preference toward a particular vendor.
Networking primitives for the creation of system interfaces and APIs to manage a container’s network namespace
Host level storage for image and container filesystems
A GRPC API
A new metrics API in the Prometheus format for internal and container level metrics
Full support of the OCI image spec and runC reference implementation

A more detailed architecture overview is available in the project’s GitHub repository.
This is a look at a future version of Docker Engine leveraging containerd 1.0.

containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users; in fact, this evolution of Docker plumbing will go unnoticed by end-users. It has a CLI, ctr, designed for debugging and experimentation, and a GRPC API designed for embedding. It is a plumbing component, meant to be integrated into other projects that can benefit from the lessons we’ve learned running containers.
We are at containerd version 0.2.4, so a lot of work remains to be done. We’ve invited the container ecosystem to participate in this project and are pleased to have support from Alibaba, AWS, Google, IBM and Microsoft, who are providing contributors to help develop containerd. You can find the up-to-date roadmap, architecture and API definitions in the GitHub repo, and learn more at the containerd livestream meetup on Friday, December 16th at 10am PST. We also plan to organize a summit at the end of February to bring contributors together.


Considerations for Running Docker for Windows Server 2016 with Hyper-V VMs

We often get asked at Docker, “Where should I run my application? On bare metal, virtual or cloud?” The beauty of Docker is that you can run a container anywhere, so we usually answer this question with “It depends.” Not what you were looking for, right?
To answer this, you first need to consider which infrastructure makes the most sense for your application architecture and business goals. We get this question so often that our technical evangelist, Mike Coleman has written a few blogs to provide some guidance:

To Use Physical Or To Use Virtual: That Is The Container Deployment Question
So, When Do You Use A Container Or VM?

During our recent webinar, titled “Docker for Windows Server 2016”, this question came up a lot, specifically what to consider when deploying a Windows Server 2016 application in a Hyper-V VM with Docker, and how it works. First, you’ll need to understand the differences between Windows Server containers, Hyper-V containers, and Hyper-V VMs before considering how they work together.
A Hyper-V container is a Windows Server container running inside a stripped down Hyper-V VM that is only instantiated for containers.

This provides additional kernel isolation and separation from the host OS: the containerized application uses its own kernel rather than sharing the host’s. Hyper-V containers automatically create a Hyper-V VM from the application’s base image, and that VM includes the required application binaries and libraries inside a Windows container. For more information on Windows containers, read our blog post. Whether your application runs as a Windows Server container or as a Hyper-V container is a runtime decision; additional isolation is a good option for multi-tenant environments. No changes are required to the Dockerfile or image, and the same image can be run in either mode.
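As a quick sketch of that runtime decision (the image and command here are only examples), the isolation mode is selected with the --isolation flag when the container is started:

```bash
# Run an image as a Windows Server container (process isolation),
# the default on Windows Server 2016.
docker run -d --isolation=process microsoft/windowsservercore cmd

# Run the exact same image as a Hyper-V container for additional
# kernel isolation; no changes to the Dockerfile or image are needed.
docker run -d --isolation=hyperv microsoft/windowsservercore cmd
```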
Here are the top Hyper-V container questions, with answers:
Q: I thought that containers do not need a hypervisor?
A: Correct, but since a Hyper-V container packages the same container image with its own dedicated kernel it ensures tighter isolation in multi-tenant environments which may be a business or application requirement for specific Windows Server 2016 applications.
Q: Do you need a hypervisor layer beneath the OS for both Hyper-V containers and Docker Windows Server containers?
A: The hypervisor is optional. With Windows Server containers, isolation is achieved not with a hypervisor, but with process isolation and filesystem and registry sandboxing.
Q: Can Hyper-V containers be managed from Hyper-V Manager, in the same way that VMs are (i.e. turned on/off, checking memory usage, etc.)?
A: While Hyper-V is the runtime technology powering Hyper-V isolation, Hyper-V containers are not VMs; they neither appear as a Hyper-V resource nor can they be managed with classic Hyper-V tools like Hyper-V Manager. Hyper-V containers are only executed at runtime by the Docker Engine.
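For example (a small sketch; the container name is arbitrary), Hyper-V containers are inspected and controlled with the regular Docker CLI rather than Hyper-V tooling:

```bash
# Hyper-V containers show up and are managed like any other container,
# through the Docker Engine rather than Hyper-V Manager.
docker ps                  # list running containers, including Hyper-V containers
docker stats mywebapp      # check CPU and memory usage (name is arbitrary)
docker stop mywebapp       # stop the container
```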
Q: Can you run Windows Server containers and Hyper-V containers running Linux workloads on the same host?
A: Yes. You can run a Hyper-V VM with a Linux OS on a physical host running Windows Server. Inside the VM, you can run Linux containers.

Next week we’ll bring you the next blog in our Windows Server 2016 Q&A series: top questions about Docker for SQL Server Express. See you then!
For more resources:

Learn more: www.docker.com/microsoft
Read the blog: Webinar Recap: Docker For Windows Server 2016
Learn how to get started with Docker for Windows Server 2016
Read the blog to get started shifting a legacy Windows virtual machine to a Windows Container


Docker SF Meetup #47: Docker 1.12, Docker for Mac and Tugbot

On Wednesday, members of the Docker SF community joined us at Docker HQ for our 47th Docker meetup in San Francisco! It was a great evening with talks and demos from Docker’s own Ben Bonnefoy and Nishant Totla, as well as Neil Gehani from HPE.

Ben Bonnefoy is currently working on Docker for Mac and Docker for Windows, which were released in beta in March. At the meetup, he gave an insight into the new features as well as the open source components used under the hood, namely:

HyperKit ™: A lightweight virtualization toolkit on OSX
DataKit ™: A modern pipeline framework for distributed components
VPNKit ™: A library toolkit for embedding virtual networking

 
.@FrenchBen talking at the SF @docker meetup on Insight into Docker for Mac and Docker for Windows! pic.twitter.com/oQ0pkD6P8k— Docker (@docker) August 4, 2016
In case you missed it, Docker 1.12 was made generally available on July 28! Nishant Totla, who works on the core open source team and is currently working on Docker Swarm, spoke after Ben and gave attendees all the latest updates on Docker 1.12. Take a look at his slides on the new features below.

 
.@nishanttotla talks updates on Docker 1.12 at SF @Docker meetup! pic.twitter.com/f9Lx7QeXDI— Docker (@docker) August 4, 2016
The third and final talk of the evening was by a guest speaker, Neil Gehani, from HPE. Neil’s talk was on ‘Tugbot’, an in-cluster testing framework. To find out more, view Neil’s slides below.

 
.@gehaniNeil of @HPE at @docker meetup introduces "Tugbot" in-cluster container testing! pic.twitter.com/REstXvopgR— Docker (@docker) August 4, 2016
For those of you who would like to watch the talks and see the demos in full, we also recorded the meetup so feel free to watch and share!

 
Thank you speakers @FrenchBen @nishanttotla @GehaniNeil @HPE & the attendees who joined our @docker meetup 2night! pic.twitter.com/BjW0WVXItw— Docker (@docker) August 4, 2016
