Announcing Docker Birthday #4: Spreading the Docker Love!

Community is at the heart of Docker, and thanks to the hard work of thousands of maintainers, contributors, Captains, mentors, organizers, and the entire Docker community, the Docker platform is now used in production by companies of all sizes and industries.
To show our love and gratitude, it has become a tradition for Docker and our awesome network of meetup organizers to host Docker Birthday meetup celebrations all over the world. This year the celebrations will take place during the week of March 13-19, 2017. Come learn, mentor, celebrate, eat cake, and take an epic birthday selfie!
Docker Love
We wanted to hear from the community about why they love Docker!
Wellington Silva, Docker São Paulo meetup organizer, said, “Docker changed my life. I used to spend days compiling and configuring environments. Then I used to spend hours setting up using VMs. Nowadays I set up an environment in minutes, sometimes in seconds.”

Love the new organization of commands in Docker 1.13!
— Kaslin Fields (@kaslinfields) January 25, 2017

Docker Santo Domingo organizer Victor Recio said, “Docker has increased my effectiveness at work; currently I can deploy software to production environments without worrying that it will not work when the delivery takes place. I love Docker and I’m very grateful for it. Whenever I can share my knowledge about Docker with the young people of my country’s communities I do it, and I am proud that there are already startups that have reached a Silicon Valley level.”

We love docker here at @Harvard for our screening platform. https://t.co/zpp8Wpqvk5
— Alan Aspuru-Guzik (@A_Aspuru_Guzik) January 12, 2017

Docker Birthday Labs
At the local Docker Birthday #4 meetups, there will be Docker labs and challenges to help attendees of all levels and to welcome new members into the community. We’re partnering with CS schools, non-profit organizations, and local meetup groups to throw a series of events around the world. While the courses and labs are geared towards newcomers and intermediate-level users, advanced and expert community members are invited to join as mentors to help attendees work through the materials.
Find a Birthday meetup near you!
There are already 44 Docker Birthday 4 celebrations scheduled around the world with more on the way! Check back as more events are announced.

Thursday, March 9th

Fulda, Germany

Saturday, March 11th

Madurai, India

Sunday, March 12th

Mumbai, India

Monday, March 13th

Dallas, TX
Grenoble, France
Liège, Belgium
Luxembourg, Luxembourg

Tuesday, March 14th

Austin, TX
Berlin, Germany
Las Vegas, NV
Malmö, Sweden
Miami, FL

Wednesday, March 15th

Columbus, OH
Istanbul, Turkey
Nantes, France
Phoenix, AZ
Prague, Czech Republic
San Francisco, CA
Santa Barbara, CA
Singapore, Singapore

Thursday, March 16th

Brussels, Belgium
Budapest, Hungary
Dhahran, Saudi Arabia
Dortmund, Germany
Iráklion, Greece
Montreal, Canada
Nice, France
Saint Louis, MO
Stuttgart, Germany
Tokyo, Japan
Washington, DC

Saturday, March 18th

Delhi, India
Hermosillo, Mexico
Kanpur, India
Kisumu, Kenya
Novosibirsk, Russia
Porto, Portugal
Rio de Janeiro, Brazil
Thanh Pho Ho Chi Minh, Vietnam

Monday, March 20th

London, United Kingdom
Milan, Italy

Thursday, March 23rd

Dublin, Ireland

Wednesday, March 29th

Colorado Springs, CO
Ottawa, Canada

Want to help us organize a Docker Birthday celebration in your city? Email us at meetups@docker.com for more information!
Are you an advanced Docker user? Join us as a mentor!
We are recruiting a network of mentors to attend the local events and help guide attendees through the Docker Birthday labs. Mentors should have experience working with Docker Engine, Docker Networking, Docker Hub, Docker Machine, Docker Orchestration and Docker Compose. Click here to sign up as a mentor.



Introducing Docker Secrets Management

Containers are changing how we view apps and infrastructure. Whether the code inside containers is big or small, container architecture introduces a change to how that code behaves with hardware: it fundamentally abstracts it from the infrastructure. Docker believes that there are three key components to container security, and together they result in inherently safer apps.

A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords and other types of confidential information—usually referred to as application secrets. We are excited to introduce Docker Secrets, a container native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform.
With containers, applications are now dynamic and portable across multiple environments. This made existing secrets distribution solutions inadequate because they were largely designed for static environments. Unfortunately, this led to an increase in mismanagement of application secrets, making it common to find insecure, home-grown solutions, such as embedding secrets into version control systems like GitHub, or equally bad point solutions bolted on as an afterthought.

Introducing Docker Secrets Management
We fundamentally believe that apps are safer if there is a standardized interface for accessing secrets. Any good solution will also have to follow security best practices, such as encrypting secrets while in transit; encrypting secrets at rest; preventing secrets from unintentionally leaking when consumed by the final application; and strictly adhering to the principle of least privilege, where an application only has access to the secrets that it needs—no more, no less.
By integrating secrets into Docker orchestration, we are able to deliver a solution for the secrets management problem that follows these exact principles.
The following diagram provides a high-level view of how the Docker swarm mode architecture is applied to securely deliver a new type of object to our containers: a secret object.

In Docker, a secret is any blob of data, such as a password, SSH private key, TLS certificate, or any other piece of data that is sensitive in nature. When you add a secret to the swarm (by running docker secret create), Docker sends the secret to the swarm manager over a mutually authenticated TLS connection, making use of the built-in Certificate Authority that gets automatically created when bootstrapping a new swarm.
 
$ echo "This is a secret" | docker secret create my_secret_data -
 
Once the secret reaches a manager node, it gets saved to the internal Raft store, which uses NaCl’s Salsa20Poly1305 with a 256-bit key to ensure no data is ever written to disk unencrypted. Writing to the internal store gives secrets the same high availability guarantees that the rest of the swarm management data gets.
When a swarm manager starts up, the encrypted Raft logs containing the secrets are decrypted using a data encryption key that is unique per node. This key, and the node’s TLS credentials used to communicate with the rest of the cluster, can be encrypted with a cluster-wide key encryption key, called the unlock key, which is also propagated using Raft and is required when a manager starts.
When you grant a newly created or running service access to a secret, one of the manager nodes (only managers have access to the stored secrets) sends it over the already established TLS connection exclusively to the nodes that will be running that specific service. This means that nodes cannot request the secrets themselves, and only gain access to secrets when a manager provides them – strictly for the services that require them.
 
$ docker service create --name="redis" --secret="my_secret_data" redis:alpine
 
The unencrypted secret is mounted into the container in an in-memory filesystem at /run/secrets/<secret_name>.
$ docker exec $(docker ps --filter name=redis -q) ls -l /run/secrets
total 4
-r--r--r--    1 root     root            17 Dec 13 22:48 my_secret_data
 
If a service is deleted or rescheduled elsewhere, the manager will immediately notify all the nodes that no longer require access to that secret to erase it from memory, and those nodes will no longer have any access to that application secret.
$ docker service update --secret-rm="my_secret_data" redis

$ docker exec -it $(docker ps --filter name=redis -q) cat /run/secrets/my_secret_data

cat: can't open '/run/secrets/my_secret_data': No such file or directory

 
Check out the Docker secrets docs for more information and examples on how to create and manage your secrets. And a special shout out to Laurens Van Houtven (https://lvh.io), who collaborated with the Docker security and core engineering teams to help make this feature a reality.


Safer Apps with Docker
Docker secrets is designed to be easily usable by developers and IT ops teams to build and run safer apps. It is a container-first architecture designed to keep secrets safe and to expose them only to the exact container that needs them to operate. From defining apps and secrets with Docker Compose to an IT admin deploying that Compose file directly in Docker Datacenter, the services, secrets, networks and volumes travel securely with the application.
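As a minimal sketch of how that can look (assuming Compose file format 3.1 or later, which adds a top-level secrets key; the service and secret names are illustrative):

version: "3.1"
services:
  redis:
    image: redis:alpine
    secrets:
      - my_secret_data
secrets:
  my_secret_data:
    external: true

Here external: true tells Docker that the secret already exists in the swarm (e.g. created with docker secret create), and docker stack deploy attaches it to the service.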
Resources to learn more:

Download Docker and get started today
Try secrets in Docker Datacenter
Read the Documentation
Attend an upcoming webinar


Docker Online Meetup recap: Introducing Docker 1.13

Last week, we released Docker 1.13, introducing several new enhancements in addition to building on and improving the Docker swarm mode introduced in Docker 1.12. Docker 1.13 has many new features and fixes that we are excited about, so we asked core team member and release captain Victor Vieux to introduce Docker 1.13 in an online meetup.
The meetup took place on Wednesday, Jan 25 and over 1000 people RSVPed to hear Victor’s presentation live. Victor gave an overview and demo of many of the new features:

Restructuring of CLI commands
Experimental build
CLI backward compatibility
Swarm default encryption at rest
Compose to Swarm
Data management commands
Brand new “init system”
Various orchestration enhancements

In case you missed it, you can watch the recording and access Victor’s slides below.

 
Below is a short list of the questions Victor answered at the end of the online meetup:
Q: What will happen if we call docker stack deploy multiple times with the same file?
A: All the services that were modified in the compose file will be updated according to their respective update policies. It won’t recreate a new stack; it will update the current one. This is the same mechanism used in the docker-compose Python tool.
Q: In “docker system df”, what exactly constitutes an “active” image?
A: It means it’s associated with a container. If you have container(s) (even stopped ones) using the `redis` image, then the `redis` image is “active”.
Q: criu integration is available with `--experimental` then?
A: Yes! It’s one of the many features I didn’t cover in the slides, as there are so many new ones in Docker 1.13. There is no need to download a separate build anymore; it’s just a flag away.
Q: When will we know when certain features are out of the experimental state and part of the foundation of this product?
A: Usually experimental features tend to remain in an experimental state for only one release. Larger or more complex features and capabilities may require two releases to gather feedback and make incremental improvements.
Q: Can I configure docker with multiple registries (some private and some public)?
A: It’s not really necessary to configure docker, as the “configuration” happens in the image name:
docker pull my-private-registry.com:9876/my-image
docker pull my-public-registry.com:5432/my-image



Docker Meetup Community reaches 150K members

We are thrilled to announce that the Docker meetup community has reached over 150,000 members! We’d like to take a moment to acknowledge all the amazing contributors and Docker enthusiasts who are working hard to organize frequent and interesting Docker-centric meetups. Thanks to you, there are 275 Docker meetup groups in 75 countries across 6 continents.
There were over 1,000 Docker meetups held all over the world last year. Big shout out to Ben Griffin, organizer of Docker Melbourne, who organized 18 meetups in 2016; Karthik Gaekwad, Lee Calcote, Vikram Sabnis and Everett Toews, organizers of Docker Austin, who organized 16 meetups; Gerhard Schweinitz and Stephen J Wallace, organizers of Docker Sydney, who organized 13; and Jesse White, Luisa Morales and Doug Masiero from Docker NYC, who organized 12.

We also want to give a massive shout out to organizers Adrien Blind and Patrick Aljord, who have grown the Docker Paris meetup group to nearly 4,000 members and have hosted 46 events since they launched the group almost 4 years ago!
 

Reached 3925 @DockerParis meetup members ! We may be able to celebrate 4000 members during feb docker event @vcoisne @jpetazzo @docker pic.twitter.com/CGmvShIj0L
— Adrien Blind (@AdrienBlind) January 17, 2017

One of our newest groups, Docker Havana, started last November and already has more than 200 members! The founding organizers, Enrique Carbonell and Manuel Morejón, are doing a fantastic job recruiting new members and have even started planning awesome meetups in other Cuban cities too!

 
Interested in getting involved with the Docker Community? The best way to participate is through your local meetup group. Check out this map to see if a Docker user group exists in your city, or take a look at the list of upcoming Docker events.

Can’t find a group near you? Learn more here about how to start a group and the process of becoming an organizer. Our community team would be happy to work with you on solving some of the challenges associated with organizing meetups in your area.
Not interested in starting a group? You can always join the Docker Online Meetup Group!
In case you missed it, we’ve recently introduced a Docker Community Directory and Slack to further enable community building and collaboration. Our goal is to give everyone the opportunity to become a more informed and engaged member of the community by creating sub groups and channels based on location, language, use cases, interest in specific Docker-centric projects or initiatives.
Sign up for the Docker Community Directory and Slack  
 



CPU Management in Docker 1.13

Resource management for containers is a huge requirement for production users. Being able to run multiple containers on a single host and ensure that one container does not starve the others in terms of cpu, memory, io, or networking in an efficient way is why I like working with containers. However, cpu management for containers is still not as straightforward as I would like. There are many different options when it comes to restricting the cpu usage of a container. With something like memory, it is very easy for people to understand that --memory 512m gives the container up to 512MB. With cpu, it’s hard for people to understand a container’s limit with the current options.
In 1.13 we added a --cpus flag, which is the best way to limit the cpu usage of a container with a sane UX that the majority of users can understand. Let’s take a look at a couple of the options in 1.12 to show why this is necessary.
There are various ways to set a cpu limit for a container. Cpu shares, cpuset, and cfs quota/period are the three most common. We can just go ahead and say that cpu shares are the most confusing and worst functionality of all the options we have. The numbers don’t make sense. For example, is 5 a large number, or is 512 half of my system’s resources if there is a max of 1024 shares? Is 100 shares significant when I only have one container? And if I add two more containers each with 100 shares, what does that mean? We could go in depth on cpu shares, but you have to remember that cpu shares are relative to everything else on the system.
Cpuset is a viable alternative, but it takes much more thought and planning to use it correctly and in the right circumstances. The cfs scheduler along with quota and period are some of the best options for limiting container cpu usage, but they come with a bad user interface. Specifying cpu usage in microseconds of quota per period is hard for a user when all you want is a simple task such as limiting a container to one core.
In 1.13, though, if you want a container to be limited to one cpu, you can just add --cpus 1.0 to your docker run/create command line. If you would like two and a half cpus as the limit, just add --cpus 2.5. Under the hood, Docker uses the CFS quota and period to limit the container’s cpu usage to what you want, doing the calculations for you.
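As a quick sketch (the image is just an example), the new flag and a rough equivalent expressed directly with the older CFS flags look like this:

$ docker run -it --cpus 1.5 ubuntu /bin/bash
$ docker run -it --cpu-period=100000 --cpu-quota=150000 ubuntu /bin/bash

Both commands limit the container to at most one and a half cpus; the first simply does the quota/period arithmetic for you.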
If you are limiting cpu usage for your containers, look into using this new flag and API to handle your needs. This flag will work on both Linux and Windows when using Docker.  
For more information on the feature, see the docs: https://docs.docker.com/engine/admin/resource_constraints/
For more information on Docker 1.13 in general, check out these links:

Read the product documentation
Learn more about the latest Docker 1.13 release
Get started and install Docker
Attend the next Docker Online Meetup on Wed 1/25 at 10am PST



Introducing Docker 1.13

Today we’re releasing Docker 1.13 with lots of new features, improvements and fixes to help Docker users with New Year’s resolutions to build more and better container apps. Docker 1.13 builds on and improves the Docker swarm mode introduced in Docker 1.12 and has lots of other fixes. Read on for the Docker 1.13 highlights.

Use compose-files to deploy swarm mode services
Docker 1.13 adds Compose-file support to the `docker stack deploy` command so that services can be deployed using a `docker-compose.yml` file directly. Powering this is a major effort to extend the swarm service API to make it more flexible and useful.
Benefits include:

Specifying the number of desired instances for each service
Rolling update policies
Service constraints

Deploying a multi-host, multi-service stack is now as simple as:
docker stack deploy --compose-file=docker-compose.yml my_stack
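As a sketch, a minimal docker-compose.yml for such a stack might look like the following (the service name and image are illustrative; the deploy section carries the swarm-specific options listed above):

version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      placement:
        constraints:
          - node.role == worker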
Improved CLI backwards compatibility
Ever been bitten by the dreaded Error response from daemon: client is newer than server problem because your Docker CLI was updated, but you still need to use it with older Docker engines?
Starting with 1.13, newer CLIs can talk to older daemons. We’re also adding feature negotiation so that proper errors are returned if a new client is attempting to use features not supported in an older daemon. This greatly improves interoperability and makes it much simpler to manage Docker installs with different versions from the same machine.
Clean-up commands
Docker 1.13 introduces a couple of nifty commands to help users understand how much disk space Docker is using, and help remove unused data.

docker system df will show used space, similar to the unix tool df
docker system prune will remove all unused data.

Prune can also be used to clean up just some types of data. For example: docker volume prune removes unused volumes only.
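A quick tour of the new commands (all of these subcommands ship in 1.13):

$ docker system df       # show disk usage by images, containers and volumes
$ docker system prune    # remove all unused data
$ docker volume prune    # remove unused volumes only
$ docker image prune     # remove dangling images only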
CLI restructured
Docker has grown many features over the past couple of years and the Docker CLI now has a lot of commands (40 at the time of writing). Some, like build or run, are used a lot; some are more obscure, like pause or history. The many top-level commands clutter help pages and make tab-completion harder.
In Docker 1.13, we regrouped every command to sit under the logical object it’s interacting with. For example, list and start of containers are now subcommands of docker container, and history is a subcommand of docker image.
docker container list
docker container start
docker image history
These changes let us clean up the Docker CLI syntax, improve help text and make Docker simpler to use. The old command syntax is still supported, but we encourage everybody to adopt the new syntax.
Monitoring improvements
docker service logs is a powerful new experimental command that makes debugging services much simpler. Instead of having to track down hosts and containers powering a particular service and pulling logs from those containers, docker service logs pulls logs from all containers running a service and streams them to your console.
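For example, assuming a service named redis and a daemon running with experimental features enabled:

$ docker service logs redis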
Docker 1.13 also adds an experimental Prometheus-style endpoint with basic metrics on containers, images and other daemon stats.
Build improvements
docker build has a new experimental --squash flag. When squashing, Docker will take all the filesystem layers produced by a build and collapse them into a single new layer. This can simplify the process of creating minimal container images, but may result in slightly higher overhead when images are moved around (because squashed layers can no longer be shared between images). Docker still caches individual layers to make subsequent builds fast.
1.13 also has support for compressing the build context that is sent from CLI to daemon using the --compress flag. This will speed up builds done on remote daemons by reducing the amount of data sent.
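As a sketch (both flags shipped in 1.13, and --squash additionally requires a daemon running with experimental features enabled):

$ docker build --squash -t myapp:latest .
$ docker build --compress -t myapp:latest .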
Docker for AWS and Azure out of beta
Docker for AWS and Azure are out of public beta and ready for production. We’ve spent the past 6 months perfecting Docker for AWS and Azure and incorporating user feedback, and we’re incredibly grateful to all the users that helped us test and identify problems. Also, stay tuned for more updates and enhancements over the coming months.
Get started with Docker 1.13
Docker for Mac and Windows users on both beta and stable channels will get automatic upgrade notifications (in fact, beta channel users have been running Docker 1.13 release clients for the past couple of months). If you’re new to Docker, download Docker for Mac and Windows to get started.
To deploy Docker for AWS or Docker for Azure, check out the docs or use these buttons to get started:

If you are interested in installing Docker on Linux, check the install instructions for details.
Helpful links:

Join us for the next Docker Online Meetup on Wed 1/25 at 9am PST 
Become a member of the Docker community
Read the product documentation

 



InfraKit Under the Hood: High Availability

Back in October, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the first in a two-part series that dives more deeply into the internals of InfraKit.
Introduction
At Docker, our mission to build tools of mass innovation constantly challenges us to look at ways to improve the way developers and operators work. Docker Engine with integrated orchestration via swarm mode has greatly simplified and improved the efficiency of application deployment and management of microservices. Going a level deeper, we asked ourselves if we could improve the lives of operators by making tools to simplify and automate the orchestration of infrastructure resources. This led us to open source InfraKit, a set of building blocks for creating self-healing and self-managing systems.

There are articles and tutorials (such as this, and this) to help you get acquainted with InfraKit. InfraKit is made up of a set of components which actively manage infrastructure resources based on a declarative specification. These active agents continuously monitor and reconcile differences between your specification and actual infrastructure state. So far, we have implemented the functionality of scaling groups to support the creation of a compute cluster or application cluster that can self-heal and dynamically scale in size. To make this functionality available for different infrastructure platforms (e.g. AWS or bare-metal) and extensible for different applications (e.g. Zookeeper or Docker orchestration), we support customization and adaptation through the instance and flavor plugins. The group controller exposes operations for scaling in and out and for rolling updates, and communicates with the plugins using JSON-RPC 2.0 over HTTP. While the project provides packages implemented in Go for building platform-specific plugins (like this one for AWS), it is possible to use other languages and tooling to create interesting and compatible plugins.
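Because the protocol is plain JSON-RPC 2.0 over HTTP, a plugin call can be sketched with nothing more than curl; note that the endpoint, method and parameter names below are hypothetical and purely illustrative:

$ curl -s http://localhost:8080/rpc -H 'Content-Type: application/json' -d '{
    "jsonrpc": "2.0",
    "method": "Instance.DescribeInstances",
    "params": {"Tags": {"infrakit.group": "workers"}},
    "id": 1
  }'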
High Availability
Because InfraKit is used to ensure the availability and scaling of a cluster, it needs to be highly available and perform its duties without interruption.  To support this requirement, we consider the following:

Redundancy & Failover – for active management without interruption.
Infrastructure State – for an accurate view of the cluster and its resources.
User Specification – keeping it available even in case of failure.

Redundancy & Failover
Running multiple sets of the InfraKit daemons on separate physical nodes is an obvious approach to achieving redundancy. However, while multiple replicas are running, only one of the replica sets can be active at a time. Having at most one leader (or master) at any time ensures that no two controllers are independently making decisions and ending up in conflict with one another while attempting to correct infrastructure state. However, with only one active instance at any given time, the role of the active leader must transition smoothly and quickly to another replica in the event of failure. When the node running as the leader crashes, another set of InfraKit daemons assumes leadership and attempts to correct the infrastructure state. This corrective measure restores the lost instance in the cluster, bringing the cluster back to the desired state before the outage.
There are many options in implementing this leadership election mechanism. Popular coordinators for this include Zookeeper and Etcd which are consensus-based systems in which multiple nodes form a quorum. Similar to these is the Docker engine (1.12+) running in Swarm Mode, which is based on SwarmKit, a native clustering technology based on the same Raft consensus algorithm as Etcd. In keeping with the goal of creating a toolkit for building systems, we made these design choices:

InfraKit only needs to observe leadership in a cluster: when the node becomes the leader, the InfraKit daemons on that node become active. When leadership is lost, the daemons on the old leader are deactivated, while control is transferred over to the InfraKit daemons running on the new leader.
Create a simple API for sending leadership information to InfraKit. This makes it possible to connect InfraKit to a variety of inputs from Docker Engines in Swarm Mode (post 1.12) to polling a file in a shared file system (e.g. AWS EFS).
InfraKit does not itself implement leader election. This allows InfraKit to be readily integrated into systems that already have their own manager quorum and leader election, such as Docker Swarm. Of course, it’s possible to add leader election using a coordinator such as Etcd and feed that to InfraKit via the leadership observation API.

With this design, coupled with a coordinator, we can run InfraKit daemons in replicas on multiple nodes in a cluster while ensuring only one leader is active at any given time. When leadership changes, InfraKit daemons running on the new leader must be able to assess infrastructure state and determine the delta from user specification.
Infrastructure State
Rather than relying on an internal, central datastore to manage the state of the infrastructure, such as an inventory of all vm instances, InfraKit aggregates and computes the infrastructure state based on what it can observe from querying the infrastructure provider. This means that:

The instance plugin needs to transform the query from the group controller to appropriate calls to the provider’s API.
The infrastructure provider should support labeling or tagging of provisioned resources such as vm instances.
In cases where the provider does not support labeling and querying resources by labels, the instance plugin has the responsibility to maintain that state. Approaches for this vary with plugin implementation but they often involve using services such as S3 for persistence.

Not having to store and manage infrastructure state greatly simplifies the system. Since the infrastructure state is always aggregated and computed on demand, it is always up to date. However, other factors, such as the availability and performance of the platform API itself, can impact observability. For example, high latencies and even API throttling must be handled carefully in determining the cluster state and, consequently, in deriving a plan to push toward convergence with the user’s specifications.
User Specification
InfraKit daemons continuously observe the infrastructure state and compare it with the user’s specification. The user’s specification for the cluster is expressed in JSON format and is used to determine the necessary steps to drive towards convergence. InfraKit requires this information to be highly available so that, in the event of failover, the user specification can be accessed by the new leader.
There are options for implementing replication of the user specification. These range from using file systems backed by persistent object stores such as S3 or EFS, to using distributed key-value stores such as Zookeeper or Etcd. Like other parts of the toolkit, we opted to define an interface with different implementations of this configuration store. In the repo, there are stores implemented using the file system and Docker Swarm. More implementations are possible and we welcome contributions!
Conclusion
In this article, we have examined some of the considerations in designing InfraKit. As a system meant to be incorporated as a toolkit into larger systems, we aimed for modularity and composability. To achieve these goals, the project specifies interfaces which define the interactions of different subsystems. As a rule, we try to provide different implementations to test and demonstrate these ideas. One such implementation of high availability with InfraKit leverages Docker Engine in Swarm Mode – the native clustering and orchestration technology of the Docker platform – to give the swarm self-healing properties. In the next installment, we will investigate this in greater detail.
Check out the InfraKit repository README for more info, a quick tutorial, and to start experimenting – from plain files to Terraform integration to building a Zookeeper ensemble. Have a look, explore, and send us a PR or open an issue with your ideas!
More Resources:

Check out all the Infrastructure Plumbing projects
Sign up for Docker for AWS or Docker for Azure
Try Docker today


 

DockerCon 2017 first speakers announced

To the rest of the world, 2017 may seem a ways away, but here at Docker we are heads down reading your Call for Papers submissions and curating content to make this the biggest and best DockerCon to date. With that, we are thrilled to share the DockerCon 2017 website with helpful information, including ten of the first confirmed speakers and sessions.
If you want to join this amazing lineup and haven’t submitted your cool hack, use case or deep dive session, don’t hesitate! The Call for Papers closes this Saturday, January 14th.
 
Submit a talk
 
First DockerCon speakers

Laura Frank, Sr. Software Engineer, Codeship
Everything You Thought You Already Knew About Orchestration

Julius Volz, Co-founder, Prometheus
Monitoring, the Prometheus Way

Liz Rice, Co-founder & CEO, Microscaling Systems
What have namespaces done for you lately?

Thomas Graf, Principal Engineer at Noiro, Cisco
Cilium – BPF & XDP for containers

Brendan Gregg, Sr. Performance Architect, Netflix
Container Tracing Deep Dive

Thomas Shaw, Build Engineer, Activision
Activision’s Skypilot: Delivering amazing game experiences through containerized pipelines

Fabiane Nardon, Chief Scientist at TailTarget
Docker for Java Developers

Arun Gupta, Vice President of Developer Advocacy, Couchbase
Docker for Java Developers

Justin Cappos, Assistant Professor, Computer Science and Engineering department, New York University
Securing the Software Supply Chain

John Zaccone, Software Engineer
A Developer’s Guide to Getting Started with Docker

Convince your boss to send you to DockerCon
Do you really want to go to DockerCon, but are having a hard time convincing your boss to send you? Have you already explained that the sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
We want you to join the community and us at DockerCon 2017, so we’ve put together the following packet of event information, including a helpful letter you can send to your boss to justify your trip. We are confident there’s something at DockerCon for everyone, so feel free to share it within your company and networks.

Download Now
More information about DockerCon 2017:

Register for the conference
Submit a talk
Choose what workshop to attend
Book your Hotel room
Become a sponsor



Now Open: 2017 Docker Scholarship & Meet the 2016 Recipients!

Last year, we announced our inaugural Docker Scholarship Program in partnership with Hack Reactor. The 2017 scholarship, for Hack Reactor’s March cohort, is now open and accepting applications.
 
 
The scholarship includes full tuition to Hack Reactor, pending program acceptance, and recipients will be paired with a Docker mentor.
Applications will be reviewed, and candidates who are accepted into the Hack Reactor program and meet Docker’s criteria will be invited to Docker HQ for a panel interview with Docker team members. Scholarships will be awarded based on acceptance to the Hack Reactor program, demonstrated personal financial need and the quality of application responses. The Docker scholarship is open to anyone who demonstrates a commitment to advancing equality in their community. All genders and gender identities are encouraged to apply. Click here for more information.
 
Apply to the Docker Scholarship
 
We are excited to introduce our 2016 Docker scholarship recipients, Maurice Okumu and Savaughn Jones!
In their own words, learn more about Maurice and Savaughn below:
Maurice Okumu 
 
My name is Maurice Okumu and I was born and raised in Kenya. I came to the USA about three years ago after having lived in Dubai for more than five years where I met my wife while she was working for the military and based in Germany. We have a new baby born on the 24th of October 2016 whom we named Jared Russel.
I started coding more than a year ago, and most of my knowledge I gained online on platforms such as Khan Academy and Code Academy. Then I learned about Telegraph Academy and what they represented and was immediately drawn to it. Telegraph aims to bridge the technology gap for the underrepresented in the field.
I am so excited that soon I will be able to seemingly create stuff out of thin air, and I am particularly excited about the prospect that I will be able to create animations and bring joy and laughter to people through my  animations as I remember growing up and seeing cartoons and how they made my day every time I watched them. Being able to be a small part of a community that will continue spreading laughter and happiness in the world is what really excites me in technology.
I have been attending Hack Reactor for two weeks now and it has been such a joy to learn so much in such a short period of time. The learning pace at Hack Reactor is very fast and very enjoyable at the same time, because every day I go home fulfilled with the thought that I am growing and becoming a better programmer every single day.
I would love to work for a medium to large company after graduation and learn even more about coding. I would also love to teach coding to kids and capture their imagination through technology. The support I am getting in my journey to become a software engineer is just amazing and overwhelming and it makes this journey very enjoyable and smoother than most undertakings I have been involved with.

Savaughn Jones
 
How did you hear about the Docker scholarship?
My college friend and Hack Reactor alumni told me about the Docker scholarship. I think he found out about it through a blog post.
Why did you choose Hack Reactor/Telegraph Academy and what excites you about coding?
Two of my college friends completed the Hack Reactor program and their lives improved exponentially. I have always wanted to get into coding and I heard that Hack Reactor was the Harvard of coding bootcamps.
You’ve been in the program a few weeks; describe your experience so far. What have you enjoyed the most?
I am amazed at how much I have learned in two months. I was always skeptical about learning enough to deserve the title of software engineer. The most amazing thing is the ability to learn new things.
What are your goals/plans after graduation?
I have applied for a Hacker in Residence position at Hack Reactor. It would be like a paid internship of sorts. Otherwise, my plan is to get a job ASAP and continue to pick up new skills and technologies. My ultimate goals are to develop for augmented reality platforms and start my own augmented reality based tabletop gaming company.



Docker Storage and Infinit FAQ

Last December, Docker acquired a company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker.

During the last Docker Online Meetup, Julien Quintard, member of Docker’s technical staff and former CEO at Infinit, went through the design principles behind their product and demonstrated how the platform can be used to deploy a storage infrastructure through Docker containers in a few command lines.
Providing state to applications in Docker requires a backend storage component that is both scalable and resilient in order to cope with a variety of use cases and failure scenarios. The Infinit Storage Platform has been designed to provide Docker applications with a set of interfaces (block, file and object) allowing for different tradeoffs.
Check out the following slidedeck to learn more about the internals of their platform:

Unfortunately, the video recording from the meetup is not available this time around but you can watch the following presentation and demo of Infinit from its CTO Quentin Hocquet at the Docker Distributed Systems Summit:

Docker and Infinit FAQ
1. Do you consider NFS/GPFS and other HPC cluster distributed storage as traditional? So far volumes are working well for our evaluations; why would we need Infinit in an HPC use case?
Infinit has not been designed for HPC use cases. More specifically, it has been designed with scalability and resilience in mind. As such, if you are looking for high performance, there are a number of HPC-specific solutions. But those are likely to be limited one way or another when it comes to security, scalability, flexibility, programmability, etc.
Infinit may end up being an OK solution for HPC deployments but those are not the scenarios we have been targeting so far.
2. Does it work like P2P torrent?
Infinit and BitTorrent (and torrent solutions more generally) share a number of concepts, such as the way data is retrieved by leveraging the upload bandwidth of a number of nodes to fill up a client’s bandwidth, also known as multi-sourcing. Both solutions also rely on a distributed hash table (DHT).
However, BitTorrent is all about retrieval speed while Infinit is about scalability, resilience and security. In other words, BitTorrent’s algorithms are based on the popularity of a piece of data: the more nodes have that piece, the faster it will be for many concurrent clients to retrieve it. The drawback is that if a piece of information is unpopular, it will eventually be forgotten. Infinit, providing a storage solution to enterprises, cannot allow that and must therefore favor reliability and durability.
3. Does Infinit honor sync writes and what is the performance impact? Is there a reliability trade-off? (eventually consistent)
Yes indeed, there is always a tradeoff between reliability and performance. There is no magic: reliability can only be achieved through redundancy, be it through replication, erasure coding or something else. And since such algorithms “enhance” the original information to make it unlikely to be forgotten should part of it be lost, it takes longer to write and to read.
Now, I couldn’t possibly quantify the performance impact because it depends on many factors, from your computing and networking resources, to the redundancy algorithm and factor you use, to the data flow that will be written to and read from the storage layer.
In terms of consistency, Infinit has been designed to be strongly consistent, meaning that a system call completing indicates that the data has been redundantly written. However, given that we provide several logics on top of our key-value store (block, object and file) along with a set of interfaces (NFS, iSCSI, Amazon S3 etc.), we could emulate eventual consistency on top of our strongly consistent consensus algorithm.
4. For existing storage plugin owners, is this a replacement, or does it mean we can adapt our plugins to work with the Infinit architecture?
It is not Docker’s philosophy to impose on its community or customers a single solution. Docker has always described itself as a plumbing platform for mass innovation. Even though Infinit will very likely solve storage-related challenges in Docker’s products, it will always be possible to switch from the default for another storage solution per our batteries included but swappable philosophy.
As such, Docker’s objective with the acquisition of Infinit is not to replace all the other storage solutions but rather to provide a reasonable default to the community. Also keep in mind that a storage solution solving all use cases will likely never exist. The user must be able to pick the solution that best fits her needs.
5. Can you run the Infinit tools in a container or does it require being part of the host OS?
You can definitely run Infinit within a container if you want. Just note that if you intend to access the Infinit storage platform through an interface that relies on a kernel module, e.g. FUSE, your container will need extra privileges to install/use that kernel module.
6. Can you share the commands used during the demo?
The demo is very similar to what the Get Started guide demonstrates. I therefore invite you to follow that guide.
7. Would Infinit provide object & block storage?
Yes that is absolutely the plan. We’ve started with a file system logic and FUSE interface but we already have an object store logic in the pipeline as well as an Amazon S3 interface. However, the likely next logic you will see Infinit providing is a block storage with a network block device (NBD) interface.
8. It seems like this technology has use cases beyond Docker and containers, such as a modern storage infrastructure to use in place of RAID style systems. How do you expect that to play out with the Docker acquisition?
You are right, Infinit can be used in many use cases. Unfortunately it is a bit early to say how Infinit and Docker will integrate. As you are all aware, Docker is moving extremely fast. We are still working on figuring out where, when and how Infinit is going to contribute to Docker’s ecosystem.
So far, Infinit remains a standalone software-defined storage solution. As such, anyone can use it outside of Docker. It may remain like that in the future or it may become completely integrated in Docker. In any case, note that should Infinit be fully embedded in Docker, the reason would be to further simplify its deployment.
9. What are the next steps for Infinit now?
The next steps are quite simple. At the Docker level, we need to ease the process of deploying Infinit on a cluster of nodes so that developers and operators alike can benefit from a storage platform that is as easy to set up as an application cluster.
At the Infinit level, we are working on improving the scalability and resilience of the key-value store. Even though Infinit has been conceived with these properties in mind, we have not yet had enough time to stress Infinit through various scenarios.
We have also started working on more logics/interfaces: object storage with Amazon S3 and block storage with NBD. You can follow Infinit’s roadmap on the website.
Finally, we’ve been working on open sourcing the three main Infinit components, namely the core libraries, key-value store and storage platform. For more information, you can check our Open Source webpage.
10. Good stuff how to get hold of bits to play with?
Everything is available on Infinit’s website, from tutorials, example deployments, documentation on the underlying technology, FAQ, roadmap, change log and soon, the sources.
Still hungry for more info?

Check out this Play with Docker and Infinit blog post
Join the docker-storage slack channel
