A Stronger Foundation for Creating and Managing Kubernetes Clusters

Editor’s note: Today’s post is by Lucas Käldström, an independent Kubernetes maintainer and SIG-Cluster-Lifecycle member, sharing what the group has been building and what’s upcoming.

Last time you heard from us was in September, when we announced kubeadm. The work on making kubeadm a first-class citizen in the Kubernetes ecosystem has continued and evolved. Some of us also met before KubeCon and had a very productive meeting where we talked about the scopes of our SIG, kubeadm, and kops.

Continuing to Define SIG-Cluster-Lifecycle

What is the scope for kubeadm?

We want kubeadm to be a common set of building blocks for all Kubernetes deployments; the piece that provides secure and recommended ways to bootstrap Kubernetes. Since there is no one true way to set up Kubernetes, kubeadm will support more than one method for each phase. We want to identify the phases every deployment of Kubernetes has in common and provide configurable, easy-to-use kubeadm commands for those phases. If your organization, for example, requires that you distribute the certificates in the cluster manually or in a custom way, you can skip kubeadm for just that phase; we aim to keep kubeadm usable for all the other phases. We want you to be able to pick which things kubeadm does and do the rest yourself.

Therefore, the scope for kubeadm is to be easily extensible, modular and very easy to use. Right now, with the v1.5 release, kubeadm can only do the “full meal deal” for you. In future versions that will change as kubeadm becomes more componentized, while still leaving the option of having it do everything for you. But kubeadm will still only handle the bootstrapping of Kubernetes; it won’t ever handle provisioning of machines for you, since that can be done in many different ways. In addition, we want kubeadm to work everywhere, even on multiple architectures, so we built in multi-architecture support from the beginning.

What is the scope for kops?

The scope for kops is to automate full cluster operations: installation, reconfiguration of your cluster, upgrading Kubernetes, and eventual cluster deletion. kops has a rich configuration model based on the Kubernetes API Machinery, so you can easily customize parameters to your needs. kops (unlike kubeadm) handles provisioning of resources for you. kops aims to be the ultimate out-of-the-box experience on AWS (and perhaps other providers in the future). Going forward, kops will adopt more and more of kubeadm for the bootstrapping phases, which will move some of the complexity inside kops to a central place in the form of kubeadm.

What is the scope for SIG-Cluster-Lifecycle?

SIG-Cluster-Lifecycle actively tries to simplify the Kubernetes installation and management story. This is accomplished by modifying Kubernetes itself in many cases, and by factoring out common tasks. We are also trying to address common problems in the cluster lifecycle (as the name says!). We maintain and are responsible for kubeadm and kops. We discuss problems with the current way to bootstrap clusters on AWS (and beyond) and try to make it easier. We hang out on Slack in the sig-cluster-lifecycle and kubeadm channels. We meet and discuss current topics once a week on Zoom. Feel free to come and say hi! Also, don’t be shy to contribute; we’d love your comments and insight!

Looking forward to v1.6

Our goals for v1.6 are centered around refactoring, stabilization and security.
First and foremost, we want to get kubeadm and its composable configuration experience to beta. We will refactor kubeadm so each phase in the bootstrap process is invokable separately. We want to bring the TLS Bootstrap API, the Certificates API and the ComponentConfig API to beta, and to get kops (and other tools) using them.

We will also graduate the token discovery we’re using now (a.k.a. the gcr.io/google_containers/kube-discovery:1.0 image) to beta by adding a new controller to the controller manager: the BootstrapSigner. Using tokens managed as Secrets, that controller will sign the contents (a kubeconfig file) of a well-known ConfigMap in a new kube-public namespace. This object will be available to unauthenticated users in order to enable a secure bootstrap with a simple and short shared token. You can read the full proposal here.
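To make that flow concrete, here is a hedged sketch of the bootstrap from the command line; the command shapes follow the current kubeadm CLI, while the ConfigMap name is an assumption for illustration:

    # On the master: initialize the control plane and note the token it prints.
    kubeadm init

    # On each node: join the cluster using the short shared token.
    kubeadm join --token=<token> <master-ip>

    # With the BootstrapSigner in place, the signed cluster information is
    # meant to be readable without authentication from the kube-public
    # namespace (the ConfigMap name here is an assumption):
    kubectl --namespace=kube-public get configmap cluster-info -o yaml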
In addition to making it possible to invoke phases separately, we will also add a new phase for bringing up the control plane in a self-hosted mode (as opposed to the current static pod technique). The self-hosted technique was developed by CoreOS in the form of bootkube, and will now be incorporated as an alternative into an official Kubernetes product. Thanks to CoreOS for pushing that paradigm forward! This will be done by first setting up a temporary control plane with static pods, injecting the Deployments, ConfigMaps and DaemonSets as necessary, and lastly turning down the temporary control plane. For now, etcd will still be in a static pod by default. We are supporting self-hosting initially because we want to support patch release upgrades with kubeadm; it should be easy to upgrade from v1.6.2 to v1.6.4, for instance. We consider built-in upgrade support a critical capability for a real cluster lifecycle tool. It will still be possible to upgrade without self-hosting, but it will require more manual work.

On the stabilization front, we want to start running kubeadm e2e tests. In the v1.5 timeframe we added unit tests, and we will continue to increase that coverage. We want to expand this to per-PR e2e tests that spin up a cluster with kubeadm init and kubeadm join, run some kubeadm-specific tests and, optionally, the Conformance test suite.

Finally, on the security front, we want kubeadm to be as secure as possible by default. We look to enable RBAC for v1.6, lock down what the kubelet and built-in services like kube-dns and kube-proxy can do, and maybe create specific user accounts with different permissions.

Regarding releasing, we want to ship the official kubeadm v1.6 binary in the Kubernetes v1.6 tarball. This means syncing our release with the official one. More details on what we’ve done so far can be found here. As it becomes possible, we aim to move the kubeadm code out to the kubernetes/kubeadm repo. (This is blocked on some Kubernetes code-specific infrastructure issues that may take some time to resolve.)

Nice-to-haves for v1.6 would include an official CoreOS Container Linux installer container that does what the debs/rpms are doing for Ubuntu/CentOS. In general, it would be nice to extend the distro support. We also want to adopt Kubelet Dynamic Settings so configuration passed to kubeadm init flows down to nodes automatically (it currently requires manual configuration). And we want it to be possible to test Kubernetes from HEAD by using kubeadm.

Through 2017 and beyond

Apart from everything mentioned above, we want kubeadm to simply be a production-grade (GA) tool you can use for bootstrapping a Kubernetes cluster. We want HA/multi-master to be much easier to achieve across platforms than it is now (though kops makes this easy on AWS today!). We want cloud providers to be out-of-tree and installable separately: kubectl apply -f my-cloud-provider-here.yaml should just work. The documentation should be more robust and should go deeper. The Container Runtime Interface (CRI) and Federation should work well with kubeadm. Outdated getting-started guides should be removed so new users aren’t misled.

Refactoring the cloud provider integration plugins

Right now, the cloud provider integrations are built into the controller-manager, the kubelet and the API server. Combined with the ever-growing interest in Kubernetes, this makes it unmaintainable to have the cloud provider integrations compiled into the core. Features that are clearly vendor-specific should not be a part of the core Kubernetes project, but rather available as an addon from third-party vendors. Everything cloud-specific should be moved into one controller, or a few if there’s a need. This controller will be maintained by a third party (usually the company behind the integration) and will implement cloud-specific features. This migration from in-core to out-of-core is disruptive, yes, but it has very good side effects: a leaner core, much easier installation, and the possibility for more than the seven existing clouds to be integrated with Kubernetes. For example, you could run the cloud controller binary in a Deployment and install it easily with kubectl apply.

The plan for v1.6 is to make it possible to:

Create and run out-of-core cloud provider integration controllers
Ship a new and temporary binary in the Kubernetes release: the cloud-controller-manager. This binary will include the seven existing cloud providers and will serve as a way of validating, testing and migrating to the new flow.

In a future release (v1.9 is proposed), the `--cloud-provider` flag will stop working, and the temporary cloud-controller-manager binary won’t be shipped anymore. Instead, a repository called something like kubernetes/cloud-providers will serve as a place for officially-validated cloud providers to evolve and exist, though all providers there will be independent of each other. (issue ; proposal ; code .)
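To make that end state concrete, here is a hedged sketch of what such a manifest might contain; the image, command and flag are illustrative assumptions, not a published specification:

    # Install a hypothetical out-of-core cloud provider controller
    # with a single kubectl apply.
    kubectl apply -f - <<'EOF'
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: cloud-controller-manager
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: cloud-controller-manager
        spec:
          containers:
          - name: cloud-controller-manager
            image: example.com/mycloud/cloud-controller-manager:v1.6
            command:
            - /cloud-controller-manager
            - --cloud-provider=mycloud
    EOF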
Changelogs from v1.4 to v1.5

kubeadm

v1.5 is a stabilization release for kubeadm. We’ve worked on making kubeadm more user-friendly, transparent and stable, and some new features have been added to make it more configurable. Here’s a very short extract of what’s changed:

Made the console output of kubeadm cleaner and more user-friendly
Implemented kubeadm reset to drain and clean up a node
Added a preflight checks implementation that fails fast if the environment is invalid
kubectl logs and kubectl exec can now be used with kubeadm
…and a lot of other improvements; please read the full changelog.

kops

Here’s a short extract of what’s changed:

Support for CNI network plugins (Weave, Calico, Kope.io)
Fully private deployments, where nodes and masters do not have public IPs
Improved rolling update of clusters, in particular of HA clusters
OS support for CentOS / RHEL / Ubuntu along with Debian, and support for sysdig & perf tools

Go and check out the kops releases page for information about the latest and greatest kops release.

Summary

In short, we’re excited about the road ahead and about bringing these improvements to you in the coming releases, which we hope will make the getting-started experience much easier and lead to increased adoption of Kubernetes. Thank you for all the feedback and contributions.

I hope this has given you some insight into what we’re doing and encouraged you to join us at our meetings to say hi!

– Lucas Käldström, Independent Kubernetes maintainer and SIG-Cluster-Lifecycle member
Source: kubernetes

InfraKit Under the Hood: High Availability

Back in October, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the first in a two-part series that dives more deeply into the internals of InfraKit.
Introduction
At Docker, our mission to build tools of mass innovation constantly challenges us to look at ways to improve the way developers and operators work. Docker Engine with integrated orchestration via Swarm mode has greatly simplified and improved the efficiency of application deployment and the management of microservices. Going a level deeper, we asked ourselves if we could improve the lives of operators by making tools to simplify and automate the orchestration of infrastructure resources. This led us to open source InfraKit, a set of building blocks for creating self-healing and self-managing systems.

There are articles and tutorials (such as this, and this) to help you get acquainted with InfraKit. InfraKit is made up of a set of components which actively manage infrastructure resources based on a declarative specification. These active agents continuously monitor and reconcile differences between your specification and actual infrastructure state. So far, we have implemented the functionality of scaling groups to support the creation of a compute cluster or application cluster that can self-heal and dynamically scale in size. To make this functionality available for different infrastructure platforms (e.g. AWS or bare metal) and extensible for different applications (e.g. Zookeeper or Docker orchestration), we support customization and adaptation through the instance and flavor plugins. The group controller exposes operations for scaling in and out and for rolling updates, and communicates with the plugins using JSON-RPC 2.0 over HTTP. While the project provides packages implemented in Go for building platform-specific plugins (like this one for AWS), it is possible to use other languages and tooling to create interesting and compatible plugins.
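To get a feel for that protocol, here is a minimal sketch of calling an instance plugin by hand with curl; the socket path, method name and tag key are assumptions for illustration, so consult the API definitions in the repository for the exact shapes:

    # Call a hypothetical instance plugin over its unix socket using
    # JSON-RPC 2.0 over HTTP.
    curl --unix-socket ~/.infrakit/plugins/instance-aws \
      -H 'Content-Type: application/json' \
      -d '{"jsonrpc": "2.0", "id": 1,
           "method": "Instance.DescribeInstances",
           "params": {"Tags": {"infrakit.group": "workers"}}}' \
      http://localhost/rpc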
High Availability
Because InfraKit is used to ensure the availability and scaling of a cluster, it needs to be highly available and perform its duties without interruption. To support this requirement, we consider the following:

Redundancy & Failover – for active management without interruption.
Infrastructure State – for an accurate view of the cluster and its resources.
User Specification – keeping it available even in case of failure.

Redundancy & Failover
Running multiple sets of the InfraKit daemons on separate physical nodes is an obvious approach to achieving redundancy. However, while multiple replicas are running, only one of the replica sets can be active at a time. Having at most one leader (or master) at any time ensures that multiple controllers don’t independently make decisions and end up conflicting with one another while attempting to correct infrastructure state. However, with only one active instance at any given time, the role of the active leader must transition smoothly and quickly to another replica in the event of failure. When a node running as the leader crashes, another set of InfraKit daemons will assume leadership and attempt to correct the infrastructure state. This corrective measure will then restore the lost instance in the cluster, bringing the cluster back to the desired state before the outage.
There are many options for implementing this leadership election mechanism. Popular coordinators include Zookeeper and Etcd, which are consensus-based systems in which multiple nodes form a quorum. Similar to these is the Docker Engine (1.12+) running in Swarm Mode, which is based on SwarmKit, a native clustering technology built on the same Raft consensus algorithm as Etcd. In keeping with the goal of creating a toolkit for building systems, we made these design choices:

InfraKit only needs to observe leadership in a cluster: when the node becomes the leader, the InfraKit daemons on that node become active. When leadership is lost, the daemons on the old leader are deactivated, while control is transferred over to the InfraKit daemons running on the new leader.
Create a simple API for sending leadership information to InfraKit. This makes it possible to connect InfraKit to a variety of inputs, from Docker Engines in Swarm Mode (post-1.12) to polling a file in a shared file system (e.g. AWS EFS).
InfraKit does not itself implement leader election. This allows InfraKit to be readily integrated into systems that already have their own manager quorum and leader election, such as Docker Swarm. Of course, it’s possible to add leader election using a coordinator such as Etcd and feed that to InfraKit via the leadership observation API.

With this design, coupled with a coordinator, we can run InfraKit daemons in replicas on multiple nodes in a cluster while ensuring only one leader is active at any given time. When leadership changes, InfraKit daemons running on the new leader must be able to assess infrastructure state and determine the delta from user specification.
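As a sketch of the simplest observation input mentioned above (polling a file on a shared file system), the logic amounts to comparing the file’s contents with the local node’s identity; the paths and names here are illustrative assumptions:

    # Runs on every replica; only the node named in the shared file
    # (e.g. on AWS EFS) would activate its InfraKit daemons.
    while true; do
      LEADER=$(cat /mnt/efs/leader 2>/dev/null)
      if [ "$LEADER" = "$(hostname)" ]; then
        echo "this node is the leader: activating InfraKit daemons"
      else
        echo "standby: current leader is ${LEADER:-unknown}"
      fi
      sleep 5
    done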
Infrastructure State
Rather than relying on an internal, central datastore to manage the state of the infrastructure, such as an inventory of all VM instances, InfraKit aggregates and computes the infrastructure state based on what it can observe from querying the infrastructure provider. This means that:

The instance plugin needs to transform queries from the group controller into appropriate calls to the provider’s API.
The infrastructure provider should support labeling or tagging of provisioned resources such as VM instances.
In cases where the provider does not support labeling and querying resources by labels, the instance plugin has the responsibility to maintain that state. Approaches for this vary with plugin implementation, but they often involve using services such as S3 for persistence.

Not having to store and manage infrastructure state greatly simplifies the system. Since the infrastructure state is always aggregated and computed on demand, it is always up to date. However, other factors such as availability and performance of the platform API itself can impact observability. For example, high latencies and even API throttling must be handled carefully when determining the cluster state and, consequently, when deriving a plan to push toward convergence with the user’s specifications.
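For example, on AWS an instance plugin could compute the observed membership of a group with a single tag-filtered query; the tag key below is an assumption for illustration:

    # List the running instances that belong to a given group,
    # purely by querying the provider for tagged resources.
    aws ec2 describe-instances \
      --filters "Name=tag:infrakit.group,Values=workers" \
                "Name=instance-state-name,Values=running" \
      --query 'Reservations[].Instances[].InstanceId'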
User Specification
InfraKit daemons continuously observe the infrastructure state and compare it with the user’s specification. The user’s specification for the cluster is expressed in JSON format and is used to determine the necessary steps to drive towards convergence. InfraKit requires this information to be highly available so that, in the event of failover, the user specification can be accessed by the new leader.
There are options for implementing replication of the user specification. These range from using file systems backed by persistent object stores such as S3 or EFS to using a distributed key-value store such as Zookeeper or Etcd. Like other parts of the toolkit, we opted to define an interface with different implementations of this configuration store. In the repo, there are stores implemented using the file system and Docker Swarm. More implementations are possible and we welcome contributions!
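For reference, a group specification is a small JSON document along these lines; the field names follow the shape used in the InfraKit tutorial and should be treated as illustrative rather than authoritative:

    # Write a hypothetical group specification to a file that the
    # configuration store would keep highly available.
    cat > workers.json <<'EOF'
    {
      "ID": "workers",
      "Properties": {
        "Allocation": { "Size": 5 },
        "Instance": {
          "Plugin": "instance-aws",
          "Properties": { "note": "provider-specific instance settings" }
        },
        "Flavor": {
          "Plugin": "flavor-swarm",
          "Properties": { "note": "application-specific settings" }
        }
      }
    }
    EOF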
Conclusion
In this article, we have examined some of the considerations in designing InfraKit. As a system meant to be incorporated as a toolkit into larger systems, we aimed for modularity and composability. To achieve these goals, the project specifies interfaces which define the interactions of different subsystems. As a rule, we try to provide different implementations to test and demonstrate these ideas. One such implementation of high availability with InfraKit leverages Docker Engine in Swarm Mode – the native clustering and orchestration technology of the Docker Platform – to give the swarm self-healing properties. In the next installment, we will investigate this in greater detail.
Check out the InfraKit repository README for more info, a quick tutorial and to start experimenting – from plain files to Terraform integration to building a Zookeeper ensemble. Have a look, explore, and send us a PR or open an issue with your ideas!
More Resources:

Check out all the Infrastructure Plumbing projects
Sign up for Docker for AWS or Docker for Azure
Try Docker today

The post InfraKit Under the Hood: High Availability appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

DockerCon 2017 first speakers announced

To the rest of the world, 2017 may seem a ways away, but here at Docker we are heads down reading your Call for Papers submissions and curating content to make this the biggest and best DockerCon to date. With that, we are thrilled to share with you the DockerCon 2017 website, with helpful information including ten of the first confirmed speakers and sessions.
If you want to join this amazing lineup and haven’t submitted your cool hack, use case or deep dive session, don’t hesitate! The Call for Papers closes this Saturday, January 14th.
 
Submit a talk
 
First DockerCon speakers
 
Laura Frank
Sr. Software Engineer, Codeship
Everything You Thought You Already Knew About Orchestration

Julius Volz
Co-founder, Prometheus
Monitoring, the Prometheus Way

Liz Rice
Co-founder & CEO, Microscaling Systems
What have namespaces done for you lately?

Thomas Graf
Principal Engineer at Noiro, Cisco
Cilium – BPF & XDP for containers

Brendan Gregg
Sr. Performance Architect, Netflix
Container Tracing Deep Dive

Thomas Shaw
Build Engineer, Activision
Activision’s Skypilot: Delivering amazing game experiences through containerized pipelines

Fabiane Nardon
Chief Scientist at TailTarget
Docker for Java Developers

Arun Gupta
Vice President of Developer Advocacy, Couchbase
Docker for Java Developers

Justin Cappos
Assistant Professor in the Computer Science and Engineering department at New York University
Securing the Software Supply Chain

John Zaccone
Software Engineer
A Developer’s Guide to Getting Started with Docker

Convince your boss to send you to DockerCon
Do you really want to go to DockerCon, but are having a hard time convincing your boss on pulling the trigger to send you? Have you already explained that sessions, training and hands-on exercises are definitely worth the financial investment and time away from your desk?
We want you to join the community and us at DockerCon 2017, so we’ve put together the following packet of event information, including a helpful letter you can use to send to your boss to justify your trip. We are confident there’s something at DockerCon for everyone, so feel free to share within your company and networks.

Download Now
More information about DockerCon 2017:

Register for the conference
Submit a talk
Choose what workshop to attend
Book your Hotel room
Become a sponsor

The post DockerCon 2017 first speakers announced appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Now Open: 2017 Docker Scholarship & Meet the 2016 Recipients!

Last year, we announced our inaugural Docker Scholarship Program in partnership with Hack Reactor. The 2017 scholarship to Hack Reactor’s March cohort is now open and accepting applications.
 
 
The scholarship includes full tuition to Hack Reactor, pending program acceptance, and recipients will be paired with a Docker mentor.
Applications will be reviewed and candidates who are accepted into the Hack Reactor program and meet Docker’s criteria will be invited to Docker HQ for a panel interview with Docker team members. Scholarships will be awarded based on acceptance to the Hack Reactor program, demonstration of personal financial need and quality of application responses. The Docker scholarship is open to anyone who demonstrates a commitment to advancing equality in their community. All gender and gender identities are encouraged to apply. Click here for more information.
 
Apply to the Docker Scholarship
 
We are excited to introduce our 2016 Docker scholarship recipients, Maurice Okumu and Savaughn Jones!
In their own words, learn more about Maurice and Savaughn below:
Maurice Okumu 
 
My name is Maurice Okumu and I was born and raised in Kenya. I came to the USA about three years ago after having lived in Dubai for more than five years where I met my wife while she was working for the military and based in Germany. We have a new baby born on the 24th of October 2016 whom we named Jared Russel.
I started coding more than one year ago, and most of my knowledge I gained online on platforms such as Khan Academy and Code Academy. Then I learned about Telegraph Academy and what they represented and was immediately drawn towards it. Telegraph aims to bridge the technology gap for the underrepresented in the field.
I am so excited that soon I will be able to seemingly create stuff out of thin air, and I am particularly excited about the prospect that I will be able to create animations and bring joy and laughter to people through my  animations as I remember growing up and seeing cartoons and how they made my day every time I watched them. Being able to be a small part of a community that will continue spreading laughter and happiness in the world is what really excites me in technology.
I have been attending Hack Reactor for two weeks now and it has been such a joy to learn so much in such a short period of time. The learning pace at Hack Reactor is very fast and very enjoyable at the same time, because every day I go home fulfilled with the thought that I am growing and becoming a better programmer each and every single day.
I would love to work for a medium to large company after graduation and learn even more about coding. I would also love to teach coding to kids and capture their imagination through technology. The support I am getting in my journey to become a software engineer is just amazing and overwhelming and it makes this journey very enjoyable and smoother than most undertakings I have been involved with.

Savaughn Jones
 
How did you hear about the Docker scholarship?
My college friend and Hack Reactor alumni told me about the Docker scholarship. I think he found out about it through a blog post.
Why did you choose Hack Reactor/Telegraph Academy and what excites you about coding?
Two of my college friends completed the Hack Reactor program and their lives improved exponentially. I have always wanted to get into coding and I heard that Hack Reactor was the Harvard of coding bootcamps.
You’ve been in the program a few weeks, describe your experience so far. What have you enjoyed the most?
I am amazed at how much I have learned in two months. I was always skeptical about learning enough to deserve the title of software engineer. The most amazing thing is the ability to learn new things.
What are your goals/plans after graduation?
I have applied for a Hacker in Residence position at Hack Reactor. It would be like a paid internship of sorts. Otherwise, my plan is to get a job ASAP and continue to pick up new skills and technologies. My ultimate goals are to develop for augmented reality platforms and start my own augmented reality based tabletop gaming company.

The post Now Open: 2017 Docker Scholarship & Meet the 2016 Recipients! appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Kubernetes UX Survey Infographic

Editor’s note: Today’s post is by Dan Romlein, UX Designer at Apprenda and member of SIG-UI, sharing UX survey results from the Kubernetes community.

The following infographic summarizes the findings of a survey that the team behind Dashboard, the official web UI for Kubernetes, sent during KubeCon in November 2016. Following the KubeCon launch of the survey, it was promoted on Twitter and various Slack channels over a two-week period and generated over 100 responses. We’re delighted with the data it provides us to now make feature and roadmap decisions more in line with the needs of you, our users.

Satisfaction with Dashboard

Less than a year old, Dashboard is still very early in its development and we realize it has a long way to go, but it was encouraging to hear it’s tracking on the axis of MVP and, even with its basic feature set, is adding value for people. Respondents indicated that they like how quickly the Dashboard project is moving forward and the activity level of its contributors. Specific appreciation was given for the value Dashboard brings to first-time Kubernetes users and for encouraging exploration. Frustration voiced around Dashboard centered on its limited capabilities: notably, the lack of RBAC and limited visualization of cluster objects and their relationships.

Respondent Demographics

Kubernetes Usage

People are using Dashboard in production, which is fantastic; it’s that setting that the team is focused on optimizing for.

Feature Priority

In building Dashboard, we want to continually make alignments between the needs of Kubernetes users and our product. Feature areas have intentionally been kept as high-level as possible, so that UX designers on the Dashboard team can creatively transform those use cases into specific features. While there’s nothing wrong with “faster horses”, we want to make sure we’re creating an environment for the best possible innovation to flourish.

Troubleshooting & Debugging as a strong frontrunner among requested feature areas is consistent with the previous KubeCon survey, and this is now our top area of investment. Currently in progress is the ability to exec into a Pod, and next up will be providing aggregated logs views across objects. One of a UI’s strengths over a CLI is its ability to show things, and the troubleshooting and debugging feature area is a prime application of this capability.

In addition to a continued investment in troubleshooting and debugging functionality, the other current focus of the Dashboard team’s efforts is RBAC / IAM within Dashboard. Though it ranked lower among feature areas, in various conversations at KubeCon and the days following, this emerged as a top-requested feature of Dashboard, and the one people were most passionate about. This is a deal-breaker for many companies, and we’re confident its enablement will open many doors for Dashboard’s use in production.

Conclusion

It’s invaluable to have data from Kubernetes users on how they’re putting Dashboard to use and how well it’s serving their needs. If you missed the survey response window but still have something you’d like to share, we’d love to connect with you and hear feedback or answer questions:

Email us at the SIG-UI mailing list
Chat with us on the Kubernetes Slack SIG-UI channel
Join our weekly meetings at 4PM CEST. See the SIG-UI calendar for details.
Source: kubernetes

Docker Storage and Infinit FAQ

Last December, Docker acquired a company called Infinit. Using their technology, we will provide secure distributed storage out of the box, making it much easier to deploy stateful services and legacy enterprise applications on Docker.

During the last Docker Online Meetup, Julien Quintard, member of Docker’s technical staff and former CEO at Infinit, went through the design principles behind their product and demonstrated how the platform can be used to deploy a storage infrastructure through Docker containers in a few command lines.
Providing state to applications in Docker requires a backend storage component that is both scalable and resilient in order to cope with a variety of use cases and failure scenarios. The Infinit Storage Platform has been designed to provide Docker applications with a set of interfaces (block, file and object) allowing for different tradeoffs.
Check out the following slidedeck to learn more about the internals of their platform:

Unfortunately, the video recording from the meetup is not available this time around but you can watch the following presentation and demo of Infinit from its CTO Quentin Hocquet at the Docker Distributed Systems Summit:

Docker and Infinit FAQ
1. Do you consider NFS/GPFS and other HPC cluster distributed storage as traditional? So far volumes are working well for our evaluations; why would we need Infinit in an HPC use case?
Infinit has not been designed for HPC use cases. More specifically, it has been designed with scalability and resilience in mind. As such, if you are looking for high performance, there are a number of HPC-specific solutions, but those are likely to be limited one way or another when it comes to security, scalability, flexibility, programmability, etc.
Infinit may end up being an OK solution for HPC deployments but those are not the scenarios we have been targeting so far.
2. Does it work like P2P torrent?
Infinit and Bittorrent (and more generally torrent solutions) share a number of concepts, such as the way data is retrieved by leveraging the upload bandwidth of a number of nodes to fill up a client’s bandwidth, also known as multi-sourcing. Both solutions also rely on a distributed hash table (DHT).
However, Bittorrent is all about retrieval speed while Infinit is about scalability, resilience and security. In other words, Bittorrent’s algorithms are based on the popularity of a piece of data. The more nodes have that piece, the faster it will be for many concurrent clients to retrieve it. The drawback is that if a piece of information is unpopular, it will eventually be forgotten. Infinit, providing a storage solution to enterprises, cannot allow that and must therefore favor reliability and durability.
3. Does Infinit honor sync writes and what is the performance impact? Is there a reliability trade-off? (eventually consistent)
Yes indeed, there is always a tradeoff between reliability and performance. There is no magic: reliability can only be achieved through redundancy, be it through replication, erasure coding or otherwise. And since such algorithms “enhance” the original information to make it unlikely to be forgotten should part of it be lost, it takes longer to write and to read.
Now I couldn’t possibly quantify the performance impact because it depends on many factors from your computing and networking resources to the redundancy algorithm and the factor you use to the data flow that will be generated and read from the storage layer.
In terms of consistency, Infinit has been designed to be strongly consistent, meaning that a system call completing indicates that the data has been redundantly written. However, given that we provide several logics on top of our key-value store (block, object and file) along with a set of interfaces (NFS, iSCSI, Amazon S3 etc.), we could emulate eventual consistency on top of our strongly consistent consensus algorithm.
4. For existing storage plugin owners, is this a replacement, or does it mean we can adapt our plugins to work with the Infinit architecture?
It is not Docker’s philosophy to impose on its community or customers a single solution. Docker has always described itself as a plumbing platform for mass innovation. Even though Infinit will very likely solve storage-related challenges in Docker’s products, it will always be possible to switch from the default for another storage solution per our batteries included but swappable philosophy.
As such, Docker’s objective with the acquisition of Infinit is not to replace all the other storage solution but rather to provide a reasonable default to the community. Also keep in mind that a storage solution solving all the use cases will likely never exist. The user must be able to pick the solution that best fits her needs.
5. Can you run the Infinit tools in a container or does it require being a part of the host OS?
You can definitely run Infinit within a container if you want. Just note that if you intend to access the Infinit storage platform through an interface that relies on a kernel module, your container will need super-privileges to install/use this kernel module, e.g. FUSE.
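Concretely, a FUSE-backed mount inside a container typically requires access to the fuse device and extra capabilities; the image name below is a placeholder:

    # Grant the container what a FUSE mount usually needs:
    # the SYS_ADMIN capability and the /dev/fuse device.
    docker run -it \
      --cap-add SYS_ADMIN \
      --device /dev/fuse \
      infinit/infinit sh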
6. Can you share the commands used during the demo?
The demo is very similar to what the Get Started guide demonstrates. I therefore invite you to follow this guide.
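From memory, the guide’s flow looks roughly like the following; treat the exact binaries and flags as assumptions and defer to the official documentation:

    # Create a user, declare local storage, then build a network
    # and a volume on top of it (all names are illustrative).
    infinit-user --create --name alice
    infinit-storage --create --filesystem --name local --capacity 1GB
    infinit-network --create --as alice --storage local --name cluster
    infinit-volume --create --as alice --network cluster --name shared
    infinit-volume --mount --as alice --name shared --mountpoint ~/shared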
7. Would Infinit provide object & block storage?
Yes, that is absolutely the plan. We’ve started with a file system logic and FUSE interface, but we already have an object store logic in the pipeline as well as an Amazon S3 interface. However, the next logic you will likely see Infinit provide is block storage with a network block device (NBD) interface.
8. It seems like this technology has use cases beyond Docker and containers, such as a modern storage infrastructure to use in place of RAID style systems. How do you expect that to play out with the Docker acquisition?
You are right, Infinit can be used in many use cases. Unfortunately it is a bit early to say how Infinit and Docker will integrate. As you are all aware, Docker is moving extremely fast. We are still working on figuring out where, when and how Infinit is going to contribute to Docker’s ecosystem.
So far, Infinit remains a standalone software-defined storage solution. As such, anyone can use it outside of Docker. It may remain like that in the future or it may become completely integrated in Docker. In any case, note that should Infinit be fully embedded in Docker, the reason would be to further simplify its deployment.
9. What are the next steps for Infinit now?
The next steps are quite simple. At the Docker level, we need to ease the process of deploying Infinit on a cluster of nodes so that developers and operators alike can benefit from a storage platform that is as easy to set up as an application cluster.
At the Infinit level, we are working on improving the scalability and resilience of the key-value store. Even though Infinit has been conceived with these properties in mind, we have not had enough time so far to stress Infinit through various scenarios.
We have also started working on more logics/interfaces: object storage with Amazon S3 and block storage with NBD. You can follow Infinit’s roadmap on the website.
Finally, we’ve been working on open sourcing the three main Infinit components, namely the core libraries, key-value store and storage platform. For more information, you can check our Open Source webpage.
10. Good stuff! How do I get hold of the bits to play with?
Everything is available on Infinit’s website, from tutorials, example deployments, documentation on the underlying technology, FAQ, roadmap, change log and soon, the sources.
Still hungry for more info?

Check this play with Docker and Infinit blog post
Join the docker-storage slack channel

The post Docker Storage and Infinit FAQ appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

DockerCon workshops: Which one will you be attending?

Following last year’s major success, we are excited to bring back and expand the paid workshops at DockerCon 2017. The pre-conference workshops will focus on a range of subjects, from Docker 101 to deep dives in networking, Docker for Java and advanced orchestration. Each workshop is designed to give you hands-on instruction and insight on key Docker topics, taught by Docker engineers and Docker Captains. The workshops are a great opportunity to get better acquainted and excited about Docker technology to start off DockerCon week.

Take advantage of the lowest DockerCon pricing and get your Early Bird Ticket + Workshop now! Early Bird Tickets are limited and will sell out in the next two weeks!
Here are the basics of the DockerCon workshops:
Date: Monday, April 17, 2017
Time: 2:00pm – 5:00pm
Where: Austin Convention Center – 500 E. Cesar Chavez Street, Austin, TX
Cost: $150
Class size: Classes will remain small and are limited to 50 attendees per class.
Registration: The workshops are only open to DockerCon attendees. You can register for the workshops as an add-on package through the registration site here.

Below are overviews of each workshop. To learn more about each topic head over to the DockerCon 2017 registration site.
Learn Docker
If you are just getting started learning about Docker and want to get up to speed, this is the workshop for you. Come learn Docker basics including running containers, building images and basics on networking, orchestration, security and  volumes.
Orchestration Workshop: Beginner
You’ve installed Docker, you know how to run containers, you’ve written Dockerfiles to build container images for your applications (or parts of your applications), and perhaps you’re even using Compose to describe your application stack as an assemblage of multiple containers.
But how do you go to production? What modifications are necessary in your code to allow it to run on a cluster? (Spoiler alert: very little, if any.) How does one set up such a cluster, anyway? Then how can we use it to deploy and scale applications with high availability requirements?
In this workshop, we will answer those questions using tools from the Docker ecosystem, with a strong focus on the native orchestration capabilities available since Docker Engine 1.12, aka “Swarm Mode.”
Orchestration Workshop: Advanced
Already using Docker and recently started using Swarm Mode in 1.12? Let’s start where previous orchestration workshops may have left off, and dive into monitoring, logging, troubleshooting, and security of Docker Engine and Docker services (Swarm Mode) for production workloads. Pulled from real-world deployments, we’ll cover centralized logging with ELK, SaaS, and others; monitoring/alerting with cAdvisor and Prometheus; backups of persistent storage; optional security features (namespaces, seccomp and AppArmor profiles, Notary); and a few CLI tools for troubleshooting. Come away ready to take your Swarm to the next level!
Stay tuned as more workshop topics will be announced in the coming weeks! The workshops will sell out, so act fast and add the pre-conference workshops to your DockerCon 2017 registration!
Docker Networking
In this 3-hour, instructor-led training, you will get an in-depth look into Docker Networking. We will cover all the networking features natively available in Docker and take you through hands-on exercises designed to help you learn the skills you need to deploy and maintain Docker containers in your existing network environment.
Docker Store for Publishers
This workshop is designed to help potential Docker Store Publishers to understand the process, the best practices and the workflow of creating and publishing great content. You will get to interact with the members of the Docker Store’s engineering team. Whether you are an established ISV, a startup trying to distribute your software creation using Docker Containers or an independent developer, just trying to reach as many users as possible, you will benefit from this workshop by learning how to create and distribute trusted and Enterprise-ready content for the Docker Store.
Docker for Java Developers
Docker provides PODA (Package Once Deploy Anywhere) and complements WORA (Write Once Run Anywhere) provided by Java. It also helps you reduce the impedance mismatch between dev, test, and production environment and simplifies Java application deployment.
This workshop will explain how to:

Running first Java application with Docker
Package your Java application with Docker
Sharing your Java application using Docker Hub
Deploy your Java application using Maven
Deploy your application using Docker for AWS
Scaling Java services with Docker Engine swarm mode
Package your multi-container application and use service discovery
Monitor your Docker + Java applications
Build a deployment pipeline using common tools

Hands-On Docker for Raspberry Pi
Take part in our first-of-a-kind hands-on Raspberry Pi and Docker workshop where you will be given all the hardware you need to start creating and deploying containers with Docker including an 8-LED RGB add-on from Pimoroni. You will learn the subtleties of working with an ARM processor and how to control physical hardware through the GPIO interface. Programming experience is not required but a basic understanding of Python is helpful.
Microservices Lifecycle Explained Through Docker and Continuous Deployment
The workshop will go through the whole microservices development lifecycle. We’ll start from the very beginning and define and design architecture. From there on we’ll do some coding and testing all the way until the final deployment to production. Once our new services are up and running we’ll see how to maintain them, scale them, and recover them in case of failures. The goal will be to design a fully automated continuous deployment (CDP) pipeline with Docker containers.
During the workshop we’ll explore tools like Docker Engine with built in orchestration via swarm mode,, Docker Compose, Jenkins, HAProxy, and a few others.
Modernizing Monolothic ASP.NET Applications with Docker
Learn how to use Docker to run traditional ASP.NET applications in Windows containers without an application re-write. We’ll use Docker tools to containerize a monolithic ASP.NET app, then see how the platform helps us iterate quickly – pulling high-value features out of the app and running them in separate containers. This workshop gives you a roadmap for modernizing your own ASP.NET workloads.

The post DockerCon workshops: Which one will you be attending? appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

containerd livestream recap

In case you missed it last month, we announced that Docker is extracting a key component of its platform, a part of the engine plumbing called containerd – a core container runtime – and committed to donating it to an open foundation.
You can find up-to-date roadmap, architecture and API definitions in the Github repository, and more details about the project in our engineering team’s blog post.

You can also watch the following video recording of the containerd online meetup, for a summary and Q&A with Arnaud Porterie, Michael Crosby, Stephen Day, Patrick Chanezon and Solomon Hykes from the Docker team:

Here is the list of top questions we got following this announcement:
Q. Are you planning to run Docker without runC?
A. Although runC is the default runtime, as of Docker 1.12 it can be replaced by any other OCI-compliant implementation. Docker will be compliant with the OCI Runtime Specification.
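For example, Docker 1.12+ lets you register an additional OCI runtime with the daemon and select it per container; the runtime name and binary path below are placeholders:

    # Register an additional OCI runtime with the daemon...
    dockerd --add-runtime my-runtime=/usr/local/bin/my-oci-runtime

    # ...then select it when running a container.
    docker run --rm --runtime=my-runtime alpine echo "hello"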
Q. What major changes, if any, are on the roadmap for SwarmKit to run on containerd?
A. SwarmKit is using Docker Engine to orchestrate tasks, and Docker Engine is already using containerd for container execution. So technically, you are already using containerd when using SwarmKit. There is no plan currently to have SwarmKit directly orchestrate containerd containers though.
Q. Mind sharing why you went with GRPC for the API?
A. containerd is a component designed to be embedded in a higher level system, and serve a host local API over a socket. GRPC enables us to focus on designing RPC calls and data structures instead of having to deal with JSON serialization and HTTP error codes. This improves iteration speed when designing the API and data structures. For higher level systems that embed containerd, such as Docker or Kubernetes, a JSON/HTTP API makes more sense, allowing easier integration. The Docker API will not change, and will continue to be based on JSON/HTTP.
Q. How do you expect to see others leverage containerd outside of Docker?
A. Cloud managed container services such as Amazon ECS, Microsoft ACS, Google Container Engine, or orchestration tools such as Kubernetes or Mesos can leverage containerd as their core container runtime. containerd has been designed to be embedded for that purpose.
Q. How did you decide which features should get into containerd? How did you come up with its scope?
A. We’re trying to capture in containerd the features that any container-centric platform would need, and for which there’s reasonable consensus on the way it should be implemented. Aspects which are either not widely agreed on or that can trivially be built one layer up were left out.
Q. How will containerd integrate with CNI and CNM?
A. Phase 3 of the containerd roadmap involves porting the network drivers from libnetwork and finding a good middle ground between the CNM abstraction of libnetwork and the CNI spec.
Additional Resources:

Contribute to containerd
Join the containerd slack channel
Read the engineering team’s blog post.

The post containerd livestream recap appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

DockerCon 2017: Call For Papers FAQ

It’s a new year, and we are looking for new stories of how you are using technology to do big things. Submit your cool hack, use case or deep dive sessions before the 2017 CFP closes on January 14th.

To help with your submissions, we’ve answered the most frequent questions below and put together a list of tips to help get your proposal selected.
Q. How do I submit a proposal?
A. Submit your proposal here.
Q. What kind of talks are you looking for?
A. This year, we are looking for cool hacks, user stories and deep dive submissions:

Cool Hacks: Show us your cool hack and wow us with the interesting ways you can push the boundaries of the Docker stack. You do not have to have your hack ready by the submission deadline, just clearly explain your hack, what makes it cool and the technologies you will use.

Using Docker: Tell us first-hand about your Docker usage, challenges and what you learned along the way and inspire us on how to use Docker to accomplish real tasks.

Deep Dives: Propose code and demo heavy deep-dive sessions on what you have been able to transform with your use of the Docker stack. Entice your audience by going deeply technical and teach them how to do something they haven’t done.

Above all, DockerCon is a user conference and product and vendor pitches are not appropriate.
Q. What will I need to include in my submission?
A. Speaking proposals will ask for:

Title: the catchier and more descriptive, the better. But don’t be too cute.
Abstract describing the presentation. This is what gets shown in the agenda and how the audience decides if they want to attend your session.
Key Takeaways that communicate your session’s main idea and conclusion. This is your gift to the audience, what will they learn from your session and be able to apply when they get back to work the following week.
Speaker(s): expertise and summary biography
Suggested tags
Past Speaking examples
Recommendations of appropriate audience.

Q. How can I increase the odds of my proposal being selected?
A. Check out the following resources:

Read our tips to help get your proposal selected
See the list of sessions chosen for the 2016 DockerCon and DockerCon EU 2015 programs and read their descriptions
Watch videos from previous DockerCons
See speaker slides from previous DockerCons.

Q. How are submissions selected?
A. After a proposal is submitted, it will be reviewed initially for content and format. Once past the initial review, a committee will read the proposals and vote on best submissions. There are a limited number of speaking slots and we work to achieve a balance of presentations that will interest the Docker community.
Q. How will Speakers be compensated?
A. One speaker for every session will be given a full conference pass. Any additional speakers will be given a pass at the Early Bird rate.
Q. Will there be a Speaker room at the conference?
A. Yes, we will provide a Speaker Ready room for speakers to prepare for presentations, relax and mingle. Speakers should check in with the DockerCon 2017 speaker manager on the day of your talk in the Speaker Room and make sure you are all set for your talk.
Q. What are the important dates to remember?
A.

Call for Proposals closes – January 14, 2017 at 11:59 PST
All proposers notified – late February
Program announced – late February
Submit your proposal – today!

The post DockerCon 2017: Call For Papers FAQ appeared first on Docker Blog.
Source: https://blog.docker.com/feed/