OK, I give up. Is Docker now Moby? And what is LinuxKit?

This week at DockerCon, Docker made several announcements, but one in particular caused massive confusion as users thought that "Docker" was becoming "Moby." Well... OK, but which Docker? The Register probably put it best when it said, "Docker (the company) decided to differentiate Docker (the commercial software products Docker CE and Docker EE) from Docker (the open source project)." Tack on a second project about building core operating systems, and there's a lot to unpack.
Let's start with Moby.
What is Moby?
Docker, being the foundation of many people's understanding of containers, unsurprisingly isn't a single monolithic application. Instead, it's made up of components such as runc, containerd, InfraKit, and so on. The community works on those components (along with Docker, of course), and when it's time for a release, Docker packages them all up and out they go. With all of those pieces, as you might imagine, it's not a simple task.
And what happens if you want your own custom version of Docker? After all, Docker is built on the philosophy of "batteries included but swappable". How easy is it to swap something out?
In his blog post introducing the Moby Project, Solomon Hykes explained that the idea is to simplify the process of combining components into something usable. "We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry where assemblies of components are reused to build completely different cars."
Hykes explained that from now on, Docker releases would be built using Moby and its components.  At the moment there are 80+ components that can be combined into assemblies.  He further explained that:
"Moby is comprised of:

A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects."

Who needs to know about Moby?
The first group that needs to know about Moby is Docker developers, as in the people building the actual Docker software, not people building applications using Docker containers, or even people building Docker containers. (Here's hoping that eventually this nomenclature gets cleared up.) Docker developers should just continue on as usual, and Docker pull requests will be routed to the Moby project.
So everyone else is off the hook, right?
Well, um, no.
If all you do is pull together containers from pre-existing components and software you write yourself, then you're good; you don't need to worry about Moby. Unless, that is, you aren't happy with your available Linux distributions.
Enter LinuxKit.
What is LinuxKit?
While many think that Docker invented the container, in actuality Linux containers had been around for some time, and Docker containers are based on them. Which is really convenient, if you're using Linux. If, on the other hand, you are using a system that doesn't include Linux, such as a Mac, a Windows PC, or that Raspberry Pi you want to turn into an automatic goat feeder, you've got a problem.
Docker requires Linux containers, which is a problem if you have no Linux.
Enter LinuxKit.
The idea behind LinuxKit is that you start with a minimal Linux kernel (the base distro is only 35MB) and add literally only what you need. Once you have that, you can build your application on it, and run it wherever you need to. Stephen Foskett tweeted a picture of an example from the announcement:

More about LinuxKit DockerCon pic.twitter.com/TfRJ47yBdB
— Stephen Foskett (@SFoskett) April 18, 2017

The end result is that you can build containers that run on desktops, mainframes, bare metal, IoT, and VMs.
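To give a feel for what "add literally only what you need" looks like, a LinuxKit image is described by a short YAML file listing the kernel, the init images, and the services to bake into the image. The sketch below is illustrative only; the image tags and the choice of services are placeholder assumptions, not values from the announcement, so check the linuxkit repository for current ones.
kernel:
  image: linuxkit/kernel:4.9.x        # placeholder tag
  cmdline: "console=tty0"
init:
  - linuxkit/init:v0.1                # placeholder tags for the init images
  - linuxkit/runc:v0.1
  - linuxkit/containerd:v0.1
onboot:
  - name: dhcpcd                      # one-shot container run at boot
    image: linuxkit/dhcpcd:v0.1
services:
  - name: nginx                       # long-running service baked into the image
    image: nginx:alpine
At the time of the announcement the assembly appears to have been built with the moby tool (something along the lines of moby build linuxkit.yml); the tooling has since moved under the linuxkit CLI, so the exact command depends on which version you pick up.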
The project will be managed by the Linux Foundation, which is only fitting.
So what about Alpine, the minimal Linux that's at the heart of Docker? Docker's security director, Nathan McCauley said that "LinuxKit's roots are in Alpine." The company will continue to use it for Docker.

Today we launch LinuxKit - a Linux subsystem focussed on security. pic.twitter.com/Q0YJsX67ZT
— Nathan McCauley (@nathanmccauley) April 18, 2017

So what does this have to do with Moby?
What LinuxKit has to do with Moby
If you're salivating at the idea of building your own Linux distribution, take a deep breath. LinuxKit is an assembly within Moby.
So if you want to use LinuxKit, you need to download and install Moby, then use it to build your LinuxKit pieces.
So there you have it. You now have the ability to build your own Linux system, and your own containerization system. But it's definitely not for the faint of heart.
Resources

Wait – we can explain, says Moby, er, Docker amid rebrand meltdown • The Register
Moby, LinuxKit Kick Off New Docker Collaboration Phase | Software | LinuxInsider
Why Docker created the Moby Project | CIO
GitHub - linuxkit/linuxkit: A toolkit for building secure, portable and lean operating systems for containers
Docker LinuxKit: Secure Linux containers for Windows, macOS, and clouds | ZDNet
Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems - Docker Blog
Stephen Foskett on Twitter: "More about LinuxKit DockerCon https://t.co/TfRJ47yBdB"
Introducing Moby Project: a new open-source project to advance the software containerization movement - Docker Blog
DockerCon 2017: Moby's Cool Hack sessions - Docker Blog

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack

Mirantis Cloud Platform 1.0 is a distribution of OpenStack and Kubernetes that can orchestrate VMs, Containers and Bare Metal

SUNNYVALE, CA – April 19, 2017 – Mirantis, the managed open cloud company, today announced availability of a commercially-supported distribution of OpenStack and Kubernetes, delivered in a single, integrated package, and with a unique build-operate-transfer delivery model.

“Today, infrastructure consumption patterns are defined by the public cloud, where everything is API driven, managed and continuously delivered. Mirantis OpenStack, which featured Fuel as an installer, was the easiest OpenStack distribution to deploy, but every new version required a forklift upgrade,” said Boris Renski, Mirantis co-founder and CMO. “Mirantis Cloud Platform departs from the traditional installer-centric architecture and towards an operations-centric architecture, continuously delivered by either Mirantis or the customers’ DevOps team with zero downtime. Updates no longer happen once every 6-12 months, but are introduced in minor increments on a weekly basis. In the next five to ten years, all vendors in the space will either find a way to adapt to this pattern or they will disappear.”

Along with launching Mirantis Cloud Platform (MCP) 1.0, Mirantis is also first to introduce a unique delivery model for the platform. Unlike traditional vendors that sell software subscriptions, Mirantis will onboard customers to MCP through a build-operate-transfer delivery model. The company will operate an open cloud platform for customers for a period of at least twelve months with up to four-nines SLA prior to offboarding the operational burden to the customer's team, if desired. The delivery model ensures that not just the software, but also the customer's team and processes are aligned with DevOps best practices.

Unlike any other solution in the industry, customers onboarded to MCP have an option to completely transfer the platform under their own management. Everything in MCP is based on popular open standards with no lock-in, making it possible for customers to break ties with Mirantis and run the platform independently should they choose to do so.

“We are happy to see a growing number of vendors embrace Kubernetes and launch a commercially supported offering based on the technology,” said Allan Naim from the Kubernetes and Container Engine Product Team.

“As the industry embraces composable, open infrastructure, the 'LAMP stack of cloud' is emerging, made up of OpenStack, Kubernetes, and other key open technologies,” said Mark Collier, chief operating officer, OpenStack Foundation. “Mirantis Cloud Platform presents a new vision for the OpenStack distribution, one that embraces diverse compute, storage and networking technologies continuously rather than via major upgrades on six-month cycles.”

Specifically, Mirantis Cloud Platform 1.0 is:

Open Cloud Software - providing a single platform to orchestrate VMs, containers and bare metal compute resources by:

Expanding Mirantis OpenStack to include Kubernetes for container orchestration.
Complementing the virtual compute stacks with best-in-class open source software defined networking (SDN), specifically Mirantis OpenContrail for VMs and bare metal, and Calico for containers.
Featuring Ceph, the most popular open source software defined storage (SDS), for both Kubernetes and OpenStack.

DriveTrain - Mirantis DriveTrain sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/CD pipeline. DriveTrain enables:

Increased Day 1 flexibility to customize the reference architecture and configurations during initial software installation.
Greater ability to perform Day 2 operations such as post-deployment configuration, functionality and architecture changes.
Seamless version updates through an automated pipeline to a virtualized control plane to minimize downtime.

StackLight - enables strict compliance with availability SLAs by providing continuous monitoring of the open cloud software stacks through a unified set of software services and dashboards.

StackLight avoids lock-in by including best-in-breed open source software for log management, metrics and alerts.
It includes a comprehensive DevOps portal that displays information such as StackLight visualization and DriveTrain configuration settings.
The entire Mirantis StackLight toolchain is purpose-built for MCP to enable up to 99.99% uptime service level agreements with Mirantis Managed OpenStack.

With the release of MCP, Mirantis is also announcing end-of-life for Mirantis OpenStack (MOS) and Fuel by September 2019. Mirantis will be working with all customers currently using MOS on a tailored transition plan from MOS to MCP.

To learn more about MCP, watch an overview video and sign up for the introductory webinar at www.mirantis.com/mcp.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com

The post Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

How do you build 12-factor apps using Kubernetes?

It's said that there are 12 factors that define a cloud-native application. It's also said that Kubernetes is designed for cloud native computing. So how do you create a 12-factor application using Kubernetes? Let's take a look at exactly what twelve factor apps are and how they relate to Kubernetes.
What is a 12 factor application?
The Twelve Factor App is a manifesto on architectures for Software as a Service created by Heroku. The idea is that in order to be really suited to SaaS and avoid problems with software erosion, where over time an application that's not updated gets to be out of sync with the latest operating systems, security patches, and so on, an app should follow these 12 principles:

1. Codebase: One codebase tracked in revision control, many deploys
2. Dependencies: Explicitly declare and isolate dependencies
3. Config: Store config in the environment
4. Backing services: Treat backing services as attached resources
5. Build, release, run: Strictly separate build and run stages
6. Processes: Execute the app as one or more stateless processes
7. Port binding: Export services via port binding
8. Concurrency: Scale out via the process model
9. Disposability: Maximize robustness with fast startup and graceful shutdown
10. Dev/prod parity: Keep development, staging, and production as similar as possible
11. Logs: Treat logs as event streams
12. Admin processes: Run admin/management tasks as one-off processes

Let's look at what all of this means in terms of Kubernetes applications.
Principle I. Codebase
Principle 1 of a 12 Factor App is "One codebase tracked in revision control, many deploys".
For Kubernetes applications, this principle is actually embedded in the nature of container orchestration itself. Typically, you create your code using a source control repository such as a git repo, then store specific versions of your images in the Docker Hub. When you define the containers to be orchestrated as part of a Kubernetes Pod, Deployment, or DaemonSet, you also specify a particular version of the image, as in:

spec:
     containers:
     - name: acct-app
       image: acctapp:v3

In this way, you might have multiple versions of your application running in different deployments.
Applications can also behave differently depending on the configuration information with which they run.
Principle II. Dependencies
Principle 2 of a 12 Factor App is "Explicitly declare and isolate dependencies".
Making sure that an application's dependencies are satisfied is something that is practically assumed. For a 12 factor app, that includes not just making sure that the application-specific libraries are available, but also not counting on, say, shelling out to the operating system and assuming system libraries such as curl will be there. A 12 factor app must be self-contained.
That includes making sure that the application is isolated enough that it's not affected by conflicting libraries that might be installed on the host machine.
Fortunately, even if an application does have specific or unusual system requirements, both of these needs are handily satisfied by containers; the container includes all of the dependencies on which the application relies, and also provides a reasonably isolated environment in which the container runs. (Contrary to popular belief, container environments are not completely isolated, but for most situations, they are Good Enough.)
For applications that are modularized and depend on other components, such as an HTTP service and a log fetcher, Kubernetes provides a way to combine all of these pieces into a single Pod, for an environment that encapsulates those pieces appropriately.
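As a concrete sketch of that idea, the Pod below bundles a web server with a log-fetcher sidecar that share a volume; the images and names are illustrative placeholders rather than parts of any particular application.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-fetcher
spec:
  containers:
    - name: webserver
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-fetcher           # sidecar that ships the logs elsewhere
      image: fluentd              # placeholder; any log shipper image works
      volumeMounts:
        - name: logs
          mountPath: /logs
  volumes:
    - name: logs
      emptyDir: {}
Both containers are scheduled together and share the logs volume (and the Pod's network namespace), so the main application stays self-contained while its helper travels with it.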
Principle III. Config
Principle 3 of a 12 Factor App is "Store config in the environment".
The idea behind this principle is that an application should be completely independent from its configuration. In other words, you should be able to move it to another environment without having to touch the source code.
Some developers achieve this goal by creating configuration files of some sort, specifying details such as directories, hostnames, and database credentials. This is an improvement, but it does carry the risk that someone will check a config file into the source control repository.
Instead, 12 factor apps store their configurations as environment variables; these are, as the manifesto says, "unlikely to be checked into the repository by accident", and they're operating system independent.
Kubernetes enables you to specify environment variables in manifests via the Downward API, but as these manifests themselves do get checked into source control, that's not a complete solution.
Instead, you can specify that environment variables should be populated by the contents of Kubernetes ConfigMaps or Secrets, which can be kept separate from the application.  For example, you might define a Pod as:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
        - name: CONFIG_VERSION
          valueFrom:
            configMapKeyRef:
              name: redis-app-config
              key: version.id
As you can see, this Pod receives three environment variables, SECRET_USERNAME, SECRET_PASSWORD, and CONFIG_VERSION, the first two from referenced Kubernetes Secrets, and the third from a Kubernetes ConfigMap. This enables you to keep them out of configuration files.
Of course, there's still a risk of someone mis-handling the files used to create these objects, but it's easier to keep them together and institute secure handling policies than it is to weed out dozens of config files scattered around a deployment.
What's more, there are those in the community who point out that even environment variables are not necessarily safe for their own reasons. For example, if an app crashes, it may save all of the environment variables to a log or even transmit them to another service. Diogo Mónica points to a tool called Keywhiz that you can use with Kubernetes to create secure secret storage.
Principle IV. Backing services
Principle 4 of the 12 Factor App is "Treat backing services as attached resources".
In a 12 Factor app, any services that are not part of the core application, such as databases, external storage, or message queues, should be accessed as a service (via an HTTP or similar request) and specified in the configuration, so that the source of the service can be changed without affecting the core code of the application.
For example, if your application uses a message queuing system, you should be able to change from RabbitMQ to ZeroMQ (or ActiveMQ or even something else) without having to change anything but configuration information.
This requirement has two implications for a Kubernetes-based application.
First, it means that you must think about how your applications take in (and give out) information. For example, if you have a backing database, you wouldn't want to have a local MySQL instance, even if you're replicating it to other instances. Instead, you would want to have a separate container that handles database operations, and make those operations callable via an API. This way, if you needed to change to, say, PostgreSQL or a remotely hosted MySQL server, you could create a new container image, update the Pod definition, and restart the Pod (or more likely the Deployment or StatefulSet managing it).
Similarly, if you're storing credentials or address information in environment variables backed by a ConfigMap, you can change that information and replace the Pod.
Note that both of these examples assume that though you're not making any changes to the source code (or even the container image for the main application) you will need to replace the Pod; the ability to do this is actually another principle of a 12 Factor App.
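One lightweight way to treat a backing service as an attached resource in Kubernetes is an ExternalName Service, which gives the application a stable in-cluster name while the actual provider can be swapped by editing a single object; the names and hostname below are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: orders-db                     # the name the application code always uses
spec:
  type: ExternalName
  externalName: mysql.example.com     # placeholder; point this at the current provider
To move from the hosted MySQL server to another endpoint, you change only the externalName field (or swap this for a normal selector-based Service fronting an in-cluster database); the application keeps connecting to orders-db and never needs a code change.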
Principle V. Build, release, run
Principle 5 of the 12 Factor App is "Strictly separate build and run stages".
These days it's hard to imagine a situation where this is not true, but a twelve-factor app must have a separate build stage. In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run the release.
Releases should be identifiable. You should be able to say, "This deployment is running Release 1.14 of this application" or something similar, the same way we say we're running "the OpenStack Ocata release" or "Kubernetes 1.6". They should also be immutable; any changes should lead to a new release. If this sounds daunting, remember that when we say "application" we're no longer talking about large, monolithic releases. Instead, we're talking about very specific microservices, each of which has its own release, and which can bump releases without causing errors in consuming services.
All of this is so that when the app is running, that "run" process can be completely automated. Twelve factor apps need to be capable of running in an automated fashion because they need to be capable of restarting should there be a problem.
Translating this to the Kubernetes realm, we've already said that the application needs to be stored in source control, then built with all of its dependencies. That's your build process. We talked about separating out the configuration information, so that's what needs to be combined with the build to make a release. And the ability to automatically run the application, or multiple copies of the application, is precisely what Kubernetes constructs like Deployments, ReplicaSets, and DaemonSets do.
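In practice the three stages map onto a versioned image plus a rollout; the registry, image name, and tag below are placeholders for whatever your build pipeline produces.
# Build stage: produce an immutable, versioned image
docker build -t registry.example.com/acct-app:1.14 .
docker push registry.example.com/acct-app:1.14

# Release/run stage: point the Deployment at the new release and let Kubernetes roll it out
kubectl set image deployment/acct-app acct-app=registry.example.com/acct-app:1.14
kubectl rollout status deployment/acct-app
Because every release is a distinct tag, rolling back is just pointing the Deployment at the previous tag (or using kubectl rollout undo).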
Principle VI. Processes
Principle 6 of the 12 Factor App is "Execute the app as one or more stateless processes".
Stateless processes are a core idea behind cloud native applications. Every twelve-factor application needs to run in individual, share-nothing processes. That means that any time you need to persist information, it needs to be stored in a backing service such as a database.
If you're new to cloud application programming, this might sound deceptively simple; many developers are used to "sticky" sessions, storing information in the session with the confidence that the next request will come to the same server. In a cloud application, however, you must never make that assumption.
Instead, if you're running a Kubernetes-based application that hosts multiple copies of a particular pod, you must assume that subsequent requests can go anywhere. To solve this problem, you will want to use some sort of backing volume or database for persistence.
One caveat to this principle is that Kubernetes StatefulSets can enable you to create Pods with stable network identities, so that you can, theoretically, direct requests to a particular pod. Technically speaking, if the process doesn't actually store state, and the pod can be deleted and recreated and still function properly, it satisfies this requirement, but that's probably pushing it a bit.
Principle VII. Port binding
Principle 7 of the 12 Factor App is "Export services via port binding".
In an environment where we're assuming that different functionalities are handled by different processes, it's easy to make the connection that these functionalities should be available via a protocol such as HTTP, so it's common for applications to be run behind web servers such as Apache or Tomcat. Twelve-factor apps, however, should not depend on an additional application in that way; remember, every function should be in its own process, isolated from everything else. Instead, the 12 Factor App manifesto recommends adding a web server library or something similar to the app itself, so that the app can await requests on a defined port, whether it's using HTTP or another protocol.
In a Kubernetes-based application, this is done partly through the architecture of the application itself, and partly by making sure that the application has all of its dependencies as part of the creation of the base containers on which the application is built.
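In Kubernetes terms, the app's container declares the port it listens on, and a Service binds a stable port to it for everything else in the cluster; the names and port numbers here are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: acct-app
spec:
  selector:
    app: acct-app              # matches the pods that embed their own web server
  ports:
    - port: 80                 # the port other services call
      targetPort: 8080         # the port the app itself binds inside the container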
Principle VIII. Concurrency
Principle 8 of the 12 Factor App is to "Scale out via the process model".
When you're writing a twelve-factor app, make sure that you're designing it to be scaled out, rather than scaled up. That means that in order to add more capacity, you should be able to add more instances rather than more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
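With a Deployment in place, scaling out is a one-liner, and you can even let Kubernetes add and remove pods for you; the deployment name and thresholds below are placeholders.
# Add capacity by adding processes (pods), not by resizing the machine
kubectl scale deployment acct-app --replicas=10

# Or let the Horizontal Pod Autoscaler decide, within limits you set
kubectl autoscale deployment acct-app --min=2 --max=10 --cpu-percent=80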
Principle IX. Disposability
Principle 9 of the 12 Factor App is to "Maximize robustness with fast startup and graceful shutdown".
It seems like this principle was tailor-made for containers and Kubernetes-based applications. The idea that processes should be disposable means that at any time, an application can die and the user won't be affected, either because there are others to take its place, because it'll start right up again, or both.
Containers are built on this principle, of course, and Kubernetes structures that manage multiple instances and maintain a certain level of availability even in the face of problems, such as ReplicaSets, complete the picture.
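The pod spec gives you the knobs for this: a termination grace period and a preStop hook let a container finish in-flight work before it is killed. The image is a placeholder, and the sleep stands in for whatever drain logic your app actually needs.
spec:
  terminationGracePeriodSeconds: 30   # how long Kubernetes waits after sending SIGTERM
  containers:
    - name: acct-app
      image: registry.example.com/acct-app:1.14     # placeholder image
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]   # placeholder: stop accepting traffic, drain requests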
Principle X. Dev/prod parity
Principle 10 of the 12 Factor App is "Keep development, staging, and production as similar as possible".
This is another principle that seems like it should be obvious, but is deeper than most people think. On the surface level, it does mean that you should have identical development, staging, and production environments, inasmuch as that is possible. One way to accomplish this is through the use of Kubernetes namespaces, enabling you to (theoretically) run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm-dind-cluster to create near-clones of production systems.
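For example, the same manifests can be applied to parallel namespaces so that staging and production differ only in their configuration objects; the namespace and file names here are illustrative.
kubectl create namespace staging
kubectl create namespace production

# Deploy the identical manifests to each environment, varying only the config
kubectl apply -f acct-app.yaml --namespace=staging
kubectl apply -f acct-app.yaml --namespace=production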
At a deeper level, though, as the Twelve-Factor App manifesto puts it, it's about three different types of "gaps":

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.

The personnel gap: Developers write code, ops engineers deploy it.

The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

The goal here is to create a Continuous Integration/Continuous Deployment situation in which changes can go into production virtually immediately (after testing, of course!), deployed by the developers who wrote it so they can actually see it in production, using the same tools on which the code was actually written in order to minimize the possibility of compatibility errors between environments.
Some of these factors are outside of the realm of Kubernetes, of course; the personnel gap is a cultural issue, for example. The time and tools gaps, however, can be helped in two ways.
For the time gap, Kubernetes-based applications are, of course, based on containers, which themselves are based on images that are stored in version-control systems, so they lend themselves to CI/CD. They can also be updated via rolling updates that can be rolled back in case of problems, so they're well-suited to this kind of environment.
As far as the tools gap is concerned, the architecture of Kubernetes-based applications makes it easier to manage, both by making local dependencies simple to include in the various images, and by modularizing the application in such a way that external backing services can be standardized.
Principle XI. Logs
Principle 11 of the 12 Factor App is to "Treat logs as event streams".
While most traditional applications store log information in a file, the Twelve-Factor app directs it, instead, to stdout as a stream of events; it's the execution environment that's responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and saving the logs to Hadoop or a service such as Splunk.
In Kubernetes, you have at least two choices for automatic logging capture: Stackdriver Logging if you're using Google Cloud, and Elasticsearch if you're not. You can find more information on setting Kubernetes logging destinations here.
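Whichever backend you pick, the application itself only needs to write to stdout/stderr; the container runtime captures the stream, and you can tail it directly while the cluster-level router ships it off. The pod name below is obviously a made-up example.
# The app just writes events to stdout/stderr; no log files to manage
kubectl logs acct-app-3848298239-xk2d7        # dump the captured stream for one pod
kubectl logs -f acct-app-3848298239-xk2d7     # follow it in real time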
Principle XII. Admin processes
Principle 12 of the 12 Factor App is "Run admin/management tasks as one-off processes".
This principle involves separating admin tasks such as migrating a database or inspecting records from the rest of the application. Even though they're separate, however, they must still be run in the same environment and against the same base code and configuration as the application, and their code must be shipped alongside the application to prevent drift.
You can implement this a number of different ways in Kubernetes-based applications, depending on the size and scale of the application itself. For example, for small tasks, you might use kubectl exec to operate on a specific container, or you can use a Kubernetes Job to run a self-contained, one-off task. For more complicated tasks that involve orchestration of changes, however, you can also use Kubernetes Helm charts.
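A one-off admin task as a Job might look like the sketch below; the image and command are placeholders, but the point is that the Job reuses the same release image and configuration as the application it administers.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      restartPolicy: Never                             # run once to completion, don't restart forever
      containers:
        - name: migrate
          image: registry.example.com/acct-app:1.14    # same release image as the app itself
          command: ["python", "manage.py", "migrate"]  # placeholder admin command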
How many of these factors did you hit?
Unless you're still building desktop applications, chances are you can feel good about hitting at least a few of these essential principles of twelve-factor apps. But chances are you also found at least one or two you can probably work a little harder on.
So we want to know: which of these factors are the biggest struggle for you? Where do you think you need to work harder, and what would make it easier for you?  Let us know in the comments.
Thanks to Jedrzej Nowak, Piotr Shamruk, Ivan Shvedunov, Tomasz Napierala and Lukasz Oles for reviewing this article!
Check out more Kubernetes resources.
The post How do you build 12-factor apps using Kubernetes? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Mirantis Cloud Platform: Stop wandering in the desert

There's no denying that the last year has seen a great deal of turmoil in the OpenStack world, and here at Mirantis we're not immune to it.
In fact, some would say that we're part of that turmoil. Well, we are in the middle of a sea change in how we handle cloud deployments, moving from a model in which we focused on deploying OpenStack to one in which we focus on achieving outcomes for our customers.
And then there's the fact that we are changing the architecture of our technology.
It's true. Over the past few months, we have been moving from Mirantis OpenStack to Mirantis Cloud Platform (MCP), but there's no need to panic. While it may seem a little scary, we're not moving away from OpenStack – rather, we are growing up and tackling the bigger picture, not just a part of it. In early installations with marquee customers, we've seen MCP provide a tremendous advantage in deployment and scale-out time. In just a few days, we will publicly launch MCP, and you will have our first visible signpost leading you out of the desert. We still have lots of work to do, but we're convinced this is the right path for our industry to take, and we're making great progress in that direction.
Where we started
To understand what's going on here, it helps to have a firm grasp of where we started.
When I started here at Mirantis four years ago, we had one product, Mirantis Fuel, and it had one purpose: deploy OpenStack. Back then that was no easy feat. Even with a tool like Fuel, it could be a herculean task taking many days and lots of calls to people who knew more than I did.
Over the intervening years, we came to realize that we needed to take a bigger hand in OpenStack itself, and we produced Mirantis OpenStack, a set of hardened OpenStack packages.  We also came to realize that deployment was only the beginning of the process; customers needed Lifecycle Management.
The Big Tent
And so Fuel grew. And grew. And grew. Finally, Fuel became so big that we felt we needed to involve the community even more than we already had, and we submitted Fuel to the Big Tent.
Here Fuel has thrived, and does an awesome job of deploying OpenStack, and a decent job at lifecycle management.
But it's not enough.
Basically, when you come right down to it, OpenStack is nothing more than a big, complicated, distributed application. Sure, it's a big, complicated distributed application that deploys a cloud platform, but it's still a big complicated distributed application.
And let's face it: deploying and managing big, complicated, distributed applications is a solved problem.
The Mirantis Cloud Platform architecture
So let's look at what this means in practice. The most important thing to understand is that where Mirantis OpenStack was focused on deployment, MCP is focused on the operations tasks you need to worry about after that deployment. MCP means:

A single cloud that runs VMs, containers, and bare metal with rich Software Defined Networking (SDN) and Software Defined Storage (SDS) functionality
Flexible deployment and simplified operations and lifecycle management through a new DevOps tool called DriveTrain
Operations Support Services in the form of enhanced StackLight software, which also provides continuous monitoring to ensure compliance to strict availability SLAs

OK, so that's a little less confusing than the diagram, but there's still a lot of "sales" speak in there.
Let's get down to the nitty-gritty of what MCP means.
What Mirantis Cloud Platform really means
Let's look at each of those things individually and see why it matters.
A multi-platform cloud
There was a time when you would have separate environments for each type of computing you wanted to do. High performance workloads ran on bare metal, virtual machines ran on OpenStack, containers (if you were using them at all) ran on their own dedicated clusters.
In the last few years, bare metal was brought into OpenStack, so that you could manage your physical machines the same way you managed your virtual ones.
Now Mirantis Cloud Platform brings in the last remaining piece. Your Kubernetes cluster is part of your cloud, enabling you to easily manage your container-based applications in the same environment and with the same tools as your traditional cloud resources.
All of this is made possible by the inclusion of powerful SDN and SDS components. Software Defined Networking for OpenStack is handled by OpenContrail, providing the benefits of commercial-grade networking without the lock-in, with Calico stepping in for the container environment. Storage takes the form of powerful open source Ceph clusters, which are used by both OpenStack and container applications.
These components enable MCP to provide an environment where all of these pieces work together seamlessly, so your cloud can be so much more than just OpenStack.
Knowing what's happening under the covers
With all of these pieces, you need to know what's happening, and what might happen next. To that end, Mirantis Cloud Platform includes an updated version of StackLight, which gives you a comprehensive view of how each component of your cloud is performing; if an application on a particular VM acts up, you can isolate the problem before it brings down the entire node.
What's more, the StackLight Operations Support System analyzes the voluminous information it gets from your OpenStack cloud and can often let you know there's trouble before it causes problems.
All of this enables you to ensure uptime for your users, and compliance with SLAs.
Finally solving the operations dilemma
Perhaps the biggest change, however, is in the form of DriveTrain. DriveTrain is a combination of various open source projects, such as Gerrit and Jenkins for CI/CD and Salt for configuration management, enabling a powerful, flexible way for you to both deploy and manage your cloud.
Because let's face it: the job of running a private cloud doesn't end when you've spun up the cloud; it's just begun.
Upgrading OpenStack has always been a nightmare, but DriveTrain is designed so that your cloud infrastructure software can always be up to date. Here's how it works:
Mirantis continually monitors changes to OpenStack and other relevant projects, providing extensive testing and making sure that no errors get introduced, in a process called "hardening". Once we decide these changes are ready for general use, we release them into the DriveTrain CI/CD infrastructure.
Once changes hit the CI/CD infrastructure, you pull them down into a staging environment and decide when you're ready to push them to production.
In other words, no more holding your breath every six months, or worse, running cloud software that's a year old.
Where do you want to go?
OpenStack started with great promise, but in the last few years it's become clear that the private cloud world is more than just one solution; it's time for everyone, and that includes us here at Mirantis, to step up and embrace a future that includes virtual machines, bare metal and containers, but in a way that makes both technological and business sense.
Because at the end of the day, it's all about outcomes; if your cloud doesn't do what you want, or if you can't manage it, or if you can't keep it up to date, you need something better. We've been working hard at making MCP the solution that gets you where you want to be. Let us know how we can help get you there.
The post Mirantis Cloud Platform: Stop wandering in the desert appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Ten Ways a Cloud Management Platform Makes your Virtualization Life Easier

I spent the last decade working with virtualization platforms and the certifications and accreditations that go along with them. During this time, I thought I understood what it meant to run an efficient data center. After six months of working with Red Hat CloudForms, a Cloud Management Platform (CMP), I now wonder what I was thinking. I encountered every one of the problems below, and each is preventable with the right solution. Remember, we live in the 21st century; shouldn't the software that we use act like it?

We filled up a data store and all of the machines on it stopped working. 
It does not matter if it is a development environment or the mission critical database cluster; when storage fills up, everything stops! More often than not it is due to an excessive number of snapshots. The good news is CloudForms can quickly be set up with a policy to recognize and prevent this from happening. For example, we can check the storage utilization and, if it is over 90% full, take action, or better yet, act when it is within two weeks of being full based on usage trends. That way, if manual action is required, there is enough forewarning to do so. Another good practice is to set up a policy to disallow more than a few snapshots. We all love to take snapshots, but there is a real cost to them, and there is no need to let them get out of hand.
I just got thousands of emails telling me that my host is down. 
The only thing worse than no email alert is receiving thousands of them. In CloudForms it is not only easy to set up alerts, but also to define how often they should be acted upon. For example, check every hour, but only notify once per day.
Your virtual machines (VMs) cannot be migrated because the VM tools updater CD-ROM image was not un-mounted correctly. 
This is a serious issue for a number of reasons. First, it breaks Disaster Recovery (DR) operations and can cause virtual machines to be out of balance. It also disables the ability to put a node into maintenance mode, potentially causing additional outages and delays. Most solutions involve writing a shell script that runs as root and attempts to periodically unmount the virtual CD-ROM drives. These scripts usually work, but they are both scary from a security standpoint and indiscriminately dangerous; imagine physically ejecting the CD-ROM while the database administrator is in the middle of a database upgrade! With CloudForms we can set up a simple policy that unmounts drives once a day, but only after sanity checking that it is the correct CD-ROM image and that the system is in a state where it can be safely unmounted.
I have to manually ensure that all of my systems pass an incredibly detailed and painful compliance check (STIGS, PCI, FIPS, etc.) by next week! 
I have lost weeks of my life to this, and if you have not had the pleasure, count yourself lucky. When the “friendly” auditors show up with a stack of three-ring binders and a mandate to check everything, you might as well clear your calendar for the next few weeks. In addition, since these checks are usually a requirement for continuing operations, expect many of these meetings to involve layers of upper management you did not know existed, and this is definitely not the best time to become acquainted. The good news is CloudForms allows you to run automatic checks on VMs and hosts. If you are not already familiar with its OpenSCAP scanning capability, you owe yourself a look. Not only that, but if someone attempts to bring a VM online that is not compliant, CloudForms can shut it right back down. That is the type of peace of mind that allows for sleep-filled nights.
Someone logged into a production server as root using the virtual console and broke it. Now you have to physically hunt down and interrogate all the potential culprits, as well as fix the problem. 
Before you pull out your foam bat and roam the halls to apply some “sense” to the person who did this, it is good to know exactly who it was and what they did. With CloudForms you can see a timeline of each machine, who logged into what console, as well as perform a drift analysis to potentially see what changed.  With this knowledge you can now not only fix the problem, but also “educate” the responsible party.
The developers insist that all VMs must have 8 vCPUs and 64GB of RAM. 
The best way to fight flagrant waste of resources is with data. CloudForms provides the concept of “Right-Sizing”, where it will watch VMs operate and determine what resource allocation is the ideal size. With this information in hand, CloudForms can either automatically adjust the allocations, or spit out a report to be used to show what the excessive resources are costing.
Someone keeps creating 32-bit VMs with more than 4GB of RAM! 
As we know, there is no “good” way that a 32-bit VM can possibly use that much memory, so it is essentially just waste. A simple CloudForms policy to check for “OS Type = 32bit” and “RAM > 4GB” can make for a very interesting report. Or better yet, put a policy in place to automatically adjust the memory to 4GB and notify the system owner.
I have to buy hardware for next year, but my capacity-planning formula involves a spreadsheet and a dart board. 
Long term planning in IT is hard, especially with dynamic workloads in a multi-cloud environment. Once CloudForms is running, it automatically collects performance data and executes trend line analysis to assist with operational management. For example, in 23 days you will be out of storage on your production SAN. If that does not get the system administrator's attention, nothing will. It can also perform simulations to see what your environment would look like if you added resources, so you can see your trend lines and capacity if you added another 100 VMs of a particular type and size.
For some reason two hosts were swapping VMs back and forth, and I only found out when people complained about performance. 
As an administrator there is no worse way to find out that something is wrong than being told by a user. Large scale issues such as this can be hard to see from the logs since they consist of typical output. With CloudForms, a timeline overview of the entire environment highlights issues like this and the root cause can be tracked down.
I spend most of my day pushing buttons, spinning up VMs, manually grouping them into virtual folders and tracking them with spreadsheets. 
Before starting a new administrator role it is always good to ask for the “Point of Truth” system that keeps track of what systems are running, where they are, and who is responsible for them. More often than not the answer is, “A guy, who keeps track of the list, on his laptop”. This may be how it was always done, but now with tools such as CloudForms, you can automatically tag machines based on location, projects, users, or any other combination of characteristics, and as a bonus, you can provide usage and costing information back to the user. The guy with the list could only dream of providing that much helpful information.

Conclusion
There is never enough time in the day, and the pace of new technologies is accelerating. The only way to keep up is to automate processes. The tools that got you where you are today are not necessarily the same ones that will get you through the next generation of technologies. It will be critical to have tools that work across multiple infrastructure components and provide the visibility and automation required. This is why you need a cloud management platform and where the real power of CloudForms comes into play.
Source: CloudForms

We installed an OpenStack cluster with close to 1000 nodes on Kubernetes. Here’s what we found out.

Late last year, we did a number of tests that looked at deploying close to 1000 OpenStack nodes on a pre-installed Kubernetes cluster as a way of finding out what problems you might run into, and fixing them, if at all possible. In all we found several, and though in general, we were able to fix them, we thought it would still be good to go over the types of things you need to look for.
Overall we deployed an OpenStack cluster that contained more than 900 nodes using Fuel-CCP on a Kubernetes cluster that had been deployed using Kargo. The Kargo tool is part of the Kubernetes Incubator project and uses the Large Kubernetes Cluster reference architecture as a baseline.
As we worked, we documented issues we found, and contributed fixes to both the deployment tool and reference design document where appropriate. Here's what we found.
The setup
We started with just over 175 bare metal machines, allocating 3 of them for Kubernetes control plane services placement (API servers, etcd, the Kubernetes scheduler, and so on); each of the others hosted 5 virtual machines, with every VM used as a Kubernetes minion node.
Each bare metal node had the following specifications:

HP ProLiant DL380 Gen9
CPU - 2x Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
RAM - 264GB
Storage - 3.0TB on RAID on HP Smart Array P840 Controller, HDD - 12 x HP EH0600JDYTL
Network - 2x Intel Corporation Ethernet 10G 2P X710

The running OpenStack cluster (as far as Kubernetes is concerned) consists of:

OpenStack control plane services running on close to 150 pods over 6 nodes
Close to 4500 pods spread across all of the remaining nodes, at 5 pods per minion node

One major Prometheus problem
During the experiments we used the Prometheus monitoring tool to verify resource consumption and the load put on the core system, Kubernetes, and OpenStack services. One note of caution when using Prometheus: deleting old data from Prometheus storage will indeed improve the Prometheus API speed, but it will also delete any previous cluster information, making it unavailable for post-run investigation. So make sure to document any observed issue and its debugging thoroughly!
Thankfully, we had in fact done that documentation, but one thing we've decided to do going forward to prevent this problem is to configure Prometheus to back up data to one of the persistent time series databases it supports, such as InfluxDB, Cassandra, or OpenTSDB. By default, Prometheus is optimized to be used as a real time monitoring / alerting system, and there is an official recommendation from the Prometheus developers team to keep monitoring data retention to only about 15 days to keep the tool working in a quick and responsive manner. By setting up the backup, we can store old data for an extended amount of time for post-processing needs.
Problems we experienced in our testing
Huge load on kube-apiserver
Symptoms
Initially, we had a setup with all nodes (including the Kubernetes control plane nodes) running on a virtualized environment, but the load was such that the API servers couldn't function at all, so they were moved to bare metal. Still, both API servers running in the Kubernetes cluster were utilising up to 2000% of the available CPU (up to 45% of total node compute performance capacity), even after we migrated them to hardware nodes.
Root cause
All services that are not on Kubernetes masters (kubelet, kube-proxy on all minions) access kube-apiserver via a local NGINX proxy. Most of those requests are watch requests that lie mostly idle after they are initiated (most timeouts on them are defined to be about 5-10 minutes). NGINX was configured to cut idle connections after 3 seconds, which causes all clients to reconnect and (even worse) restart aborted SSL sessions. On the server side, this makes kube-apiserver consume up to 2000% of the CPU resources, making other requests very slow.
Solution
Set the proxy_timeout parameter to 10 minutes in the nginx.conf configuration file, which should be more than long enough to prevent cutting SSL connections before the requests time out by themselves. After this fix was applied, one api-server consumed only 100% of CPU (about 2% of total node compute performance capacity), while the second one consumed about 200% (about 4% of total node compute performance capacity) of CPU (with an average response time of 200-400 ms).
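For reference, the local proxy in this kind of setup is an NGINX stream (TCP) proxy in front of the API servers, so the relevant directive lives in the stream block; the addresses and ports below are illustrative, not the exact values from our deployment.
stream {
  upstream kube_apiservers {
    server 10.0.0.11:6443;     # placeholder API server addresses
    server 10.0.0.12:6443;
  }
  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiservers;
    proxy_timeout 10m;         # raised from the 3 seconds the proxy was originally configured with
    proxy_connect_timeout 1s;
  }
}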
Upstream issue status: fixed
Make the Kargo deployment tool set proxy_timeout to 10 minutes: issue fixed with pull request by Fuel CCP team.
KubeDNS cannot handle large cluster load with default settings
Symptoms
When deploying an OpenStack cluster at this scale, kubedns becomes unresponsive because of the huge load. This ends up with a slew of errors appearing in the logs of the dnsmasq container in the kubedns pod:
Maximum number of concurrent DNS queries reached.
Also, dnsmasq containers sometimes get restarted due to hitting the high memory limit.
Root cause
First of all, kubedns seems to fail often in this architecture, even without load. During the experiment we observed continuous kubedns container restarts even on an empty (but large enough) Kubernetes cluster. Restarts are caused by the liveness check failing, although nothing notable is observed in any logs.
Second, dnsmasq should have taken the load off kubedns, but it needs some tuning to behave as expected (or, frankly, at all) for large loads.
Solution
Fixing this problem requires several levels of steps:

Set higher limits for dnsmasq containers: they take on most of the load.
Add more replicas to the kubedns replication controller (we decided to stop at 6 replicas, as that solved the observed issue; for bigger clusters it might be necessary to increase this number even more).
Increase the number of parallel connections dnsmasq should handle (we used --dns-forward-max=1000, which is the recommended setting in the dnsmasq manual).
Increase the cache size in dnsmasq: it has a hard limit of 10,000 cache entries, which seems to be a reasonable amount.
Fix kubedns to handle this behaviour in a proper way. (A sketch of the first few changes follows this list.)
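As a rough sketch of items 1-3, the dnsmasq container in the kubedns manifest ends up looking something like the fragment below; the image tag, memory limit, and upstream port are illustrative assumptions rather than the exact values we shipped.
- name: dnsmasq
  image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4   # placeholder tag
  args:
    - --cache-size=10000          # dnsmasq's hard upper limit
    - --dns-forward-max=1000      # allow many more concurrent forwarded queries
    - --server=127.0.0.1#10053    # forward cluster-domain queries to kubedns (illustrative)
  resources:
    limits:
      memory: 170Mi               # raised limit; placeholder value
Scaling the kubedns replication controller itself out to 6 replicas can then be done either in its manifest or with kubectl scale.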

Upstream issue status: partially fixed
Items 1 and 2 are fixed by making them configurable in Kargo by the Kubernetes team: issue, pull request.
Others - work has not yet started.
Kubernetes scheduler needs to be deployed on a separate node
Symptoms
During the huge OpenStack cluster deployment against Kubernetes, the scheduler, controller-manager and kube-apiserver start fighting for CPU cycles, as all of them are under a large load. The scheduler is the most resource-hungry, so we need a way to deploy it separately.
Solution
We manually moved the Kubernetes scheduler to a separate node; all other schedulers were manually killed to prevent them from moving to other nodes.
Upstream issue status: reported
Issue in Kargo.
Kubernetes scheduler is ineffective with pod antiaffinity
Symptoms
It takes a significant amount of time for the scheduler to process pods with pod antiaffinity rules specified on them. It is spending about 2-3 seconds on each pod, which makes the time needed to deploy an OpenStack cluster of 900 nodes unexpectedly long (about 3h for just scheduling). OpenStack deployment requires the use of antiaffinity rules to prevent several OpenStack compute nodes from being launched on a single Kubernetes minion node.
Root cause
According to profiling results, most of the time is spent on creating new Selectors to match existing pods against, which triggers the validation step. Basically we have O(N^2) unnecessary validation steps (where N = the number of pods), even if we have just 5 deployment entities scheduled to most of the nodes.
Solution
In this case, we needed a specific optimization that sped up scheduling to about 300 ms per pod. It's still slow in terms of common sense (about 30 minutes spent just on scheduling pods for a 900-node OpenStack cluster), but it is at least close to reasonable. This solution lowers the number of very expensive operations to O(N), which is better, but still depends on the number of pods instead of deployments, so there is room for future improvement.
Upstream issue status: fixed
The optimization was merged into master (pull request) and backported to the 1.5 branch, and is part of the 1.5.2 release (pull request).
kube-apiserver has low default rate limit
Symptoms
Different services start receiving “429 Rate Limit Exceeded” HTTP errors, even though kube-apiservers can take more load. This problem was discovered through a scheduler bug (see below).
Solution
Raise the rate limit for the kube-apiserver process via the --max-requests-inflight option. It defaults to 400, but in our case it became workable at 2000. This number should be configurable in the Kargo deployment tool, as bigger deployments might require an even bigger increase.
Upstream issue status: reported
Issue in Kargo.
Kubernetes scheduler can schedule incorrectly
Symptoms
When creating a huge amount of pods (~4500 in our case) and faced with HTTP 429 errors from kube-apiserver (see above), the scheduler can schedule several pods of the same deployment on one node, in violation of the pod antiaffinity rule on them.
Root cause
See pull request below.
Upstream issue status: pull request
Fix from Mirantis team: pull request (merged, part of Kubernetes 1.6 release).
Docker sometimes becomes unresponsive
Symptoms
The Docker process sometimes hangs on several nodes, which results in timeouts in the kubelet logs. When this happens, pods cannot be spawned or terminated successfully on the affected minion node. Although many similar issues have been fixed in Docker since 1.11, we are still observing these symptoms.
Workaround
The Docker daemon logs do not contain any notable information, so we had to restart the docker service on the affected node. (During the experiments we used Docker 1.12.3, but we have observed similar symptoms in 1.13 release candidates as well.)
OpenStack services don’t handle PXC pseudo-deadlocks
Symptoms
When run in parallel, create operations for large numbers of resources were failing with a DBError saying that Percona XtraDB Cluster had identified a deadlock and the transaction should be restarted.
Root cause
oslo.db is responsible for wrapping errors received from the DB into proper classes so that services can restart transactions if such errors occur, but it did not expect the error in the format that Percona sends. After we fixed this, however, we still experienced similar errors, because not all transactions that could be restarted were properly decorated in the Nova code.
Upstream issue status: fixed
The bug has been fixed by Roman Podolyaka’s CR and backported to Newton. It fixes Percona deadlock error detection, but there’s at least one place in Nova that still needs to be fixed.
Live migration failed with live_migration_uri configuration
Symptoms
With the live_migration_uri configuration, live migration fails because one compute host cannot connect to libvirt on another host.
Root cause
We cannot specify which IP address to use in the live_migration_uri template, so Nova tried to use the address from the first interface, which happened to be on the PXE network, while libvirt listens on the private network. We could not use live_migration_inbound_addr, which would solve this problem, because of a bug in upstream Nova.
Upstream issue status: fixed
A bug in Nova has been fixed and backported to Newton. We switched to using live_migration_inbound_addr after that.
The post We installed an OpenStack cluster with close to 1000 nodes on Kubernetes. Here's what we found out. appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Scaling with Kubernetes DaemonSets

The post Scaling with Kubernetes DaemonSets appeared first on Mirantis | Pure Play Open Cloud.
We're used to thinking about scaling from the point of view of a deployment; we want it to scale up under different conditions, so it looks for appropriate nodes, and puts pods on them. DaemonSets, on the other hand, take a different tack: any time you have a node that belongs to the set, it runs the pods you specify.  For example, you might create a DaemonSet to tell Kubernetes that any time you create a node with the label app=webserver you want it to run Nginx.  Let's take a look at how that works.
Creating a DaemonSet
Let's start by looking at a sample YAML file to define a Daemon Set:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
Here we're creating a DaemonSet called frontend. As with a ReplicationController, pods launched by the DaemonSet are given the label specified in the spec.template.metadata.labels property; in this case, app=frontend-webserver.
The template.spec itself has two important parts: the nodeSelector and the containers.  The containers are fairly self-evident (see our discussion of ReplicationControllers if you need a refresher) but the interesting part here is the nodeSelector.
The nodeSelector tells Kubernetes which nodes are part of the set and should run the specified containers.  In other words, these pods are deployed automatically; there's no input at all from the scheduler, so the schedulability of a node isn't taken into account.  On the other hand, Daemon Sets are a great way to deploy pods that need to be running before other objects.
Let's go ahead and create the Daemon Set.  Create a file called ds.yaml with the definition in it and run the command:
$ kubectl create -f ds.yaml
daemonset "frontend" created
Now let's see the Daemon Set in action.
Scaling capacity using a DaemonSet
If we check to see if the pods have been deployed, we'll see that they haven't:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
That's because we don't yet have any nodes that are part of our DaemonSet.  If we look at the nodes we do have…
$ kubectl get nodes
NAME        STATUS    AGE
10.0.10.5   Ready     75d
10.0.10.7   Ready     75d
We can go ahead and add at least one of them by adding the app=frontend-node label:
$ kubectl label node 10.0.10.5 app=frontend-node
node "10.0.10.5" labeled
Now if we get a list of pods again…
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          19s
We can see that the pod was started without us taking any additional action.  
Now we have a single webserver running.  If we wanted to scale up, we could simply add our second node to the Daemon Set:
$ kubectl label node 10.0.10.7 app=frontend-node
node "10.0.10.7" labeled
If we check the list of pods again, we can see that a new one was automatically started:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          1m
frontend-rp9bu              1/1       Running   0          35s
If we remove a node from the DaemonSet, any related pods are automatically terminated:
$ kubectl label node 10.0.10.5 --overwrite app=backend
node "10.0.10.5" labeled

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-rp9bu              1/1       Running   0          1m
Updating Daemon Sets, and improvements in Kubernetes 1.6
OK, so how do we update a running DaemonSet?  Well, as of Kubernetes 1.5, the answer is "you don't." Currently, it's possible to change the template of a DaemonSet, but it won't affect the pods that are already running.
Starting in Kubernetes 1.6, however, you will be able to do rolling updates with Kubernetes DaemonSets. You'll have to set the updateStrategy, as in:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
Once you've done that, you can make changes and they'll propagate to the running pods. For example, you can change the image on which the containers are based:
$ kubectl set image ds/frontend webserver=httpd
If you want to make more substantive changes, you can edit or patch the Daemon Set:
kubectl edit ds/frontend
or
kubectl patch ds/frontend -p "$(cat ds-changes.yaml)"
(Obviously you would use your own DaemonSet names and files!)
So that's the basics of working with DaemonSets.  What else would you like to learn about them? Let us know in the comments below.
The post Scaling with Kubernetes DaemonSets appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

What’s new in Kubernetes 1.6 — a focus on stability

The post What's new in Kubernetes 1.6 — a focus on stability appeared first on Mirantis | Pure Play Open Cloud.
Kubernetes 1.6 is forecast to be released this week. Major themes include new capabilities for Daemon Sets, the beta release of Kubernetes federation and new scheduling features, and new networking capabilities. You can get an in-depth look at all of the new features in the Kubernetes 1.6 release notes, but let's get a quick overview here.
DaemonSet rolling updates
You're probably used to dealing with Kubernetes in terms of creating a Deployment or a ReplicationController and having it manage your pods, making certain that you always have a particular number of instances spread among the nodes that are available.  DaemonSets, on the other hand, look at things from the opposite perspective.
With DaemonSets, you specify the nodes to run a particular set of containers, and Kubernetes will make certain that any nodes that satisfy those requirements will run those pods. With Kubernetes 1.6, you now have the option to update those DaemonSets with a new image or other information.  (For more information on DaemonSets, you can see this article, which explains how and why to use them.)
Kubernetes Federation
As Kubernetes takes hold, the likelihood of running into situations in which users have multiple large clusters to deal with increases. Federation enables you to create an infrastructure in which users can use, say, the closest cluster to them, or the one that has the most spare capacity.
Now in beta, kubefed “supports hosting federation on on-prem clusters, [and] automatically configures kube-dns in joining clusters and allows passing arguments to federation components.”
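As a rough sketch of what that looks like in practice (the federation name, cluster contexts, and DNS settings here are hypothetical):
# set up the federation control plane in a host cluster
kubefed init myfederation --host-cluster-context=us-west --dns-provider=google-clouddns --dns-zone-name=example.com.
# join an existing cluster to the federation
kubefed join us-east --host-cluster-context=us-west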
Authentication and access control improvements
Role-Based Access Control (RBAC), which makes it possible to define roles for control plane, node, and controller components, is now in the beta phase.  (It also defines default roles for these components.) There are numerous changes from the alpha version (such as a change from using * for all users to using system:authenticated or system:unauthenticated) so make sure to check out the release notes for all the details.
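For orientation, a minimal beta RBAC role granting read access to pods might look like the sketch below (names are hypothetical); a RoleBinding then ties the role to a user, group, or service account:
# allows read-only access to pods in the default namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]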
Attribute-Based Access Control (ABAC) has also been tweaked, with wildcards now defaulting to authenticated users. The kube-apiserver and the authentication API have also seen a number of improvements.
Scheduling changes
Now in beta is the ability to have multiple schedulers, with each controlling a different set of pods. You can also set the scheduler you want for a particular pod in the pod spec, rather than as an annotation, as in the alpha version.
Also in beta are node and pod affinity/anti-affinity. This capability enables you to intelligently schedule pods that should, or shouldn't, be on the same piece of hardware.  For example, if you have a web application that talks to a database, you might want them on the same node.  If, on the other hand, you have a pod that needs to be highly available, you might want to spread different instances over different nodes as a safeguard against failure. You can specify the affinity field on the PodSpec.
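As a sketch, the web-plus-database case might be expressed with a podAffinity rule like this on the web pod (the labels are hypothetical):
affinity:
  podAffinity:
    # co-locate this pod on a node that already runs a pod labeled app=database
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: database
      topologyKey: kubernetes.io/hostname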
Kubernetes 1.6 also includes the beta release of taints and tolerations, with some improvements to that functionality over the alpha version.  Taints enable you to dedicate a node to a particular kind of pod, similar to the way in which you might use flavors in OpenStack. Unlike OpenStack, however, you can tell Kubernetes to try to avoid scheduling pods that aren't explicitly allowed (read: tolerated) on that node, but if it has no choice, it can go ahead. This functionality also enables you to specify a period of time a pod might run on the node before being “evicted.”
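A quick sketch of how that fits together (the key and value here are hypothetical). First, taint the node:
# only pods that tolerate dedicated=database may be scheduled here
kubectl taint nodes node1 dedicated=database:NoSchedule
Then any pod that should be allowed onto that node carries a matching toleration in its spec:
tolerations:
- key: dedicated
  operator: Equal
  value: database
  effect: NoSchedule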
And speaking of being evicted, Kubernetes 1.6 now enables you to override the default 5-minute period during which a pod remains bound to a node if there are problems, so you can specify that a pod either finds another node more quickly, or is more patient and waits even longer.
The Container Runtime Interface is now the default
While it's natural to assume that containers running on Kubernetes are Docker containers, that's not always true.  Kubernetes also supports rkt containers, and in fact the goal is to enable Kubernetes to orchestrate any container runtime. Up until now, that's been difficult, because the container runtimes were coded into the kubelet component that runs the actual containers.
Now, with Kubernetes 1.6, the beta version of the Docker Container Runtime Interface is enabled by default (you can turn it off with --enable-cri=false), so it will be easier to add new runtimes.  The old non-CRI architecture is deprecated in 1.6 and is scheduled for removal in Kubernetes 1.7.
Storage improvements
Kubernetes 1.6 includes the general availability release of StorageClasses, which enable you to specify a particular type of storage resource for users without exposing them to the details.  (This is also similar to flavors in OpenStack.)
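A minimal sketch of a StorageClass definition (the provisioner and parameters below are just an illustrative GCE example); a PersistentVolumeClaim then requests this class by name, and a matching volume is provisioned on demand:
# a class named "fast" backed by SSD persistent disks (GCE example)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd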
Also now in GA is the ability to populate environment variables from a ConfigMap or a Secret, as well as support for writing and running your own dynamic PersistentVolume provisioners.
Note that StorageClasses will change the behaviors of PersistentVolumeClaim objects on existing clouds, so be sure to read the Release Notes.
Networking improvements
You now have added control over DNS; Kubernetes 1.6 enables you to set stubDomains, which define the nameservers used for specific domains (such as *.mycompany.local), and to specify which upstreamNameservers you want to use, overriding resolv.conf.
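As a sketch, these settings live in the kube-dns ConfigMap; the domain and nameserver addresses below are hypothetical:
# route *.mycompany.local to an internal nameserver; forward everything else upstream
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"mycompany.local": ["10.150.0.1"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]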
Digging deeper, the Container Network Interface (CNI) is now integrated with the Container Runtime Interface (CRI) by default, and the standard bridge plugin has been validated with the combination.
Other changes
Kubernetes 1.6 includes a huge number of changes and improvements, some of which will only be of interest to operators, as opposed to end users, but all of which are important. Some of these changes include:

By default, etcd v3 is enabled, enabling clusters up to 5000 nodes
The ability to know via the API whether a Deployment is blocked
Easier logging access
Improvements to the Horizontal Pod Autoscaler
The ability to add third party resources and extension API servers with the edit command
New commands for creating roles, as well as determining whether you can perform an action
New fields added to describe output
Improvements to kubeadm

Definitely take a look at the full release notes to get the details.
The post What's new in Kubernetes 1.6 — a focus on stability appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Using Kubernetes Helm to install applications

The post Using Kubernetes Helm to install applications appeared first on Mirantis | Pure Play Open Cloud.

After reading this introduction to Kubernetes Helm, you will know how to:

Install Helm
Configure Helm
Use Helm to determine available packages
Use Helm to install a software package
Retrieve a Kubernetes Secret
Use Helm to delete an application
Use Helm to roll back changes to an application

Difficulty is a relative thing. Deploying an application using containers can be much easier than trying to manage deployments of a traditional application over different environments, but trying to manage and scale multiple containers manually is much more difficult than orchestrating them using Kubernetes.  But even managing Kubernetes applications looks difficult compared to, say, “apt-get install mysql”. Fortunately, the container ecosystem has now evolved to that level of simplicity. Enter Helm.
Helm is a Kubernetes-based package installer. It manages Kubernetes “charts”, which are “preconfigured packages of Kubernetes resources.”  Helm enables you to easily install packages, make revisions, and even roll back complex changes.
Next week, my colleague Maciej Kwiek will be giving a talk at Kubecon about Boosting Helm with AppController, so we thought this might be a good time to give you an introduction to what it is and how it works.
Let's take a quick look at how to install, configure, and utilize Helm.
Install Helm
Installing Helm is actually pretty straightforward.  Follow these steps:

Download the latest version of Helm from https://github.com/kubernetes/helm/releases.  (Note that if you are using an older version of Kubernetes (1.4 or below) you might have to downgrade Helm due to breaking changes.)
Unpack the archive:
$ gunzip helm-v2.2.3-darwin-amd64.tar.gz
$ tar -xvf helm-v2.2.3-darwin-amd64.tar
x darwin-amd64/
x darwin-amd64/helm
x darwin-amd64/LICENSE
x darwin-amd64/README.md
Next move the helm executable to your path:
$ mv dar*/helm /usr/local/bin/.

Finally, initialize helm to both set up the local environment and to install the server portion, Tiller, on your cluster.  (Helm will use the default cluster for Kubernetes, unless you tell it otherwise.)
$ helm init
Creating /Users/nchase/.helm
Creating /Users/nchase/.helm/repository
Creating /Users/nchase/.helm/repository/cache
Creating /Users/nchase/.helm/repository/local
Creating /Users/nchase/.helm/plugins
Creating /Users/nchase/.helm/starters
Creating /Users/nchase/.helm/repository/repositories.yaml
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
$HELM_HOME has been configured at /Users/nchase/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Note that you can also upgrade the Tiller component using:
helm init --upgrade
That's all it takes to install Helm itself; now let's look at using it to install an application.
Install an application with Helm
One of the things that Helm does is enable authors to create and distribute their own applications using charts; to get a full list of the charts that are available, you can simply ask:
$ helm search
NAME                          VERSION DESCRIPTION                                       
stable/aws-cluster-autoscaler 0.2.1   Scales worker nodes within autoscaling groups.    
stable/chaoskube              0.5.0   Chaoskube periodically kills random pods in you…
stable/chronograf             0.1.2   Open-source web application written in Go and R…

In our case, we're going to install MySQL from the stable/mysql chart. Follow these steps:

First update the repo, just as you'd do with apt-get update:
$ helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
Writing to /Users/nchase/.helm/repository/cache/stable-index.yaml
…Successfully got an update from the “stable” chart repository
Update Complete. ⎈ Happy Helming!⎈

Next, we'll do the actual install:
$ helm install stable/mysql
This command produces a lot of output, so let's take it one step at a time.  First, we get information about the release that's been deployed:
NAME:   lucky-wildebeest
LAST DEPLOYED: Thu Mar 16 16:13:50 2017
NAMESPACE: default
STATUS: DEPLOYED
As you can see, it's called lucky-wildebeest, and it's been successfully DEPLOYED.
Your release will, of course, have a different name. Next, we get the resources that were actually deployed by the stable/mysql chart:
RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     0s

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-11ebe330-0a85-11e7-9bb2-5ec65a93c5f1  8Gi       RWO          0s

==> v1/Service
NAME                    CLUSTER-IP  EXTERNAL-IP  PORT(S)   AGE
lucky-wildebeest-mysql  10.0.0.13   <none>       3306/TCP  0s

==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
lucky-wildebeest-mysql  1        1        1           0          0s
This is a good example because we can see that this chart configures multiple types of resources: a Secret (for passwords), a persistent volume (to store the actual data), a Service (to serve requests) and a Deployment (to manage it all).
The chart also enables the developer to add notes:
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:
   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:
Run an Ubuntu pod that you can use as a client:
   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:
   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:
$ mysql -h lucky-wildebeest-mysql -p

These notes are the basic documentation a user needs to use the actual application. Now let's see how we put it all to use.
Connect to mysql
The first lines of the notes make it seem deceptively simple to connect to MySQL:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local
Before you can do anything with that information, however, you need to do two things: get the root password for the database, and get a working client with network access to the pod hosting it.
Get the mysql password
Most of the time, you'll be able to get the root password by simply executing the code the developer has left you:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
DBTzmbAikO
Some systems, notably macOS, will give you an error:
$ kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
Invalid character in input stream.
This is because of an error in base64 that adds an extraneous character. In this case, you will have to extract the password manually.  Basically, we're going to execute the same steps as this line of code, but one at a time.
Start by looking at the Secrets that Kubernetes is managing:
$ kubectl get secrets
NAME                     TYPE                                  DATA      AGE
default-token-0q3gy      kubernetes.io/service-account-token   3         145d
lucky-wildebeest-mysql   Opaque                                2         20m
It's the second one, lucky-wildebeest-mysql, that we're interested in. Let's look at the information it contains:
$ kubectl get secret lucky-wildebeest-mysql -o yaml
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:
 creationTimestamp: 2017-03-16T20:13:50Z
 labels:
   app: lucky-wildebeest-mysql
   chart: mysql-0.2.5
   heritage: Tiller
   release: lucky-wildebeest
 name: lucky-wildebeest-mysql
 namespace: default
 resourceVersion: "43613"
 selfLink: /api/v1/namespaces/default/secrets/lucky-wildebeest-mysql
 uid: 11eb29ed-0a85-11e7-9bb2-5ec65a93c5f1
type: Opaque
You probably already figured out where to look, but the developer's instructions told us the raw password data was here:
jsonpath="{.data.mysql-root-password}"
So we're looking for this:
apiVersion: v1
data:
 mysql-password: a1p1THdRcTVrNg==
 mysql-root-password: REJUem1iQWlrTw==
kind: Secret
metadata:

Now we just have to go ahead and decode it:
$ echo "REJUem1iQWlrTw==" | base64 --decode
DBTzmbAikO
Finally!  So let's go ahead and connect to the database.
Create the mysql client
Now we have the password, but if we try to just connect with the mysql client on any old machine, we'll find that there's no connectivity outside of the cluster.  For example, if I try to connect with my local mysql client, I get an error:
$ ./mysql -h lucky-wildebeest-mysql.default.svc.cluster.local -p
Enter password:
ERROR 2005 (HY000): Unknown MySQL server host 'lucky-wildebeest-mysql.default.svc.cluster.local' (0)
So what we need to do is create a pod on which we can run the client.  Start by creating a new pod using the ubuntu:16.04 image:
$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never

$ kubectl get pods
NAME                                      READY     STATUS             RESTARTS   AGE
hello-minikube-3015430129-43g6t           1/1       Running            0          1h
lucky-wildebeest-mysql-3326348642-b8kfc   1/1       Running            0          31m
ubuntu                                   1/1       Running            0          25s
When it's running, go ahead and attach to it:
$ kubectl attach ubuntu -i -t

Hit enter for command prompt
Next install the mysql client:
root@ubuntu2:/# apt-get update && apt-get install mysql-client -y
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]

Setting up mysql-client-5.7 (5.7.17-0ubuntu0.16.04.1) …
Setting up mysql-client (5.7.17-0ubuntu0.16.04.1) …
Processing triggers for libc-bin (2.23-0ubuntu5) …
Now we should be ready to actually connect. Remember to use the password we extracted in the previous step.
root@ubuntu2:/# mysql -h lucky-wildebeest-mysql -p
Enter password:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 410
Server version: 5.7.14 MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Of course you can do what you want here, but for now we'll go ahead and exit both the database and the container:
mysql> exit
Bye
root@ubuntu2:/# exit
logout
So we've successfully installed an application, in this case MySQL, using Helm.  But what else can Helm do?
Working with revisions
So now that you've seen Helm in action, let's take a quick look at what you can actually do with it.  Helm is designed to let you install, upgrade, delete, and roll back revisions. We'll get into more details about upgrades in a later article on creating charts, but let's quickly look at deleting and rolling back revisions:
First off, each time you make a change with Helm, you're creating a Revision.  By deploying MySQL, we created a Revision, which we can see in this list:
NAME               REVISION  UPDATED                   STATUS    CHART          NAMESPACE
lucky-wildebeest   1         Sun Mar 19 22:07:56 2017  DEPLOYED  mysql-0.2.5    default
operatic-starfish  2         Thu Mar 16 17:10:23 2017  DEPLOYED  redmine-0.4.0  default
As you can see, we created a revision called lucky-wildebeest, based on the mysql-0.2.5 chart, and its status is DEPLOYED.
We could also get back the information we got when it was first deployed by getting the status of the revision:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     43m

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-08e0027a-0d12-11e7-833b-5ec65a93c5f1  8Gi       RWO          43m

Now, if we wanted to, we could go ahead and delete the revision:
$ helm delete lucky-wildebeest
Now if you list all of the active revisions, it'll be gone.
$ helm ls
However, even though the revision is gone, you can still see the status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 22:07:56 2017
NAMESPACE: default
STATUS: DELETED

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
lucky-wildebeest-mysql.default.svc.cluster.local

To get your root password run:

   kubectl get secret --namespace default lucky-wildebeest-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

Run an Ubuntu pod that you can use as a client:

   kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

Install the mysql client:

   $ apt-get update && apt-get install mysql-client -y

Connect using the mysql cli, then provide your password:

$ mysql -h lucky-wildebeest-mysql -p
OK, so what if we decide that we've changed our mind, and we want to roll back that deletion?  Fortunately, Helm is designed for that.  We can specify that we want to roll back our application to a specific revision (in this case, 1).
$ helm rollback lucky-wildebeest 1
Rollback was a success! Happy Helming!
We can see that the application is back, and the revision has been incremented:
NAME               REVISION  UPDATED                   STATUS    CHART        NAMESPACE
lucky-wildebeest   2         Sun Mar 19 23:46:52 2017  DEPLOYED  mysql-0.2.5  default

We can also check the status:
$ helm status lucky-wildebeest
LAST DEPLOYED: Sun Mar 19 23:46:52 2017
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    TYPE    DATA  AGE
lucky-wildebeest-mysql  Opaque  2     21m

==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
lucky-wildebeest-mysql  Bound   pvc-dad1b896-0d1f-11e7-833b-5ec65a93c5f1  8Gi       RWO          21m

Next time, we'll talk about how to create charts for Helm.  Meanwhile, if you're going to be at Kubecon, don't forget Maciej Kwiek's talk on Boosting Helm with AppController.
The post Using Kubernetes Helm to install applications appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Docker Partners with Girl Develop It and Launches Pilot Class

Yesterday marked International Women's Day, a global day celebrating the social, cultural, economic and political achievements of women. In that spirit, we're thrilled to announce that we're partnering with Girl Develop It, a national 501(c)3 nonprofit that provides affordable and judgment-free opportunities for adult women interested in learning web and software development through accessible in-person programs. Through welcoming, low-cost classes, GDI helps women of diverse backgrounds achieve their technology goals and build confidence in their careers and their everyday lives.

Girl Develop It deeply values community and supportive learning for women regardless of race, education levels, income and upbringing, and those are values we share. The Docker team is committed to ensuring that we create welcoming spaces for all members of the tech community. To proactively work towards this goal, we have launched several initiatives to strengthen the Docker community and promote diversity in the larger tech community, including our DockerCon Diversity Scholarship Program, which provides mentorship and a financial scholarship to attend DockerCon. P.S. Are you a woman in tech and want to attend DockerCon in Austin April 17th-20th? Use code  for 50% off your ticket!

Launching Pilot Class
In collaboration with the GDI curriculum team, we are developing an intro to Docker class that will introduce students to the Docker platform and take them through installing, integrating, and running it in their working environment. The pilot class will take place this spring in San Francisco and Austin.

“The Intro to Docker class is fully aligned with Girl Develop It's mission to unlock the potential of women returning to the workforce, looking for a career change, or leveling up their skills,” said Executive Director Corinne Warnshuis. “A course on Docker has been requested by students and leaders in the community for some time. We're thrilled to be working with Docker to provide a valuable introduction to their platform through our in-person, affordable, judgment-free program.”
Want to help Docker with these initiatives?
We’re always happy to connect with others who work towards improving opportunities for women and underrepresented groups throughout the global Docker ecosystem and promote inclusion in the larger tech community.
If you or your organization are interested in getting more involved, please contact us at community@docker.com. Let’s join forces and take our impact to the next level!
 

The post Docker Partners with Girl Develop It and Launches Pilot Class appeared first on Docker Blog.
Source: https://blog.docker.com/feed/