How do you build 12-factor apps using Kubernetes?

It's said that there are 12 factors that define a cloud-native application. It's also said that Kubernetes is designed for cloud native computing. So how do you create a 12-factor application using Kubernetes? Let's take a look at exactly what twelve-factor apps are and how they relate to Kubernetes.
What is a 12 factor application?
The Twelve-Factor App is a manifesto on architectures for Software as a Service created by Heroku. The idea is that in order to be really suited to SaaS and avoid problems with software erosion - where, over time, an application that's not updated gets out of sync with the latest operating systems, security patches, and so on - an app should follow these 12 principles:

Codebase
One codebase tracked in revision control, many deploys
Dependencies
Explicitly declare and isolate dependencies
Config
Store config in the environment
Backing services
Treat backing services as attached resources
Build, release, run
Strictly separate build and run stages
Processes
Execute the app as one or more stateless processes
Port binding
Export services via port binding
Concurrency
Scale out via the process model
Disposability
Maximize robustness with fast startup and graceful shutdown
Dev/prod parity
Keep development, staging, and production as similar as possible
Logs
Treat logs as event streams
Admin processes
Run admin/management tasks as one-off processes

Let's look at what all of this means in terms of Kubernetes applications.
Principle I. Codebase
Principle 1 of a 12 Factor App is "One codebase tracked in revision control, many deploys".
For Kubernetes applications, this principle is actually embedded in the nature of container orchestration itself. Typically, you create your code using a source control repository such as a git repo, then store specific versions of your images in a registry such as Docker Hub. When you define the containers to be orchestrated as part of a Kubernetes Pod, Deployment, or DaemonSet, you also specify a particular version of the image, as in:

spec:
     containers:
     - name: acct-app
       image: acctapp:v3

In this way, you might have multiple versions of your application running in different deployments.
Applications can also behave differently depending on the configuration information with which they run.
Principle II. Dependencies
Principle 2 of a 12 Factor App is "Explicitly declare and isolate dependencies".
Making sure that an application's dependencies are satisfied is something that is practically assumed. For a 12 factor app, that includes not just making sure that the application-specific libraries are available, but also not counting on, say, shelling out to the operating system and assuming system libraries such as curl will be there. A 12 factor app must be self-contained.
That includes making sure that the application is isolated enough that it's not affected by conflicting libraries that might be installed on the host machine.
Fortunately, even if an application does have specific or unusual system requirements, both of these requirements are handily satisfied by containers: the container image includes all of the dependencies on which the application relies, and the container also provides a reasonably isolated environment in which the application runs. (Contrary to popular belief, container environments are not completely isolated, but for most situations they are good enough.)
For applications that are modularized and depend on other components, such as an HTTP service and a log fetcher, Kubernetes provides a way to combine all of these pieces into a single Pod, for an environment that encapsulates those pieces appropriately.
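As a rough sketch (the pod name, images, and shared volume below are illustrative assumptions, not from the original article), such a Pod might pair the web server with a log-fetching sidecar that reads the same log directory:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-fetcher
spec:
  volumes:
    - name: logs
      emptyDir: {}          # scratch space shared by both containers
  containers:
    - name: webserver       # the HTTP service
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-fetcher     # sidecar that reads and forwards the logs
      image: fluent/fluentd
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx

Both containers are scheduled together, share the same network namespace and volumes, and live and die as a unit, which is exactly the kind of encapsulation this principle calls for.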
Principle III. Config
Principle 3 of a 12 Factor App is "Store config in the environment".
The idea behind this principle is that an application should be completely independent from its configuration. In other words, you should be able to move it to another environment without having to touch the source code.
Some developers achieve this goal by creating configuration files of some sort, specifying details such as directories, hostnames, and database credentials. This is an improvement, but it does carry the risk that someone will check a config file into the source control repository.
Instead, 12 factor apps store their configurations as environment variables; these are, as the manifesto says, "unlikely to be checked into the repository by accident", and they're operating system independent.
Kubernetes enables you to specify environment variables directly in manifests (and even to populate them from Pod metadata via the Downward API), but as these manifests themselves get checked into source control, that's not a complete solution.
Instead, you can specify that environment variables should be populated by the contents of Kubernetes ConfigMaps or Secrets, which can be kept separate from the application.  For example, you might define a Pod as:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
        - name: CONFIG_VERSION
          valueFrom:
            configMapKeyRef:
              name: redis-app-config
              key: version.id
As you can see, this Pod receives three environment variables, SECRET_USERNAME, SECRET_PASSWORD, and CONFIG_VERSION: the first two from a referenced Kubernetes Secret, and the third from a Kubernetes ConfigMap. This enables you to keep those values out of configuration files.
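For completeness, the mysecret Secret and redis-app-config ConfigMap referenced above could be created ahead of time with commands along these lines (the values shown are placeholders):

kubectl create secret generic mysecret \
  --from-literal=username=admin \
  --from-literal=password=s3cr3t
kubectl create configmap redis-app-config \
  --from-literal=version.id=1.0.3

In practice you might also define these objects in their own YAML files, kept outside the application's repository.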
Of course, there's still a risk of someone mishandling the files used to create these objects, but it's easier to keep them together and institute secure handling policies than it is to weed out dozens of config files scattered around a deployment.
What's more, there are those in the community who point out that even environment variables are not necessarily safe, for their own reasons. For example, if an app crashes, it may save all of the environment variables to a log or even transmit them to another service. Diogo Mónica points to a tool called Keywhiz that you can use with Kubernetes to create secure secret storage.
Principle IV. Backing services
Principle 4 of the 12 Factor App is "Treat backing services as attached resources".
In a 12 Factor app, any services that are not part of the core application, such as databases, external storage, or message queues, should be accessed as a service - via an HTTP or similar request - and specified in the configuration, so that the source of the service can be changed without affecting the core code of the application.
For example, if your application uses a message queuing system, you should be able to change from RabbitMQ to ZeroMQ (or ActiveMQ or even something else) without having to change anything but configuration information.
This requirement has two implications for a Kubernetes-based application.
First, it means that you must think about how your applications take in (and give out) information. For example, if you have a backing database, you wouldn't want to have a local MySQL instance, even if you're replicating it to other instances. Instead, you would want to have a separate container that handles database operations, and make those operations callable via an API. This way, if you needed to change to, say, PostgreSQL or a remotely hosted MySQL server, you could create a new container image, update the Pod definition, and restart the Pod (or, more likely, the Deployment or StatefulSet managing it).
Similarly, if you're storing credentials or address information in environment variables backed by a ConfigMap, you can change that information and replace the Pod.
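As a sketch (the names and values below are hypothetical), the connection details for a backing database could live in a ConfigMap of their own, and the application container could pull them all in as environment variables with envFrom; switching databases then means editing this one object and replacing the Pod:

apiVersion: v1
kind: ConfigMap
metadata:
  name: acct-db-config
data:
  DB_HOST: mysql.example.com   # point at a different host to swap the backing service
  DB_PORT: "3306"

and, in the Pod spec:

    containers:
      - name: acct-app
        image: acctapp:v3
        envFrom:
          - configMapRef:
              name: acct-db-config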
Note that both of these examples assume that although you're not making any changes to the source code (or even to the container image for the main application), you will need to replace the Pod; the ability to do this easily is actually another principle of a 12 Factor App.
Principle V. Build, release, run
Principle 5 of the 12 Factor App is "Strictly separate build and run stages".
These days it's hard to imagine a situation where this is not true, but a twelve-factor app must have a separate build stage. In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run the release.
Releases should be identifiable. You should be able to say, "This deployment is running Release 1.14 of this application" or something similar, the same way we say we're running "the OpenStack Ocata release" or "Kubernetes 1.6". They should also be immutable; any changes should lead to a new release. If this sounds daunting, remember that when we say "application" we're no longer talking about large, monolithic releases. Instead, we're talking about very specific microservices, each of which has its own release, and which can bump releases without causing errors in consuming services.
All of this is so that when the app is running, that "run" process can be completely automated. Twelve factor apps need to be capable of running in an automated fashion because they need to be capable of restarting should there be a problem.
Translating this to the Kubernetes realm, we've already said that the application needs to be stored in source control, then built with all of its dependencies. That's your build process. We talked about separating out the configuration information, so that's what needs to be combined with the build to make a release. And the ability to automatically run the application - or multiple copies of the application - is precisely what Kubernetes constructs like Deployments, ReplicaSets, and DaemonSets do.
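A hedged example of what that can look like in practice (the names and tags below are invented for illustration): a Deployment pins a specific image build and a specific configuration together, and that combination is effectively the release:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: acct-app
  labels:
    release: "1.14"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: acct-app
        release: "1.14"
    spec:
      containers:
        - name: acct-app
          image: acctapp:1.14          # the build
          envFrom:
            - configMapRef:
                name: acct-app-config  # the config; together they form the release

Rolling out "Release 1.15" then means producing a new image tag and/or ConfigMap and updating the Deployment, rather than modifying the running release in place.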
Principle VI. Processes
Principle 6 of the 12 Factor App is "Execute the app as one or more stateless processes".
Stateless processes are a core idea behind cloud native applications. Every twelve-factor application needs to run in individual, share-nothing processes. That means that any time you need to persist information, it needs to be stored in a backing service such as a database.
If you're new to cloud application programming, this might seem deceptively simple; many developers are used to "sticky" sessions, storing information in the session with the confidence that the next request will come to the same server. In a cloud application, however, you must never make that assumption.
Instead, if you're running a Kubernetes-based application that hosts multiple copies of a particular pod, you must assume that subsequent requests can go anywhere. To solve this problem, you will want to use some sort of backing volume or database for persistence.
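A minimal sketch of that idea, with hypothetical names: request a PersistentVolumeClaim and mount it (or, better still, hand state to an external database), rather than writing to the container's local filesystem:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: session-store
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Then, in the Pod template:

    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: session-store
    containers:
      - name: acct-app
        image: acctapp:v3
        volumeMounts:
          - name: data
            mountPath: /var/lib/acct-app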
One caveat to this principle is that Kubernetes StatefulSets can enable you to create Pods with stable network identities, so that you can, theoretically, direct requests to a particular pod. Technically speaking, if the process doesn't actually store state, and the pod can be deleted and recreated and still function properly, it satisfies this requirement - but that's probably pushing it a bit.
Principle VII. Port binding
Principle 7 of the 12 Factor App is "Export services via port binding".
In an environment where we're assuming that different functionalities are handled by different processes, it's easy to make the connection that these functionalities should be available via a protocol such as HTTP, so it's common for applications to be run behind web servers such as Apache or Tomcat. Twelve-factor apps, however, should not depend on an additional application in that way; remember, every function should be in its own process, isolated from everything else. Instead, the 12 Factor App manifesto recommends adding a web server library or something similar to the app itself, so that the app can await requests on a defined port, whether it's using HTTP or another protocol.
In a Kubernetes-based application, this is done partly through the architecture of the application itself, and partly by making sure that the application has all of its dependencies as part of the creation of the base containers on which the application is built.
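For illustration (the names here are assumptions), once a container declares the port it listens on, a Kubernetes Service can export that port under a stable address for the rest of the cluster:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend-webserver   # matches pods that bind port 80 themselves
  ports:
    - port: 80                # port the Service exposes
      targetPort: 80          # port the container is listening on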
Principle VIII. Concurrency
Principle 8 of the 12 Factor App is to "Scale out via the process model".
When you're writing a twelve-factor app, make sure that you're designing it to be scaled out, rather than scaled up. That means that in order to add more capacity, you should be able to add more instances rather than more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
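In Kubernetes terms, scaling out is a matter of adding replicas rather than resizing the node; for example, assuming a Deployment named acct-app, you might run:

kubectl scale deployment acct-app --replicas=10
# or let Kubernetes add and remove replicas based on CPU load
kubectl autoscale deployment acct-app --min=3 --max=10 --cpu-percent=80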
Principle IX. Disposability
Principle 9 of the 12 Factor App is to "Maximize robustness with fast startup and graceful shutdown".
It seems like this principle was tailor-made for containers and Kubernetes-based applications. The idea that processes should be disposable means that at any time, an application can die and the user won't be affected, either because there are others to take its place, because it'll start right up again, or both.
Containers are built on this principle, of course, and Kubernetes structures that manage multiple instances and maintain a certain level of availability even in the face of problems, such as ReplicaSets, complete the picture.
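To help your own pods live up to this, the container spec can declare how Kubernetes should check readiness and how to shut the process down cleanly; the probe path, command, and grace period below are only placeholders:

    spec:
      terminationGracePeriodSeconds: 30    # how long Kubernetes waits after SIGTERM
      containers:
        - name: webserver
          image: nginx
          readinessProbe:                  # don't route traffic until the app is ready
            httpGet:
              path: /
              port: 80
          lifecycle:
            preStop:                       # give the process a chance to drain connections
              exec:
                command: ["nginx", "-s", "quit"]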
Principle X. Dev/prod parity
Principle 10 of the 12 Factor App is "Keep development, staging, and production as similar as possible".
This is another principle that seems like it should be obvious, but is deeper than most people think. On the surface level, it does mean that you should have identical development, staging, and production environments, inasmuch as that is possible. One way to accomplish this is through the use of Kubernetes namespaces, enabling you to (theoretically) run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm-dind-cluster to create near-clones of production systems.
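As a simple illustration (the namespace names are arbitrary), the same set of manifests can be applied to parallel namespaces so that staging and production stay structurally identical:

kubectl create namespace staging
kubectl create namespace production
kubectl apply -f ./manifests --namespace=staging
kubectl apply -f ./manifests --namespace=production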
At a deeper level, though, as the Twelve-Factor App manifesto puts it, it's about three different types of "gaps":

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.

The personnel gap: Developers write code, ops engineers deploy it.

The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

The goal here is to create a Continuous Integration/Continuous Deployment situation in which changes can go into production virtually immediately (after testing, of course!), deployed by the developers who wrote them so they can actually see them in production, using the same tools on which the code was actually written in order to minimize the possibility of compatibility errors between environments.
Some of these factors are outside of the realm of Kubernetes, of course; the personnel gap is a cultural issue, for example. The time and tools gaps, however, can be helped in two ways.
For the time gap, Kubernetes-based applications are, of course, based on containers, which themselves are built from versioned images whose sources live in version-control systems, so they lend themselves to CI/CD. They can also be updated via rolling updates that can be rolled back in case of problems, so they're well suited to this kind of environment.
As far as the tools gap is concerned, the architecture of Kubernetes-based applications makes it easier to manage, both by making local dependencies simple to include in the various images, and by modularizing the application in such a way that external backing services can be standardized.
Principle XI. Logs
Principle 11 of the 12 Factor App is to "Treat logs as event streams".
While most traditional applications store log information in a file, the Twelve-Factor app directs it, instead, to stdout as a stream of events; it's the execution environment that's responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and saving the logs to Hadoop or a service such as Splunk.
In Kubernetes, you have at least two choices for automatic log capture: Stackdriver Logging if you're using Google Cloud, and Elasticsearch if you're not. You can find more information on setting Kubernetes logging destinations in the Kubernetes documentation.
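A quick way to confirm that your containers really do treat logs as event streams is to read what the execution environment has captured from stdout; the names in angle brackets are placeholders:

kubectl logs <pod-name>                     # dump what the pod has written to stdout
kubectl logs -f <pod-name> -c <container>   # follow a specific container's stream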
Principle XII. Admin processes
Principle 12 of the 12 Factor App is "Run admin/management tasks as one-off processes".
This principle involves separating admin tasks, such as migrating a database or inspecting records, from the rest of the application. Even though they're separate, however, they must still be run in the same environment and against the same base code and configuration as the application, and their code must be shipped alongside the application to prevent drift.
You can implement this in a number of different ways in Kubernetes-based applications, depending on the size and scale of the application itself. For example, for small tasks, you might use kubectl exec to operate on a specific container, or you can use a Kubernetes Job to run a self-contained task to completion. For more complicated tasks that involve orchestration of changes, however, you can also use Kubernetes Helm charts.
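For instance, a one-off database migration could be wrapped in a Job along these lines (the image, command, and ConfigMap name are placeholders); it runs to completion using the same image and configuration as the application itself:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    spec:
      restartPolicy: Never        # run once; don't restart in place
      containers:
        - name: migrate
          image: acctapp:v3       # same image as the app, so the admin code ships with it
          command: ["python", "manage.py", "migrate"]
          envFrom:
            - configMapRef:
                name: acct-app-config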
How many of these factors did you hit?
Unless you're still building desktop applications, chances are you can feel good about hitting at least a few of these essential principles of twelve-factor apps. But chances are you also found at least one or two you can probably work a little harder on.
So we want to know: which of these factors are the biggest struggle for you? Where do you think you need to work harder, and what would make it easier for you?  Let us know in the comments.
Thanks to Jedrzej Nowak, Piotr Shamruk, Ivan Shvedunov, Tomasz Napierala and Lukasz Oles for reviewing this article!
Check out more Kubernetes resources.
The post How do you build 12-factor apps using Kubernetes? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Comcast Business offers up IBM Cloud dedicated links

Comcast Business has partnered with IBM to offer its customers a direct, dedicated network link to IBM Cloud and its global network of 50 data centers in 19 countries.
Using those direct links, customers will be able to access the network at speeds of up to 10 gigabits per second.
The partnership gives enterprise customers "more choices for connectivity so they can store data, optimize their workloads and execute mission-critical applications in the cloud, whether it be on-premise, off-premise or a combination of the two," said Jeff Lewis, vice president of data services at Comcast Business.
Enterprises can also gain greater speed, reliability and security with dedicated links than with a standard, open internet connection. Services will be backed by a service-level agreement.
"Enterprises definitely need help with cloud implementations and anything that can make it easier for them is a good thing," Jack Gold, an analyst at J. Gold Associates, told ComputerWorld.
Read more in ComputerWorld's full article.
The post Comcast Business offers up IBM Cloud dedicated links appeared first on news.
Source: Thoughts on Cloud

Enabling faster prototyping of IoT solutions

AT&T, a US-based multinational communications company, is connecting millions of Internet of Things (IoT) devices. Partnering with IBM, developers can deliver IoT innovation and insights on a hybrid cloud platform.
Taking the next step forward in a long-standing collaboration
AT&T and IBM have a longstanding relationship, which has deepened over the past 20 years as we have leveraged each other's strengths to drive value to the market. We are extending our collaboration to the Internet of Things, where we are integrating our capabilities to make it easier for developers to create end-to-end IoT solutions and gain deeper insights from data collected from connected devices.
This partnership is a first of a kind in the industry, providing one platform that scales from the device, through the network to powerful analytics to gain deep insights into IoT data.
Paving the way for new opportunities using deeper data insight
There is a strong industry and business need for developers who can create the next generation of innovative IoT solutions, so enterprises can benefit from the massive amounts of data to be generated by the more than 29 billion devices that IDC says will be connected by 2020.
According to the VisionMobile 2016 Internet of Things Megatrends report, nearly 10 million developers will be active in IoT by 2020, doubling from the estimated 5 million today. As businesses depend more on IoT solutions to succeed, they must invest in developers to stay competitive.
AT&T Flow Designer is now open for business
As of January 2017, AT&T Flow Designer, a graphical application development tool based on IBM Node-RED, is available on IBM Bluemix. Flow Designer, a unique, single platform for IoT DevOps, has been integrated with Watson APIs to enable developers to easily embed cognitive capabilities in their IoT solutions.
The solution integrates IBM Watson IoT cognitive capabilities and IBM Bluemix cloud technology with AT&T Flow Designer to help companies transform their businesses. The new solution will enable developers to quickly gain deeper insights from data collected by connected devices, which has the potential to reveal new market opportunities and improve productivity. These open standards-based tools allow developers to improve their skills and avoid the churn of learning new tools, protecting investments made in IoT solution development.
Using the solution, IoT developers can realize faster time to value, enhanced levels of security and simple, one-stop shopping that can increase their productivity in building innovative applications.
Creating open standards-based tools to build IoT solutions quickly
Combining the unique strengths in cognitive computing and global connectivity to create open standards-based tools on the IBM Cloud, the partnership between AT&T and IBM enables developers to quickly build and implement widely compatible IoT solutions. For example, Node-RED, an IBM-developed IoT tool now open source through the Linux Foundation, is embedded in AT&T’s Flow Designer. This allows developers to tap the Node-RED community’s hundreds of nodes to include new capabilities into their flows.
With this enhanced ability to deploy apps on the IBM Cloud, developers will have more visibility and understanding into the “things” they connect. For example, imagine an asset tracking app that not only shows the location of an asset, but also couples that with weather data; businesses would then be able to predict delays in the supply chain and reroute deliveries due to bad weather. Adding the Watson Speech API would enable operators of these assets to use the "hands-free" driving capability to monitor engine performance in real time to help avoid breakdowns.
A convenient one-stop shop for developers
The IBM and AT&T collaboration provides one-stop-shop access to the tools and capabilities needed to create end-to-end IoT solutions – inclusive of device, global connectivity, platforms, applications and analytics. Developers can rapidly compose and deploy IoT analytics applications and industry focused solutions that provide data to generate new business models and insights.
IBM brings its Watson IoT Platform and strong analytics capabilities, which partners nicely with the global connectivity (cellular and satellite) and IoT services that have made AT&T a leader in connected devices. The solution makes it easier for developers and enterprises to create innovative IoT applications and gain deeper insights from connected devices.
Multiple Watson APIs available to use
Watson APIs can be used to break down barriers to analyzing unstructured data and provide access to powerful capabilities, including advanced cognitive computing, machine learning and deep learning approaches to help better understand and engage users and tackle the massive growth of data in multiple formats.
The full list of IBM APIs available in AT&T Flow Designer is:

Cloudant
IBM Push Notifications
Watson IoT Platform
OpenWhisk
IBM Watson

Alchemy Feature Extract
Alchemy Image Analysis
Watson Language Identification
Watson Language Translation
Watson Natural Language Classifier
Watson Personality Insights
Watson Relationship Extraction
Watson Speech-to-Text
Watson Text-to-Speech
Watson Tradeoff Analytics
Watson Visual Recognition

Exciting use cases continue to emerge
Internet of Things use cases have a common set of fundamental requirements, such as easily onboarding any connected thing, creating a real-time communication channel with the thing, capturing data from the thing and storing it in a historical database, providing access to the collected data, and managing the things and the connectivity to them. In addition to these common elements, there are more complex use cases with extended requirements such as: providing a layer of analytics on the data in both real-time and on historical trend data, triggering events based on specific data conditions, and interacting with the thing from business apps and/or from mobile devices.
Here are two simple use case examples which use different APIs with Flow Designer and Watson IoT Platform:
Manage fleets using Watson Speech API:
Businesses share data on their vehicles in near real-time using fleet management apps. Fleet operators can track a vehicle’s location, tap into Watson Speech API for ‘hands-free’ driving and help monitor engine performance to avoid breakdowns. The IBM Watson IoT Platform gives operators more detailed analytics.
Benefit: They’re then better prepared to face unexpected challenges. Fleets can become more efficient, profitable and deliver better customer service.
Maintain tools tapping Watson Tradeoff Analytics API:
A predictive maintenance and quality solution offers near real-time analytics to increase the lifetime of business assets tapping into Watson Tradeoff Analytics API. A farming company can determine which tractors are in the best condition, if proactive maintenance is required and make better decisions about which equipment to take out of service and when. The app uses current and historical data to recommend which to use and which to repair, which helps to minimize equipment downtime.
Benefit: The farm is more productive, saves costs, and its tractors perform better.
A global solution
The IoT landscape is becoming more mobile and more divergent as devices proliferate around the world. The integration of AT&T and IBM IoT platforms provides global device connectivity, ease of development, shortens the application development life cycle and provides faster time to benefit realization. AT&T and IBM’s commitment to open standards and industry standards bodies ensure that solutions are scalable and extensible across a wide variety of device types and platforms.
Enterprises that are global with footprints in multiple regions will find the technology especially valuable due to the combined global reach of IBM and AT&T. AT&T is connecting more IoT devices than any other provider in North America, with a global network that reaches over 200 countries and territories. IBM is an established leader in the IoT, with more than 4,000 IoT client engagements in 170 countries, 1,400 partners in its growing ecosystem and more than 750 IoT patents.
Get started with AT&T Flow Designer on IBM Bluemix
Unlock the power of the Internet of Things by prototyping, building and hosting IoT applications with AT&T’s Flow Designer on Bluemix. AT&T Flow Designer is a robust web-based development environment where data-driven applications can be designed and deployed with ease. Flow makes it easy to prototype IoT and machine-to-machine (M2M) solutions. Flow nodes are open source and available via GitHub. The solution is available to the vast network of the AT&T and IBM developer communities.
Learn more about AT&T Flow Designer, IBM Watson IoT Platform and IoT tools from AT&T.
A version of this article originally appeared on the IBM IoT blog.
The post Enabling faster prototyping of IoT solutions appeared first on news.
Source: Thoughts on Cloud

Watson identifies the best shots at the Masters

Golf fans know great shots when they see them. And now, Watson does, too.
For this year's Masters Tournament, IBM — which has a long history with the Masters as a technology partner — is making use of Watson's Cognitive Highlights capability to find those memorable moments and spotlight them at Augusta National Golf Club and in cloud-based streaming video apps. It's a first for sporting events.
"This year, they really wanted to take the Masters' digital projects to a new level, so we began thinking about how we can have an immersive video experience and what would make that space even more impressive," said John Kent, program manager for the IBM worldwide sports and entertainment partnership group. "That's how Watson became involved."
The Watson Cognitive Highlights technology uses factors including player gestures and crowd noise to pinpoint shots worthy of replay.
For more, check out ZDNet's full article.
The post Watson identifies the best shots at the Masters appeared first on news.
Source: Thoughts on Cloud

Mirantis Cloud Platform: Stop wandering in the desert

There's no denying that the last year has seen a great deal of turmoil in the OpenStack world, and here at Mirantis we're not immune to it.
In fact, some would say that we're part of that turmoil. Well, we are in the middle of a sea change in how we handle cloud deployments, moving from a model in which we focused on deploying OpenStack to one in which we focus on achieving outcomes for our customers.
And then there's the fact that we are changing the architecture of our technology.
It's true. Over the past few months, we have been moving from Mirantis OpenStack to Mirantis Cloud Platform (MCP), but there's no need to panic. While it may seem a little scary, we're not moving away from OpenStack – rather, we are growing up and tackling the bigger picture, not just a part of it. In early installations with marquee customers, we’ve seen MCP provide a tremendous advantage in deployment and scale-out time. In just a few days, we will publicly launch MCP, and you will have our first visible signpost leading you out of the desert. We still have lots of work to do, but we're convinced this is the right path for our industry to take, and we're making great progress in that direction.
Where we started
To understand what's going on here, it helps to have a firm grasp of where we started.
When I started here at Mirantis four years ago, we had one product, Mirantis Fuel, and it had one purpose: deploy OpenStack. Back then that was no easy feat. Even with a tool like Fuel, it could be a herculean task taking many days and lots of calls to people who knew more than I did.
Over the intervening years, we came to realize that we needed to take a bigger hand in OpenStack itself, and we produced Mirantis OpenStack, a set of hardened OpenStack packages.  We also came to realize that deployment was only the beginning of the process; customers needed Lifecycle Management.
The Big Tent
And so Fuel grew. And grew. And grew. Finally, Fuel became so big that we felt we needed to involve the community even more than we already had, and we submitted Fuel to the Big Tent.
Here Fuel has thrived, and does an awesome job of deploying OpenStack, and a decent job at lifecycle management.
But it's not enough.
Basically, when you come right down to it, OpenStack is nothing more than a big, complicated, distributed application. Sure, it's a big, complicated distributed application that deploys a cloud platform, but it's still a big, complicated distributed application.
And let's face it: deploying and managing big, complicated, distributed applications is a solved problem.
The Mirantis Cloud Platform architecture
So let's look at what this means in practice. The most important thing to understand is that where Mirantis OpenStack was focused on deployment, MCP is focused on the operations tasks you need to worry about after that deployment. MCP means:

A single cloud that runs VMs, containers, and bare metal with rich Software Defined Networking (SDN) and Software Defined Storage (SDS) functionality
Flexible deployment and simplified operations and lifecycle management through a new DevOps tool called DriveTrain
Operations Support Services in the form of enhanced StackLight software, which also provides continuous monitoring to ensure compliance to strict availability SLAs

OK, so that's a little less confusing, but there's still a lot of "sales" speak in there.
Let's get down to the nitty gritty of what MCP means.
What Mirantis Cloud Platform really means
Let's look at each of those things individually and see why it matters.
A multi-platform cloud
There was a time when you would have separate environments for each type of computing you wanted to do. High performance workloads ran on bare metal, virtual machines ran on OpenStack, containers (if you were using them at all) ran on their own dedicated clusters.
In the last few years, bare metal was brought into OpenStack, so that you could manage your physical machines the same way you managed your virtual ones.
Now Mirantis Cloud Platform brings in the last remaining piece. Your Kubernetes cluster is part of your cloud, enabling you to easily manage your container-based applications in the same environment and with the same tools as your traditional cloud resources.
All of this is made possible by the inclusion of powerful SDN and SDS components. Software Defined Networking for OpenStack is handled by OpenContrail, providing the benefits of commercial-grade networking without the lock-in, with Calico stepping in for the container environment. Storage takes the form of powerful open source Ceph clusters, which are used by both OpenStack and container applications.
These components enable MCP to provide an environment where all of these pieces work together seamlessly, so your cloud can be so much more than just OpenStack.
Knowing what's happening under the covers
With all of these pieces, you need to know what's happening - and what might happen next. To that end, Mirantis Cloud Platform includes an updated version of StackLight, which gives you a comprehensive view of how each component of your cloud is performing; if an application on a particular VM acts up, you can isolate the problem before it brings down the entire node.
What's more, the StackLight Operations Support System analyzes the voluminous information it gets from your OpenStack cloud and can often let you know there's trouble - before it causes problems.
All of this enables you to ensure uptime for your users - and compliance with SLAs.
Finally solving the operations dilemma
Perhaps the biggest change, however, is in the form of DriveTrain. DriveTrain is a combination of various open source projects, such as Gerrit and Jenkins for CI/CD and Salt for configuration management, enabling a powerful, flexible way for you to both deploy and manage your cloud.
Because let's face it: the job of running a private cloud doesn't end when you've spun up the cloud - it's just begun.
Upgrading OpenStack has always been a nightmare, but DriveTrain is designed so that your cloud infrastructure software can always be up to date. Here's how it works:
Mirantis continually monitors changes to OpenStack and other relevant projects, providing extensive testing and making sure that no errors get introduced, in a process called "hardening". Once we decide these changes are ready for general use, we release them into the DriveTrain CI/CD infrastructure.
Once changes hit the CI/CD infrastructure, you pull them down into a staging environment and decide when you're ready to push them to production.
In other words, no more holding your breath every six months - or worse, running cloud software that's a year old.
Where do you want to go?
OpenStack started with great promise, but in the last few years it's become clear that the private cloud world is more than just one solution; it's time for everyone - and that includes us here at Mirantis - to step up and embrace a future that includes virtual machines, bare metal and containers, but in a way that makes both technological and business sense.
Because at the end of the day, it's all about outcomes; if your cloud doesn't do what you want, or if you can't manage it, or if you can't keep it up to date, you need something better. We've been working hard at making MCP the solution that gets you where you want to be. Let us know how we can help get you there.
The post Mirantis Cloud Platform: Stop wandering in the desert appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Red Hat joins the DPDK Project

Today, the DPDK community announced during the Open Networking Summit that they are moving the project to the Linux Foundation, and creating a new governance structure to enable companies to engage with the project, and pool resources to promote the DPDK community. As a long-time contributor to DPDK, Red Hat is proud to be a founding Gold member of the new DPDK Project initiative under the Linux Foundation.
"Open source communities continue to be a driving force behind technology innovation, and open networking and NFV are great examples of that. Red Hat believes deeply in the power of open source to help transform the telecommunications industry, enabling service providers to build next generation efficient, flexible and agile networks," said Chris Wright, Vice President and Chief Technologist, Office of Technology at Red Hat. "DPDK has played an important role in this network transformation, and our contributions to the DPDK community are aimed at helping to continue this innovation."
DPDK, the Data Plane Development Kit, is a set of libraries and drivers which enable very fast processing of network packets, by handling traffic in user space or on specialized hardware to provide greater throughput and processing performance. The ability to do this is vital to get the maximum performance out of network hardware under dataplane intensive workloads. For this reason, DPDK has become key to the telecommunications industry as part of Network Functions Virtualization (NFV) infrastructure, to enable applications like wireless and wireline packet core, deep packet inspection, video streaming, and voice services.

Open source projects like DPDK have taken a leadership role in driving the transition to NFV and enabling technology innovation in the field of networking by accelerating the datapath for network traffic across virtual switching and routing infrastructure.
It is opportune that this move is announced during the Open Networking Summit, an event which celebrates the role of open source projects and open standards in the networking industry. DPDK is a critical component to enable projects like OPNFV, Open vSwitch and fd.io to accelerate the datapath for network traffic across virtual switching and routing infrastructure, and provide the necessary performance to network operators.
Source: RedHat Stack

Red Hat Summit 2017 – Planning your OpenStack labs

This year in Boston, MA you can attend the Red Hat Summit 2017, the event to get your updates on open source technologies and meet with all the experts you follow throughout the year.
It's taking place from May 2-4 and is full of interesting sessions, keynotes, and labs.
This year I was part of the process of selecting the labs you are going to experience at Red Hat Summit and wanted to share here some to help you plan your OpenStack labs experience. These labs are for you to spend time with the experts who will teach you hands-on how to get the most out of your Red Hat OpenStack product.
Each lab is a 2-hour session, so planning is essential to getting the most out of your days at Red Hat Summit.
As you might be struggling to find and plan your sessions together with some lab time, here is an overview of the labs you can find in the session catalog for exact room and times. Each entry includes the lab number, title, abstract, instructors and is linked to the session catalog entry:

L103175 - Deploy Ceph Rados Gateway as a replacement for OpenStack Swift
Come learn about these new features in Red Hat OpenStack Platform 10: There is now full support for Ceph Rados Gateway, and "composable roles" let administrators deploy services in a much more flexible way. Ceph capabilities are no longer limited to block only. With a REST object API, you are now able to store and consume your data through a RESTful interface, just like Amazon S3 and OpenStack Swift. Ceph Rados Gateway has 99.9% API compliance with Amazon S3, and it can communicate with the Swift API. In this lab, you'll tackle the REST object API use case, and to get the most out of your Ceph cluster, you'll learn how to use Red Hat OpenStack Platform director to deploy Red Hat OpenStack Platform with dedicated Rados Gateway nodes.
Instructors: Sebastien Han, Gregory Charot, Cyril Lopez
 
L104387 - Hands on for the first time with Red Hat OpenStack Platform
In this lab, an instructor will lead you in configuring and running core OpenStack services in a Red Hat OpenStack Platform environment. We'll also cover authentication, compute, networking, and storage. If you're new to Red Hat OpenStack Platform, this session is for you.
Instructors: Rhys Oxenham, Jacob Liberman, Guil Barros
 
L102852 - Hands on with Red Hat OpenStack Platform director
Red Hat OpenStack Platform director is a tool set for installing and managing Infrastructure-as-a-Service (IaaS) clouds. In this two-hour instructor-led lab, you will deploy and configure a Red Hat OpenStack Platform cloud using OpenStack Platform director. This will be a self-paced, hands-on lab, and it'll include both the command line and graphical user interfaces. You'll also learn, in an interactive session, about the architecture and approach of Red Hat OpenStack Platform director.
Instructors: Rhys Oxenham, Jacob Liberman
 
L104665 - The Ceph power show—hands on with Ceph
Join our Ceph architects and experts for this guided, hands-on lab with Red Hat Ceph Storage. You'll get an expert introduction to Ceph concepts and features, followed by a series of live interactive modules to gain some experience. This lab is perfect for users of all skill levels, from beginners to experienced users who want to explore advanced features of OpenStack storage. You'll get some credits to the Red Hat Ceph Storage Test Drive portal that can be used later to learn about and evaluate Red Hat Ceph Storage and Red Hat Gluster Storage. You'll leave this session with a better understanding of Ceph architecture and concepts, experience on Red Hat Ceph Storage, and the confidence to install, set up, and provision Ceph in your own environment.
Instructors: Karan Singh, Kyle Bader, Daniel Messer
As you can see, there is plenty of OpenStack in these hands-on labs to get you through the week, and we hope to welcome you to one or more of the labs!
Source: RedHat Stack

Momentum mounts for Kubernetes, cloud native

For any new technology, there are few attributes more valuable than momentum. In the open tech space, few projects have as much momentum as Kubernetes and cloud native application development.
The Cloud Native Computing Foundation (CNCF) kicked off the European leg of its biannual CloudNativeCon/ event in Berlin by welcoming five new member organizations and two new projects.
CNCF has pulled in rkt and containerd as its eighth and ninth open projects, joining Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing, gRPC and CoreDNS.
IBM senior technical staff member Phil Estes is one of the open source maintainers for containerd. He explained a bit about the project and the role of IBM in the video below:

This week, containerd joined the @CloudNativeFdn. @estesp explains what it means for the community. Details: https://t.co/AQigsrXzqY pic.twitter.com/oC9XAOjO9D
— IBM Cloud (@IBMcloud) March 30, 2017

Meanwhile, CNCF announced that SUSE, HarmonyCloud, QAware, Solinea and TenxCloud have joined as contributing member organizations.
"The cloud native movement is increasingly spreading to all parts of the world," CNCF executive director Dan Kohn told a sellout crowd of 1,500. That number tripled from CloudNativeCon in London a year prior.
We reported last fall that Kubernetes adoption was on the cusp of catching a giant wave. That wave has evolved into a groundswell among developers. There are now 4,000 projects based on Kubernetes, more than 50 products supporting it and more than 200 meetups around the world.
Even more significant has been the IBM announcement in March that Kubernetes is available on IBM Bluemix Container Service.
Linux Foundation Vice President Chris Aniszczyk and IBM Fellow, VP and Cloud Platform CTO Jason McGee discussed the move by IBM to Kube (and much more) on a podcast recoded from the venue. You can listen to it here:

A few more highlights from Berlin:
• 17-year-old Lucas Käldström, the youngest core Kubernetes maintainer, wowed the crowd with his talk on autoscaling a multi-platform Kubernetes cluster built with kubeadm.

Listening to Lucas talk about multi-architecture cluster support for containers/k8s. Oh, he's in high school too! pic.twitter.com/V8G3qAylzz
— Phil Estes (@estesp) March 30, 2017

• Docker’s Justin Cormack delivered one of the conference’s most popular sessions with his talk on containerd:

Now @justincormack from @Docker talking containerd in SRO room @CloudNativeFdn Kubecon Berlin. Hey @chanezon open a window, it's hot! pic.twitter.com/SlVHCyTwH6
— Jeffrey Borek (@jeffborek) March 30, 2017

• An update on the Open Container Initiative from Jeff Borek (IBM), Chris Aniszczyk (Linux Foundation), Vincent Batts (Red Hat) and Brandon Philips (CoreOS)

An update on @OCI_ORG and container standards from @Cra, @JeffBorek, @vbatts, @sauryadas_ & @BrandonPhilips. … https://t.co/MqqBKxwjBU
— Kevin J. Allen (@KevJosephAllen) March 29, 2017

More information about Bluemix.
The post Momentum mounts for Kubernetes, cloud native appeared first on news.
Source: Thoughts on Cloud

User Group Newsletter March 2017

User Group Newsletter March 2017
 
BOSTON SUMMIT UPDATE
Exciting news! The schedule for the Boston Summit in May has been released. You can check out all the details on the Summit schedule page.
Travelling to the Summit and need a visa? Follow the steps in this handy guide.
If you haven’t registered, there is still time! Secure your spot today! 
 
HAVE YOUR SAY IN THE SUPERUSER AWARDS!

The OpenStack Summit kicks off in less than six weeks and seven deserving organizations have been nominated to be recognized during the opening keynotes. For this cycle, the community (that means you!) will review the candidates before the Superuser editorial advisors select the finalists and ultimate winner. See the full list of candidates and have your say here. 
 
COMMUNITY LEADERSHIP CHARTS COURSE FOR OPENSTACK
About 40 people from the OpenStack Technical Committee, User Committee, Board of Directors and Foundation Staff convened in Boston to talk about the future of OpenStack. They discussed the challenges we face as a community, but also why our mission to deliver open infrastructure is more important than ever. Read the comprehensive meeting report here.
 
NEW PROJECT MASCOTS
Fantastic new project mascots were released just before the Project Teams Gathering. Read the story behind your favourite OpenStack project mascot via this superuser post.
 
WELCOME TO OUR NEW USER GROUPS
We have some new user groups which have joined the OpenStack community.
Spain- Canary Islands
Mexico City - Mexico
We wish them all the best with their OpenStack journey and can’t wait to see what they will achieve! Looking for your local group? Are you thinking of starting a user group? Head to the groups portal for more information.
 
LOOK OUT FOR YOUR FELLOW STACKERS AT COMMUNITY EVENTS
OpenStack is participating in a series of upcoming Community events this April.
April 3: Open Networking Summit Santa Clara, CA

OpenStack is sponsoring the Monday evening Open Source Community Reception at Levi Stadium
Ildiko Vancsa will be speaking in two sessions:
Monday, 9:00-10:30am on "The Interoperability Challenge in Telecom and NFV Environments", with EANTC Director Carsten Rossenhovel and Chris Price, room 207
Thursday, 1:40-3:30pm, OpenStack Mini-Summit, topic "OpenStack: Networking Roadmap, Collaboration and Contribution" with Armando Migliaccio and Paul Carver from AT&T; Grand Ballroom A&B

 
April 17-19: DockerCon, Austin, TX

OpenStack will have a booth

 
April 19-20: Global Open Source Summit, Beijing, China

Mike Perez will be delivering an OpenStack keynote

 
OPENSTACK DAYS: DATES FOR YOUR CALENDAR
We have lots of OpenStack Days coming up:
June 1: Australia
June 5: Israel
June 7: Budapest
June 26: Germany Enterprise (DOST)
Read further information about OpenStack Days on this website. You'll find a FAQ, highlights from previous events, and an extensive toolkit for hosting an OpenStack Day in your region.
 
CONTRIBUTING TO UG NEWSLETTER
If you’d like to contribute a news item for next edition, please submit to this etherpad.
Items submitted may be edited down for length, style and suitability.
This newsletter is published on a monthly basis.
 
 
 
Source: openstack.org

Scaling with Kubernetes DaemonSets

We're used to thinking about scaling from the point of view of a deployment; we want it to scale up under different conditions, so it looks for appropriate nodes, and puts pods on them. DaemonSets, on the other hand, take a different tack: any time you have a node that belongs to the set, it runs the pods you specify. For example, you might create a DaemonSet to tell Kubernetes that any time you create a node with the label app=webserver you want it to run Nginx. Let's take a look at how that works.
Creating a DaemonSet
Let's start by looking at a sample YAML file to define a DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
        - name: webserver
          image: nginx
          ports:
            - containerPort: 80
Here we're creating a DaemonSet called frontend. As with a ReplicationController, pods launched by the DaemonSet are given the label specified in the spec.template.metadata.labels property - in this case, app=frontend-webserver.
The template.spec itself has two important parts: the nodeSelector and the containers.  The containers are fairly self-evident (see our discussion of ReplicationControllers if you need a refresher) but the interesting part here is the nodeSelector.
The nodeSelector tells Kubernetes which nodes are part of the set and should run the specified containers. In other words, these pods are deployed automatically; there's no input at all from the scheduler, so schedulability of a node isn't taken into account. On the other hand, Daemon Sets are a great way to deploy pods that need to be running before other objects.
Let's go ahead and create the Daemon Set. Create a file called ds.yaml with the definition in it and run the command:
$ kubectl create -f ds.yaml
daemonset "frontend" created
Now let's see the Daemon Set in action.
Scaling capacity using a DaemonSet
If we check to see if the pods have been deployed, we'll see that they haven't:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
That's because we don't yet have any nodes that are part of our DaemonSet. If we look at the nodes we do have…
$ kubectl get nodes
NAME        STATUS    AGE
10.0.10.5   Ready     75d
10.0.10.7   Ready     75d
We can go ahead and add at least one of them by adding the app=frontend-node label:
$ kubectl label node 10.0.10.5 app=frontend-node
node “10.0.10.5” labeled
Now if we get a list of pods again…
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          19s
We can see that the pod was started without us taking any additional action.  
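You can also double-check from the DaemonSet object itself, which reports how many nodes it currently targets and how many pods are scheduled and ready (the exact output columns vary by Kubernetes version):

$ kubectl get daemonset frontend
$ kubectl describe daemonset frontend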
Now we have a single webserver running.  If we wanted to scale up, we could simply add our second node to the Daemon Set:
$ kubectl label  node 10.0.10.7 app=frontend-node
node “10.0.10.7” labeled
If we check the list of pods again, we can see that a new one was automatically started:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          1m
frontend-rp9bu              1/1       Running   0          35s
If we remove a node from the DaemonSet, any related pods are automatically terminated:
$ kubectl label node 10.0.10.5 --overwrite app=backend
node “10.0.10.5” labeled

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-rp9bu              1/1       Running   0          1m
Updating Daemon Sets, and improvements in Kubernetes 1.6
OK, so how do we update a running DaemonSet? Well, as of Kubernetes 1.5, the answer is "you don't." Currently, it's possible to change the template of a DaemonSet, but it won't affect the pods that are already running.
Starting in Kubernetes 1.6, however, you will be able to do rolling updates with Kubernetes DaemonSets. You'll have to set the updateStrategy, as in:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
        - name: webserver
          image: nginx
          ports:
            - containerPort: 80
Once you've done that, you can make changes and they'll propagate to the running pods. For example, you can change the image on which the containers are based:
$ kubectl set image ds/frontend webserver=httpd
If you want to make more substantive changes, you can edit or patch the Daemon Set:
kubectl edit ds/frontend
or
kubectl patch ds/frontend -p "$(cat ds-changes.yaml)"
(Obviously you would use your own DaemonSet names and files!)
So that's the basics of working with DaemonSets. What else would you like to learn about them? Let us know in the comments below.
The post Scaling with Kubernetes DaemonSets appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis