Kubernetes Community Report — May 22, 2017

With all the interest in the Kubernetes community we saw at the OpenStack Summit, we want to keep you up to date with all the goings-on. Let’s highlight the most notable changes and achievements of the last few weeks.
Kubernetes 1.7 is on the way
Perhaps the most important item to highlight is the upcoming Kubernetes 1.7 release. Kubernetes releases are scheduled every three months, so development has been underway since just after KubeCon EU at the beginning of April, and the release team has been fully formed. It includes members from Google, CoreOS, Mirantis, Red Hat, Microsoft, and other companies. The release is due to ship at the end of June. Stay tuned!
Focus on features
And what is a release without new features? The focus for Kubernetes 1.6 was on stability; rather than adding many notable new features, it hardened existing ones. (You can get more information about Kubernetes 1.6 from the KubeCon EU keynote by Aparna Sinha, Product Manager at Google.) By contrast, the focus for Kubernetes 1.7 is on new features, and today more than 50 features are targeting the release.
Most of these features will land in the alpha state, which means they bring new functionality at the possible cost of stability and API compatibility; of course, we expect them to move to a more stable state in future releases.
While the code freeze won’t happen until June 1 and the feature list hasn’t been finalized, we already know that major enhancements are expected, especially in the Federation, Security, and Networking areas; stay tuned to future issues to read more about them.
Kubernetes at the OpenStack Summit
The latest OpenStack Summit in Boston was notable for the high volume of Kubernetes-related talks and discussions: dozens of them were on the schedule during the four-day event, and several keynotes were dedicated to the significant role that Kubernetes plays in the OpenStack world.
Another notable event at the OpenStack Summit was Kubernetes Day. This event, organized by the Cloud Native Computing Foundation, was one of the Summit’s Open Source Days and was intended to serve as a Kubernetes-specific track. It was more than successful, with seven presentations covering multiple topics, nearly full rooms, and strong attendee interest.
All this confirms the role that Kubernetes plays in the OpenStack ecosystem: while OpenStack solves the problem of infrastructure abstraction in a completely open source way, Kubernetes does the same at the application services layer.
Improving Kubernetes itself with less pain
There are numerous options for running a Kubernetes cluster locally, whether you’re interested in development, testing, or just trying it out. These include projects such as minikube, kubeadm, and kargo.
Most of these options are focused on the “development-on-Kubernetes” experience, and there is a lack of tools focused on the “development-for-Kubernetes” experience. To address this, a new proposal for running local Kubernetes clusters for development purposes has gone live in the community. The project, named kubeadm-dind-cluster, incorporates the best developer practices and experience for provisioning and running local clusters used for developing Kubernetes itself.
The Kubernetes Community Code of Conduct
After the resolution of some issues that had been reported by members of the Kubernetes community, the Kubernetes Code of Conduct has been announced. You can find the mailing list thread with the description of the situation and the Code of Conduct announcement here.
It’s important to note that the community is serious about enforcement; in a somewhat extraordinary case, an abuser was removed from the Kubernetes community, with access to the Kubernetes Slack, the mailing lists, the GitHub organization, and other resources revoked, based on conduct that occurred before he joined the project. While he can still ask to rejoin, any such request will be thoroughly investigated.
The establishment of a Code of Conduct is a great step toward project maturity. Having this form of public agreement protects individuals and the whole community from abuse incidents, and allows people to focus on their contributions.
Coming up
The next few weeks will be productive for the Kubernetes community as well. Several public community events, including CoreOS Fest in the USA and Container Days in Hamburg, Germany, will happen soon, and we will keep you updated here.
The post Kubernetes Community Report — May 22, 2017 appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Multi-cloud application orchestration on Mirantis Cloud Platform using Spinnaker

If there’s one thing that’s become obvious in the last year or so, it’s that when it comes to cloud, business success is about more than just “installing OpenStack” or “deploying Kubernetes”; it’s about doing whatever it takes to run applications in the most appropriate environment, and in the most reliable way. At the Boston OpenStack Summit in May, I demonstrated a hybrid cloud application that ran Big Data analysis using a combination of components running on bare metal, in virtual machines, and in containers. In this blog, I’ll explain how that worked.
Users aren’t looking for technology
Have you ever brought your car to a mechanic and asked for a new oxygen sensor? Of course not. You go in asking for a solution to bad gas mileage or because your check engine light is on. You don’t care how they fix it, you just want it fixed.
Customers are the same. They don’t care if they have the very best OpenStack distribution if they can’t get their users to stop dropping their credit cards onto AWS or GCE because the experience is so much better.
To that end, we recently shipped Mirantis Cloud Platform (MCP), which is no longer “just” an OpenStack or Kubernetes distribution with an installer. Instead, it includes DriveTrain, which comprises an ecosystem of roughly 140 Salt formulas, a unified metadata model called reclass, Jenkins with CI/CD pipelines, Gerrit for code review, and other support services.
While DriveTrain is intended to deploy OpenStack and Kubernetes and keep them up to date with the latest versions and improvements, it’s not solely for the underlying IaaS. It can also enable application orchestration higher in the stack; in other words, the workloads you run on that IaaS.
By providing an experience more like AWS or GCE, we’re enabling customers to focus on solving their problems. To that end, MCP is a generic platform, which allows us to focus on use cases such as Big Data, IoT, or NFV clouds. In Boston, we showed application orchestration for a sample Big Data use case. Let’s go through the details.
MCP as a unified platform for VMs, containers and baremetal
Over the last year, containers and Kubernetes have gotten a lot of traction in the community. Everyone talks about containers for workloads, but enterprise applications are not ready to simply jump into containers.
Taking the OpenStack arena as an example: after seven years of existence, deployments are finally growing, and OpenStack is mature enough to run enterprise production workloads. Legacy applications need a transition period from VMs to containers, and that will take some time. Therefore, enterprises need a unified hybrid platform where they can split workloads between containers, VMs, and non-virtualized resources, take a best-of-breed approach across all three, and tune them for the best performance and optimal costs. For example, MCP 1.0 comes with OpenStack, Kubernetes, and OpenContrail, making it possible to create an environment where all those components work together to form a platform for legacy application stacks in various stages of transformation.
Let’s go through the steps and architecture of this unified platform. First, we needed to provision all of the hardware nodes in our datacenter with a base operating system, and OpenStack provides that capability via the Ironic project, Bare Metal-as-a-Service (BMaaS). These servers then form the foundation nodes for the control plane and support services, as well as compute power for any workloads, as in Figure 1.
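To make this step a bit more concrete, here is a minimal sketch of enrolling and preparing a bare metal node with Ironic through the openstacksdk Python client. The cloud name, IPMI credentials, and MAC address are hypothetical placeholders; in a real MCP deployment this is driven by DriveTrain rather than ad-hoc scripts.

```python
# Hypothetical sketch: enrolling a bare metal node with Ironic via openstacksdk.
# The cloud name, IPMI credentials, and MAC address below are placeholders.
import openstack

conn = openstack.connect(cloud="mcp-admin")  # assumes a matching clouds.yaml entry

# Enroll the physical server so Ironic can manage its power and deployment over IPMI.
node = conn.baremetal.create_node(
    name="foundation-rack1-01",
    driver="ipmi",
    driver_info={
        "ipmi_address": "10.0.0.11",
        "ipmi_username": "admin",
        "ipmi_password": "secret",
    },
)

# Register the node's NIC so the networking layer can wire it up later.
conn.baremetal.create_port(node_id=node.id, address="52:54:00:12:34:56")

# Walk the node through Ironic's state machine until it is "available";
# the base OS image itself is then deployed on top of this (e.g., via Nova).
conn.baremetal.set_node_provision_state(node, "manage")
conn.baremetal.set_node_provision_state(node, "provide")
conn.baremetal.wait_for_nodes_provision_state([node], "available")
```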

Figure 1: Our hardware nodes were split into three groups.
For this example, we split our servers into three groups: we deployed a Kubernetes cluster on bare metal on one group, standard OpenStack compute nodes on the second, and left the third group alone to act as non-virtualized servers.
OpenContrail SDN enabled us to create a single network layer and connect VMs, containers, and bare metal nodes. OpenContrail has a Neutron plugin for OpenStack and a CNI plugin for Kubernetes, which lets us use the same network technology stack for both. Bare metal servers are then connected through Top-of-Rack (ToR) switches via VXLAN and the OVSDB protocol, as in Figure 2.

Figure 2: One group of servers is dedicated to Kubernetes, one to OpenStack, and one to bare metal nodes; they’re tied together with OpenContrail networking.
In Figure 3, you can see the demo stack, where we have OpenStack and Kubernetes running with two independent OpenContrail control planes, which are federated via BGP (available since version 2.2x). This feature enables you to build independent OpenStack regions and federate their network stacks across multiple sites while still maintaining separate failure domains. Any virtual network can be made directly routable by setting a route target, which establishes a direct datapath from container to VM. Traffic does not go through any gateway or network node, because the vRouter creates an end-to-end MPLSoUDP or VXLAN tunnel. As you can see, we’ve created a direct path between the Kafka pod (container) and the Spark VM.

Now that we know what we’re trying to deploy, let’s look at how we actually deployed it.
Multi-Cloud orchestration with Spinnaker
Now we have a platform that can run any kind of workload, but for developers and operators, what’s more important is how to orchestrate applications. Users do not want to go into OpenStack and manually start a VM, then go to Kubernetes to start a container, and after that plug a non-virtualized bare metal node into the network through a ToR switch. That process is complex, error-prone, and time-consuming.
The real value and usability of this platform comes with higher-level orchestration. Fortunately, we can provide multi-cloud orchestration using a tool called Spinnaker. Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It was developed primarily by Netflix for AWS, but it has gotten a lot of traction and provides drivers for AWS, OpenStack, Kubernetes, Google Cloud Platform, and Microsoft Azure.
Spinnaker’s main goal is to bake and roll out immutable images on different providers with different strategies, letting you manage load balancers, security groups, and server groups made up of VMs and containers. (One important note here is that Spinnaker is not a PaaS, as some people think; it is really for large-scale application orchestration.) Figure 4 shows that we have enabled two providers: OpenStack and Kubernetes.

Figure 4: Spinnaker has two providers enabled: Kubernetes and OpenStack.
Spinnaker has a simple UI, which enables us to build complex pipelines that include stages such as “manual judgment”, “run script”, “webhook”, or “Jenkins job”. MCP includes Jenkins as part of DriveTrain, so integration with Spinnaker pipelines is very easy. I can imagine using Spinnaker just for multi-stage Jenkins pipelines to upgrade hundreds of different OpenStack sites.
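To illustrate how such pipelines can also be driven programmatically (for example, from a DriveTrain Jenkins job), here is a small sketch that triggers a pipeline through Spinnaker’s Gate REST API. The Gate URL, application name, pipeline name, and parameters are assumptions made for the example, not values from our demo environment.

```python
# Hypothetical sketch: triggering a Spinnaker pipeline execution via the Gate API.
# The Gate URL, application, pipeline name, and parameters are placeholders.
import requests

GATE_URL = "http://spinnaker-gate.example.com:8084"
APPLICATION = "twitter-analytics"          # assumed Spinnaker application name
PIPELINE = "big-data-infrastructure"       # assumed pipeline name

# Gate can start a pipeline by name; the JSON body becomes the trigger payload
# that the pipeline's stages can read (for example, as parameters).
resp = requests.post(
    f"{GATE_URL}/pipelines/{APPLICATION}/{PIPELINE}",
    json={"type": "manual", "user": "drivetrain", "parameters": {"environment": "demo"}},
    timeout=30,
)
resp.raise_for_status()
print("Pipeline execution reference:", resp.json().get("ref"))
```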
Figure 5 shows the Big Data infrastructure we want Spinnaker to orchestrate on top of MCP. Basically, we deploy HDFS on a couple of bare metal servers, Spark in VMs, and Kafka with ZooKeeper in containers. (We’ll talk more about the actual stack in the next subsection.)

Figure 5: Spinnaker will orchestrate Kafka on our Kubernetes cluster, Apache Spark on our VMs, and Hadoop HDFS on the bare metal nodes.
Figure 6 shows our Big Data infrastructure pipeline, which we’ll need later for our Big Data use case. As you can see, you can create multiple stages with different dependencies, and each stage can send notifications via Slack, email, and so on.

Figure 6: The infrastructure pipeline shows multiple stages with different dependencies.
Finally, Figure 7 shows the view of the deployed server groups, with Spark on VMs and Kafka/ZooKeeper in Kubernetes containers.

Figure 7: Spinnaker’s view of the deployed server groups; Spark is running on VMs, and Kafka and Zookeeper are running in Kubernetes-orchestrated containers.
Big Data Twitter analytics
We wanted to use a real-world use case and not just show orchestration or a ping between VMs and containers, so we picked Big Data, because we are working on similar implementations for a couple of customers. To make it manageable for a short demo, we created a simple app that processes the real-time Twitter Streaming API, as you can see in Figure 8.

Figure 8: The Twitter Poller pulls data into Kafka, which then sends it to Spark for analysis before it gets sent to Hadoop for visualization and storage.
Our stack consists of the following components:

Tweetpub – Reads tweets from the Twitter Streaming API and puts them into a Kafka topic
Apache Kafka – Messaging that transmits data from Twitter into Apache Spark. Alongside Kafka, we run Apache ZooKeeper as a requirement.
Apache Spark – Data processing, running the TweeTics job.
TweeTics – Spark job that parses tweets from Kafka and stores hashtag popularity as text files in HDFS.
Apache Hadoop HDFS – Data store for processed data.
Tweetviz – Reads processed data from HDFS and shows hashtag popularity as a tag cloud.

We configured Tweetpub to look for tweets sent from the Boston, MA area using the location parameters (-71.4415,41.9860,-70.4747,42.9041), and tracked the following words (a minimal sketch of this configuration follows the list below):

openstack
kubernetes
k8s
python
golang
sdn
nfv
weareopenstack
opencontrail
docker
hybridcloud
aws
azure
openstacksummit
mirantis
coreos
redhat
devops
container
api
bigdata
cloudcomputing
iot

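To make that configuration concrete, here is a minimal sketch of what a Tweetpub-style poller could look like using the tweepy and kafka-python libraries. The API credentials, broker address, and topic name are hypothetical placeholders, and the actual Tweetpub implementation may be structured differently.

```python
# Hypothetical sketch of a Tweetpub-style poller: stream tweets matching the
# Boston bounding box and track list into a Kafka topic. Credentials, broker
# address, and topic name are placeholders, not the real Tweetpub settings.
import json
import tweepy
from kafka import KafkaProducer

TRACK = ["openstack", "kubernetes", "k8s", "python", "golang", "sdn", "nfv",
         "docker", "devops", "bigdata", "iot"]            # subset of the full list above
LOCATIONS = [-71.4415, 41.9860, -70.4747, 42.9041]        # Boston, MA bounding box

producer = KafkaProducer(
    bootstrap_servers="kafka.k8s.example:9092",           # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

class TweetListener(tweepy.StreamListener):
    def on_status(self, status):
        # Forward only the fields the downstream Spark job needs.
        producer.send("tweets", {"text": status.text, "created_at": str(status.created_at)})

    def on_error(self, status_code):
        return status_code != 420                          # stop on rate limiting

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
stream = tweepy.Stream(auth=auth, listener=TweetListener())
stream.filter(track=TRACK, locations=LOCATIONS)
```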
Once everything was running, the job looked at every tweet that contained one of those words and stored a count for each hashtag in real time. Our 3D visualization page then displayed the most popular hashtags. We found that almost every time, IoT and Big Data were the most common hashtags, regardless of location or time. OpenStackSummit was number 6, as you can see in Figure 9.
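For illustration, here is a minimal sketch of how a TweeTics-style job could count hashtags from Kafka with Spark Streaming and write the results to HDFS. The broker address, topic, and HDFS path are assumptions, and the real TweeTics job may look different.

```python
# Hypothetical sketch of a TweeTics-style Spark Streaming job: consume tweets
# from Kafka, count hashtag occurrences per micro-batch, and save counts to HDFS.
# Requires the spark-streaming-kafka package on the Spark classpath at submit time.
import json
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="tweetics-sketch")
ssc = StreamingContext(sc, 60)  # one-minute micro-batches

stream = KafkaUtils.createDirectStream(
    ssc, ["tweets"], {"metadata.broker.list": "kafka.k8s.example:9092"})

hashtags = (stream
            .map(lambda kv: json.loads(kv[1])["text"])
            .flatMap(lambda text: [w.lower() for w in text.split() if w.startswith("#")])
            .map(lambda tag: (tag, 1))
            .reduceByKey(lambda a, b: a + b))

# Each micro-batch becomes a set of text files under the HDFS prefix.
hashtags.saveAsTextFiles("hdfs://hdfs-namenode.example:8020/tweetics/hashtag-counts")

ssc.start()
ssc.awaitTermination()
```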

Figure 9: The app displays the data in real time, showing the relative popularity of various terms by adjusting their size in the tag cloud. For example, the two most popular topics were IoT and Big Data.
As you can see, once you have a flexible platform such as MCP, it’s much easier to tackle various use cases and help customers onboard their workloads and satisfy their requirements through an open source platform. We’ve shown a relatively simple view of this setup; you can get a more in-depth look by viewing the Deep Demo session on orchestration I gave at the summit.
Next time, I’ll try to bring more insight into the NFV and video streaming use cases.
The post Multi-cloud application orchestration on Mirantis Cloud Platform using Spinnaker appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

KubeCon North America

The post KubeCon North America appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

OpenStack Days Tokyo

The post OpenStack Days Tokyo appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

OpenStack Summit Sydney

The post OpenStack Summit Sydney appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Redefining TV Infrastructure: Shifting Media Processing into the Cloud Q&A

While demand for TV services is being disrupted and subscribers are expecting more personalized experiences, operators need to satisfy customer video demands to stay competitive. However, many workflows still operate in functional silos within organizations, and offering non-linear/over-the-top (OTT) services often remains costly and inefficient. To innovate and keep pace with new OTT entrants, operators need to experiment with cloud-orchestrated functionality.
On March 15, Przemek Bozek, Principal Analyst, Consumer Electronics, Broadband & Video Technology at IHS Markit; Ryan Day, Principal Solutions Architect at Mirantis; and Yaniv Ben-Soussan, VP of Product Management, Cloud and SaaS at Harmonic, presented a webinar exploring the role of the cloud in the pay-TV ecosystem and its importance to agility of service, innovation, and competitive differentiation while reducing the cost of operations and increasing quality of service. Here are some of the questions that came up.
Will Mirantis and Harmonic prepare a reference architecture for this joint solution?
We have several collateral projects nearing completion right now, including deployment guidelines and a detailed case study.
When you used the term “EMS,” did you mean Element Management System?
Yes – “EMS” means Element Management System.
Is load balancing, i.e., video quality vs bandwidth, achieved through Mirantis cloud or Harmonic VOS?
These trade-offs are managed by VOS. In general, the amount of CPU and bandwidth your application requires will be dictated by the number of different profiles you need to support and the traffic under each profile.
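As a rough back-of-the-envelope illustration of that relationship (with made-up numbers, not figures from the webinar), the aggregate egress bandwidth is approximately the sum over your profiles of bitrate times concurrent viewers:

```python
# Hypothetical ABR ladder: (profile name, bitrate in Mbps, concurrent viewers).
# These numbers are illustrative only, not benchmarks from the webinar.
profiles = [
    ("1080p", 6.0, 2000),
    ("720p",  3.5, 5000),
    ("480p",  1.8, 3000),
]

# Aggregate egress bandwidth scales with bitrate x viewers per profile.
total_mbps = sum(bitrate * viewers for _, bitrate, viewers in profiles)
print(f"Approximate egress bandwidth: {total_mbps / 1000:.1f} Gbps")  # ~34.9 Gbps
```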
Did it become necessary to transport raw video within your cloud for distributed processing? If so, did this require specialized routers/NICs?
On the contrary: a design requirement for this cloud-based solution is that all traffic is managed over IP with standard network hardware. That said, there are IP-based protocols for encapsulating video, which are used as needed to transport data among microservice-based components and the ingress and egress systems.
Is the video transcoding in VOS done in software, or is there a requirement for specialized hardware or CPU instructions?
All the encoding and transcoding is managed purely in software running on x86 vCPUs, with no additional special requirements for virtual or physical hardware support.
What are the performance numbers of the video transcoding sessions?
What we find is that, even though we naturally have generalized benchmarks, every customer in this industry has its own unique transcoding requirements, influenced by many factors, including where they are in the world (the US, Europe, or Asia). It also depends on the per-node density you want to achieve. What we usually do, and what we always recommend, is for the customer to benchmark this themselves; we can help with this, since we have a benchmarking system. With those benchmarks and the target transcoding requirements, we can come back to the customer with recommendations for achieving those numbers and maximizing node density.
How does cloud meet cabling? Ingress/egress over coax?
By definition, the cloud environment is based on IP infrastructure, and over that IP infrastructure you can move compressed or uncompressed video. We have different standards and technologies for moving video over IP networks, and we know how to do so efficiently. There are separate systems at ingress and egress that can convert raw video to IP, and back into QAM for your customers.
Is Harmonic VOS running on Docker Containers and/or as a VM? Or is it hybrid?
Core services of Harmonic VOS run as microservices in Docker containers. Other parts of VOS include middleware that does container scheduling and orchestration, provides a persistent data store, and performs other functions. We install all of this on KVM virtual machines provided by OpenStack. Virtual machines are, of course, only one way to provide compute resources to an application; bare metal is another.
Sound interesting? You can view the entire webinar on demand.
The post Redefining TV Infrastructure: Shifting Media Processing into the Cloud Q&A appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Bombardier selects IBM for $700 million services management partnership

Bombardier announced this week that it has signed a six-year, $700 million agreement with IBM that includes IBM Services and IBM Cloud management of the aerospace and transportation company’s worldwide IT infrastructure.
“Bombardier is working to improve productivity, reduce costs and grow earnings,” said Sean Terriah, Bombardier chief information officer, aerospace and corporate office. “This IT transformation initiative will help us better integrate globally to create a best-in-class IT organization.”
Martin Jetter, senior vice president of IBM Global Technology Services, said IBM expertise is helping clients “transform the business of IT to be more competitive, agile and secure through cloud computing and industry services best practices.”
The agreement between IBM and the Montreal-based manufacturer of planes, trains, and other vehicles spans 47 countries and is one of IBM’s largest cloud partnerships in Canada.
Learn more about the IBM partnership with Bombardier.
The post Bombardier selects IBM for $700 million services management partnership appeared first on Cloud computing news.
Source: Thoughts on Cloud