Comcast Business offers up IBM Cloud dedicated links

Comcast Business has partnered with IBM to offer its customers a direct, dedicated network link to IBM Cloud and its global network of 50 data centers in 19 countries.
Using those direct links, customers will be able to access the network at speeds of up to 10 gigabits per second.
The partnership gives enterprise customers “more choices for connectivity so they can store data, optimize their workloads and execute mission-critical applications in the cloud, whether it be on-premise, off-premise or a combination of the two,” said Jeff Lewis, vice president of data services at Comcast Business.
Enterprises can also gain greater speed, reliability and security with dedicated links than with a standard, open internet connection. Services will be backed by a service-level agreement.
“Enterprises definitely need help with cloud implementations, and anything that can make it easier for them is a good thing,” Jack Gold, an analyst at J. Gold Associates, told Computerworld.
Read more in Computerworld’s full article.
Source: Thoughts on Cloud

Enabling faster prototyping of IoT solutions

AT&T, a US-based multinational communications company, is connecting millions of Internet of Things (IoT) devices. Through its partnership with IBM, developers can deliver IoT innovation and insights on a hybrid cloud platform.
Taking the next step forward in a long-standing collaboration
AT&T and IBM have a longstanding relationship, one that has deepened over the past 20 years as each company has leveraged the other’s strengths to drive value to the market. We are extending our collaboration to the Internet of Things, where we are integrating our capabilities to make it easier for developers to create end-to-end IoT solutions and gain deeper insights from the data collected from connected devices.
This partnership is a first of its kind in the industry, providing one platform that scales from the device, through the network, to powerful analytics that yield deep insights into IoT data.
Paving the way for new opportunities using deeper data insight
There is a strong industry and business need for developers who can create the next generation of innovative IoT solutions, so enterprises can benefit from the massive amounts of data that will be generated by the more than 29 billion devices IDC says will be connected by 2020.
According to the VisionMobile 2016 Internet of Things Megatrends report, nearly 10 million developers will be active in IoT by 2020, doubling from the estimated 5 million today. As businesses depend more on IoT solutions to succeed, they must invest in developers to stay competitive.
AT&T Flow Designer is now open for business
As of January 2017, AT&T Flow Designer, a graphical application development tool based on IBM Node-RED, is available on IBM Bluemix. Flow Designer, a unique, single platform for IoT DevOps, has been integrated with Watson APIs to enable developers to easily embed cognitive capabilities in their IoT solutions.
The solution integrates IBM Watson IoT cognitive capabilities and IBM Bluemix cloud technology with AT&T Flow Designer to help companies transform their businesses. The new solution will enable developers to quickly gain deeper insights from data collected by connected devices, which has the potential to reveal new market opportunities and improve productivity. These open standards-based tools allow developers to improve their skills and avoid the churn of learning new tools, protecting investments made in IoT solution development.
Using the solution, IoT developers can realize faster time to value, enhanced levels of security and simple, one-stop shopping that can increase their productivity in building innovative applications.
Creating open standards-based tools to build IoT solutions quickly
Combining unique strengths in cognitive computing and global connectivity to create open standards-based tools on the IBM Cloud, the partnership between AT&T and IBM enables developers to quickly build and implement widely compatible IoT solutions. For example, Node-RED, an IBM-developed IoT tool now open source through the Linux Foundation, is embedded in AT&T’s Flow Designer. This allows developers to tap the Node-RED community’s hundreds of nodes to add new capabilities to their flows.
With this enhanced ability to deploy apps on the IBM Cloud, developers will have more visibility into, and understanding of, the “things” they connect. For example, imagine an asset-tracking app that not only shows the location of an asset but couples that location with weather data; businesses could then predict delays in the supply chain and reroute deliveries around bad weather. Adding the Watson Speech API would enable operators of these assets to use its “hands-free” driving capability to monitor engine performance in real time and help avoid breakdowns.
A convenient one-stop shop for developers
The IBM and AT&T collaboration provides one-stop-shop access to the tools and capabilities needed to create end-to-end IoT solutions – inclusive of device, global connectivity, platforms, applications and analytics. Developers can rapidly compose and deploy IoT analytics applications and industry-focused solutions that provide data to generate new business models and insights.
IBM brings its Watson IoT Platform and strong analytics capabilities, which pair nicely with the global connectivity (cellular and satellite) and IoT services that have made AT&T a leader in connected devices. The solution makes it easier for developers and enterprises to create innovative IoT applications and gain deeper insights from connected devices.
Multiple Watson APIs available to use
Watson APIs break down barriers to analyzing unstructured data and provide access to powerful capabilities, including advanced cognitive computing, machine learning and deep learning. These approaches help developers better understand and engage users and tackle the massive growth of data in multiple formats.
The full list of IBM APIs available in AT&T Flow Designer is:

Cloudant
IBM Push Notifications
Watson IoT Platform
OpenWhisk
IBM Watson

Alchemy Feature Extract
Alchemy Image Analysis
Watson Language Identification
Watson Language Translation
Watson Natural Language Classifier
Watson Personality Insights
Watson Relationship Extraction
Watson Speech-to-Text
Watson Text-to-Speech
Watson Tradeoff Analytics
Watson Visual Recognition

Exciting use cases continue to emerge
Internet of Things use cases share a common set of fundamental requirements: easily onboarding any connected thing, creating a real-time communication channel with the thing, capturing data from the thing and storing it in a historical database, providing access to the collected data, and managing the things and the connectivity to them. Beyond these common elements, more complex use cases add extended requirements, such as layering real-time and historical trend analytics on the data, triggering events based on specific data conditions, and interacting with the thing from business apps or mobile devices.
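To make the “real-time communication channel” requirement concrete, here is a minimal sketch of a device publishing one JSON telemetry event to the Watson IoT Platform over MQTT using the stock mosquitto_pub client; the organization ID (myorg), device type (truck), device ID (truck001), and token variable are hypothetical placeholders:

# Publish one telemetry event to the Watson IoT Platform over MQTT.
# "myorg", "truck", "truck001", and DEVICE_TOKEN are placeholder values.
mosquitto_pub \
  -h myorg.messaging.internetofthings.ibmcloud.com -p 8883 \
  --capath /etc/ssl/certs \
  -i "d:myorg:truck:truck001" \
  -u use-token-auth -P "$DEVICE_TOKEN" \
  -t iot-2/evt/status/fmt/json \
  -m '{"d":{"lat":42.36,"lon":-71.06,"engineTemp":88}}'

Once an event like this reaches the platform, it becomes available to flows built in Flow Designer, where it can be joined with other data sources such as weather.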
Here are two simple use case examples which use different APIs with Flow Designer and Watson IoT Platform:
Manage fleets using Watson Speech API:
Businesses share data on their vehicles in near real-time using fleet management apps. Fleet operators can track a vehicle’s location, tap into Watson Speech API for ‘hands-free’ driving and help monitor engine performance to avoid breakdowns. The IBM Watson IoT Platform gives operators more detailed analytics.
Benefit: They’re then better prepared to face unexpected challenges. Fleets can become more efficient, profitable and deliver better customer service.
Maintain tools tapping Watson Tradeoff Analytics API:
A predictive maintenance and quality solution offers near real-time analytics to increase the lifetime of business assets by tapping into the Watson Tradeoff Analytics API. A farming company can determine which tractors are in the best condition, whether proactive maintenance is required, and which equipment to take out of service and when. The app uses current and historical data to recommend which machines to use and which to repair, which helps minimize equipment downtime.
Benefit: The farm is more productive, saves costs, and its tractors perform better.
A global solution
The IoT landscape is becoming more mobile and more divergent as devices proliferate around the world. The integration of the AT&T and IBM IoT platforms provides global device connectivity and ease of development, shortens the application development life cycle, and delivers faster time to value. AT&T and IBM’s commitment to open standards and industry standards bodies ensures that solutions are scalable and extensible across a wide variety of device types and platforms.
Enterprises that are global with footprints in multiple regions will find the technology especially valuable due to the combined global reach of IBM and AT&T. AT&T is connecting more IoT devices than any other provider in North America, with a global network that reaches over 200 countries and territories. IBM is an established leader in the IoT, with more than 4,000 IoT client engagements in 170 countries, 1,400 partners in its growing ecosystem and more than 750 IoT patents.
Get started with AT&T Flow Designer on IBM Bluemix
Unlock the power of the Internet of Things by prototyping, building and hosting IoT applications with AT&T’s Flow Designer on Bluemix. AT&T Flow Designer is a robust web-based development environment where data-driven applications can be designed and deployed with ease. Flow makes it easy to prototype IoT and machine-to-machine (M2M) solutions. Flow nodes are open source and available via GitHub. The solution is available to the vast network of the AT&T and IBM developer communities.
Learn more about AT&T Flow Designer, IBM Watson IoT Platform and IoT tools from AT&T.
A version of this article originally appeared on the IBM IoT blog.
Source: Thoughts on Cloud

Watson identifies the best shots at the Masters

Golf fans know great shots when they see them. And now, Watson does, too.
For this year’s Masters Tournament, IBM — which has a long history with the Masters as a technology partner — is making use of Watson’s Cognitive Highlights capability to find those memorable moments and spotlight them at Augusta National Golf Club and in cloud-based streaming video apps. It’s a first for sporting events.
“This year, they really wanted to take the Masters’ digital projects to a new level, so we began thinking about how we can have an immersive video experience and what would make that space even more impressive,” said John Kent, program manager for the IBM worldwide sports and entertainment partnership group. “That’s how Watson became involved.”
The Watson Cognitive Highlights technology uses factors including player gestures and crowd noise to pinpoint shots worthy of replay.
For more, check out ZDNet’s full article.
Source: Thoughts on Cloud

Mirantis Cloud Platform: Stop wandering in the desert

There’s no denying that the last year has seen a great deal of turmoil in the OpenStack world, and here at Mirantis we’re not immune to it.
In fact, some would say that we’re part of that turmoil. Well, we are in the middle of a sea change in how we handle cloud deployments, moving from a model in which we focused on deploying OpenStack to one in which we focus on achieving outcomes for our customers.
And then there’s the fact that we are changing the architecture of our technology.
It’s true. Over the past few months, we have been moving from Mirantis OpenStack to Mirantis Cloud Platform (MCP), but there’s no need to panic. While it may seem a little scary, we’re not moving away from OpenStack – rather, we are growing up and tackling the bigger picture, not just a part of it. In early installations with marquee customers, we’ve seen MCP provide a tremendous advantage in deployment and scale-out time. In just a few days, we will publicly launch MCP, and you will have our first visible signpost leading you out of the desert. We still have lots of work to do, but we’re convinced this is the right path for our industry to take, and we’re making great progress in that direction.
Where we started
To understand what’s going on here, it helps to have a firm grasp of where we started.
When I started here at Mirantis four years ago, we had one product, Mirantis Fuel, and it had one purpose: deploy OpenStack. Back then that was no easy feat. Even with a tool like Fuel, it could be a herculean task taking many days and lots of calls to people who knew more than I did.
Over the intervening years, we came to realize that we needed to take a bigger hand in OpenStack itself, and we produced Mirantis OpenStack, a set of hardened OpenStack packages.  We also came to realize that deployment was only the beginning of the process; customers needed Lifecycle Management.
The Big Tent
And so Fuel grew. And grew. And grew. Finally, Fuel became so big that we felt we needed to involve the community even more than we already had, and we submitted Fuel to the Big Tent.
Here Fuel has thrived; it does an awesome job of deploying OpenStack and a decent job at lifecycle management.
But it’s not enough.
Basically, when you come right down to it, OpenStack is nothing more than a big, complicated, distributed application. Sure, it’s a big, complicated, distributed application that deploys a cloud platform, but it’s still a big, complicated, distributed application.
And let’s face it: deploying and managing big, complicated, distributed applications is a solved problem.
The Mirantis Cloud Platform architecture
So let’s look at what this means in practice. The most important thing to understand is that where Mirantis OpenStack was focused on deployment, MCP is focused on the operations tasks you need to worry about after that deployment. MCP means:

A single cloud that runs VMs, containers, and bare metal with rich Software Defined Networking (SDN) and Software Defined Storage (SDS) functionality
Flexible deployment and simplified operations and lifecycle management through a new DevOps tool called DriveTrain
Operations Support Services in the form of enhanced StackLight software, which also provides continuous monitoring to ensure compliance with strict availability SLAs

OK, so that’s a little less confusing, but there’s still a lot of “sales” speak in there.
Let’s get down to the nitty-gritty of what MCP means.
What Mirantis Cloud Platform really means
Let’s look at each of those things individually and see why it matters.
A multi-platform cloud
There was a time when you would have separate environments for each type of computing you wanted to do. High performance workloads ran on bare metal, virtual machines ran on OpenStack, containers (if you were using them at all) ran on their own dedicated clusters.
In the last few years, bare metal was brought into OpenStack, so that you could manage your physical machines the same way you managed your virtual ones.
Now Mirantis Cloud Platform brings in the last remaining piece. Your Kubernetes cluster is part of your cloud, enabling you to easily manage your container-based applications in the same environment and with the same tools as your traditional cloud resources.
All of this is made possible by the inclusion of powerful SDN and SDS components. Software Defined Networking for OpenStack is handled by OpenContrail, providing the benefits of commercial-grade networking without the lock-in, with Calico stepping in for the container environment. Storage takes the form of powerful open source Ceph clusters, which are used by both OpenStack and container applications.
These components enable MCP to provide an environment where all of these pieces work together seamlessly, so your cloud can be so much more than just OpenStack.
Knowing what’s happening under the covers
With all of these pieces, you need to know what’s happening now, and what might happen next. To that end, Mirantis Cloud Platform includes an updated version of StackLight, which gives you a comprehensive view of how each component of your cloud is performing; if an application on a particular VM acts up, you can isolate the problem before it brings down the entire node.
What’s more, the StackLight Operations Support System analyzes the voluminous information it gets from your OpenStack cloud and can often let you know there’s trouble before it causes problems.
All of this enables you to ensure uptime for your users and compliance with SLAs.
Finally solving the operations dilemma
Perhaps the biggest change, however, is in the form of DriveTrain. DriveTrain is a combination of various open source projects, such as Gerrit and Jenkins for CI/CD and Salt for configuration management, enabling a powerful, flexible way for you to both deploy and manage your cloud.
Because let’s face it: the job of running a private cloud doesn’t end when you’ve spun up the cloud; it’s just begun.
Upgrading OpenStack has always been a nightmare, but DriveTrain is designed so that your cloud infrastructure software can always be up to date. Here’s how it works:
Mirantis continually monitors changes to OpenStack and other relevant projects, performing extensive testing and making sure that no errors get introduced, in a process called “hardening”. Once we decide these changes are ready for general use, we release them into the DriveTrain CI/CD infrastructure.
Once changes hit the CI/CD infrastructure, you pull them down into a staging environment and decide when you’re ready to push them to production.
In other words, no more holding your breath every six months, and no more running cloud software that’s a year old.
Where do you want to go?
OpenStack started with great promise, but in the last few years it’s become clear that the private cloud world is more than just one solution; it’s time for everyone, and that includes us here at Mirantis, to step up and embrace a future that includes virtual machines, bare metal and containers, but in a way that makes both technological and business sense.
Because at the end of the day, it’s all about outcomes; if your cloud doesn’t do what you want, or if you can’t manage it, or if you can’t keep it up to date, you need something better. We’ve been working hard at making MCP the solution that gets you where you want to be. Let us know how we can help get you there.
Source: Mirantis

Red Hat joins the DPDK Project

Today, the DPDK community announced during the Open Networking Summit that they are moving the project to the Linux Foundation, and creating a new governance structure to enable companies to engage with the project, and pool resources to promote the DPDK community. As a long-time contributor to DPDK, Red Hat is proud to be a founding Gold member of the new DPDK Project initiative under the Linux Foundation.
“Open source communities continue to be a driving force behind technology innovation, and open networking and NFV are great examples of that. Red Hat believes deeply in the power of open source to help transform the telecommunications industry, enabling service providers to build next generation efficient, flexible and agile networks,” said Chris Wright, Vice President and Chief Technologist, Office of Technology at Red Hat. “DPDK has played an important role in this network transformation, and our contributions to the DPDK community are aimed at helping to continue this innovation.”
DPDK, the Data Plane Development Kit, is a set of libraries and drivers which enable very fast processing of network packets, by handling traffic in user space or on specialized hardware to provide greater throughput and processing performance. The ability to do this is vital to get the maximum performance out of network hardware under dataplane intensive workloads. For this reason, DPDK has become key to the telecommunications industry as part of Network Functions Virtualization (NFV) infrastructure, to enable applications like wireless and wireline packet core, deep packet inspection, video streaming, and voice services.
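To give a feel for what “handling traffic in user space” involves operationally, here is a rough sketch of preparing a NIC for a DPDK application using the dpdk-devbind utility that ships with DPDK; the PCI address below is a placeholder for whatever device you want to dedicate:

# Unbind a NIC from its kernel driver and hand it to a userspace-capable
# driver so a DPDK application can poll it directly.
# 0000:02:00.0 is a placeholder PCI address.
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:02:00.0
dpdk-devbind.py --status    # confirm the device is now using vfio-pci

Once bound, the kernel networking stack no longer sees the device; the DPDK application takes over packet reception and transmission directly, which is where the throughput gains come from.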

Open source projects like DPDK have taken a leadership role in driving the transition to NFV and enabling technology innovation in the field of networking by accelerating the datapath for network traffic across virtual switching and routing infrastructure.
It is fitting that this move was announced during the Open Networking Summit, an event which celebrates the role of open source projects and open standards in the networking industry. DPDK is a critical component that enables projects like OPNFV, Open vSwitch and fd.io to accelerate the datapath across virtual switching and routing infrastructure and to provide the necessary performance to network operators.
Source: RedHat Stack

Red Hat Summit 2017 – Planning your OpenStack labs

This year in Boston, MA you can attend the Red Hat Summit 2017, the event to get your updates on open source technologies and meet with all the experts you follow throughout the year.
It’s taking place from May 2-4 and is full of interesting sessions, keynotes, and labs.
This year I was part of the process of selecting the labs you are going to experience at Red Hat Summit, and I wanted to share some of them here to help you plan your OpenStack labs experience. These labs are for you to spend time with the experts, who will teach you hands-on how to get the most out of your Red Hat OpenStack Platform product.
Each lab is a 2-hour session, so planning is essential to getting the most out of your days at Red Hat Summit.
Since you might be struggling to plan your sessions together with some lab time, here is an overview of the labs; check the session catalog for exact rooms and times. Each entry includes the lab number, title, abstract, and instructors, and is linked to the session catalog entry:

L103175 – Deploy Ceph Rados Gateway as a replacement for OpenStack Swift
Come learn about these new features in Red Hat OpenStack Platform 10: there is now full support for Ceph Rados Gateway, and “composable roles” let administrators deploy services in a much more flexible way. Ceph capabilities are no longer limited to block only. With a REST object API, you are now able to store and consume your data through a RESTful interface, just like Amazon S3 and OpenStack Swift. Ceph Rados Gateway has 99.9% API compliance with Amazon S3, and it can communicate with the Swift API. In this lab, you’ll tackle the REST object API use case, and to get the most out of your Ceph cluster, you’ll learn how to use Red Hat OpenStack Platform director to deploy Red Hat OpenStack Platform with dedicated Rados Gateway nodes.
Instructors: Sebastien Han, Gregory Charot, Cyril Lopez
 
L104387 – Hands on for the first time with Red Hat OpenStack Platform
In this lab, an instructor will lead you in configuring and running core OpenStack services in a Red Hat OpenStack Platform environment. We’ll also cover authentication, compute, networking, and storage. If you’re new to Red Hat OpenStack Platform, this session is for you.
Instructors: Rhys Oxenham, Jacob Liberman, Guil Barros
 
L102852 – Hands on with Red Hat OpenStack Platform director
Red Hat OpenStack Platform director is a tool set for installing and managing Infrastructure-as-a-Service (IaaS) clouds. In this two-hour instructor-led lab, you will deploy and configure a Red Hat OpenStack Platform cloud using OpenStack Platform director. This will be a self-paced, hands-on lab, and it’ll include both the command line and graphical user interfaces. You’ll also learn, in an interactive session, about the architecture and approach of Red Hat OpenStack Platform director.
Instructors: Rhys Oxenham, Jacob Liberman
 
L104665 – The Ceph power show—hands on with Ceph
Join our Ceph architects and experts for this guided, hands-on lab with Red Hat Ceph Storage. You’ll get an expert introduction to Ceph concepts and features, followed by a series of live interactive modules to gain some experience. This lab is perfect for users of all skill levels, from beginners to experienced users who want to explore the advanced features of OpenStack storage. You’ll get some credits to the Red Hat Ceph Storage Test Drive portal that can be used later to learn and evaluate Red Hat Ceph Storage and Red Hat Gluster Storage. You’ll leave this session with a better understanding of Ceph architecture and concepts, experience on Red Hat Ceph Storage, and the confidence to install, set up, and provision Ceph in your own environment.
Instructors: Karan Singh, Kyle Bader, Daniel Messer
As you can see, there is plenty of OpenStack in these hands-on labs to get you through the week, and we hope to welcome you to one or more of them!
Source: RedHat Stack

Momentum mounts for Kubernetes, cloud native

For any new technology, there are few attributes more valuable than momentum. In the open tech space, few projects have as much momentum as Kubernetes and cloud native application development.
The Cloud Native Computing Foundation (CNCF) kicked off the European leg of its biannual CloudNativeCon/KubeCon event in Berlin by welcoming five new member organizations and two new projects.
CNCF has pulled in rkt and containerd as its eighth and ninth open projects, joining Kubernetes, Fluentd, Linkerd, Prometheus, OpenTracing, gRPC and CoreDNS.
IBM senior technical staff member Phil Estes is one of the open source maintainers for containerd. He explained a bit about the project and the role of IBM in the video below:

This week, containerd joined the @CloudNativeFdn. @estesp explains what it means for the community. Details: https://t.co/AQigsrXzqY pic.twitter.com/oC9XAOjO9D
— IBM Cloud (@IBMcloud) March 30, 2017

Meanwhile, CNCF announced that SUSE, HarmonyCloud, QAware, Solinea and TenxCloud have joined as contributing member organizations.
“The cloud native movement is increasingly spreading to all parts of the world,” CNCF executive director Dan Kohn told a sellout crowd of 1,500. That number tripled from CloudNativeCon in London a year prior.
We reported last fall that Kubernetes adoption was on the cusp of catching a giant wave. That wave has evolved into a groundswell among developers. There are now 4,000 projects based on Kubernetes, more than 50 products supporting it and more than 200 meetups around the world.
Even more significant has been the IBM announcement in March that Kubernetes is available on IBM Bluemix Container Service.
Linux Foundation Vice President Chris Aniszczyk and IBM Fellow, VP and Cloud Platform CTO Jason McGee discussed the move by IBM to Kube (and much more) on a podcast recorded at the venue. You can listen to it here:

A few more highlights from Berlin:
• 17-year-old Lucas Käldström, the youngest core Kubernetes maintainer, wowed the crowd with his talk on autoscaling a multi-platform Kubernetes cluster built with kubeadm.

Listening to Lucas talk about multi-architecture cluster support for containers/k8s. Oh, he’s in high school too! pic.twitter.com/V8G3qAylzz
— Phil Estes (@estesp) March 30, 2017

• Docker’s Justin Cormack delivered one of the conference’s most popular sessions with his talk on containerd:

Now @justincormack from @Docker talking containerd in SRO room @CloudNativeFdn Kubecon Berlin. Hey @chanezon open a window, it’s hot! pic.twitter.com/SlVHCyTwH6
— Jeffrey Borek (@jeffborek) March 30, 2017

• An update on the Open Container Initiative from Jeff Borek (IBM), Chris Aniszczyk (Linux Foundation), Vincent Batts (Red Hat) and Brandon Philips (CoreOS)

An update on @OCI_ORG and container standards from @Cra, @JeffBorek, @vbatts, @sauryadas_ & @BrandonPhilips. … https://t.co/MqqBKxwjBU
— Kevin J. Allen (@KevJosephAllen) March 29, 2017

More information about Bluemix.
Source: Thoughts on Cloud

User Group Newsletter March 2017

 
BOSTON SUMMIT UPDATE
Exciting news! The schedule for the Boston Summit in May has been released. You can check out all the details on the Summit schedule page.
Travelling to the Summit and need a visa? Follow the steps in this handy guide.
If you haven’t registered, there is still time! Secure your spot today! 
 
HAVE YOUR SAY IN THE SUPERUSER AWARDS!

The OpenStack Summit kicks off in less than six weeks and seven deserving organizations have been nominated to be recognized during the opening keynotes. For this cycle, the community (that means you!) will review the candidates before the Superuser editorial advisors select the finalists and ultimate winner. See the full list of candidates and have your say here. 
 
COMMUNITY LEADERSHIP CHARTS COURSE FOR OPENSTACK
About 40 people from the OpenStack Technical Committee, User Committee, Board of Directors and Foundation Staff convened in Boston to talk about the future of OpenStack. They discussed the challenges we face as a community, but also why our mission to deliver open infrastructure is more important than ever. Read the comprehensive meeting report here.
 
NEW PROJECT MASCOTS
Fantastic new project mascots were released just before the Project Teams Gathering. Read the story behind your favourite OpenStack project mascot via this Superuser post.
 
WELCOME TO OUR NEW USER GROUPS
We have some new user groups which have joined the OpenStack community.
Spain – Canary Islands
Mexico City – Mexico
We wish them all the best with their OpenStack journey and can’t wait to see what they will achieve! Looking for your local group? Are you thinking of starting a user group? Head to the groups portal for more information.
 
LOOK OUT FOR YOUR FELLOW STACKERS AT COMMUNITY EVENTS
OpenStack is participating in a series of upcoming community events this April.
April 3: Open Networking Summit, Santa Clara, CA

OpenStack is sponsoring the Monday evening Open Source Community Reception at Levi’s Stadium
Ildiko Vancsa will be speaking in two sessions:
Monday, 9:00-10:30am, on “The Interoperability Challenge in Telecom and NFV Environments”, with EANTC Director Carsten Rossenhovel and Chris Price, room 207
Thursday, 1:40-3:30pm, OpenStack Mini-Summit, topic “OpenStack: Networking Roadmap, Collaboration and Contribution”, with Armando Migliaccio and Paul Carver from AT&T; Grand Ballroom A&B

 
April 17-19: DockerCon, Austin, TX

OpenStack will have a booth

 
April 19-20: Global Open Source Summit, Beijing, China

Mike Perez will be delivering an OpenStack keynote

 
OPENSTACK DAYS: DATES FOR YOUR CALENDAR
We have lots of OpenStack Days coming up:
June 1: Australia
June 5: Israel
June 7: Budapest
June 26: Germany Enterprise (DOST)
You can read further information about OpenStack Days on this website. You’ll find a FAQ, highlights from previous events, and an extensive toolkit for hosting an OpenStack Day in your region.
 
CONTRIBUTING TO UG NEWSLETTER
If you’d like to contribute a news item for next edition, please submit to this etherpad.
Items submitted may be edited down for length, style and suitability.
This newsletter is published on a monthly basis.
 
 
 
Source: openstack.org

Scaling with Kubernetes DaemonSets

We’re used to thinking about scaling from the point of view of a deployment; we want it to scale up under different conditions, so it looks for appropriate nodes, and puts pods on them. DaemonSets, on the other hand, take a different tack: any time you have a node that belongs to the set, it runs the pods you specify. For example, you might create a DaemonSet to tell Kubernetes that any time you create a node with the label app=webserver you want it to run Nginx. Let’s take a look at how that works.
Creating a DaemonSet
Let’s start by looking at a sample YAML file to define a DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
Here we’re creating a DaemonSet called frontend. As with a ReplicationController, pods launched by the DaemonSet are given the label specified in the spec.template.metadata.labels property – in this case, app=frontend-webserver.
The template.spec itself has two important parts: the nodeSelector and the containers.  The containers are fairly self-evident (see our discussion of ReplicationControllers if you need a refresher) but the interesting part here is the nodeSelector.
The nodeSelector tells Kubernetes which nodes are part of the set and should run the specified containers. In other words, these pods are deployed automatically; there’s no input at all from the scheduler, so schedulability of a node isn’t taken into account. On the other hand, DaemonSets are a great way to deploy pods that need to be running before other objects.
Let’s go ahead and create the DaemonSet. Create a file called ds.yaml with the definition in it and run the command:
$ kubectl create -f ds.yaml
daemonset "frontend" created
Now let’s see the DaemonSet in action.
Scaling capacity using a DaemonSet
If we check to see if the pods have been deployed, we’ll see that they haven’t:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
That’s because we don’t yet have any nodes that are part of our DaemonSet. If we look at the nodes we do have…
$ kubectl get nodes
NAME        STATUS    AGE
10.0.10.5   Ready     75d
10.0.10.7   Ready     75d
We can go ahead and add at least one of them by adding the app=frontend-node label:
$ kubectl label node 10.0.10.5 app=frontend-node
node "10.0.10.5" labeled
Now if we get a list of pods again…
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          19s
We can see that the pod was started without us taking any additional action.  
Now we have a single webserver running. If we wanted to scale up, we could simply add our second node to the DaemonSet:
$ kubectl label node 10.0.10.7 app=frontend-node
node "10.0.10.7" labeled
If we check the list of pods again, we can see that a new one was automatically started:
$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-7nfxo              1/1       Running   0          1m
frontend-rp9bu              1/1       Running   0          35s
If we remove a node from the DaemonSet, any related pods are automatically terminated:
$ kubectl label node 10.0.10.5 --overwrite app=backend
node "10.0.10.5" labeled

$ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-rp9bu              1/1       Running   0          1m
Updating DaemonSets, and improvements in Kubernetes 1.6
OK, so how do we update a running DaemonSet? Well, as of Kubernetes 1.5, the answer is “you don’t.” Currently, it’s possible to change the template of a DaemonSet, but it won’t affect the pods that are already running.
Starting in Kubernetes 1.6, however, you will be able to do rolling updates with Kubernetes DaemonSets. You’ll have to set the updateStrategy, as in:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: frontend
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app: frontend-webserver
    spec:
      nodeSelector:
        app: frontend-node
      containers:
      - name: webserver
        image: nginx
        ports:
        - containerPort: 80
Once you’ve done that, you can make changes and they’ll propagate to the running pods. For example, you can change the image on which the containers are based:
$ kubectl set image ds/frontend webserver=httpd
If you want to make more substantive changes, you can edit or patch the DaemonSet:
kubectl edit ds/frontend
or
kubectl patch ds/frontend -p "$(cat ds-changes.yaml)"
(Obviously you would use your own DaemonSet names and files!)
So that’s the basics of working with DaemonSets. What else would you like to learn about them? Let us know in the comments below.
Source: Mirantis

What’s new in Kubernetes 1.6 — a focus on stability

Kubernetes 1.6 is forecast to be released this week. Major themes include new capabilities for DaemonSets, the beta release of Kubernetes federation, new scheduling features, and new networking capabilities. You can get an in-depth look at all of the new features in the Kubernetes 1.6 release notes, but let’s get a quick overview here.
DaemonSet rolling updates
You’re probably used to dealing with Kubernetes in terms of creating a Deployment or a ReplicationController and having it manage your pods, making certain that you always have a particular number of instances spread among the nodes that are available. DaemonSets, on the other hand, look at things from the opposite perspective.
With DaemonSets, you specify the nodes to run a particular set of containers, and Kubernetes will make certain that any nodes that satisfy those requirements will run those pods. With Kubernetes 1.6, you now have the option to update those DaemonSets with a new image or other information. (For more information on DaemonSets, you can see this article, which explains how and why to use them.)
Kubernetes Federation
As Kubernetes takes hold, it becomes increasingly likely that users will have multiple large clusters to deal with. Federation enables you to create an infrastructure in which users can use, say, the closest cluster to them, or the one that has the most spare capacity.
Now in beta, kubefed “supports hosting federation on on-prem clusters, [and] automatically configures kube-dns in joining clusters and allows passing arguments to federation components.”
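For example, standing up a federation control plane and joining a cluster with kubefed looks roughly like the sketch below; the federation name, context names, and DNS zone are placeholders:

# Initialize a federation control plane in an existing host cluster.
# "myfed", "host-cluster", "cluster-europe", and the DNS zone are placeholders.
kubefed init myfed \
  --host-cluster-context=host-cluster \
  --dns-provider=google-clouddns \
  --dns-zone-name="example.com."

# Join another cluster to the federation.
kubefed join cluster-europe --host-cluster-context=host-cluster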
Authentication and access control improvements
Role-Based Access Control (RBAC), which makes it possible to define roles for control plane, node, and controller components, is now in the beta phase.  (It also defines default roles for these components.) There are numerous changes from the alpha version (such as a change from using * for all users to using system:authenticated or system:unauthenticated) so make sure to check out the release notes for all the details.
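As a minimal sketch of what RBAC objects look like under the beta API, here is a namespaced Role granting read-only access to pods, bound to a hypothetical user named jane:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]             # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane                  # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io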
Attribute-Based Access Control (ABAC) has also been tweaked, with wildcards defaulting to authenticated users. The kube-apiserver and the authentication API have also seen a number of improvements.
Scheduling changes
Now in beta is the ability to have multiple schedulers, with each controlling a different set of pods. You can also set the scheduler you want for a particular pod in the pod spec, rather than as an annotation, as in the alpha version.
Also in beta are node and pod affinity/anti-affinity. This capability enables you to intelligently schedule pods that should, or shouldn’t, be on the same piece of hardware. For example, if you have a web application that talks to a database, you might want them on the same node. If, on the other hand, you have a pod that needs to be highly available, you might want to spread its instances over different nodes as a safeguard against failure. You can specify the affinity field on the PodSpec.
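Here is a hedged sketch of what that looks like in the new PodSpec affinity field: this pod asks to land on a node that is already running a pod labeled app=database (the labels and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: database            # co-locate with the database pod
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx

Swapping podAffinity for podAntiAffinity (and matching on the pod’s own label) is the usual way to spread replicas across nodes for availability.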
Kubernetes 1.6 also includes the beta release of taints and tolerations, and some improvements to that functionality from the alpha version. Taints enable you to dedicate a node to a particular kind of pod, similar to the way you might use flavors in OpenStack. Unlike OpenStack, however, you can tell Kubernetes to try to avoid scheduling pods that aren’t explicitly allowed (read: tolerated) on that node, but if it has no choice, it can go ahead. This functionality also enables you to specify a period of time a pod may run on a node before being “evicted.”
And speaking of being evicted, Kubernetes 1.6 now enables you to override the default five-minute period during which a pod remains bound to a node when there are problems, so you can specify that a pod either finds another node more quickly or is more patient and waits even longer.
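As a rough sketch, a node is tainted from the command line, and a pod that should be allowed onto it declares a matching toleration; tolerationSeconds is the knob for overriding how long the pod stays bound once a NoExecute condition appears (all names and values here are illustrative):

$ kubectl taint nodes node1 dedicated=gpu:NoSchedule

spec:
  tolerations:
  - key: dedicated               # matches the taint applied above
    operator: Equal
    value: gpu
    effect: NoSchedule
  - key: node.alpha.kubernetes.io/notReady   # assumed NoExecute taint key
    operator: Exists
    effect: NoExecute
    tolerationSeconds: 60        # leave after 60s instead of the 5-minute default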
The Container Runtime Interface is now the default
While it’s natural to assume that containers running on Kubernetes are Docker containers, that’s not always true. Kubernetes also supports rkt containers, and in fact the goal is to enable Kubernetes to orchestrate any container runtime. Up until now, that’s been difficult, because the container runtimes were coded into the kubelet component that runs the actual containers.
Now, with Kubernetes 1.6, the beta version of the Docker Container Runtime Interface is enabled by default (you can turn it off with --enable-cri=false), which makes it easier to add new runtimes. The old non-CRI architecture is deprecated in 1.6 and is scheduled for removal in Kubernetes 1.7.
Storage improvements
Kubernetes 1.6 includes the general availability release of StorageClasses, which enable you to specify a particular type of storage resource for users without exposing them to the details.  (This is also similar to flavors in OpenStack.)
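A minimal sketch of the GA API: a StorageClass that names a provisioner, and a claim that requests it by name. The provisioner and class name below are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/aws-ebs   # illustrative provisioner
parameters:
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast             # request the class by name
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi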
Also now in GA are the ability to populate environment variables from a ConfigMap or a Secret, and support for writing and running your own dynamic PersistentVolume provisioners.
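For the environment-variable piece, a short sketch: envFrom pulls every key of a ConfigMap (a hypothetical app-config here) into a container’s environment:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config    # hypothetical ConfigMap; each key becomes an env var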
Note that StorageClasses will change the behaviors of PersistentVolumeClaim objects on existing clouds, so be sure to read the Release Notes.
Networking improvements
You now have added control over DNS; Kubernetes 1.6 enables you to set stubDomains, which define the nameservers used for specific domains (such as *.mycompany.local), and to specify which upstreamNameservers you want to use, overriding resolv.conf.
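Concretely, these DNS settings are supplied through the kube-dns ConfigMap; here is a sketch with placeholder domains and nameserver addresses:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"mycompany.local": ["10.0.0.10"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]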
Digging deeper, the Container Network Interface (CNI) is now integrated with the Container Runtime Interface (CRI) by default, and the standard bridge plugin has been validated with the combination.
Other changes
Kubernetes 1.6 includes a huge number of changes and improvements, some of which will only be of interest to operators, as opposed to end users, but all of which are important. Some of these changes include:

etcd v3 is now enabled by default, supporting clusters of up to 5,000 nodes
The ability to know via the API whether a Deployment is blocked
Easier logging access
Improvements to the Horizontal Pod Autoscaler
The ability to add third party resources and extension API servers with the edit command
New commands for creating roles, as well as determining whether you can perform an action
New fields added to describe output
Improvements to kubeadm

Definitely take a look at the full release notes to get the details.
Source: Mirantis