Lanka Bell and IBM team up to accelerate cloud in Sri Lanka

It just got easier for businesses, developers and government organizations in Sri Lanka to access all the benefits of cloud.
Telecommunications provider Lanka Bell and IBM announced a new agreement to offer public, private and hybrid IBM Cloud services in Sri Lanka, including workload migrations, disaster recovery and capacity expansion solutions. Services available will include infrastructure as a service (IaaS), platform as a service (PaaS), storage and virtual machines.
The offerings can be integrated using IBM Network Access Service solutions.
Lanka Bell hopes to "help enterprise customers in the country to embrace cloud offerings quickly and easily," said Prasad Samarasinghe, the company's managing director. Samarasinghe noted that the agreement extends a 20-year partnership between IBM and Lanka Bell.
Learn more about the Lanka Bell and IBM partnership in Lanka Business Online's full article.
The post Lanka Bell and IBM team up to accelerate cloud in Sri Lanka appeared first on news.
Source: Thoughts on Cloud

Making cities safer: data collection for Vision Zero

A critical part of enabling cities to implement their Vision Zero policies - the goal of the current National Transportation Data Challenge - is being able to generate open, multi-modal travel experience data. While existing datasets use police and hospital reports to provide a comprehensive picture of fatalities and life-altering injuries, they are by their nature sparse and resist use for prediction and prioritization. Further, changes to infrastructure to support Vision Zero policies frequently require balancing competing needs from different constituencies - protected bike lanes, dedicated signals and expanded sidewalks all raise concerns that automobile traffic will be severely impacted.
A timeline of the El Monte/Marich intersection in Mountain View, from 2014 to 2017 provides an opportunity to put some of these challenges into context.

since there is no standard way to report near misses, the City didn't know that the intersection was so dangerous until somebody actually died, and it was not included in the ped and bike plans,
because the number of fatalities is so low, and the number of areas that need to be fixed is so high, past fatalities may not be a good predictor of future ones. But that makes prioritization challenging - should the City play "whack-a-mole" with locations where fatalities occurred, or should it stick with the ped and bike plans?
even if the City does pick an area to fix, it is not clear what the fix should be. Note that the City wanted to improve the visibility of the intersection, but the residents were skeptical that any solution that did not address the speeding would be sufficient.
it is not clear how to balance competing needs - addressing the speeding issue will potentially increase the travel times of (the currently speeding) automobile travellers. Increased travel time is quantifiable; how can we make the increased safety quantifiable as well, so that we can, as a society, make the appropriate tradeoffs?

The e-mission project in the RISE and BETS labs focuses on building an extensible platform that can instrument the end-to-end multi-modal travel experience at the personal scale, collate it for analysis at the societal scale, and help solve some of the challenges above.
In particular, it combines background data collection of trips, classified by mode, with user-reported incident data, and makes the resulting anonymized heatmaps available via public APIs for further visualization and analysis. The platform also integrates with the Habitica open source platform to enable gamification of data collection.
This could allow cities to collect crowdsourced stress maps, use them to prioritize the areas that need improvement, and after pilot or final fixes are done, quantify the reduction in stress and mode shifts related to the fix.
Since this is an open source, extensible platform and generates open data, it can easily be extended to come up with some cool projects. Here are five example extensions to give a flavor of what improvements can be done:

enhance the incident reporting to provide more details (why? how serious?)
have the incident prompting be based on phone shake instead of a prompt at the end of every trip
encourage reporting through gamification using the Habitica integration
convert the existing heatmaps to aggregate, actionable metrics
automatically identify “top 5” or “top 10” hotspots for cities to prioritize

But these are just examples - the whole point of the challenge is to tap into all the great ideas that are out there. Sign up for the challenge, walk/bike around your cities, hear what planners want, and use your ideas to make the world a better place!
Source: Amplab Berkeley

OK, I give up. Is Docker now Moby? And what is LinuxKit?

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
This week at DockerCon, Docker made several announcements, but one in particular caused massive confusion as users thought that "Docker" was becoming "Moby." Well, OK, but which Docker? The Register probably put it best when they said, "Docker (the company) decided to differentiate Docker (the commercial software products Docker CE and Docker EE) from Docker (the open source project)." Tack on a second project about building core operating systems, and there's a lot to unpack.
Let's start with Moby.
What is Moby?
Docker, being the foundation of many people's understanding of containers, unsurprisingly isn't a single monolithic application. Instead, it's made up of components such as runc, containerd, InfraKit, and so on. The community works on those components (along with Docker, of course) and when it's time for a release, Docker packages them all up and out they go. With all of those pieces, as you might imagine, it's not a simple task.
And what happens if you want your own custom version of Docker? After all, Docker is built on the philosophy of "batteries included but swappable". How easy is it to swap something out?
In his blog post introducing the Moby Project, Solomon Hykes explained that the idea is to simplify the process of combining components into something usable. "We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry where assemblies of components are reused to build completely different cars."
Hykes explained that from now on, Docker releases would be built using Moby and its components.  At the moment there are 80+ components that can be combined into assemblies.  He further explained that:
"Moby is comprised of:

A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects."

Who needs to know about Moby?
The first group that needs to know about Moby is Docker developers, as in the people building the actual Docker software, and not people building applications using Docker containers, or even people building Docker containers. (Here's hoping that eventually this nomenclature gets cleared up.) Docker developers should just continue on as usual, and Docker pull requests will be rerouted to the Moby project.
So everyone else is off the hook, right?
Well, um, no.
If all you do is pull together containers from pre-existing components and software you write yourself, then you're good; you don't need to worry about Moby. Unless, that is, you aren't happy with your available Linux distributions.
Enter LinuxKit.
What is LinuxKit?
While many think that Docker invented the container, in actuality Linux containers had been around for some time, and Docker containers are based on them. Which is really convenient - if you're using Linux. If, on the other hand, you are using a system that doesn't include Linux, such as a Mac, a Windows PC, or that Raspberry Pi you want to turn into an automatic goat feeder, you've got a problem.
Docker requires Linux containers, which is a problem if you have no Linux.
Enter LinuxKit.  
The idea behind LinuxKit is that you start with a minimal Linux kernel (the base distro is only 35 MB) and add literally only what you need. Once you have that, you can build your application on it, and run it wherever you need to. Stephen Foskett tweeted a picture of an example from the announcement:

More about LinuxKit DockerCon pic.twitter.com/TfRJ47yBdB
— Stephen Foskett (@SFoskett) April 18, 2017

The end result is that you can build containers that run on desktops, mainframes, bare metal, IoT, and VMs.
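To give a feel for the approach, a LinuxKit system is described declaratively in a YAML file that lists the kernel, the init components, and the services you want baked into the image. The sketch below is only illustrative; the component names and tags are placeholders rather than anything taken from the announcement.

# Illustrative LinuxKit-style configuration (image names and tags are placeholders)
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest
  - linuxkit/runc:latest
  - linuxkit/containerd:latest
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:latest
services:
  - name: nginx
    image: nginx:alpine
    capabilities:
      - CAP_NET_BIND_SERVICE

The Moby tooling then turns a description like this into a bootable image for whatever platform you're targeting, which ties into the relationship between LinuxKit and Moby described below.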
The project will be managed by the Linux Foundation, which is only fitting.
So what about Alpine, the minimal Linux that's at the heart of Docker? Docker's security director, Nathan McCauley, said that "LinuxKit's roots are in Alpine." The company will continue to use it for Docker.

Today we launch LinuxKit - a Linux subsystem focussed on security. pic.twitter.com/Q0YJsX67ZT
— Nathan McCauley (@nathanmccauley) April 18, 2017

So what does this have to do with Moby?
What LinuxKit has to do with Moby
If you're salivating at the idea of building your own Linux distribution, take a deep breath. LinuxKit is an assembly within Moby.
So if you want to use LinuxKit, you need to download and install Moby, then use it to build your LinuxKit pieces.
So there you have it. You now have the ability to build your own Linux system, and your own containerization system. But it's definitely not for the faint of heart.
Resources

Wait – we can explain, says Moby, er, Docker amid rebrand meltdown • The Register
Moby, LinuxKit Kick Off New Docker Collaboration Phase | Software | LinuxInsider
Why Docker created the Moby Project | CIO
GitHub - linuxkit/linuxkit: A toolkit for building secure, portable and lean operating systems for containers
Docker LinuxKit: Secure Linux containers for Windows, macOS, and clouds | ZDNet
Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems - Docker Blog
Stephen Foskett on Twitter: "More about LinuxKit DockerCon https://t.co/TfRJ47yBdB"
Introducing Moby Project: a new open-source project to advance the software containerization movement - Docker Blog
DockerCon 2017: Moby's Cool Hack sessions - Docker Blog

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

More than 60 Red Hat-led sessions confirmed for OpenStack Summit Boston

This Spring's 2017 OpenStack Summit in Boston should be another great and educational event. The OpenStack Foundation has posted the final session agenda detailing the entire week's schedule of events. And once again Red Hat will be very busy during the four-day event, delivering more than 60 sessions, from technology overviews to deep dives around the OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much, much more.

As a Headline sponsor this Spring, we also have a full-day breakout room on Monday, where we plan to present additional product and strategy sessions. And we will have two keynote presenters on stage: President and CEO Jim Whitehurst, and Vice President and Chief Technologist Chris Wright.
To learn more about Red Hat's general sessions, look at the details below. We'll add the agenda details of our breakout soon. Also, be sure to visit us at our booth in the center of the Marketplace to meet the team and check out our live demonstrations. Finally, we'll have Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.
And in case you haven't registered yet, visit our OpenStack Summit page for a discounted registration code to help get you to the event. We look forward to seeing you in Boston this April.
For more details on each session, click on the title below:
Monday sessions

Kuryr & Fuxi: delivering OpenStack networking and storage to Docker swarm containers
Antoni Segura Puimedon, Vikas Choudhary, and Hongbin Lu (Huawei)

Multi-cloud demo
Monty Taylor

Configure your cloud for recovery
Walter Bentley

Kubernetes and OpenStack at scale
Stephen Gordon

No longer considered an epic spell of transformation: in-place upgrade!
Krzysztof Janiszewski and Ken Holden

Fifty shades for enrollment: how to use Certmonger to win OpenStack
Ade Lee and Rob Crittenden

OpenStack and OVN - what's new with OVS 2.7
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Federation with Keycloak and FreeIPA
Martin Lopes, Rodrigo Duarte Sousa, and John Dennis

7 "must haves" for highly effective Telco NFV deployments
Anita Tragler and Greg Smith (Juniper Networks, Inc.)

Containerizing OpenStack deployments: lessons learned from TripleO
Flavio Percoco

Project update - Heat
Rabi Mishra, Zane Bitter, and Rico Lin (EasyStack)

Tuesday sessions

OpenStack Telemetry and the 10,000 instances
Julien Danjou and Alex Krzos

Mastering and troubleshooting NFV issues
Sadique Puthen and Jaison Raju

The Ceph power show - hands-on with Ceph: Episode 2 - 'The Jewel Story'
Karan Singh, Daniel Messer, and Brent Compton

SmartNICs - paving the way for 25G/40G/100G speed NFV deployments in OpenStack
Anita Tragler and Edwin Peer (Netronome)

Scaling NFV - are containers the answer?
Azhar Sayeed

Free my organization to pursue cloud native infrastructure!
Dave Cain and Steve Black (East Carolina University)

Container networking using Kuryr - a hands-on lab
Sudhir Kethamakka and Amol Chobe (Ericsson)

Using software-defined WAN implementation to turn on advanced connectivity services in OpenStack
Ali Kafel and Pratik Roychowdhury (OpenContrail)

Don't fail at scale: how to plan for, build, and operate a successful OpenStack cloud
David Costakos and Julio Villarreal Pelegrino

Red Hat OpenStack Certification Program
Allessandro Silva

OpenStack and OpenDaylight: an integrated IaaS for SDN and NFV
Nir Yechiel and Andre Fredette

Project update - Kuryr
Antoni Segura Puimedon and Irena Berezovsky (Huawei)

Barbican workshop - securing the cloud
Ade Lee, Fernando Diaz (IBM), Dave McCowan (Cisco Systems), Douglas Mendizabal (Rackspace), Kaitlin Farr (Johns Hopkins University)

Bridging the gap between deploying OpenStack as a cloud application and as a traditional application
James Slagle

Real time KVM and how it works
Eric Lajoie

Wednesday sessions

Project update - Sahara
Telles Nobrega and Elise Gafford

Project update - Mistral
Ryan Brady

Bite off more than you can chew, then chew it: OpenStack consumption models
Tyler Britten, Walter Bentley, and Jonathan Kelly (Metacloud/Cisco)

Hybrid messaging solutions for large scale OpenStack deployments
Kenneth Giusti and Andrew Smith

Project update - Nova
Dan Smith, Jay Pipes (Mirantis), and Matt Riedemann (Huawei)

Hands-on to configure your cloud to be able to charge your users using official OpenStack components
Julien Danjou, Christophe Sautheir (Objectif Libre), and Maxime Cottret (Objectif Libre)

To OpenStack or not OpenStack; that is the question
Frank Wu

Distributed monitoring and analysis for telecom requirements
Tomofumi Hayashi, Yuki Kasuya (KDDI Research), and Ryota Mibu (NEC)

OVN support for multiple gateways and IPv6
Russell Bryant and Numan Siddique

Kuryr-Kubernetes: the seamless path to adding pods to your datacenter networking
Antoni Segura Puimedon, Irena Berezovsky (Huawei), and Ilya Chukhnakov (Mirantis)

Unlocking the performance secrets of Ceph object storage
Karan Singh, Kyle Bader, and Brent Compton

OVN hands-on tutorial part 1: introduction
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Kuberneterize your baremetal nodes in OpenStack!
Ken Savich and Darin Sorrentino

OVN hands-on tutorial part 2: advanced
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

The Amazon effect on open source cloud business models
Flavio Percoco, Monty Taylor, Nati Shalom (GigaSpaces), and Yaron Haviv (Iguazio)

Neutron port binding and impact of unbound ports on DVR routers with floatingIP
Brian Haley and Swaminathan Vasudevan (HPE)

Upstream contribution - give up or double down?
Assaf Muller

Hyper cool infrastructure
Randy Robbins

Strategic distributed and multisite OpenStack for business continuity and scalability use cases
Rob Young

Per API role-based access control
Adam Young and Kristi Nikolla (Massachusetts Open Cloud)

Logging work group BoF
Erno Kuvaja, Rochelle Grober, Hector Gonzalez Mendoza (Intel), Hieu LE (Fujitsu) and Andrew Ukasick (AT&T)

Performance and scale analysis of OpenStack using Browbeat
 Alex Krzos, Sai Sindhur Malleni, and Joe Talerico

Scaling Nova: how CellsV2 affects your deployment
Dan Smith

Ambassador community report
Erwan Gallen, Lisa-Marie Namphy (OpenStack Ambassador), Akihiro Hasegawa (Equinix), Marton Kiss (Aptira), and Akira Yoshiyama (NEC)

Thursday sessions

Examining different ways to get involved: a look at open source
Rob Wilmoth

CephFS backed NFS share service for multi-tenant clouds
Victoria Martinez de la Cruz, Ramana Raja, and Tom Barron

Create your VM in a (almost) deterministic way - a hands-on lab
Sudhir Kethamakka and Geetika Batra

RDO's continuous packaging platform
Matthieu Huin, Fabien Boucher, and Haikel Guemar (CentOS)

OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane
Andre Fredette, Srikanth Vavilapalli (Ericsson), and Prem Sankar Gopanna (Ericsson)

Ceph snapshots for fun & profit
Gregory Farnum

Gnocchi and collectd for faster fault detection and maintenance
Julien Danjou and Emma Foley

Project update - TripleO
Emilien Macchi, Flavio Percoco, and Steven Hardy

Project update - Telemetry
Julien Danjou, Mehdi Abaakouk, and Gordon Chung (Huawei)

Turned up to 11: low latency Ceph block storage
Jason Dillaman, Yuan Zhou (INTC), and Tushar Gohad (Intel)

Who reads books anymore? Or writes them?
Michael Solberg and Ben Silverman (OnX Enterprise Solutions)

Pushing the boundaries of OpenStack - wait, what are they again?
Walter Bentley

Multi-site OpenStack - deployment option and challenges for a telco
Azhar Sayeed

Ceph project update
Sage Weil

 
Source: RedHat Stack

Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack

The post Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack appeared first on Mirantis | Pure Play Open Cloud.
Mirantis Cloud Platform 1.0 is a distribution of OpenStack and Kubernetes that can orchestrate VMs, Containers and Bare Metal

SUNNYVALE, CA – April 19, 2017 – Mirantis, the managed open cloud company, today announced availability of a commercially-supported distribution of OpenStack and Kubernetes, delivered in a single, integrated package, and with a unique build-operate-transfer delivery model.

“Today, infrastructure consumption patterns are defined by the public cloud, where everything is API driven, managed and continuously delivered. Mirantis OpenStack, which featured Fuel as an installer, was the easiest OpenStack distribution to deploy, but every new version required a forklift upgrade,” said Boris Renski, Mirantis co-founder and CMO. “Mirantis Cloud Platform departs from the traditional installer-centric architecture and towards an operations-centric architecture, continuously delivered by either Mirantis or the customers’ DevOps team with zero downtime. Updates no longer happen once every 6-12 months, but are introduced in minor increments on a weekly basis. In the next five to ten years, all vendors in the space will either find a way to adapt to this pattern or they will disappear.”

Along with launching Mirantis Cloud Platform (MCP) 1.0, Mirantis is also first to introduce a unique delivery model for the platform. Unlike traditional vendors that sell software subscriptions, Mirantis will onboard customers to MCP through a build-operate-transfer delivery model. The company will operate an open cloud platform for customers for a period of at least twelve months with up to a four-nines SLA before offboarding the operational burden to the customer's team, if desired. The delivery model ensures that not just the software, but also the customer's team and process, are aligned with DevOps best practices.

Unlike any other solution in the industry, customers onboarded to MCP have the option to completely transfer the platform to their own management. Everything in MCP is based on popular open standards with no lock-in, making it possible for customers to break ties with Mirantis and run the platform independently should they choose to do so.

“We are happy to see a growing number of vendors embrace Kubernetes and launch a commercially supported offering based on the technology,” said Allan Naim from the Kubernetes and Container Engine Product Team.

“As the industry embraces composable, open infrastructure, the ‘LAMP stack of cloud’ is emerging, made up of OpenStack, Kubernetes, and other key open technologies,” said Mark Collier, chief operating officer, OpenStack Foundation. “Mirantis Cloud Platform presents a new vision for the OpenStack distribution, one that embraces diverse compute, storage and networking technologies continuously rather than via major upgrades on six-month cycles.”

Specifically, Mirantis Cloud Platform 1.0 is:

Open Cloud Software - providing a single platform to orchestrate VMs, containers and bare metal compute resources by:

Expanding Mirantis OpenStack to include Kubernetes for container orchestration.
Complementing the virtual compute stacks with best-in-class open source software defined networking (SDN), specifically Mirantis OpenContrail for VMs and bare metal, and Calico for containers.
Featuring Ceph, the most popular open source software defined storage (SDS), for both Kubernetes and OpenStack.

DriveTrain - Mirantis DriveTrain sets the foundation for DevOps-style lifecycle management of the open cloud software stack by enabling continuous integration, continuous testing and continuous delivery through a CI/CD pipeline. DriveTrain enables:

Increased Day 1 flexibility to customize the reference architecture and configurations during initial software installation.
Greater ability to perform Day 2 operations such as post-deployment configuration, functionality and architecture changes.
Seamless version updates through an automated pipeline to a virtualized control plane to minimize downtime.

StackLight - enables strict compliance with availability SLAs by providing continuous monitoring of the open cloud software stacks through a unified set of software services and dashboards.

StackLight avoids lock-in by including best-in-breed open source software for log management, metrics and alerts.
It includes a comprehensive DevOps portal that displays information such as StackLight visualization and DriveTrain configuration settings.
The entire Mirantis StackLight toolchain is purpose-built for MCP to enable up to 99.99% uptime service level agreements with Mirantis Managed OpenStack.

With the release of MCP, Mirantis is also announcing end-of-life for Mirantis OpenStack (MOS) and Fuel by September 2019. Mirantis will be working with all customers currently using MOS on a tailored transition plan from MOS to MCP.

To learn more about MCP, watch an overview video and sign up for the introductory webinar at www.mirantis.com/mcp.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

###

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
The post Mirantis Releases Kubernetes Distribution and Updated Mirantis OpenStack appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

IBM Cloud revenue jumps by 33 percent

IBM reported cloud revenue growth of 33 percent year-over-year in its first-quarter earnings released Tuesday.
Adjusting for currency, cloud revenue grew by 35 percent, up to $3.5 billion. Total cloud revenue over the past 12 months reached $14.6 billion, putting IBM ahead of competitors in the field of enterprise cloud.

Specifically in the arena of cloud-as-a-service, the annual exit run rate increased to $8.6 billion from $5.4 billion in the first quarter of last year.
"In the first quarter, both the IBM Cloud and our cognitive solutions again grew strongly, which fueled robust performance in our strategic imperatives," said Ginni Rometty, IBM chairman, president and CEO.
The company also saw strong growth in strategic imperatives, which were up 12 percent year-over-year, driven in part by hybrid cloud services.
Find out more about IBM revenue in cloud, strategic imperatives and other areas in the infographic below.

The post IBM Cloud revenue jumps by 33 percent appeared first on news.
Source: Thoughts on Cloud

Furious Indians Are Leaving Snapchat One-Star Reviews In The App Store Because They're Mad At The CEO

A former Snap Inc. employee has claimed that CEO Evan Spiegel allegedly said that he didn't "want to expand into poor countries like India and Spain."

A former Snap Inc. employee has claimed in a lawsuit that CEO Evan Spiegel said that Snapchat was “only for rich people”, and that he didn’t “want to expand into poor countries like India and Spain.”

The news was reported by Variety earlier this week.

In a statement provided to BuzzFeed News, a Snap Inc. spokesperson said: “This is ridiculous. Obviously Snapchat is for everyone! It's available worldwide to download for free.”

Over the weekend, however, Indians battered the Snapchat app with angry reviews and poor ratings in the Indian App Store.

They called Spiegel “delusional”…

Source: BuzzFeed

How do you build 12-factor apps using Kubernetes?

The post How do you build 12-factor apps using Kubernetes? appeared first on Mirantis | Pure Play Open Cloud.
It's said that there are 12 factors that define a cloud-native application. It's also said that Kubernetes is designed for cloud native computing. So how do you create a 12-factor application using Kubernetes? Let's take a look at exactly what twelve-factor apps are and how they relate to Kubernetes.
What is a 12 factor application?
The Twelve Factor App is a manifesto on architectures for Software as a Service created by Heroku. The idea is that in order to be really suited to SaaS and avoid problems with software erosion - where over time an application that's not updated gets out of sync with the latest operating systems, security patches, and so on - an app should follow these 12 principles:

Codebase
One codebase tracked in revision control, many deploys
Dependencies
Explicitly declare and isolate dependencies
Config
Store config in the environment
Backing services
Treat backing services as attached resources
Build, release, run
Strictly separate build and run stages
Processes
Execute the app as one or more stateless processes
Port binding
Export services via port binding
Concurrency
Scale out via the process model
Disposability
Maximize robustness with fast startup and graceful shutdown
Dev/prod parity
Keep development, staging, and production as similar as possible
Logs
Treat logs as event streams
Admin processes
Run admin/management tasks as one-off processes

Let's look at what all of this means in terms of Kubernetes applications.
Principle I. Codebase
Principle 1 of a 12 Factor App is "One codebase tracked in revision control, many deploys".
For Kubernetes applications, this principle is actually embedded in the nature of container orchestration itself. Typically, you create your code using a source control repository such as a git repo, then store specific versions of your images in the Docker Hub. When you define the containers to be orchestrated as part of a Kubernetes Pod, Deployment, or DaemonSet, you also specify a particular version of the image, as in:

spec:
  containers:
    - name: acct-app
      image: acct-app:v3

In this way, you might have multiple versions of your application running in different deployments.
Applications can also behave differently depending on the configuration information with which they run.
Principle II. Dependencies
Principle 2 of a 12 Factor App is "Explicitly declare and isolate dependencies".
Making sure that an application's dependencies are satisfied is something that is practically assumed. For a 12-factor app, that includes not just making sure that the application-specific libraries are available, but also not counting on, say, shelling out to the operating system and assuming system libraries such as curl will be there. A 12-factor app must be self-contained.
That includes making sure that the application is isolated enough that it's not affected by conflicting libraries that might be installed on the host machine.
Fortunately, both of these requirements are handily satisfied by containers; the container includes all of the dependencies on which the application relies, and it also provides a reasonably isolated environment in which the application runs. (Contrary to popular belief, container environments are not completely isolated, but for most situations, they are Good Enough.)
For applications that are modularized and depend on other components, such as an HTTP service and a log fetcher, Kubernetes provides a way to combine all of these pieces into a single Pod, for an environment that encapsulates those pieces appropriately.
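As a minimal sketch of that idea (the image names here are hypothetical), a Pod might bundle an HTTP service with a log-fetcher sidecar, with each container carrying its own dependencies in its own image:

apiVersion: v1
kind: Pod
metadata:
  name: http-with-log-fetcher
spec:
  containers:
    - name: web
      image: example/http-service:1.0     # hypothetical image; all app libraries baked in
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-fetcher
      image: example/log-fetcher:1.0      # hypothetical sidecar that ships the logs elsewhere
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}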
Principle III. Config
Principle 3 of a 12 Factor App is "Store config in the environment".
The idea behind this principle is that an application should be completely independent from its configuration. In other words, you should be able to move it to another environment without having to touch the source code.
Some developers achieve this goal by creating configuration files of some sort, specifying details such as directories, hostnames, and database credentials. This is an improvement, but it does carry the risk that someone will check a config file into the source control repository.
Instead, 12-factor apps store their configurations as environment variables; these are, as the manifesto says, "unlikely to be checked into the repository by accident", and they're operating system independent.
Kubernetes enables you to specify environment variables in manifests via the Downward API, but as these manifests themselves do get checked into source control, that's not a complete solution.
Instead, you can specify that environment variables should be populated by the contents of Kubernetes ConfigMaps or Secrets, which can be kept separate from the application.  For example, you might define a Pod as:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
        - name: CONFIG_VERSION
          valueFrom:
            configMapKeyRef:
              name: redis-app-config
              key: version.id
As you can see, this Pod receives three environment variables, SECRET_USERNAME, SECRET_PASSWORD, and CONFIG_VERSION, the first two from referenced Kubernetes Secrets, and the third from a Kubernetes ConfigMap. This enables you to keep them out of configuration files.
Of course, there's still a risk of someone mishandling the files used to create these objects, but it's easier to keep them together and institute secure handling policies than it is to weed out dozens of config files scattered around a deployment.
What's more, there are those in the community who point out that even environment variables are not necessarily safe, for their own reasons. For example, if an app crashes, it may save all of the environment variables to a log or even transmit them to another service. Diogo Mónica points to a tool called Keywhiz that you can use with Kubernetes to create secure secret storage.
Principle IV. Backing services
Principle 4 of the 12 Factor App is "Treat backing services as attached resources".
In a 12 Factor app, any services that are not part of the core application, such as databases, external storage, or message queues, should be accessed as a service (via an HTTP or similar request) and specified in the configuration, so that the source of the service can be changed without affecting the core code of the application.
For example, if your application uses a message queuing system, you should be able to change from RabbitMQ to ZeroMQ (or ActiveMQ or even something else) without having to change anything but configuration information.
This requirement has two implications for a Kubernetes-based application.
First, it means that you must think about how your applications take in (and give out) information. For example, if you have a backing database, you wouldn't want to have a local MySQL instance, even if you're replicating it to other instances. Instead, you would want to have a separate container that handles database operations, and make those operations callable via an API. This way, if you needed to change to, say, PostgreSQL or a remotely hosted MySQL server, you could create a new container image, update the Pod definition, and restart the Pod (or more likely the Deployment or StatefulSet managing it).
Similarly, if you're storing credentials or address information in environment variables backed by a ConfigMap, you can change that information and replace the Pod.
Note that both of these examples assume that, though you're not making any changes to the source code (or even the container image for the main application), you will need to replace the Pod; the ability to do this is actually another principle of a 12 Factor App.
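To make the ConfigMap pattern concrete, here is a minimal sketch (with hypothetical names, reusing the acct-app example from earlier) in which the database endpoint lives entirely outside the image and the code:

apiVersion: v1
kind: ConfigMap
metadata:
  name: acct-app-config
data:
  DATABASE_HOST: db.example.internal   # hypothetical backing-service address
  DATABASE_PORT: "5432"
---
apiVersion: v1
kind: Pod
metadata:
  name: acct-app
spec:
  containers:
    - name: acct-app
      image: acct-app:v3
      envFrom:
        - configMapRef:
            name: acct-app-config      # the app reads DATABASE_HOST/PORT from its environment

Pointing the application at a different PostgreSQL or remotely hosted MySQL endpoint is then just a matter of editing the ConfigMap and replacing the Pod.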
Principle V. Build, release, run
Principle 5 of the 12 Factor App is "Strictly separate build and run stages".
These days it's hard to imagine a situation where this is not true, but a twelve-factor app must have a separate build stage. In other words, you should be able to build or compile the code, then combine that with specific configuration information to create a specific release, then deliberately run the release.
Releases should be identifiable. You should be able to say, "This deployment is running Release 1.14 of this application" or something similar, the same way we say we're running "the OpenStack Ocata release" or "Kubernetes 1.6". They should also be immutable; any changes should lead to a new release. If this sounds daunting, remember that when we say "application" we're no longer talking about large, monolithic releases. Instead, we're talking about very specific microservices, each of which has its own release, and which can bump releases without causing errors in consuming services.
All of this is so that when the app is running, that "run" process can be completely automated. Twelve-factor apps need to be capable of running in an automated fashion because they need to be capable of restarting should there be a problem.
Translating this to the Kubernetes realm, we've already said that the application needs to be stored in source control, then built with all of its dependencies. That's your build process. We talked about separating out the configuration information, so that's what needs to be combined with the build to make a release. And the ability to automatically run the application (or multiple copies of the application) is precisely what Kubernetes constructs like Deployments, ReplicaSets, and DaemonSets do.
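For instance, a minimal sketch of a release as a Deployment might look like the following, where the specific image tag is the identifiable, immutable release and rolling out a new one means changing that single field (the names and tags are hypothetical, and the API group shown is one available around Kubernetes 1.6):

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: acct-app
spec:
  replicas: 3                     # "run" stage: Kubernetes keeps three copies going automatically
  template:
    metadata:
      labels:
        app: acct-app
        release: "1.14"           # identifiable release
    spec:
      containers:
        - name: acct-app
          image: acct-app:1.14    # immutable build artifact for this release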
Principle VI. Processes
Principle 6 of the 12 Factor App is "Execute the app as one or more stateless processes".
Stateless processes are a core idea behind cloud native applications. Every twelve-factor application needs to run in individual, share-nothing processes. That means that any time you need to persist information, it needs to be stored in a backing service such as a database.
If you're new to cloud application programming, this might be deceptively simple; many developers are used to "sticky" sessions, storing information in the session with the confidence that the next request will come to the same server. In a cloud application, however, you must never make that assumption.
Instead, if you're running a Kubernetes-based application that hosts multiple copies of a particular pod, you must assume that subsequent requests can go anywhere. To solve this problem, you will want to use some sort of backing volume or database for persistence.
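As a minimal sketch (names are hypothetical), any state that must survive goes to a mounted volume or, better yet, a backing database, never to the container's own filesystem:

apiVersion: v1
kind: Pod
metadata:
  name: acct-app
spec:
  containers:
    - name: acct-app
      image: acct-app:v3
      volumeMounts:
        - name: app-data
          mountPath: /var/lib/acct-app   # anything written here survives the container
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: acct-app-data         # hypothetical pre-created PersistentVolumeClaim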
One caveat to this principle is that Kubernetes StatefulSets can enable you to create Pods with stable network identities, so that you can, theoretically, direct requests to a particular pod. Technically speaking, if the process doesn't actually store state, and the pod can be deleted and recreated and still function properly, it satisfies this requirement - but that's probably pushing it a bit.
Principle VII. Port binding
Principle 7 of the 12 Factor App is "Export services via port binding".
In an environment where we're assuming that different functionalities are handled by different processes, it's easy to make the connection that these functionalities should be available via a protocol such as HTTP, so it's common for applications to be run behind web servers such as Apache or Tomcat. Twelve-factor apps, however, should not depend on an additional application in that way; remember, every function should be in its own process, isolated from everything else. Instead, the 12 Factor App manifesto recommends adding a web server library or something similar to the app itself, so that the app can await requests on a defined port, whether it's using HTTP or another protocol.
In a Kubernetes-based application, this is done partly through the architecture of the application itself, and partly by making sure that the application has all of its dependencies as part of the creation of the base containers on which the application is built.
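A minimal sketch of the Kubernetes side of this: the app's embedded server binds a port in its container, and a Service maps a stable port to it for consumers (names and port numbers are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: acct-app
spec:
  selector:
    app: acct-app        # matches the Pods running the app
  ports:
    - port: 80           # port other services connect to
      targetPort: 8080   # port the app itself binds with its embedded web server library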
Principle VIII. Concurrency
Principle 8 of the 12 Factor App is to "Scale out via the process model".
When you're writing a twelve-factor app, make sure that you're designing it to be scaled out, rather than scaled up. That means that in order to add more capacity, you should be able to add more instances rather than more memory or CPU to the machine on which the app is running. Note that this specifically means being able to start additional processes on additional machines, which is, fortunately, a key capability of Kubernetes.
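In practice, scaling out is just a matter of asking for more replicas, either by editing the Deployment or by letting a HorizontalPodAutoscaler do it automatically; here is a minimal sketch against the hypothetical acct-app Deployment from earlier:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: acct-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: acct-app
  minReplicas: 3
  maxReplicas: 20                      # scale out by adding processes, not bigger machines
  targetCPUUtilizationPercentage: 70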
Principle IX. Disposability
Principle 9 of the 12 Factor App is to "Maximize robustness with fast startup and graceful shutdown".
It seems like this principle was tailor-made for containers and Kubernetes-based applications. The idea that processes should be disposable means that at any time, an application can die and the user won't be affected, either because there are others to take its place, or because it'll start right up again, or both.
Containers are built on this principle, of course, and Kubernetes structures that manage multiple instances and maintain a certain level of availability even in the face of problems, such as ReplicaSets, complete the picture.
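A rough sketch of what this looks like in a Pod spec (names and paths are hypothetical): a readiness probe so traffic only arrives once the process has started, and a termination grace period so it can shut down cleanly when Kubernetes sends it SIGTERM:

apiVersion: v1
kind: Pod
metadata:
  name: acct-app
spec:
  terminationGracePeriodSeconds: 30   # time allowed for a graceful shutdown after SIGTERM
  containers:
    - name: acct-app
      image: acct-app:v3
      readinessProbe:                 # no traffic until the app reports it has started
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 2
        periodSeconds: 5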
Principle X. Dev/prod parity
Principle 10 of the 12 Factor App is "Keep development, staging, and production as similar as possible".
This is another principle that seems like it should be obvious, but is deeper than most people think. On the surface level, it does mean that you should have identical development, staging, and production environments, inasmuch as that is possible. One way to accomplish this is through the use of Kubernetes namespaces, enabling you to (theoretically) run code on the same actual cluster against the same actual systems while still keeping environments separate. In some situations, you can also use tools such as Minikube or kubeadm-dind-cluster to create near-clones of production systems.
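As a minimal sketch of the namespace approach, the same manifests can be applied to parallel environments on one cluster (the directory name here is hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: production
# The same manifests are then applied to each environment, for example:
#   kubectl apply -f acct-app/ --namespace=staging
#   kubectl apply -f acct-app/ --namespace=production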
At a deeper level, though, as the Twelve-Factor App manifesto puts it, it's about three different types of "gaps":

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.

The personnel gap: Developers write code, ops engineers deploy it.

The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

The goal here is to create a Continuous Integration/Continuous Deployment situation in which changes can go into production virtually immediately (after testing, of course!), deployed by the developers who wrote it so they can actually see it in production, using the same tools on which the code was actually written in order to minimize the possibility of compatibility errors between environments.
Some of these factors are outside of the realm of Kubernetes, of course; the personnel gap is a cultural issue, for example. The time and tools gaps, however, can be helped in two ways.
For the time gap, Kubernetes-based applications are, of course, based on containers, which themselves are based on images that are stored in version-control systems, so they lend themselves to CI/CD. They can also be updated via rolling updates that can be rolled back in case of problems, so they're well-suited to this kind of environment.
As far as the tools gap is concerned, the architecture of Kubernetes-based applications makes it easier to manage, both by making local dependencies simple to include in the various images, and by modularizing the application in such a way that external backing services can be standardized.
Principle XI. Logs
Principle 11 of the 12 Factor App is to "Treat logs as event streams".
While most traditional applications store log information in a file, the Twelve-Factor app directs it, instead, to stdout as a stream of events; it's the execution environment that's responsible for collecting those events. That might be as simple as redirecting stdout to a file, but in most cases it involves using a log router such as Fluentd and saving the logs to Hadoop or a service such as Splunk.
In Kubernetes, you have at least two choices for automatic logging capture: Stackdriver Logging if you're using Google Cloud, and Elasticsearch if you're not. You can find more information on setting Kubernetes logging destinations here.
Principle XII. Admin processes
Principle 12 of the 12 Factor App is "Run admin/management tasks as one-off processes".
This principle involves separating admin tasks such as migrating a database or inspecting records from the rest of the application. Even though they're separate, however, they must still be run in the same environment and against the same base code and configuration as the application, and their code must be shipped alongside the application to prevent drift.
You can implement this in a number of different ways in Kubernetes-based applications, depending on the size and scale of the application itself. For example, for small tasks, you might use kubectl exec to operate on a specific container, or you can use a Kubernetes Job to run a self-contained application. For more complicated tasks that involve orchestration of changes, however, you can also use Kubernetes Helm charts.
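For example, a database migration could run as a one-off Kubernetes Job that uses the same image and configuration as the application itself; here is a minimal sketch with hypothetical names and command:

apiVersion: batch/v1
kind: Job
metadata:
  name: acct-app-migrate
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: acct-app:v3                      # same image (and code) as the application
          command: ["/app/manage", "migrate"]     # hypothetical admin entry point
          envFrom:
            - configMapRef:
                name: acct-app-config             # same configuration as the application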
How many of these factors did you hit?
Unless you're still building desktop applications, chances are you can feel good about hitting at least a few of these essential principles of twelve-factor apps. But chances are you also found at least one or two you can probably work a little harder on.
So we want to know: which of these factors are the biggest struggle for you? Where do you think you need to work harder, and what would make it easier for you?  Let us know in the comments.
Thanks to Jedrzej Nowak, Piotr Shamruk, Ivan Shvedunov, Tomasz Napierala and Lukasz Oles for reviewing this article!
Check out more Kubernetes resources.
The post How do you build 12-factor apps using Kubernetes? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis