Making cities safer: data collection for Vision Zero

A critical part of enabling cities to implement their Vision Zero policies – the goal of the current National Transportation Data Challenge – is being able to generate open, multi-modal travel experience data. While existing datasets use police and hospital reports to provide a comprehensive picture of fatalities and life-altering injuries, by their nature they are sparse and resist use for prediction and prioritization. Further, changes to infrastructure to support Vision Zero policies frequently require balancing competing needs from different constituencies – protected bike lanes, dedicated signals, and expanded sidewalks all raise concerns that automobile traffic will be severely impacted.
A timeline of the El Monte/Marich intersection in Mountain View, from 2014 to 2017, provides an opportunity to put some of these challenges into context:

Since there is no standard way to report near misses, the City didn't know that the intersection was so dangerous until somebody actually died, and it was not included in the ped and bike plans.
Because the number of fatalities is so low, and the number of areas that need to be fixed is so high, past fatalities may not be good predictors of future ones. But that makes prioritization challenging – should the City play "whack-a-mole" with locations where fatalities occurred, or should it stick with the ped and bike plans?
Even if the City does pick an area to fix, it is not clear what the fix should be. Note that the City wanted to improve the visibility of the intersection, but the residents were skeptical that any solution that did not address the speeding would be sufficient.
It is not clear how to balance competing needs – addressing the speeding issue will potentially increase the travel times of (the currently speeding) automobile travellers. Increased travel time is quantifiable; how can we make the increased safety also quantifiable so that we can, as a society, make the appropriate tradeoffs?

The e-mission project in the RISE and BETS labs focuses on building an extensible platform that can instrument the end-to-end multi-modal travel experience at the personal scale, collate it for analysis at the societal scale, and help solve some of the challenges above.
In particular, it combines background data collection of trips, classified by mode, with user-reported incident data, and makes the resulting anonymized heatmaps available via public APIs for further visualization and analysis. The platform also integrates with the Habitica open source platform to enable gamification of data collection.
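
To give a concrete flavor of what "anonymized heatmaps available via public APIs" could enable, here is a minimal sketch that parses a heatmap response into high-stress points. The endpoint shape and field names are assumptions for illustration only; e-mission's real API schema will differ.

```python
import json

# Hypothetical response from an anonymized heatmap API; the field names
# are illustrative, not e-mission's actual schema.
SAMPLE_RESPONSE = json.dumps({
    "points": [
        {"lat": 37.3861, "lon": -122.0839, "stress": 0.9},
        {"lat": 37.3894, "lon": -122.0819, "stress": 0.2},
        {"lat": 37.3841, "lon": -122.0901, "stress": 0.7},
    ]
})

def high_stress_points(raw_json, threshold=0.5):
    """Return (lat, lon) pairs whose crowdsourced stress score exceeds threshold."""
    data = json.loads(raw_json)
    return [(p["lat"], p["lon"]) for p in data["points"] if p["stress"] > threshold]

print(high_stress_points(SAMPLE_RESPONSE))
```

A planner could run a filter like this against the public API's output to shortlist locations for field review.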
This could allow cities to collect crowdsourced stress maps, use them to prioritize the areas that need improvement, and after pilot or final fixes are done, quantify the reduction in stress and mode shifts related to the fix.
Since this is an open source, extensible platform that generates open data, it can easily be extended into some cool projects. Here are five example extensions to give a flavor of what improvements can be done:

enhance the incident reporting to provide more details (why? how serious?)
have the incident prompting be based on phone shake instead of a prompt at the end of every trip
encourage reporting through gamification using the habitica integration
convert the existing heatmaps to aggregate, actionable metrics
automatically identify “top 5” or “top 10” hotspots for cities to prioritize
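
The last two extensions, aggregate metrics and "top 5" hotspots, could start from something as simple as binning reported incidents into a grid and ranking cells by count. A sketch, with made-up data and an arbitrary cell size:

```python
from collections import Counter

def top_hotspots(incidents, cell_deg=0.001, n=5):
    """Bin (lat, lon) incident reports into a grid and rank cells by report count.

    A cell of ~0.001 degrees is very roughly block-sized; the right size
    would need tuning against real data.
    """
    counts = Counter(
        (round(lat / cell_deg), round(lon / cell_deg)) for lat, lon in incidents
    )
    # Most-reported cells first, returned as (cell center, count).
    return [((i * cell_deg, j * cell_deg), c) for (i, j), c in counts.most_common(n)]

# Toy data: four reports at one corner, two at another.
reports = [(37.3861, -122.0839)] * 4 + [(37.3894, -122.0819)] * 2
for cell, count in top_hotspots(reports, n=2):
    print(cell, count)
```

The same binning could be re-run after a pilot fix to quantify any drop in reports around the treated cell.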

But these are just examples – the whole point of the challenge is to tap into all the great ideas that are out there. Sign up for the challenge, walk/bike around your cities, hear what planners want, and use your ideas to make the world a better place!
Source: Amplab Berkeley

OK, I give up. Is Docker now Moby? And what is LinuxKit?

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
This week at DockerCon, Docker made several announcements, but one in particular caused massive confusion as users thought that "Docker" was becoming "Moby."  Well, OK, but which Docker? The Register probably put it best when they said, "Docker (the company) decided to differentiate Docker (the commercial software products Docker CE and Docker EE) from Docker (the open source project)."  Tack on a second project about building core operating systems, and there's a lot to unpack.
Let's start with Moby.  
What is Moby?
Docker, being the foundation of many people's understanding of containers, unsurprisingly isn't a single monolithic application.  Instead, it's made up of components such as runc, containerd, InfraKit, and so on. The community works on those components (along with Docker, of course), and when it's time for a release, Docker packages them all up and out they go. With all of those pieces, as you might imagine, it's not a simple task.
And what happens if you want your own custom version of Docker?  After all, Docker is built on the philosophy of "batteries included but swappable".  How easy is it to swap something out?
In his blog post introducing the Moby Project, Solomon Hykes explained that the idea is to simplify the process of combining components into something usable. "We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry where assemblies of components are reused to build completely different cars."
Hykes explained that from now on, Docker releases would be built using Moby and its components.  At the moment there are 80+ components that can be combined into assemblies.  He further explained that:
"Moby is comprised of:

A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects."

Who needs to know about Moby?
The first group that needs to know about Moby is Docker developers, as in the people building the actual Docker software, not people building applications using Docker containers, or even people building Docker containers.  (Here's hoping that eventually this nomenclature gets cleared up.)  Docker developers should just continue on as usual, and Docker pull requests will be routed to the Moby project.
So everyone else is off the hook, right?
Well, um, no.
If all you do is pull together containers from pre-existing components and software you write yourself, then you're good; you don't need to worry about Moby. Unless, that is, you aren't happy with your available Linux distributions.
Enter LinuxKit.
What is LinuxKit?
While many think that Docker invented the container, in actuality Linux containers had been around for some time, and Docker containers are based on them.  Which is really convenient – if you're using Linux.  If, on the other hand, you are using a system that doesn't include Linux, such as a Mac, a Windows PC, or that Raspberry Pi you want to turn into an automatic goat feeder, you've got a problem.
Docker requires Linux containers.  Which is a problem if you have no Linux.
Enter LinuxKit.  
The idea behind LinuxKit is that you start with a minimal Linux kernel – the base distro is only 35MB – and add literally only what you need. Once you have that, you can build your application on it and run it wherever you need to.  Stephen Foskett tweeted a picture of an example from the announcement:
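
For flavor, a LinuxKit build file is a short YAML description of exactly which components go into the image. The sketch below is modeled on the examples in the linuxkit repository; the image names and tags here are illustrative placeholders, not pinned to any real release:

```yaml
# linuxkit.yml -- illustrative only; see the linuxkit repo for real image tags
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest
  - linuxkit/runc:latest
  - linuxkit/containerd:latest
onboot:
  - name: dhcpcd            # one-shot network setup at boot
    image: linuxkit/dhcpcd:latest
services:
  - name: nginx             # the one service this image exists to run
    image: nginx:alpine
```

You would then feed a file like this to the Moby/LinuxKit build tooling to produce a bootable image containing only these pieces.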

More about LinuxKit DockerCon pic.twitter.com/TfRJ47yBdB
— Stephen Foskett (@SFoskett) April 18, 2017

The end result is that you can build containers that run on desktops, mainframes, bare metal, IoT, and VMs.
The project will be managed by the Linux Foundation, which is only fitting.
So what about Alpine, the minimal Linux that's at the heart of Docker?  Docker's security director, Nathan McCauley, said that "LinuxKit's roots are in Alpine."  The company will continue to use it for Docker.

Today we launch LinuxKit – a Linux subsystem focussed on security. pic.twitter.com/Q0YJsX67ZT
— Nathan McCauley (@nathanmccauley) April 18, 2017

So what does this have to do with Moby?
How LinuxKit relates to Moby
If you're salivating at the idea of building your own Linux distribution, take a deep breath. LinuxKit is an assembly within Moby.  
So if you want to use LinuxKit, you need to download and install Moby, then use it to build your LinuxKit pieces.
So there you have it. You now have the ability to build your own Linux system, and your own containerization system. But it's definitely not for the faint of heart.
Resources

Wait – we can explain, says Moby, er, Docker amid rebrand meltdown • The Register
Moby, LinuxKit Kick Off New Docker Collaboration Phase | Software | LinuxInsider
Why Docker created the Moby Project | CIO
GitHub – linuxkit/linuxkit: A toolkit for building secure, portable and lean operating systems for containers
Docker LinuxKit: Secure Linux containers for Windows, macOS, and clouds | ZDNet
Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems – Docker Blog
Stephen Foskett on Twitter: "More about LinuxKit DockerCon https://t.co/TfRJ47yBdB"
Introducing Moby Project: a new open-source project to advance the software containerization movement – Docker Blog
DockerCon 2017: Moby's Cool Hack sessions – Docker Blog

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

DockerCon 2017: Moby’s Cool Hack sessions

Every year at DockerCon, we expand the bounds of what Docker can do with new features and products. And every day, we see great new apps that are built on top of Docker. And yet, there are always a few that stand out not just for being cool apps, but for pushing the bounds of what you can do with Docker.
This year we had two great apps that we featured in the Docker Cool Hacks closing keynote. Both hacks came from members of our Docker Captains program, a group of people from the Docker community who are recognized by Docker as very knowledgeable about Docker, and contribute quite a bit to the community.
Play with Docker
The first Cool Hack was Play with Docker by Marcos Nils and Jonathan Leibiusky. Marcos and Jonathan actually were featured in the Cool Hacks session at DockerCon EU in 2015 for their work on a Container Migration Tool.
Play with Docker is a Docker playground that you can run in your browser.

Play with Docker’s architecture is a Swarm of Swarms, running Docker in Docker instances.

Running on pretty beefy r3.4xlarge hosts on AWS, Play with Docker is able to run about 3500 containers per host, only running containers as needed for a session. Play with Docker is completely open source, so you can run it on your own infrastructure, and they welcome contributions on their GitHub repo.
FaaS (Function as a Service)
The second Cool Hack was Functions as a Service (FaaS) by Alex Ellis. FaaS is a framework for building serverless functions on Docker Swarm with first class support for metrics. Any UNIX process can be packaged as a function enabling you to consume a range of web events without repetitive boilerplate coding. Each function runs as a container that only runs as long as it takes to run the function.
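
The "any UNIX process can be packaged as a function" model means a function is, at its simplest, a program that reads the request body from stdin and writes its response to stdout. A minimal sketch of that contract (the greeting logic is made up for illustration; FaaS's actual watchdog wiring involves more):

```python
import sys

def handle(body: str) -> str:
    """Toy function body; real FaaS functions can be any UNIX process."""
    return f"Hello, {body.strip() or 'world'}"

if __name__ == "__main__":
    # FaaS-style contract: request body arrives on stdin,
    # the response goes to stdout.
    sys.stdout.write(handle(sys.stdin.read()))
```

Packaged into a container image, a process like this could be invoked per-request and exit when done, which is what keeps each function container short-lived.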

FaaS also comes with a convenient gateway tester that allows you to try out each of your functions directly in the browser.

FaaS is actively seeking contributions, so feel free to send issues and PRs on the GitHub repo.
Check out the video recording of the cool hack sessions below:


Learn more about our DockerCon 2017 cool hacks:

Check out Play with Docker
Check out and contribute to FaaS
Contribute to Play with Docker

The post DockerCon 2017: Moby’s Cool Hack sessions appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

More than 60 Red Hat-led sessions confirmed for OpenStack Summit Boston

This Spring's 2017 OpenStack Summit in Boston should be another great and educational event. The OpenStack Foundation has posted the final session agenda detailing the entire week's schedule of events. And once again Red Hat will be very busy during the four-day event, delivering more than 60 sessions, from technology overviews to deep dives around the OpenStack services for containers, storage, networking, compute, network functions virtualization (NFV), and much, much more. 

As a Headline sponsor this Spring, we also have a full-day breakout room on Monday, where we plan to present additional product and strategy sessions. And we will have two keynote presenters on stage: President and CEO Jim Whitehurst, and Vice President and Chief Technologist Chris Wright. 
To learn more about Red Hat's general sessions, look at the details below. We'll add the agenda details of our breakout soon. Also, be sure to visit us at our booth in the center of the Marketplace to meet the team and check out our live demonstrations. Finally, we'll have Red Hat engineers, product managers, consultants, and executives in attendance, so be sure to talk to your Red Hat representative to schedule an in-person meeting while there.
And in case you haven't registered yet, visit our OpenStack Summit page for a discounted registration code to help get you to the event. We look forward to seeing you in Boston this April.
For more details on each session, click on the title below:
Monday sessions

Kuryr & Fuxi: delivering OpenStack networking and storage to Docker swarm containers
Antoni Segura Puimedon, Vikas Choudhary, and Hongbin Lu (Huawei)

Multi-cloud demo
Monty Taylor

Configure your cloud for recovery
Walter Bentley

Kubernetes and OpenStack at scale
Stephen Gordon

No longer considered an epic spell of transformation: in-place upgrade!
Krzysztof Janiszewski and Ken Holden

Fifty shades for enrollment: how to use Certmonger to win OpenStack
Ade Lee and Rob Crittenden

OpenStack and OVN – what's new with OVS 2.7
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Federation with Keycloak and FreeIPA
Martin Lopes, Rodrigo Duarte Sousa, and John Dennis

7 "must haves" for highly effective Telco NFV deployments
Anita Tragler and Greg Smith (Juniper Networks, Inc.)

Containerizing OpenStack deployments: lessons learned from TripleO
Flavio Percoco

Project update – Heat
Rabi Mishra, Zane Bitter, and Rico Lin (EasyStack)

Tuesday sessions

OpenStack Telemetry and the 10,000 instances
Julien Danjou and Alex Krzos

Mastering and troubleshooting NFV issues
Sadique Puthen and Jaison Raju

The Ceph power show – hands-on with Ceph: Episode 2 – 'The Jewel Story'
Karan Singh, Daniel Messer, and Brent Compton

SmartNICs – paving the way for 25G/40G/100G speed NFV deployments in OpenStack
Anita Tragler and Edwin Peer (Netronome)

Scaling NFV – are containers the answer?
Azhar Sayeed

Free my organization to pursue cloud native infrastructure!
Dave Cain and Steve Black (East Carolina University)

Container networking using Kuryr – a hands-on lab
Sudhir Kethamakka and Amol Chobe (Ericsson)

Using software-defined WAN implementation to turn on advanced connectivity services in OpenStack
Ali Kafel and Pratik Roychowdhury (OpenContrail)

Don't fail at scale: how to plan for, build, and operate a successful OpenStack cloud
David Costakos and Julio Villarreal Pelegrino

Red Hat OpenStack Certification Program
Allessandro Silva

OpenStack and OpenDaylight: an integrated IaaS for SDN and NFV
Nir Yechiel and Andre Fredette

Project update – Kuryr
Antoni Segura Puimedon and Irena Berezovsky (Huawei)

Barbican workshop – securing the cloud
Ade Lee, Fernando Diaz (IBM), Dave McCowan (Cisco Systems), Douglas Mendizabal (Rackspace), Kaitlin Farr (Johns Hopkins University)

Bridging the gap between deploying OpenStack as a cloud application and as a traditional application
James Slagle

Real time KVM and how it works
Eric Lajoie

Wednesday sessions

Project update – Sahara
Telles Nobrega and Elise Gafford

Project update – Mistral
Ryan Brady

Bite off more than you can chew, then chew it: OpenStack consumption models
Tyler Britten, Walter Bentley, and Jonathan Kelly (MetacloudCisco)

Hybrid messaging solutions for large scale OpenStack deployments
Kenneth Giusti and Andrew Smith

Project update – Nova
Dan Smith, Jay Pipes (Mirantis), and Matt Riedermann (Huawei)

Hands-on to configure your cloud to be able to charge your users using official OpenStack components
Julien Danjou, Christophe Sautheir (Objectif Libre), and Maxime Cottret (Objectif Libre)

To OpenStack or not OpenStack; that is the question
Frank Wu

Distributed monitoring and analysis for telecom requirements
Tomofumi Hayashi, Yuki Kasuya (KDDI Research), and Ryota Mibu (NEC)

OVN support for multiple gateways and IPv6
Russell Bryant and Numan Siddique

Kuryr-Kubernetes: the seamless path to adding pods to your datacenter networking
Antoni Segura Puimedon, Irena Berezovsky (Huawei), and Ilya Chukhnakov (Mirantis)

Unlocking the performance secrets of Ceph object storage
Karan Singh, Kyle Bader, and Brent Compton

OVN hands-on tutorial part 1: introduction
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

Kuberneterize your baremetal nodes in OpenStack!
Ken Savich and Darin Sorrentino

OVN hands-on tutorial part 2: advanced
Russell Bryant, Ben Pfaff (VMware), and Justin Pettit (VMware)

The Amazon effect on open source cloud business models
Flavio Percoco, Monty Taylor, Nati Shalom (GigaSpaces), and Yaron Haviv (Iguazio)

Neutron port binding and impact of unbound ports on DVR routers with floatingIP
Brian Haley and Swaminathan Vasudevan (HPE)

Upstream contribution – give up or double down?
Assaf Muller

Hyper cool infrastructure
Randy Robbins

Strategic distributed and multisite OpenStack for business continuity and scalability use cases
Rob Young

Per API role-based access control
Adam Young and Kristi Nikolla (Massachusetts Open Cloud)

Logging work group BoF
Erno Kuvaja, Rochelle Grober, Hector Gonzalez Mendoza (Intel), Hieu LE (Fujistu) and Andrew Ukasick (AT&T)

Performance and scale analysis of OpenStack using Browbeat
Alex Krzos, Sai Sindhur Malleni, and Joe Talerico

Scaling Nova: how CellsV2 affects your deployment
Dan Smith

Ambassador community report
Erwan Gallen, Lisa-Marie Namphy (OpenStack Ambassador), Akihiro Hasegawa (Equinix), Marton Kiss (Aptira), and Akira Yoshiyama (NEC)

Thursday sessions

Examining different ways to get involved: a look at open source
Rob Wilmoth

CephFS backed NFS share service for multi-tenant clouds
Victoria Martinez de la Cruz, Ramana Raja, and Tom Barron

Create your VM in an (almost) deterministic way – a hands-on lab
Sudhir Kethamakka and Geetika Batra

RDO's continuous packaging platform
Matthieu Huin, Fabien Boucher, and Haikel Guemar (CentOS)

OpenDaylight Network Virtualization solution (NetVirt) with FD.io VPP data plane
Andre Fredette, Srikanth Vavilapalli (Ericsson), and Prem Sankar Gopanna (Ericsson)

Ceph snapshots for fun & profit
Gregory Farnum

Gnocchi and collectd for faster fault detection and maintenance
Julien Danjou and Emma Foley

Project update – TripleO
Emillien Macchi, Flavio Percoco, and Steven Hardy

Project update – Telemetry
Julien Danjou, Mehdi Abaakouk, and Gordon Chung (Huawei)

Turned up to 11: low latency Ceph block storage
Jason Dillaman, Yuan Zhou (INTC), and Tushar Gohad (Intel)

Who reads books anymore? Or writes them?
Michael Solberg and Ben Silverman (OnX Enterprise Solutions)

Pushing the boundaries of OpenStack – wait, what are they again?
Walter Bentley

Multi-site OpenStack – deployment option and challenges for a telco
Azhar Sayeed

Ceph project update
Sage Weil

 
Source: RedHat Stack


Bring In the “New” Infrastructure Stack

The post Bring In the “New” Infrastructure Stack appeared first on Mirantis | Pure Play Open Cloud.
Today, Mirantis has announced Mirantis Cloud Platform .0, which heralds an operations-centric approach to open cloud.  But what does that mean in terms of cloud services today and into the future?  I think it may change your perspective when considering or deploying cloud infrastructure.
When our Co-Founder Boris Renski declared Infrastructure Software Is Dead, he was not talking about the validity or usefulness of infrastructure software; he was talking about the delivery and operations model for infrastructure software.  Historically, infrastructure software has been complicated, and notoriously challenging in terms of lifecycle management.  The typical model encompassed a very slow release cadence of very large, integrated releases that arrived on the order of years for major releases (1.x, 2.x, 3.x…) and many quarters for minor releases (3.2, 3.3, 3.4…).  Moving from one to the other was an extremely taxing process for IT organizations, and combined with a typical hardware refresh cycle, this usually resulted in the mega-project mentality in our industry:

Architect and deploy service(s) on a top-to-bottom stack
Once it is working – don't touch it (keep it running)
Defer consumption of new features and innovation until next update
Define a mega-project plan (typically along a 3 year HW refresh)
Execute plan by starting at 1 again

While virtualization and cloud technologies provided a separation of hardware from applications, it didn’t necessarily solve this problem.  Even OpenStack by itself did not solve this problem.  As infrastructure software, it was still released and consumed in slow, integrated cycles.
Meanwhile, many interesting developments occurred in the application space.  Microservices, agile development methodologies, CI/CD, containers, DevOps — all focused on the ability to rapidly innovate and rapidly consume software in very small increments comprising a larger whole as opposed to one large integrated release.  This approach has been successful at the application level and has allowed an arms race to develop in the software economy: who can develop new services to drive revenue for their business faster than their competition?
Ironically, this movement has been happening with applications running on the older infrastructure methodology.  Why not leverage these innovations at the infrastructure level as well?
Enter Mirantis Cloud Platform (MCP)…
MCP was designed with the operations-centric approach in mind, to be able to consume and manage cloud infrastructure in the same way modern microservices are delivered at the application level.  The vision for MCP is that of a Continuously Delivered Cloud:

With a single platform for virtual machines, containers and bare metal
Delivered by a CI/CD pipeline
With continuous monitoring

Our rich OpenStack platform has been extended with a full Kubernetes distribution, enabling the deployment and orchestration of VMs, containers, and bare metal together, all on the same cloud.  As containers become increasingly important as a means of microservices development and deployment, they can be managed within the same open cloud infrastructure.
Mirantis will update MCP on a continuous basis with a lifecycle determined in weeks, not years.  This allows for the rapid release and consumption of updates to the infrastructure in small increments as opposed to the large integrated releases necessitating the mega-project.  Your consumption is based on DriveTrain, the lifecycle management tool connecting your cloud to the innovation coming from Mirantis.  With DriveTrain you consume the technology at your desired pace, pushed through a CI/CD pipeline and tested in staging, then promoted into production deployment.  In the future, this will include new features and full upgrades performed non-disruptively in an automated fashion.  You will be able to take advantage of the latest innovations quickly, as opposed to waiting for the next infrastructure mega-project.
Operations Support Systems have always been paramount to successful IT delivery, and even more so in a distributed system based on a continuous lifecycle paradigm. StackLight is the OSS that is purpose-built for MCP and provides continuous monitoring to enable automated alerts with a goal of SLA compliance.  This is the same OSS used when your cloud is managed by Mirantis with our Mirantis Managed OpenStack (MMO) offering, where we can deliver up to 99.99% SLA guarantees, or when you manage MCP in-house with your own IT operations.  As part of our Build-Operate-Transfer model, we focus on operational training with StackLight such that post-transfer you are able to use the same in-place StackLight and the same in-place standard operating procedures.
Finally!  Infrastructure software that can be consumed and managed in a modern approach just like microservices are consumed and managed at the application level.  Long live the new infrastructure!
To learn more about MCP, please sign up for our webinar on April 26. See you there!
The post Bring In the “New” Infrastructure Stack appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

The Agility and Flexibility of Docker including Oracle Database and Development Tools

A company’s important applications often are subjected to random and capricious changes due to forces well beyond the control of IT or management.  Events like a corporate merger or even a top programmer on an extended vacation can have an adverse impact on the performance and reliability of critical company infrastructure.
During the second-day keynote at DockerCon 2017 in Austin, TX, Lily Guo and Vivek Saraswat showed a simulation of how to use Docker Enterprise Edition and its application transformation tools to respond to random events that threaten to undermine the stability of a critical company service.
The demo begins as two developers are returning to work after an extended vacation.  They discover that, during their absence, their CEO has unexpectedly hired an outside contract programmer to rapidly code and introduce an entire application service that they know nothing about.  As they try to build the new service, however, Docker Security Scan detects that a deprecated library has been incorporated by the contractor.  This library is found to have a security vulnerability which violates the company’s best practice standards.  As part of Docker Enterprise Edition Advanced, Docker Security Scan automatically keeps track of code contributions and acts as a gatekeeper to flag issues and protect company standards.   In this case, they are able to find a newer version of the library and build the service successfully.
The next step is to deploy the service.  Docker Compose is the way to describe the application's dependencies and secrets access.  It is tempting to simply insert the passwords into the Compose file as plain text; however, the best choice is to let Docker Secrets manage sensitive application configuration data and take advantage of Docker EE with its ability to manage and enforce RBAC (Role-Based Access Control).
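The idea can be sketched in a minimal Compose file (file format version 3.1 or later, which introduced secrets support). The service name `webapp` and secret name `db_password` are illustrative, not taken from the demo:

```
version: "3.1"
services:
  webapp:
    image: example/webapp:latest
    secrets:
      - db_password   # surfaced inside the container at /run/secrets/db_password
secrets:
  db_password:
    external: true    # created out-of-band, e.g.: echo "s3cret" | docker secret create db_password -
```

Because the secret is referenced rather than embedded, the Compose file can be checked into source control without exposing credentials.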
It is interesting that the service consists of a Microsoft SQL Server database container that is interacting with other containers that are running Linux.  Docker Enterprise Edition features this ability to run a cluster of microservices in a hybrid Windows and Linux environment.  “It just works.”
All of the problems from the beginning of the demo now seem to be resolved, but the CEO rushes in to announce that they have just purchased a company that uses a traditional on-premises application. The merger press announcement will be tomorrow, and there is concern about the scope and cost of updating and moving the application to a modern infrastructure. However, they know they can use the Docker transformation tool, image2docker, to do the hard work of taking the traditional application and moving it into modern Docker Enterprise Edition containers, which can be deployed on any infrastructure, including the cloud.
One final step is needed to complete the move from the traditional architecture.  As the traditional application relies on the popular and powerful Oracle Database, it will need to be acquired and adapted.  Time to go out to the Docker Store.  Lily finds the Oracle DB on Docker Store and integrates it directly into the transformed application – and "it just works."
The Docker Store is the place where developers can find trusted and scanned commercial content with collaborative support from Docker and the application container image provider.  Oracle today announced that its flagship databases and developer tools will be immediately available as Docker containers through the Docker Store marketplace.  The first set of certified images includes: Oracle Database, Oracle MySQL, Oracle WebLogic Server, Oracle Coherence, Oracle Instant Client, and Oracle Java 8 SE (Server JRE).
The demo ends, having shown how developers can use Docker Enterprise Edition to quickly resolve library incompatibility issues, and how easy it is to take traditional applications and accomplish the first steps toward adapting them to a modern container infrastructure.

The post The Agility and Flexibility of Docker including Oracle Database and Development Tools appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Introducing the Modernize Traditional Apps Program

Today at DockerCon, we announced the Modernize Traditional Applications (MTA) Program to help enterprises make their existing legacy apps more secure, more efficient and portable to hybrid cloud infrastructure.  Collaboratively developed and brought to market with partners Avanade, Cisco, HPE, and Microsoft, the MTA Program consists of consulting services, Docker Enterprise Edition, and hybrid cloud infrastructure from partners to modernize existing .NET Windows or Java Linux applications in five days or less.  Designed for IT operations teams, the MTA Program modernizes existing legacy applications without modifying source code or re-architecting the application.

The First Step In The Microservices Journey
In working with hundreds of our enterprise IT customers the last couple years, when we sit down with them one of the first questions they inevitably ask is, “What is the first step we should take toward microservices?”
Through experience we have found that, for the vast majority of them, the best answer is, "Start with what you have today – with your existing applications."  Why is this the right place for them to start?  Because it recognizes two realities facing enterprise IT organizations today: existing applications consume 80% of IT budgets, and most IT organizations responsible for existing apps are also tasked with hybrid cloud initiatives.
Seeing this pattern repeatedly, we developed this program as a solution for IT operations teams to rapidly address both realities.
Bringing Portability, Security, and Efficiency to Legacy Applications
The heart of the program is a methodology and automation tooling to containerize existing .NET Windows or Java Linux applications without modifying source code or re-architecting the app.  Then, using Docker’s Containers-as-a-Service (CaaS) offering, Docker Enterprise Edition (Docker EE), IT operations teams deploy and manage the newly containerized applications on partners’ hybrid cloud infrastructure.
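As a hedged illustration of that principle (this is not Docker's MTA tooling itself), a legacy Java web application can often be containerized by packaging its existing deployable unchanged; the image tag and file name below are placeholders:

```
# Hypothetical Dockerfile: the existing WAR is copied as-is; no source changes.
FROM tomcat:8.5
COPY legacy-app.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
```

The application's code and build artifacts are untouched; only the runtime packaging changes, which is what makes a days-long turnaround plausible.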
The result?  Customers participating in the private beta of the Modernize Traditional Apps Program during the last six months report the following benefits:

Portability.  Customers share that previous attempts to move legacy applications to hybrid cloud infrastructure took months and suffered from high failure rates.  In contrast, thanks to the ability of containers to package together an application and its dependencies, MTA’d legacy applications can be moved in weeks.
Security.  Docker Enterprise Edition (Docker EE) improves the security profile of existing legacy applications through container-based isolation, automated scanning and alerting for vulnerabilities, and integrity verification through digital signatures.
Efficiency.  Customers realize significant improvements in the total cost of ownership (TCO), in both CapEx and OpEx, of their existing legacy applications.

To give specific examples, today at DockerCon private beta program participants Northern Trust and Microsoft IT both shared their experiences and results:

Northern Trust, a leading international financial services company, experienced deployment times that were 4X faster and noted a 2X improvement in infrastructure utilization;
Microsoft is not only a partner in this program; their IT organization is also a beta customer.  Microsoft IT increased app density 4X with zero impact to performance and was able to reduce its infrastructure costs by a third.

Program Powered By Partners
The goal of the program is to accelerate the time-to-value for customers.  To achieve this, we worked closely with our partners to define tightly-scoped, turnkey solutions consisting of consulting services, Docker Enterprise Edition (Docker EE) software, and hybrid cloud infrastructure.

Avanade and Microsoft Azure.  Avanade’s consulting services provides a structured approach for evaluating the customer’s existing legacy applications, containerizing, and deploying and managing the containerized apps on Microsoft Azure hybrid cloud using Docker EE.
Cisco.  Cisco offers consulting services and their UCS converged infrastructure products for MTA Program customers.  Used together with Docker EE,  the solution helps customers take advantage of Cisco’s policy-based container networking technology, Contiv, in deploying containerized apps to hybrid cloud environments.
HPE.  For hybrid cloud solutions employing composable infrastructure, HPE offers MTA Program customers consulting services and converged infrastructure products together with Docker EE to deploy and manage containerized legacy apps.

Docker Enterprise Edition Empowers Customers to Control the Journey
These turnkey bundles make the MTA Program a quick, efficient solution for IT operations taking the first step toward microservices.  And with the modernized applications being managed by the Docker CaaS offering, Docker Enterprise Edition (Docker EE), customers have control over the journey’s pace and direction – how fast or slow, as well as which application functionality to re-factor into microservices and which to leave as-is.  This flexibility stems from Docker EE’s ability to manage the lifecycle of any containerized app, from 10-year-old legacy to just-released microservices-based and anything in between.

We are excited to announce the Modernize Traditional Apps Program today at DockerCon with partners Avanade, Cisco, HPE, and Microsoft and share the stories from IT organizations who are making their existing legacy applications portable, secure, and efficient.  Use the links below to learn more about how the MTA Program from Docker and its partners can help breathe new life into your existing legacy applications.
More Resources:

Learn more about modernizing traditional apps with Docker EE
Read the press release
Read more about the program from Microsoft
Read more about the program with Cisco  


The post Introducing the Modernize Traditional Apps Program appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems

 
Last year, one of the most common requests we heard from our users was to bring a Docker-native experience to their platforms. These platforms were many and varied: from cloud platforms such as AWS, Azure, Google Cloud, to server platforms such as Windows Server, desktop platforms that their developers used such as OSX and Windows 10, to mainframes and IoT platforms – the list went on.
We started working on support for these platforms, and we initially shipped Docker for Mac and Docker for Windows, followed by Docker for AWS and Docker for Azure. Most recently, we announced the beta of Docker for GCP. The customizations we applied to make Docker native for each platform have furthered the adoption of the Docker editions.
One of the issues we encountered was that for many of these platforms, the users wanted Linux container support but the platform itself did not ship with Linux included. Mac OS and Windows are two obvious examples, but cloud platforms do not ship with a standard Linux either. So it made sense for us to bundle Linux into the Docker platform to run in these places.
What we needed to bundle was a secure, lean and portable Linux subsystem that could provide Linux container functionality as a component of a container platform. As it turned out, this is what many other people working with containers wanted as well: a secure, lean and portable Linux subsystem for the container movement. So we partnered with several companies and the Linux Foundation to build this component. These companies include HPE, Intel, ARM, IBM and Microsoft – all of whom are interested in bringing Linux container functionality to new and varied platforms, from IoT to mainframes.
LinuxKit includes the tooling to build custom Linux subsystems that include exactly the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. All components can be substituted with ones that match specific needs. It is a kit, very much in the Docker philosophy of batteries included but swappable.  Today, onstage at DockerCon 2017, we open sourced LinuxKit at https://github.com/linuxkit/linuxkit.
To achieve our goals of a secure, lean and portable OS, we built it from containers, for containers.  Security is a top-level objective and aligns with NIST, which states in its draft Application Container Security Guide: “Use container-specific OSes instead of general-purpose ones to reduce attack surfaces. When using a container-specific OS, attack surfaces are typically much smaller than they would be with a general-purpose OS, so there are fewer opportunities to attack and compromise a container-specific OS.”
The leanness directly helps with security by removing parts not needed when the OS is designed around the single use case of running containers. Because LinuxKit is container-native, it has a very minimal size – 35MB – with a very minimal boot time.  All system services are containers, which means that everything can be removed or replaced.
System services are sandboxed in containers, with only the privileges they need. The configuration is designed for the container use case. The whole system is built to be used as immutable infrastructure, so it can be built and tested in your CI pipeline, deployed, and new versions are redeployed when you wish to upgrade.
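A build definition makes this concrete. The sketch below follows the general shape of the YAML files in the LinuxKit repository; the image tags and component choices are illustrative, not a canonical configuration:

```
# Illustrative LinuxKit build file; image tags are placeholders.
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:latest
  - linuxkit/runc:latest
onboot:
  - name: dhcpcd                   # one-shot container run at boot
    image: linuxkit/dhcpcd:latest
services:
  - name: getty                    # long-running system service, itself a container
    image: linuxkit/getty:latest
```

Every component besides the kernel is a swappable container image, which is what makes the "remove or replace anything" claim operational: the whole image is rebuilt from this file and redeployed rather than patched in place.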
The kernel comes from our collaboration with the Linux kernel community: we participate in the process and work with groups such as the Kernel Self Protection Project (KSPP), while shipping recent kernels with only the minimal patches needed to fix issues with the platforms LinuxKit supports. The kernel security process is too big for a single company to develop on its own; a broad industry collaboration is necessary.
In addition, LinuxKit provides a space to incubate security projects that show promise for improving Linux security. We are working with external open source projects such as WireGuard, Landlock, Mirage, oKernel, Clear Containers and more to provide a testbed and focus for innovation in the container space, and a route to production.
LinuxKit is portable: it was built for the many platforms Docker runs on now, and with a view to making it run on far more – whether large or small machines, bare metal or virtualized, mainframes or the kind of devices used in Internet of Things scenarios – as containers reach into every area of computing.
For the launch we invited John Gossman from Microsoft onto the stage. We have a long history of collaboration with Microsoft, on Docker for Windows Server, Docker for Windows and Docker for Azure. Part of that collaboration has been work on the Linux subsystem in Docker for Windows and Docker for Azure, and on Hyper-V integration with LinuxKit on those platforms. The next step in that collaboration, announced today, is that all Windows Server and Windows 10 customers will get access to Linux containers, and we will be working together on how to integrate LinuxKit with Hyper-V isolation.
Today we open up LinuxKit to partners and open source enthusiasts to build new things with Linux and to expand the container platform. We look forward to seeing what you make from it and contribute back to the community.


Learn More about LinuxKit:

Check out the LinuxKit repository on GitHub
Join us for the DockerCon 2017 Online Meetup Recap
Read the Announcement

 
The post Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

DevOps Engineer

Mirantis is looking for a highly qualified candidate with experience in systems integration, release management, and package development in DEB format. The Infrastructure team takes code from the Open Source community and applies fixes and patches generated both internally and from external contributors to deliver OpenStack. Experience handling large-scale upgrades, zero-downtime maintenance, and contingency planning is highly desirable.
Responsibilities:

define and manage test environments required for different types of automated tests
drive cross-team communications to streamline and unify build and test processes
track hardware utilization by CI/CD pipelines
provide and maintain specifications and documentation for Infrastructure systems
provide support for users of Infrastructure systems (developers and QA engineers)
produce and deliver technical presentations at internal knowledge transfer sessions, public workshops and conferences
deploy new slaves and standalone servers with Puppet/Salt/Ansible

Required Skills:

Linux system administration – package management, services administration, networking, KVM-based virtualization
scripting with Bash and Python
experience with the DevOps configuration management methodology and tools (Puppet, Ansible, Salt)
ability to describe and document systems design decisions
familiarity with development workflows – feature design, release cycle, code-review practices
English, both written and spoken

Will Be a Plus:

knowledge of CI tools and frameworks (Jenkins, Buildbot, etc.)
release engineering experience – branching, versioning, managing security updates
understanding of release engineering and QA practices of major Linux distributions
experience in test design and automation
experience in project management
involvement in major Open Source communities (developer, package maintainer, etc.)

What We Offer:

challenging tasks, providing room for creativity and initiative
work in a highly distributed international team
work in the Open Source community, contributing patches to upstream
opportunities for career growth and relocation
business trips for meetups and conferences, including OpenStack Summits
strong benefits plan
medical insurance

The post DevOps Engineer appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis