Lanka Bell and IBM team up to accelerate cloud in Sri Lanka

It just got easier for businesses, developers and government organizations in Sri Lanka to access all the benefits of cloud.
Telecommunications provider Lanka Bell and IBM announced a new agreement to offer public, private and hybrid IBM Cloud services in Sri Lanka, including workload migrations, disaster recovery and capacity expansion solutions. Services available will include infrastructure as a service (IaaS), platform as a service (PaaS), storage and virtual machines.
The offerings can be integrated using IBM Network Access Service solutions.
Lanka Bell hopes to “help enterprise customers in the country to embrace cloud offerings quickly and easily,” said Prasad Samarasinghe, the company’s managing director. Samarasinghe noted that the agreement extends a 20-year partnership between IBM and Lanka Bell.
Learn more about the Lanka Bell and IBM partnership in Lanka Business Online’s full article.
The post Lanka Bell and IBM team up to accelerate cloud in Sri Lanka appeared first on news.
Source: Thoughts on Cloud

Visa Inc. Gains Speed and Operational Efficiency with Docker Enterprise Edition

DockerCon 2017 was an opportunity to hear from customers across multiple industries and segments on how they are leveraging technology to accelerate their business. In the Day 2 keynote, and in a breakout session that afternoon, Visa shared how Docker Enterprise Edition is empowering them on their mission to make global economies safer by digitizing currency and making electronic payments available to everyone, everywhere.
 
Visa is the world’s largest retail electronic payment network, handling 130 billion transactions a year and processing $5.8 trillion annually. Swamy Kocherlakota, Global Head of Infrastructure and Operations, shared that Visa got here by expanding its global footprint, which has put pressure on his organization, whose headcount has remained mostly flat during that time. Since going into production with their Docker Containers-as-a-Service architecture six months ago, Mr. Kocherlakota has seen a 10x increase in scalability, ensuring that his organization will be able to support the company's overall mission and growth objectives well into the future.
Global Growth Fuels Need for A New Operating Model
In aligning his organization to the company mission, Swamy decided to focus on two primary metrics: Speed and Efficiency.

Speed is tied to developer onboarding and developer productivity. Visa wants new developers to be able to deploy code on their first day. That means giving them tools they are familiar with and getting out of their way. It also means providing developers access to infrastructure whenever and wherever they need it.

Efficiency is tied to Visa’s ability to maximize utilization of their existing datacenter footprint while also reducing the time the team spends on patching and refreshing hardware. Optimizing their efficiency also frees up both headcount and datacenter resources to support their global growth initiatives.

While considering how they could support these objectives, Visa also had to meet the high bar on security and availability that underpins everything they do. Some of the core systems at Visa have had zero downtime over a span of 20 years!
Modernizing with Docker Enterprise Edition
After investigating different technologies and vendors who could help them achieve both speed and efficiency objectives, Visa chose Docker Enterprise Edition (Docker EE) to help them move towards a microservices application model while also modernizing their data center operations.
Visa was looking for an enterprise-ready solution and appreciated the integrated approach of the Docker EE stack which includes scheduling, service registry, service discovery, container networking, and a centralized management control plane. Docker EE allows them to manage multiple development, QA, and staging environments, gain visibility across their container environment, and retain full control over role-based access.
Visa chose two key applications to begin their Docker journey – a core transaction processing application and a risk decision system. These were legacy monolithic applications which they began to containerize into services. Those two applications are now running in production on Docker EE across multiple regions and handling 100,000 transactions per day. They consist of 100 separate containers and have the ability to instantly scale to 800 when transactions peak.
To learn more about Visa’s application architecture, watch the breakout Docker Networking in Production at Visa below:

Results and Benefits

With Docker EE now in production, Visa is seeing improvements in a number of ways:

Provisioning time: Visa can now provision in seconds rather than days even while more application teams join the effort. They can also deliver just-in-time infrastructure across multiple datacenters around the world with a standardized format that works across their diverse set of applications.
Patching & maintenance: With Docker, Visa can simply redeploy an application with a new image. This also allows Visa to respond quickly to new threats as they can deploy patches across their entire environment at one time.
Tech Refresh: Once applications are containerized with Docker, developers do not have to worry about the underlying infrastructure; the infrastructure is invisible.
Multi-tenancy: Docker containers provide both space- and time-division multiplexing by allowing Visa to provision and deprovision microservices quickly as needed. This allows them to strategically place new services into the available infrastructure, which has allowed the team to support 10x the scale they could previously.
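The "patch by redeploying a new image" pattern described above can be sketched with a swarm-mode service update. The service and image names below are hypothetical, and the sketch only composes the command rather than running it, since it assumes a live swarm with a service to update:

```shell
#!/bin/sh
# Hypothetical names, not Visa's actual services or images.
IMAGE="payments-api:1.4.2-patched"
SERVICE="payments-api"

# Against a real swarm, this command would roll the service onto the
# patched image, replacing containers rather than patching them in place:
#   docker service update --image "$IMAGE" "$SERVICE"
cmd="docker service update --image $IMAGE $SERVICE"
echo "$cmd"
# prints: docker service update --image payments-api:1.4.2-patched payments-api
```

Because the fix ships as a new image, the same update can be applied across every environment running that image at once.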

To hear more about how Visa was able to gain 10x scalability for their application with Docker, watch Swamy’s presentation from the Day 2 general session below:

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship and run business-critical applications in production at scale. Docker EE is integrated, certified and supported to provide enterprises like Visa with the most secure container platform in the industry to modernize all applications.
Next Steps

Watch the entire Day 2 General Session from DockerCon 2017
View all the recorded sessions from DockerCon 2017
Learn more about Docker Enterprise Edition


The post Visa Inc. Gains Speed and Operational Efficiency with Docker Enterprise Edition appeared first on Docker Blog.
Source: https://blog.docker.com/feed/


Deploying CloudForms in Microsoft Azure

In this article we will deploy the CloudForms appliance in the Azure cloud. Red Hat provides CloudForms as an appliance. For Microsoft Hyper-V and Azure, Red Hat provides a Virtual Hard Disk (VHD) as a dynamic disk. Azure, unfortunately, does not support dynamic disks. In order to import the CloudForms appliance into Azure, we need to convert the appliance VHD to a fixed disk.
The VHD will have a fixed size of around 40GB. To avoid uploading the full 40GB, and instead upload only the actual data, which is closer to 2GB, we will use several tools. You can of course use PowerShell with the Azure cmdlets, or, if you are a Linux guy like me, Microsoft has provided a tool written in Go that works great for uploading disks to Azure. In addition, Microsoft provides a command-line interface (Azure CLI) with functionality similar to PowerShell, but written in Python.
Convert VHD from Dynamic to Fixed
The first question you might have is: why provide a dynamic disk at all? Red Hat doesn’t want you to have to download a 40GB image, so it provides a dynamic disk instead. In the next steps we will take that image, convert it to a fixed disk and upload it to Azure.
First, we need to convert the image to a raw image. We can do this using qemu-img. To do so, we need to compute the appropriate size for the new disk image and resize it. I’ve written a quick script that will do this. You can get it here.
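The linked script is the authoritative version; as a rough illustrative sketch (file names are made up, and the 1 MiB alignment reflects Azure's requirement that a fixed VHD's virtual size be a whole number of megabytes), the conversion might look like this:

```shell
#!/bin/sh
# Sketch of the dynamic-to-fixed conversion, not the author's script.
# Assumes qemu-img is installed; file names are illustrative.
SRC=cfme-azure-5.7.0.17-1.vhd
RAW=cfme-azure.raw
FIXED=cfme-azure-fixed.vhd
MIB=$((1024 * 1024))

# Azure expects the virtual size to be a whole number of MiB,
# so round a byte count up to the next MiB boundary.
round_up_mib() {
    echo $(( (($1 + MIB - 1) / MIB) * MIB ))
}

if [ -f "$SRC" ]; then
    # Unpack the dynamic VHD into a raw image.
    qemu-img convert -f vpc -O raw "$SRC" "$RAW"
    # Grow the raw image to the next MiB boundary.
    qemu-img resize -f raw "$RAW" "$(round_up_mib "$(wc -c < "$RAW")")"
    # Repack as a fixed-size VHD that Azure will accept.
    qemu-img convert -f raw -O vpc -o subformat=fixed,force_size "$RAW" "$FIXED"
fi
```

The `subformat=fixed` option makes qemu-img emit a fixed rather than dynamic VHD, which is the format Azure requires.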
Upload and Run CloudForms
In order to upload the disk image to Azure and run it, you need to use the Microsoft Azure VHD tools and the Azure CLI, as I mentioned previously. The VHD tools are written in Go, so you may need to install Go as well. The steps I took in my environment are at the end of this post.
The command to upload the disk image is:
$ ./azure-vhd-utils upload --localvhdpath /home/cfme-azure-5.7.0.17-1.vhd --stgaccountname <storage account> --stgaccountkey <storage key> --containername templates --blobname cfme-azure-5.7.0.17-1.vhd --parallelism 8
You need to substitute your own values into the storage account and storage key values. Once the upload completes you can deploy the CloudForms Appliance in Azure. In order to do this we will use the Azure CLI.
The following command creates the CloudForms VM from the VHD you just uploaded.
$ azure vm image create cfme-azure-5.7.0.17-1 --blob-url https://premiumsadpdhlose2disk.blob.core.windows.net/templates/cfme-azure-5.7.0.17-1.vhd --os Linux /home/cfme-azure-5.7.0.17-1.vhd
Note that you can also use the Azure portal UI to create the CloudForms VM once the image is uploaded.
Configure CloudForms in Azure
Once the CloudForms appliance is deployed, you can access it using a username/password or SSH key, depending on what you chose when creating the VM in Azure.
That’s it! You can now configure the CloudForms appliance just as you would normally.
Summary
In this article we explored how to deploy the CloudForms appliance in the Azure cloud. CloudForms provides a single pane of glass for administering various cloud platforms. Having a CloudForms appliance deployed in Azure gives you more responsive management of Azure resources.
Happy Clouding in Azure!

Note: As I mentioned, the Azure VHD tools are written in Go so you need to first install Go. I installed version 1.7.4.
$ gunzip go1.7.4.linux-amd64.tar.gz
$ tar xvf go1.7.4.linux-amd64.tar
$ cd go
Export the environment parameters
$ mkdir $HOME/work
$ export GOPATH=$HOME/work
$ export PATH=$PATH:$GOPATH/bin
Then install the VHD tools
$ go get github.com/Microsoft/azure-vhd-utils
Similarly, to use the Azure CLI, you need to install Python and dependencies first. (These may already be on your system, but are provided here for completeness.)
$ sudo dnf install python
$ sudo dnf install python-pip
$ sudo dnf install python-devel
$ sudo dnf install openssl-devel
$ sudo dnf install npm
Then you can install the Azure CLI
$ sudo npm install azure-cli -g
$ sudo pip install --upgrade pip
$ sudo pip install azure==2.0.0rc5
 
Source: CloudForms

OpenStack Maintenance Engineer

The post OpenStack Maintenance Engineer appeared first on Mirantis | Pure Play Open Cloud.
Mirantis is looking for an engineer with expertise in Linux, Puppet and Python to join us as an OpenStack Maintenance Engineer. In this role, you'll maintain already shipped releases of Mirantis OpenStack by creating updates to Mirantis OpenStack components to improve security, performance, data consistency, usability and other aspects of production OpenStack environments. You will work closely with Development, QA, Services and Support teams to provide the best user experience for Mirantis OpenStack customers.

Job responsibilities:

Develop Puppet manifests to deploy maintenance updates to Mirantis OpenStack clusters
Maintain deployment manifests of already shipped releases of Mirantis OpenStack
Investigate and troubleshoot technical issues
Develop and backport patches for Mirantis OpenStack components
Analyze upstream stable branches and consume upstream fixes in Mirantis OpenStack maintenance updates
Work closely with development engineering and assist support and services engineers

Requirements:

5+ years of experience in the IT industry
3+ years of experience as a deployment engineer
Strong knowledge of Puppet
Good system administration and automation skills in Linux
Experience with HA and relevant tools, such as Corosync, Pacemaker, keepalived, HAProxy
Experience with git, gerrit or other distributed version control and review systems
Ability to identify and troubleshoot issues quickly in a Linux-based environment
Good written communication skills in English

Would be a plus:

Spoken English
Software development experience with Python
Experience working with Ansible and/or Chef
Experience with OpenStack and cloud computing
Experience with common messaging platforms, such as RabbitMQ
Understanding of software development and release management processes
Linux networking experience
Virtualization experience
Source: Mirantis

How businesses can get the cognitive edge

The buzz in the computing industry is all about cognitive.
My clients are at various stages of understanding its implications. They want to know what cognitive truly is and how it solves real business challenges.
So what is cognitive?  As many observers have noted, it’s not a specific product. Instead, it’s an era that includes multiple vendors and various technologies. The move to this new era is driven by changes in the data landscape. Cognitive computing is vital to turning zettabytes of data into meaningful information. It enables computers to understand, reason and learn without a person programming everything to achieve answers.
For a business, cognitive’s implications are enormous. It is the “disruptive enabler.”
IBM has identified five areas where a business can benefit now if it starts building a cognitive business:

Drive deeper engagement. Help clients behind the scenes for better customer experience.
Scale expertise. Companies spend lots of money training employees. This can be scaled more effectively.
Put learning in every product. Build products that adapt to each customer’s needs.
Change operations. Streamline your supply chain to help margins.
Transform how discovery is done. From pharmaceuticals to financial industries, research will be the foundation of how many companies work in the future.

In our survey of companies on a cognitive journey, the results were profound. For advanced users, the gains in customer engagement and the ability to respond faster to market needs were nearly doubled compared to beginners. Improvements to productivity and efficiency were just as significant.
For example, Mueller, a privately held company that employs 700 people across four manufacturing and distribution centers in the south-central United States, has implemented cognitive systems to assist with revenue forecasting, supply chain management, marketing, employee health and safety, and talent management. Within 12 months, one solution returned 113 percent on investment, creating a net annual benefit of more than $780,000, and reducing scrap metal waste by 20 to 30 percent. Another solution reduced the time spent creating reports by more than 90 percent, while a third solution resulted in a 90 percent improvement in the time to value in data processing.
USAA, a financial services company, provides banking and insurance services to 10.4 million past and present members of the US armed forces and their immediate family members. To better service these customers, USAA has implemented an innovative cognitive computing solution that uses IBM Watson. The solution enables transitioning military members to visit usaa.com or use a mobile browser to ask questions specific to leaving the military, such as “Can I be in the reserve and collect veterans compensation benefits?” or “How do I make the most of the Post-9/11 GI Bill?” As a result, USAA can provide customers comprehensive answers to complex questions in a non-judgmental environment.
WellPoint (now part of Anthem), one of the largest health benefits companies in the United States, delivers numerous health benefit solutions through its networks nationwide. For complex decisions, patients can often wait weeks for the clinical review to occur, and a lack of available evidence or ability to process in a timely fashion can delay treatment or lead to errors. To address this business challenge, WellPoint implemented a cognitive computing solution powered by IBM Watson that provides decision support for the pre-authorization process. Providing these decision support capabilities and reducing paperwork gives clinicians the chance to spend more time with patients.
These are the competitive business advantages an enterprise needs: the capabilities to transform business processes, impact business outcomes and engage customers in the new era ahead.
This fundamental change in computing needs leading-edge providers to drive it. We agree with research analysts at Gartner who said in their latest report that the IBM approach to cognitive computing is “likely to be one of the most attractive platforms in the future.”
If your business hasn’t done so already, now’s the time to start your cognitive journey.
Learn more about cognitive capabilities with IBM Watson.
The post How businesses can get the cognitive edge appeared first on news.
Source: Thoughts on Cloud

Making cities safer: data collection for Vision Zero

A critical part of enabling cities to implement their Vision Zero policies – the goal of the current National Transportation Data Challenge – is to be able to generate open, multi-modal travel experience data. While existing datasets use police and hospital reports to provide a comprehensive picture of fatalities and life altering injuries, by their nature, they are sparse and resist use for prediction and prioritization. Further, changes to infrastructure to support Vision Zero policies frequently require balancing competing needs from different constituencies – protected bike lanes, dedicated signals and expanded sidewalks all raise concerns that automobile traffic will be severely impacted.
A timeline of the El Monte/Marich intersection in Mountain View, from 2014 to 2017 provides an opportunity to put some of these challenges into context.

since there is no standard way to report near misses, the City didn’t know that the intersection was so dangerous until somebody actually died, and it was not included in the ped and bike plans,
because the number of fatalities is so low, and the number of areas that need to be fixed is so high, past fatalities may not be good predictors of future ones. But that makes prioritization challenging – should the City play “whack-a-mole” with locations where fatalities occurred, or should it stick with the ped and bike plans?
even if the City does pick an area to fix, it is not clear what the fix should be. Note that the City wanted to improve the visibility of the intersection, but the residents were skeptical that any solution that did not address the speeding would be sufficient.
it is not clear how to balance competing needs – addressing the speeding issue will potentially increase the travel times of (the currently speeding) automobile travellers. Increased travel time is quantifiable; how can we make the increased safety also quantifiable so that we can, as a society, make the appropriate tradeoffs?

The e-mission project in the RISE and BETS labs focuses on building an extensible platform that can instrument the end-to-end multi-modal travel experience at the personal scale, collate it for analysis at the societal scale, and help solve some of the challenges above.
In particular, it combines background data collection of trips, classified by modes, with user-reported incident data, and makes the resulting anonymized heatmaps available via public APIs for further visualization and analysis. The platform also has an integration with the habitica open source platform to enable gamification of data collection.
This could allow cities to collect crowdsourced stress maps, use them to prioritize the areas that need improvement, and after pilot or final fixes are done, quantify the reduction in stress and mode shifts related to the fix.
Since this is an open source, extensible platform and generates open data, it can easily be extended to come up with some cool projects. Here are five example extensions to give a flavor of what improvements can be done:

enhance the incident reporting to provide more details (why? how serious?)
have the incident prompting be based on phone shake instead of a prompt at the end of every trip
encourage reporting through gamification using the habitica integration
convert the existing heatmaps to aggregate, actionable metrics
automatically identify “top 5” or “top 10” hotspots for cities to prioritize

But these are just examples – the whole point of the challenge is to tap into all the great ideas that are out there. Sign up for the challenge, walk/bike around your cities, hear what planners want, and use your ideas to make the world a better place!
Source: Amplab Berkeley

CurrentCare offers the benefits of assisted living at home with Watson IoT

Forget cameras, microphones, and wearable devices. There’s a better way to monitor the well-being of loved ones who might need assistance. Instead of intruding on their privacy, their houses can check that they’re all right using passive sensors.
Inspired by a number of energy monitoring projects, including one to reduce energy poverty for residents in social housing, Current Cost, a manufacturer of real-time displays for monitoring domestic electricity usage, has taken a new direction and developed a connected-home offering.
Current Cost realized that its energy monitoring product family could be enhanced with additional sensors to provide CurrentCare, a solution for ambient assisted living.
CurrentCare is a spinoff company that, with IBM as its technology partner, passively monitors elderly and vulnerable people in their homes. With sensors, CurrentCare’s telecare solution alerts caregivers and family members when something out of the ordinary is happening.
What appliances can tell you
For example, consider someone who has an electric kettle and makes a cup of tea first thing in the morning. If the kettle is being monitored, it’s obvious, when it goes from zero watts to 3,000 watts in the morning, that someone’s heating water for their tea.
It is also obvious when something hasn’t happened by a certain time. In many cases, it would be extremely unusual, and maybe something is very wrong, if a habitual morning tea drinker hasn’t had at least one cup of tea by 10 in the morning.
Configuring CurrentCare
The CurrentCare solution uses low-cost sensors on key appliances, door open/close sensors, temperature and carbon monoxide monitors, room-level motion detectors, pressure mats in or near the bed, and a toilet flush sensor.
The sensors are easy to install and can be individually configured in a person’s home for their specific needs. Data is sent from the home over broadband (a GSM option will also be available soon) to a cloud-based analytical system, hosted by IBM.
Users can configure alerting rules, customizing them to the individual, to determine what to do if something unusual happens.
How the system works
CurrentCare’s data analysis service is hosted in an IBM cloud data center, and makes use of the secure, scalable and reliable IBM Watson Internet of Things (IoT) connectivity platform, which receives data sent from the CurrentCare equipment in patients’ homes. The data is then processed by an application running in the IBM Bluemix application platform.
Here, the triggering rules for the sensors from each house are applied, determining what alerts are to be sent to which appropriate party:

If the front door opens between midnight and 4 a.m.: call me, and send the friendly neighbor a text.
If the toilet hasn’t flushed by 10 a.m.: send me a text.
If the room temperature drops below 68 degrees: turn on the heating.
If the refrigerator fails: e-mail me.
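As a purely hypothetical illustration (this is not CurrentCare's actual rule syntax or API), rules like these amount to a simple mapping from a sensor event, plus the time of day, to an action:

```shell
#!/bin/sh
# Hypothetical sketch of the rules above; event names and actions are
# invented for illustration and are not CurrentCare's API.
alert_for() {
    event=$1; hour=$2
    case "$event" in
        front_door_open)
            # Only alarming in the small hours.
            [ "$hour" -ge 0 ] && [ "$hour" -lt 4 ] && echo "call me; text neighbor" ;;
        no_toilet_flush)
            [ "$hour" -ge 10 ] && echo "text me" ;;
        low_room_temp)  echo "turn on heating" ;;
        fridge_failure) echo "e-mail me" ;;
    esac
}

alert_for front_door_open 2   # -> call me; text neighbor
alert_for no_toilet_flush 11  # -> text me
```

The key point is that the same event can be benign or alarming depending on the individual's routine, which is why the rules are configured per home rather than hard-coded.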

The CurrentCare portal dashboard works with any browser, including tablets and smartphones. This means caregivers can access data and activity charts wherever they are.
CurrentCare minds the house
At the heart of it all, the CurrentCare system is a home monitoring system. This means it can also help homeowners while they’re away, alerting them to factors such as whether anyone has entered the yard or opened the door. They’ll know if their pet sitter has come and whether the mail carrier delivered the package they were expecting. They can also be alerted to unusual activity. It’s a great way to keep an eye on things.
For more about the CurrentCare system and the way it works, visit CurrentCare.
To learn more about this topic, read this post on the IBM Internet of Things blog.
The post CurrentCare offers the benefits of assisted living at home with Watson IoT appeared first on news.
Source: Thoughts on Cloud

OK, I give up. Is Docker now Moby? And what is LinuxKit?

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
This week at DockerCon, Docker made several announcements, but one in particular caused massive confusion as users thought that “Docker” was becoming “Moby.” Well… OK, but which Docker? The Register probably put it best when it said, “Docker (the company) decided to differentiate Docker (the commercial software products Docker CE and Docker EE) from Docker (the open source project).” Tack on a second project about building core operating systems, and there’s a lot to unpack.
Let’s start with Moby.
What is Moby?
Docker, being the foundation of many people’s understanding of containers, unsurprisingly isn’t a single monolithic application. Instead, it’s made up of components such as runc, containerd, InfraKit, and so on. The community works on those components (along with Docker, of course) and when it’s time for a release, Docker packages them all up and out they go. With all of those pieces, as you might imagine, it’s not a simple task.
And what happens if you want your own custom version of Docker? After all, Docker is built on the philosophy of “batteries included but swappable”. How easy is it to swap something out?
In his blog post introducing the Moby Project, Solomon Hykes explained that the idea is to simplify the process of combining components into something usable. “We needed our teams to collaborate not only on components, but also on assemblies of components, borrowing an idea from the car industry where assemblies of components are reused to build completely different cars.”
Hykes explained that from now on, Docker releases would be built using Moby and its components.  At the moment there are 80+ components that can be combined into assemblies.  He further explained that:
&8220;Moby is comprised of:

A library of containerized backend components (e.g., a low-level builder, logging facility, volume management, networking, image management, containerd, SwarmKit, …)
A framework for assembling the components into a standalone container platform, and tooling to build, test and deploy artifacts for these assemblies.
A reference assembly, called Moby Origin, which is the open base for the Docker container platform, as well as examples of container systems using various components from the Moby library or from other projects."

Who needs to know about Moby?
The first group that needs to know about Moby is Docker developers, as in the people building the actual Docker software, not people building applications using Docker containers, or even people building Docker containers. (Here's hoping that eventually this nomenclature gets cleared up.) Docker developers should just continue on as usual; Docker pull requests will be rerouted to the Moby project.
So everyone else is off the hook, right?
Well, um, no.
If all you do is pull together containers from pre-existing components and software you write yourself, then you're good; you don't need to worry about Moby. Unless, that is, you aren't happy with your available Linux distributions.
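In that common case, composing pre-existing components with your own code is still just the familiar Dockerfile workflow; a minimal sketch, where the base image, tag, and file names are illustrative rather than taken from any particular project:

```dockerfile
# Pre-existing component: a small base image
FROM alpine:3.5

# Your own software layered on top
RUN apk add --no-cache python3
COPY app.py /app/app.py

# Run it as the container's entry point
CMD ["python3", "/app/app.py"]
```

Nothing about Moby changes this day-to-day workflow; `docker build` and `docker run` work as before.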
Enter LinuxKit.
What is LinuxKit?
While many think that Docker invented the container, in actuality Linux containers had been around for some time, and Docker containers are based on them. Which is really convenient, if you're using Linux. If, on the other hand, you are using a system that doesn't include Linux, such as a Mac, a Windows PC, or that Raspberry Pi you want to turn into an automatic goat feeder, you've got a problem.
Docker requires Linux containers, which is a problem if you have no Linux.
Enter LinuxKit.  
The idea behind LinuxKit is that you start with a minimal Linux kernel (the base distro is only 35MB) and add literally only what you need. Once you have that, you can build your application on it and run it wherever you need to. Stephen Foskett tweeted a picture of an example from the announcement:

More about LinuxKit DockerCon pic.twitter.com/TfRJ47yBdB
— Stephen Foskett (@SFoskett) April 18, 2017

The end result is that you can build containers that run on desktops, mainframes, bare metal, IoT, and VMs.
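At announcement time, a LinuxKit system was defined by a YAML file naming the kernel plus the handful of components to include; the moby tool then assembled that file into a bootable image. A minimal sketch of such a file follows, with image names and tags loosely modeled on the LinuxKit examples (they are illustrative assumptions, not authoritative):

```yaml
kernel:
  image: "linuxkit/kernel:4.9.x"      # the minimal kernel (tag illustrative)
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:latest              # minimal init and containerd plumbing
onboot:
  - name: dhcp
    image: "linuxkit/dhcpcd:latest"   # one-shot container run at boot
services:
  - name: web
    image: "nginx:alpine"             # long-running service container
```

Running something like `moby build example.yml` would then turn this assembly into images bootable on the targets listed above.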
The project will be managed by the Linux Foundation, which is only fitting.
So what about Alpine, the minimal Linux that's at the heart of Docker? Docker's security director, Nathan McCauley, said that "LinuxKit's roots are in Alpine." The company will continue to use it for Docker.

Today we launch LinuxKit - a Linux subsystem focussed on security. pic.twitter.com/Q0YJsX67ZT
— Nathan McCauley (@nathanmccauley) April 18, 2017

So what does this have to do with Moby?
How LinuxKit relates to Moby
If you're salivating at the idea of building your own Linux distribution, take a deep breath. LinuxKit is an assembly within Moby.
So if you want to use LinuxKit, you need to download and install Moby, then use it to build your LinuxKit pieces.
So there you have it. You now have the ability to build your own Linux system and your own containerization system. But it's definitely not for the faint of heart.
Resources

Wait – we can explain, says Moby, er, Docker amid rebrand meltdown • The Register
Moby, LinuxKit Kick Off New Docker Collaboration Phase | Software | LinuxInsider
Why Docker created the Moby Project | CIO
GitHub – linuxkit/linuxkit: A toolkit for building secure, portable and lean operating systems for containers
Docker LinuxKit: Secure Linux containers for Windows, macOS, and clouds | ZDNet
Announcing LinuxKit: A Toolkit for building Secure, Lean and Portable Linux Subsystems – Docker Blog
Stephen Foskett on Twitter: "More about LinuxKit DockerCon https://t.co/TfRJ47yBdB"
Introducing Moby Project: a new open-source project to advance the software containerization movement – Docker Blog
DockerCon 2017: Moby's Cool Hack sessions – Docker Blog

The post OK, I give up. Is Docker now Moby? And what is LinuxKit? appeared first on Mirantis | Pure Play Open Cloud.
Quelle: Mirantis