Collaboration is king at Cloud Foundry Summit EU

When businesses collaborate on open technology projects, everyone wins. That was the prevailing message throughout the Cloud Foundry Summit in Frankfurt, Germany.
Operators, developers, users and cloud providers gathered to share best practices and reflect on the state of this growing community. In the two years since the Cloud Foundry Foundation was launched, the community has grown tremendously, as these highlights show:

More than 31,000 code commits
2,400-plus code contributors
More than 130 core contributors
65 member companies
17 new member companies in 2016
195 user groups
53,050 individuals
Contributors from 132 cities

Cloud Foundry Foundation CEO Sam Ramji called open source collaboration “a positive-sum game,” meaning that just by participating, members inherently benefit. “The more people who play, the more we win,” he said. “The more you give, the more that is available to everyone.”
Ramji also said that this is “the beginning of a 20-year revolution around what cloud platforms can be.”
It’s ultimately up to the community and its wide stakeholder base to ensure that the revolution is a productive one.
IBM Bluemix continues to grow
IBM offers the world’s largest Cloud Foundry environment with its IBM Bluemix platform. It was on full display during the conference in breakout sessions and even on the mainstage.
Michael “dr.max” Maximilien, a scientist, architect and engineer with the IBM Bluemix team, joined Simon Moser, an IBM senior technical staff member, during the opening keynote to provide an overview of some of the lessons they’ve learned from working in a Cloud Foundry environment.

“Embrace the weirdness.” @mosersd & @maximilien share lessons learned from @IBMBluemix at Summit. pic.twitter.com/kJXklTivQX
— IBM Cloud (@IBMcloud) September 27, 2016

The conversation continued with a number of breakout sessions highlighting emerging technologies, particularly OpenWhisk, IBM’s open source serverless offering. Maximilien told the crowd in his breakout session that OpenWhisk continues the IBM tradition of launching exciting new open tech projects.
“We want to help lead the serverless movement,” he said. “Think of OpenWhisk as a push in that direction.”
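For readers new to the model, an OpenWhisk action is essentially a small function the platform invokes on demand with a dictionary of parameters, returning a JSON-serializable result. The sketch below shows a minimal Python action; the greeting logic and parameter names are invented for illustration.

```python
# A minimal OpenWhisk-style Python action: the platform calls main()
# with a dict of parameters and expects a JSON-serializable dict back.
def main(params):
    name = params.get("name", "serverless world")
    return {"greeting": "Hello, %s!" % name}

if __name__ == "__main__":
    # Locally we can just call main() directly to see the result.
    print(main({"name": "OpenWhisk"}))
```

On an OpenWhisk deployment, a function like this would typically be registered with the `wsk` CLI (e.g. `wsk action create hello hello.py`) and invoked per request, with no server for the developer to manage.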
Kim Bannerman, who leads the Technical Advocacy and Community team inside the Office of the CTO at IBM Blue Box, hosted a panel on serverless technology that featured Ruben Orduz and Tyler Britten, both technical advocates for IBM Blue Box, along with Casey West and Kenny Bastani of Pivotal.
It was clear that we’re still in early days for this technology, as much of the conversation revolved around the question, “What is serverless?” It will be some time before we start to see real-world use cases and more enterprises adopting it. Still, its potential is clear.
A few of the highlights from that session:

Is it Functions as a Service? Event-driven computing? At CloudFoundry Summit, the serverless discussion goes beyond buzzword. pic.twitter.com/pMpR4DeBZB
— IBM Cloud (@IBMcloud) September 27, 2016

Closing the gender gap
One noteworthy topic strung throughout the conference was the gender gap across the IT profession. While the industry is doing a better job of welcoming women into what’s been a traditionally male-dominated sector, there’s still a long way to go in hiring more female developers, ensuring equal pay and seeing more women at the executive level.
On Wednesday, Ursula Morgenstern, global head of consulting and systems integration at Atos, took to the mainstage to deliver a hopeful message that could represent the catalyst that brings more women into the field.

Problems exist at all levels: entering IT, being stuck in the middle and not getting to the top CloudFoundry @u_morgen pic.twitter.com/uVi2bgOqhC
— Paula Kennedy (@PaulaLKennedy) September 28, 2016

“It’s not just about gender. Ethnically diverse companies outperform their competitors by 35%” - @u_morgen CloudFoundry pic.twitter.com/37c5YJQroF
— Cloud Foundry (@cloudfoundry) September 28, 2016

Later that day, IBM sponsored a diversity luncheon that brought together Cloud Foundry community members to discuss challenges and potential solutions for building a more inclusive IT industry.
Moving forward
As the Cloud Foundry community looks toward the future, three of its leaders (Jason McGee, VP and CTO of IBM Cloud Platform; Duncan Johnston-Watt, CEO of Cloudsoft; and Stormy Peters, VP of Developer Relations at Cloud Foundry) explained what members must do to advance the cause and promote more interoperability and cooperation between foundations.

The post Collaboration is king at Cloud Foundry Summit EU appeared first on Cloud computing news.
Source: Thoughts on Cloud

Streaming video is disrupting spectator sports

This year’s Olympics marked a leap in how we watch sports. The on-demand, always-on habit that disrupted television was mirrored in this year’s Olympic games. NBC streamed 3.3 billion minutes of Rio coverage, of which more than 2.7 billion minutes were live-streamed — a U.S. event record, according to the Sports Video Group. Just as HBO Go/HBO Now streamed Game of Thrones’s season 6 premiere to millions of viewers, the Games saw over 100 million people streaming video of the action in Rio.
This year’s live stream surpassed traffic numbers from the London and Sochi Olympics combined in just over a week — a powerful signal to the streaming video industry both around sports and other primetime content. Streaming video has brought the gametime experience, from tailgate to recap, online.
New players in streaming video
Streaming live sports existed prior to the Olympics, with fans tuning into everything from cricket to Gaelic football. Only in the last few years have the biggest sports in the world become available. Yahoo streamed its first NFL game last year, which drew 15.2 million unique viewers.
Until recently, media industry experts viewed sports as a way to prevent cord cutting (i.e. cancelling traditional TV or satellite services). Controlling sports, it seemed, would allow cable providers and major television networks to maintain fans’ subscriptions.
But in the meantime, non-media companies have joined the streaming video fray. Twitter’s Periscope service will stream every Thursday Night Football game this year. Sports media analyst Leslie Gittess says moves like this will ultimately change how sports are watched.
“The demographic will shift younger and we will see views increase exponentially,” she says. “Viewers will have access to their Snapchats, Twitter feeds, Facebook news, and Instagram accounts as they watch, and the data and information accompanying the stream will be as comprehensive as the viewer wants.”
The wide world of multiscreen streaming sports
Streaming video is drawing new categories of investors because there’s money to be made, and partnerships like the NFL’s deal with Twitter will continue to emerge. Netflix, Amazon, HBO and Hulu have already capitalized on on-demand streaming.
According to a 2015 Nielsen report, more than 40 percent of U.S. households have access to a streaming service and 14 percent have two or more subscriptions. Already, sports fans are willing to pay extra for access to streams, with more than 12 percent of sports viewers now saying they would pay more than $20 a month to stream a single sport.
“Online streaming of live professional sports has made it to prime time. I see Apple, Google [YouTube], Netflix, Facebook and Snapchat entering the bidding on the next wave of live professional and college sports rights,” Gittess says. “It has made sense for so long. Enjoy the game exactly as you want — this is the future!”
Find out more about IBM Cloud video solutions.
The post Streaming video is disrupting spectator sports appeared first on Cloud computing news.
Source: Thoughts on Cloud

What’s the big deal about running OpenStack in containers?

The post What’s the big deal about running OpenStack in containers? appeared first on Mirantis | The Pure Play OpenStack Company.
Ever since containers began their meteoric rise in the technical consciousness, people have been wondering what it would mean for OpenStack. Some of the predictions were dire (that OpenStack would cease to be relevant), some were more practical (that containers are not mini VMs, and anyway, they need resources to run on, and OpenStack still existed to manage those resources).
But there were a few people who realized that there was yet another possibility: that containers could actually save OpenStack.
Look, it’s no secret that deploying and managing OpenStack is difficult at best, and frustratingly impossible at worst. So what if I told you that using Kubernetes and containers could make it easy?
Mirantis has been experimenting with container-based OpenStack for the past several years, since before it was “cool”, and lately we’ve settled on an architecture that lets us take advantage of the management capabilities and scalability that come with the Kubernetes container orchestration engine. (You might have seen the news that we’ve also acquired TCP Cloud, which will help us jump our R&D forward by about nine months.)
Specifically, using Kubernetes as an OpenStack underlay lets us turn a monolithic software package into discrete services with well-defined APIs that can be freely distributed, orchestrated, recovered, upgraded and replaced, often automatically, based on configured business logic.
That said, it’s more than just dropping OpenStack into containers, and talk is cheap. It’s one thing for me to say that Kubernetes makes it easy to deploy OpenStack services. And frankly, almost anything would be easier than deploying, say, a new controller with today’s systems.
But what if I told you you could turn an empty bare metal node into an OpenStack controller just by adding a couple of tags to it?
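The tag-driven idea can be illustrated in miniature: Kubernetes-style scheduling matches a service’s node selector against node labels, so tagging a node makes it eligible to run control-plane services. The node names, label key and scheduling logic below are a toy sketch, not the actual Mirantis or Kubernetes implementation.

```python
# Toy illustration of Kubernetes-style label matching: a service is
# placed on any node whose labels satisfy the service's node selector.
def nodes_for(selector, nodes):
    return sorted(name for name, labels in nodes.items()
                  if all(labels.get(k) == v for k, v in selector.items()))

nodes = {
    "node-1": {"openstack-control-plane": "enabled"},
    "node-2": {},  # an empty bare metal node, not yet tagged
}

selector = {"openstack-control-plane": "enabled"}
print(nodes_for(selector, nodes))   # only node-1 qualifies so far

# "Adding a couple of tags" to node-2 makes it a controller candidate.
nodes["node-2"]["openstack-control-plane"] = "enabled"
print(nodes_for(selector, nodes))   # now both nodes qualify
```

In a real cluster the equivalent step would be labeling the node (e.g. with `kubectl label node`) and letting the orchestrator reconcile placement automatically.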
Have a look at this video (you’ll have to drop your information in the form, but it just takes a second):
Containerizing the OpenStack Control Plane on Kubernetes: auto-scaling OpenStack services
I know, right? Are you as excited about this as I am?
Source: Mirantis

4 steps to set up a high-performance computing cloud instance

In recent years, an abundance of data has presented new challenges to companies.
Increasingly, they’ve been turning to high-performance computing (HPC) to tackle these challenges.
HPC is the process of aggregating computing power to accelerate system performance to solve large computation problems in science, engineering or business. The infrastructure for high-performance computing can be costly, so many have decided to host HPC in the cloud.
The steps below show you how to define an HPC cloud instance on SoftLayer, the IBM infrastructure-as-a-service (IaaS) platform.
1. Identify your workload and business requirements.
Before setting up an HPC environment, first consider the workload that you’ll host in the cloud. Then consider the business requirements of the solution. Based on these requirements, the infrastructure team can develop the underlying SoftLayer infrastructure needed to harness the power of HPC in the cloud: a solution that will help the business meet its identified and targeted objectives.
2. Set up the HPC-on-cloud instance.
Now that you’ve identified the technical and non-functional requirements of the workload, it’s time to set up the SoftLayer HPC environment. IBM can assist you with the design and development of your SoftLayer HPC environment, or you can use the SoftLayer self-service portal.
For example, let’s assume the infrastructure team has identified the need for an HPC instance to support its Hadoop workload. Though SoftLayer offers both bare metal and virtual compute nodes, in this case you should select bare metal (dedicated) servers with NVIDIA Tesla GPUs. The bare metal server can then offload the Hadoop workload’s compute-intensive operations to the GPUs, accelerating processing performance.
You have the option of hourly or monthly pricing, and you will select the number of bare metal servers you need to meet the operational requirements of the identified workload.

3. Configure the servers.
During server configuration and setup, you can select the operating system, memory and storage. These selections should be based on application workload and technical requirements. You can also select the location of the data center where the high-performance computing instance will be hosted. If the workload requires higher throughput with low latency, you should add InfiniBand, which can support up to 56 Gbps of throughput for your HPC on SoftLayer compute instance.

One advantage of the HPC SoftLayer environment is the ability to “auto scale.” This feature allows you to design a Hadoop workload based on steady-state utilization requirements. During peak workload compute utilization, the HPC compute instance will automatically spin up additional compute resources to meet the processing demands of the workload. After utilization has returned to the defined steady state and the “cooldown” period has been met, the system will automatically de-provision the additional resources.
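The auto-scale behavior described above amounts to a control loop: add capacity when utilization exceeds a threshold, and release it only after utilization returns to steady state and a cooldown window passes. The sketch below illustrates that policy; the thresholds, node counts and cooldown value are illustrative assumptions, not SoftLayer’s actual defaults.

```python
class AutoScaler:
    """Toy auto-scale policy: add nodes above a utilization threshold,
    remove them only after utilization drops below a lower threshold
    and a cooldown period has elapsed since the last scale-up."""

    def __init__(self, min_nodes=4, max_nodes=16,
                 scale_up_at=0.80, scale_down_at=0.40, cooldown=300):
        self.nodes = min_nodes
        self.min_nodes, self.max_nodes = min_nodes, max_nodes
        self.scale_up_at, self.scale_down_at = scale_up_at, scale_down_at
        self.cooldown = cooldown
        self.last_scale_up = float("-inf")  # no scale-up has happened yet

    def observe(self, utilization, now):
        """Feed one utilization sample (0.0-1.0) at time `now` (seconds)."""
        if utilization > self.scale_up_at and self.nodes < self.max_nodes:
            self.nodes += 1                 # spin up an extra compute node
            self.last_scale_up = now
        elif (utilization < self.scale_down_at
              and self.nodes > self.min_nodes
              and now - self.last_scale_up >= self.cooldown):
            self.nodes -= 1                 # de-provision after cooldown
        return self.nodes

scaler = AutoScaler()
print(scaler.observe(0.90, 0))     # peak load: scales up to 5 nodes
print(scaler.observe(0.30, 100))   # load dropped, but still in cooldown: 5
print(scaler.observe(0.30, 400))   # cooldown met: back down to 4 nodes
```

The cooldown guard prevents “flapping”: a brief dip in utilization right after a scale-up does not immediately tear the new capacity back down.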
4. Reap the benefits.
By defining your HPC cloud instance on SoftLayer, you receive the benefits of a consumption model that gives you the flexibility to increase or decrease resources based on the changing demands of your business. With SoftLayer, you can take full advantage of the benefits of hosting HPC on the cloud. Check out the post “Empowering high-performance computing in the cloud” to learn more.
To learn more about HPC and other technology and features available with SoftLayer, check out our Cloud How-To webcast series.
The post 4 steps to set up a high-performance computing cloud instance appeared first on Cloud computing news.
Source: Thoughts on Cloud

Hybrid cloud through the IBM Edge kaleidoscope

IBM Edge 2016 — a global conference that provides a platform for IT leaders to design, build and deliver infrastructure in the cloud — came, mesmerized, and conquered.
The hybrid cloud team engaged with clients, talking with them about new technologies and inspiring them in thought leadership sessions with our subject matter experts, demos, client stories and more.
The event kicked off with general sessions full of stories about how hybrid cloud is helping organizations. In one session, Tom Rosamilia, Senior Vice President of IBM Systems, spoke about how organizations lead by example with hybrid cloud. Red Bull Racing, the F1 double world champions, took the stage and told us how they roared past their competitors using cutting-edge technology from IBM.
The IBM Cloud Integration booth was where all the action was at IBM Edge; visitors could experience the technology that makes hybrid cloud a reality. There, the IBM Cloud Orchestrator team showed clients exactly how Cloud Orchestrator lets them manage public, private and hybrid clouds with a rapid configuration, provisioning and deployment process. Cloud Orchestrator enables organizations to go live sooner as they develop and test applications, integrating various tools with cloud services.
Here’s a quick look at what it can do:

Rapidly accelerate delivery times. It dramatically improves service delivery and cuts provisioning times from weeks to minutes.
Increase profits by reducing costs. Cloud Orchestrator gets rid of process-heavy management tools and automates manual workloads.
Innovate and lead with confidence. Users can leverage public cloud services to innovate while keeping business policies intact within IT services.

IBM Cloud Orchestrator transforms the IT services wing into a self-service organization. One customer that saw a massive decrease in delivery time was American Greetings; Bob Hunt, its Enterprise Systems Management Manager, said the company “cut server provisioning times from two weeks to 20 minutes.”
Likewise, Paul Lu, CEO of Wuxi Lake Cloud, said, “Previously, our quickest deployment was two weeks, but some took as long as two months. Now we’re doing it in a week, and we’re anticipating three-day deployments in the future. We’ve reduced system recovery times by about 75 percent.”
IBM Cloud Orchestrator has become a leader in hybrid cloud management by automating cloud services and accelerating agile IT service delivery.
Learn more about IBM Cloud Orchestrator.
 
The post Hybrid cloud through the IBM Edge kaleidoscope appeared first on Cloud computing news.
Source: Thoughts on Cloud

Migrating On-premise VMs to Azure

In 2008, the company I worked for at the time finally felt that virtualization was ready to host production workloads. We stood up a two-node VMware ESX 3.5 cluster and started to migrate a handful of Linux, Windows and Novell Netware (!) servers from bare metal to virtual. Even with VMware’s migration tooling, it was still a very manual process. I scripted as much as I could, but my higher-ups never felt good about farming the process out to lower level resources. It was always me who was on the hook for physical-to-virtual migrations in after-hours maintenance windows.
But that was a lifetime ago in terms of technology, and long before today’s DevOps mentality and tooling existed.  I don’t hear as many customers planning P2V (Physical-to-Virtual) migrations these days.  Instead, they’re asking about V2V (Virtual-to-Virtual), or to be more specific, how can they move on-prem workloads to the cloud: V2C (Virtual-to-Cloud).  Quite a few times, I’ve been asked “Can CloudForms help me migrate VMs from my internal virtual infrastructure to the cloud?”
The answer I usually give is, “Not out of the box, but with CloudForms Automation and Red Hat Consulting services, it’s definitely possible.”  No customer ever really pursued this beyond the initial inquiry, however. My own curiosity and interest in Microsoft Azure lead me to try to actually prove this concept out. I submitted a proposal for Red Hat Summit for this year on automating on-prem to Azure migrations using Red Hat CloudForms, which was accepted. I wanted to demonstrate that CloudForms can do just about anything you can think of, with your imagination and knowledge of Ruby being the only real limiting factors.
All of the CloudForms automation methods and Ansible playbooks required to enable the migration of on-premise VMs running on Hyper-V to Azure are available on GitHub. There is also a video available of the process on YouTube.
There are two main challenges when it comes to performing any V2V migration: dealing with the differences in virtual hardware, and converting between different virtual disk formats. For the first proof-of-concept, I decided to take a bit of a shortcut by using Microsoft Hyper-V as my on-prem infrastructure source, and Microsoft Azure as my cloud destination. We are seeing a lot of interest in Azure as a cloud provider since we added support in CloudForms 4.0. Since there is a lot of common ground between Azure and Hyper-V, it was logical to start with these two platforms. They both use a similar virtual disk format; only a bit of metadata needs to be removed from a Hyper-V disk before it can be uploaded and used as an image in Azure. They also use the same virtual hardware, so there is no need to worry about driver and kernel module changes.
Here is a workflow of the migration process:

Selecting a VM to migrate
Retrieving Azure information
Preparing the VM
Converting the virtual disk to VHD
Uploading the converted disk
Provisioning the VM in Azure

Selecting a VM to Migrate
The process is initiated by selecting a VM from your on-prem infrastructure. I used SCVMM/Hyper-V for this test, and I hope to use the same process for VMware and Red Hat Virtualization in the near future. I also tested with Red Hat Enterprise Linux 7 as the guest operating system, but hope to support other guest operating systems in the future.
Select ‘Migrate to Azure’ from the ‘Migration’ button, a custom button on VMs that leads to the Azure migration dialog.

Retrieving Azure Information
The Azure migration dialog uses several automation methods to retrieve information from the Azure provider in CloudForms.  The basic workflow is:

The Azure credentials and region are derived from the provider information in CloudForms
The resource group list is pulled via the Azure Resource Manager API, using CloudForms’ native azure-armrest Ruby gem
Once one of the resource groups is selected, the list of storage accounts, networks, and subnets are refreshed
The user selects the OS type (Windows or Linux), the instance size, enters a password for the “clouduser” account, and a name for the network interface and public IP address resources

Preparing the VM
When the submit button is clicked, CloudForms leverages its Ansible Tower integration to launch a job template that removes VM specific information (e.g. udev rules, SSH host keys, ethernet configuration) and installs the Windows Azure Linux Agent, as required to run on Azure. Similarly, Windows VMs have sysprep run against them to remove machine specific information.

Converting the Virtual Disk to VHD
The VM is shut down as the last task of the Ansible playbook. At this point, the virtual disk can be converted to VHD format to run on Azure. In the case of Hyper-V, this means CloudForms starts a WinRM remote session against the Hyper-V host the VM was running on. Using PowerShell, the path to the virtual hard disk is derived, and the disk is converted from VHDX to VHD format; in effect, some extra metadata is stripped in going from VHDX to VHD. Upon completion, the file is ready for upload to the selected storage account.
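The conversion itself reduces to Hyper-V’s Convert-VHD cmdlet, run over the WinRM session. The helper below just assembles that command string, the way an automation method might before shipping it to the host; the paths are placeholders, and using a fixed-size VHD reflects Azure’s requirement for fixed-format disks.

```python
# Assemble the PowerShell Convert-VHD invocation to be executed remotely.
# Azure expects fixed-size VHDs, hence -VHDType Fixed.
def convert_vhd_command(src_vhdx, dest_vhd, vhd_type="Fixed"):
    return ('Convert-VHD -Path "%s" -DestinationPath "%s" -VHDType %s'
            % (src_vhdx, dest_vhd, vhd_type))

print(convert_vhd_command(r"D:\VMs\web01.vhdx", r"D:\Export\web01.vhd"))
```

In practice this string would be passed to the WinRM session (for example via a run-PowerShell call in whatever WinRM client the automation uses) and the resulting .vhd file picked up for upload.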
Uploading the Converted Disk
The upload to Azure uses the same WinRM session on the Hyper-V host, which requires a one-time installation of the Azure Resource Manager PowerShell cmdlets. The migration method requires the Azure session credentials to be saved in the file:
C:\creds\azure.txt

Provisioning the VM in Azure
The time required to upload the image depends on the available Internet bandwidth. Once the upload finishes, a new public IP resource is created, along with a new network interface, and the two resources are associated with one another. With a functioning network interface, a new instance is created by cloning the uploaded virtual disk.
The process takes approximately two to three minutes and results in a new instance ready to SSH or RDP into. This instance is now listed in the Azure inventory in CloudForms.

Conclusion
In this article, we looked at how CloudForms can reduce a multi-step V2C process to a couple of clicks from a dialog.  This allows IT teams to take complex processes that were previously entrusted to the highest level engineers and put them in the hands of lower level administrators. The Ansible Tower integration added since CloudForms 4.1 extends this even further.
A video of the Azure migration process is available, and you can keep up with the development of this automation method on GitHub.
Source: CloudForms

Let’s meet in Barcelona at the OpenStack Summit!

The post Let’s meet in Barcelona at the OpenStack Summit! appeared first on Mirantis | The Pure Play OpenStack Company.

As we count down the days to the OpenStack Summit in Barcelona on October 24-28, we’re getting ready to share memorable experiences, knowledge, and fun!

Come to booth C27 to see what we’ve built with OpenStack, and join in an “Easter Egg Hunt” that will test your observational skills and knowledge of OpenStack, containers, and Mirantis swag from prior summits. If you find enough Easter eggs, you’ll be entered in our prize drawing for a $300 Visa gift card or an OpenStack certification exam from our OpenStack Training team ($400 value). And as always, we’re giving away more of the awesome swag you’ve come to expect from us.

If you’d like to set up some time at the summit to talk with our team, simply contact us and we’ll schedule a meeting.

REQUEST A MEETING

 
Free Training
Mirantis is also providing two FREE training courses based on our standard industry-leading curriculum. If you’re interested in attending, please follow the links below to register:

Tuesday, October 25th: OpenStack Fundamentals
Wednesday, October 26th: Introduction to Kubernetes & Docker

 
Mirantis Presentations
Here’s where you can find us during the summit…
TUESDAY OCTOBER 25

Tuesday, 12:15pm-12:55pm
Level: Intermediate
Chasing 1000 nodes scale
(Dina Belova and Alex Shaposhnikov, Mirantis; Inria)

Tuesday, 12:15pm-12:55pm
Level: Intermediate
OpenStack: you can take it to the bank!
(Ivan Krovyakov, Mirantis; Sberbank)

Tuesday, 3:05pm-3:45pm
Level: Intermediate
Live From Oslo
(Oleksii Zamiatin, Mirantis; EasyStack, Red Hat, HP)

Tuesday, 3:55pm-4:35pm
Level: Intermediate
Is your cloud forecast a bit foggy?
(Oleksii Zamiatin, Mirantis; EasyStack, Red Hat, HP)

Tuesday, 5:05pm-5:45pm
Level: Intermediate
Kerberos and Health Checks and Bare Metal, Oh My! Updates to OpenStack Sahara in Newton.
(Nikita Konovalov and Vitaly Gridnev, Mirantis; Red Hat)

WEDNESDAY OCTOBER 26

Wednesday, 11:25am-12:05pm
Level: Intermediate
The race conditions of Neutron L3 HA’s scheduler under scale performance
(Ann Taraday and Kevin Benton, Mirantis; Red Hat)

Wednesday, 11:25am-12:05pm
Level: Advanced
The race conditions of Neutron L3 HA’s scheduler under scale performance
(Florin Stingaciu and Shaun O’Meara, Mirantis)

Wednesday, 12:15pm-12:55pm
Level: Beginner
The Good, Bad and Ugly: OpenStack Consumption Models
(Amar Kapadia, Mirantis; IDC, EMC, Canonical)

Wednesday, 12:15pm-12:55pm
Level: Intermediate
OpenStack Journey in Tieto Elastic Cloud
(Jakub Pavlík, Mirantis TCP Cloud; Tieto)

Wednesday, 2:15pm-3:45pm
Level: Intermediate
User Committee Session
(Hana Sulcova, Mirantis TCP Cloud; Comcast, Workday, MIT)

Wednesday, 3:55pm-4:35pm
Level: Beginner
Lessons from the Community: What I’ve Learned As An OpenStack Day Organizer
(Hana Sulcova, Mirantis TCP Cloud; Tesora, GigaSpaces, CloudDon, Intel, Huawei)

Wednesday, 3:05pm-3:45pm
Level: Beginner
Glare: a unified binary repository for OpenStack
(Mike Fedosin and Kairat Kushaev, Mirantis)

Wednesday, 3:55pm-4:30pm
Level: Intermediate
OpenStack Requirements: What we are doing, what to expect and what’s next
(Davanum Srinivas, Mirantis; Red Hat)

Wednesday, 3:55pm-4:35pm
Level: Intermediate
Is OpenStack Neutron production ready for large scale deployments?
(Oleg Bondarev, Satish Salagame and Elena Ezhova, Mirantis)

Wednesday, 5:05pm-5:45pm
Level: Beginner
How Four Superusers Measure the Business Value of their OpenStack Cloud
(Kamesh Pemmaraju and Amar Kapadia, Mirantis)

THURSDAY OCTOBER 27

Thursday, 9:00am-9:40am
Level: Intermediate
Sleep Better at Night: OpenStack Cloud Auto­-Healing
(Mykyta Gubenko and Alexander Sakhnov, Mirantis)

Thursday, 11:00am-11:40am
Level: Advanced
OpenStack on Kubernetes: Lessons learned
(Sergey Lukjanov, Mirantis; Intel, CoreOS)

Thursday, 11:00am-11:40am
Level: Intermediate
Unified networking for VMs and containers for OpenStack and k8s using Calico and OVS
(Vladimir Eremin, Mirantis; Intel)

Thursday, 11:50am-12:30pm
Level: Intermediate
Kubernetes SDN Performance and Architecture Evaluation at Scale
(Jakub Pavlík and Marek Celoud, Mirantis TCP Cloud)

Thursday, 3:30pm-4:10pm
Level: Advanced
Ironic Grenade: Blowing up our upgrades.
(Vasyl Saienko, Mirantis; Intel)

Thursday, 3:30pm-4:10pm
Level: Beginner
Application Catalogs: understanding Glare, Murano and Community App Catalog
(Alexander Tivelkov and Kirill Zaitsev, Mirantis)

Thursday, 5:30pm-6:10pm
Level: Beginner
What’s new in OpenStack File Share Services (Manila)
(Gregory Elkinbard, Mirantis; NetApp)
Source: Mirantis