Most enterprises tailor hybrid cloud to their specific needs

CIOs, CTOs and all line-of-business leaders looking to gain differentiation and strategic advantage: you've come a long way in the last four years when it comes to cloud technology.
That's one of the key takeaways from a new IBM Institute for Business Value report, Tailoring Hybrid Cloud.
My co-authors — IBMers Justin Chua, Robert Freese, Anthony Karimi and Julie Schuneman — and I wanted to answer a specific question: how are organizations currently differentiating themselves using cloud? To find out, we interviewed 30 executives and surveyed 1,000 global respondents from 18 industries. Sixty-one percent of respondents held the title of CIO, CTO or head of IT.
We learned some interesting things:

In 2012, cloud was still viewed as something "special." No longer. Seventy-eight percent of the executives we spoke with described their cloud initiatives as coordinated or fully integrated.
However, even with the rising use of cloud overall, almost half of computing workloads are expected to remain on dedicated, on-premises servers.

The implications of this became clear as we spoke to executives. Each enterprise is trying to tailor hybrid cloud to what best suits it.
Most often, it's a blend of public cloud, private cloud and traditional IT services. For many of these enterprises, finding the right cloud technology mix starts with deciding what to move to the cloud and addressing the challenges that can affect migration.
Our study also found that innovation advantages can be gained through rapid experimentation, strategic application programming interfaces (APIs) and extended access to external talent and technologies.
Conducting rapid experimentation gives innovative organizations the crucial ability to test and fail quickly. Cloud, with its on-demand and scalable attributes, enables this sort of nimble development and testing. What’s more, quick and automated resource provisioning can shorten development time and reduce time to market.
We discovered that executives achieved the strongest results, true strategic advantage and differentiation, by integrating cloud initiatives company-wide and tapping external resources for access to additional skills and greater efficiency.
Probably the most important takeaway for organizations that are just beginning to tap into cloud technology, or are ready to take the next step in digital transformation, comes by way of three questions:

How is your organization planning to incorporate hybrid cloud into your overall transformation strategy?
What is the optimal combination of cloud and on-premises IT investments for your organization? What factors will you regularly monitor to identify needed changes over time?
How effective are you in tapping into external resources in assessing and implementing cloud-based solutions?

Cloud can be the centerpiece of an overall organizational transformation. Potential business impacts and the associated financial implications require ongoing scrutiny. During each stage of cloud adoption, combine the insights of business and IT. A tailor-made environment for your organization will be possible when IT employees truly understand what the business needs and line-of-business employees know what technologies/IT can do for them.
To learn more, read the IBM Institute for Business Value report, Tailoring hybrid cloud: designing the right mix for innovation, efficiency and growth.

Quelle: Thoughts on Cloud

Report: IBM public cloud empowers developers

The latest edition of Forrester Research's Forrester Wave report, which evaluates global public cloud platforms, characterized IBM as a "strong performer" in public cloud.
IBM earned "the highest possible score for its private and hybrid cloud strategy as well as the top ranking for IBM's infrastructure services," eWeek reports. Forrester's study used 34 criteria to evaluate eight global cloud platform service providers.
In particular, IBM empowers enterprise developers with the tools they need to build applications, Forrester’s report contends. It cites “platform configuration options, app migration services, cognitive analytics services, security and compliance certifications, complex networking support, growing partner roster and native DevOps tools” as strengths.
In a statement, Bill Karpovich, general manager of the IBM Cloud Platform, said:
We believe being recognized as a strong performer in Forrester's latest Wave report reinforces what we hear from our clients every day—that cloud is not "one size fits all." Enterprises require choice and expertise to evolve their diverse application portfolios, and IBM Cloud was designed to deliver on those core tenets.
For more about the Q3 Forrester Wave study, check out eWeek’s full report.
Quelle: Thoughts on Cloud

Welcome to the new Thoughts on Cloud

Notice anything different?
Today we launch a brand new design for Thoughts on Cloud, with several new features and navigational tools we hope will improve your reading experience. The changes are intended to make whatever you're looking for a snap to find, easy to share, enjoyable to read and open to your feedback.
The first thing you surely noticed is our new user interface, which we hope you find aesthetically pleasing and intuitive.
Here's what won't be changing: our content. We will continue to bring you the best in thought leadership and analysis from within IBM Cloud and elsewhere on topics within the sphere of cloud computing, including hybrid cloud, security, app development, cognitive computing, storage, mobile, big data and more. If any of those topics are of particular interest to you, we have categorized all our posts by topic for easy access. Simply hover over the dropdown menu at the top of the page for a list of categories.
If you click on a specific post of interest, scroll to the bottom to find three recommended, related articles. If you'd like to know what's popular on the particular day you're browsing the site, that's available in the sidebar to the right of the post text. Also in that sidebar is a quick, real-time look at the IBM Cloud Twitter feed so you can see the latest in cloud news.
We have opened up comments on many of our posts, so please join in our conversation about what's new in cloud computing. Thanks for reading Thoughts on Cloud. Stay tuned for much more about the world of cloud computing.
The post Welcome to the new Thoughts on Cloud appeared first on Cloud Computing.
Quelle: Thoughts on Cloud

Using IBM Cloud, KLM Open mobile app gives fans real-time access

Every September, 40,000-plus fans and visitors come to the KLM Open, the oldest golf tournament on the European Tour. To keep all these visitors engaged over the four-day event, the KLM Open strives each year to improve the live experience for fans by adding new features to its mobile application.
Golf has never been the kind of spectator sport that soccer or baseball are, because fans are not allowed to cheer while the golfers play (or make any noise), and, unlike stadium sports, the players do not perform in a confined viewing area. Fans must know where and when their favorite player will be on the course at any given time. They watch players when they tee off and then watch the players walk the course for three or four hours. The fans might catch the players when they arrive at the last hole, but even that is not certain because it’s hard to predict what time they will be there.
The KLM Open promoter and organizer, TIG Sports, worked with IBM Cloud Services to develop an interactive mobile app with IBM MobileFirst software hosted on IBM Cloud. The team developed the solution’s back end using the IBM Bluemix platform and IBM DevOps for Bluemix services.
Each flight, or group, of golfers carries an IBM-developed GPS tracker during the tournament. Location data is transmitted to the IBM cloud infrastructure and combined with scores and other media content using API Connect, giving fans real-time access to leaderboards, players' locations and maps that show the user's current location and how to get to various points of interest. Golf officials use the app on an iPad, and knowing the location of the players has proven useful: when there is a dispute and a rules official is needed, location data helps the official get there sooner and keep the game moving.
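As a rough illustration of the kind of merge the back end performs, here is a minimal Python sketch that joins per-flight GPS fixes with player scores into a leaderboard payload. The function name and field shapes are hypothetical assumptions, not the actual API Connect schema.

```python
# Hypothetical sketch: join per-flight GPS fixes with scores into the kind
# of payload a leaderboard endpoint could serve. Field names are invented.

def build_leaderboard(positions, scores):
    """positions: {flight_id: (lat, lon)}; scores: {player: (flight_id, strokes)}."""
    board = []
    for player, (flight_id, strokes) in scores.items():
        # A flight without a recent GPS fix simply has no map position yet.
        lat, lon = positions.get(flight_id, (None, None))
        board.append({"player": player, "strokes": strokes,
                      "flight": flight_id, "lat": lat, "lon": lon})
    board.sort(key=lambda entry: entry["strokes"])  # lowest score leads in golf
    return board
```

A client would then render each entry as a leaderboard row and, when coordinates are present, a marker on the course map.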

The app, created in 2014, gets better and better each year. In 2015, there were three times as many downloads as the year before, and it supported as many as 6,000 concurrent users. The app was even modified mid-tournament to address a fan's suggestion and updated overnight with enhanced radio functionality. When users opened the app the next morning, they received a pop-up notification and a prompt to download the update. This MobileFirst direct update capability enables developers to bypass the review and approval cycles of app stores, which would likely take longer than the golf tournament itself. Radio broadcast integration only works with a headset, so the app itself is silent.
The 2015 version of the app was nominated for a Dutch Computable Award in the Cloud category. This year's version of the app was created following IBM's renowned Design Thinking process and is designed using best-in-class IBM iX user interface design guidelines.
Because golf courses are huge and busy and the 2016 tournament will be at a different course than in previous years, finding people will be important. "Find my Friend" is a way for fans to meet up with each other, and will include an option for navigation so they'll even know how to get where they need to go. Additionally, the new version of the app will include the ability for fans to be alerted when their favorite player is going to tee off, giving the fan ample time to get to the hole and actually watch.
Also new in 2016 is gamification at Hole 14, an experience modeled after the Waste Management Phoenix Open's 16th Hole. This is one location where neither the fans nor the app must remain silent; in fact, clapping and shouting are allowed and interaction is expected. The app will be able to sense when a user's mobile device is near the 14th hole, where sounds are permitted. There will be a "Closest to the Pin" contest between spectators seated on three sets of bleachers. They will use the app to predict which golfer will tee the ball closest to the flag. The Watson Internet of Things platform on Bluemix will be used to collect and analyze the data from the thousands of smartphones of visitors placing bets. The KLM Open tournament will provide prizes for the spectators who have the highest scores or streaks (best of three) in predicting the contest. Watson Analytics is used to analyze five years of historical player data to help predict the winner.
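The "near the 14th hole" check can be approximated with a simple geofence: compute the great-circle distance between the device and the hole, and compare it to a radius. The Python sketch below is an assumption about how such a check could work; the coordinates and radius are placeholders, not the tournament's actual values.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    earth_radius_m = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def near_hole_14(lat, lon, hole=(52.2900, 4.5400), radius_m=75.0):
    """Placeholder geofence: True when the device is within radius_m of the hole."""
    return haversine_m(lat, lon, *hole) <= radius_m
```

The app would run such a check on each location update and unmute sounds only while the result is true.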
The KLM Open mobile app has indeed transformed the event for fans from a standing-around-waiting occasion to a more interactive, engaging experience — like soccer or baseball — which is especially attractive to younger fans.
Learn more about IBM Cloud solutions.

Quelle: Thoughts on Cloud

Can a secure private cloud excel in an on-premises VMware environment?

IT departments have a lot on their plates.
A typical IT department managing workloads and systems is measured by key performance indicators (KPIs) including security, addressing service failures, restoring systems quickly, minimizing the mean time to repair, helping improve staff efficiency by being proactive and not reactive, aiming for a faster deployment of services, minimizing migration costs to newer platforms, supporting service-level agreements and efficiently planning for capacity changes.
In enterprises that run VMware, IT staffs are trained to support the KPIs for on-premises production as well as development and test workloads. However, as businesses look to cut costs, many choose hybrid cloud with a dedicated, off-premises private cloud. While the enterprise goal is to reduce capital expenditures (CAPEX), the current ecosystem of cloud provider solutions requires migration efforts, because the source and target cloud architectures are often not based on VMware.
IBM has worked with thousands of clients. We’ve found that workload integration and migration issues are exacerbated by mergers, acquisitions, divestures, unexpected service interruptions, changes in IT productivity, the need for IT agility and when lines of business launch new innovation initiatives.
Enterprises need a cloud offering that:

Leverages the current IT department skillset, tools, processes, VM templates and scripts used on-premises, and offers an option to self-manage.
Allows root access to systems and the hypervisor so system administrators have full control over the environment and only need to open tickets on failure.
Offers data centers across the globe with private, high-speed connections and resources that provide the highest performance.
Has secure and dedicated off-premises private cloud resources supporting VMware software-defined data center (SDDC) architectures.
Meets compliance standards such as SOC 1, 2 and 3 reports, ISO 27001, ISO 27017, ISO 27018, CSA STAR, PCI, HIPAA, Intel Trusted Execution Technology and EU clauses.
Is based on VMware SDDC architecture completely configured on-demand and ready to use.
Can scale quickly, up or down, and allows existing VMware workloads to be lifted and shifted with minimal effort, alleviating the cost of migration.
Allows live migration of workloads from an on-premises environment to an off-premises, dedicated private cloud that delivers mission-critical services on demand.
Supports software-defined networking solutions based on VMware NSX, providing virtually unlimited networking flexibility in defining new VLANs, assigning IP addresses, and establishing network services such as load balancing, firewalls and routers.
Supports backup and disaster scenarios, as well as the ability to set up development, test, training or lab capacity.
Supports VSAN shared storage, providing highly available, high performance, single-tenant storage which grows automatically as additional capacity nodes are added.
Offers additional VMware components (Site Recovery Manager, VMware Integrated OpenStack and so on) procured on a simple per-CPU, per-month basis with no long-term contracts.

What is the solution that addresses these requirements?
While there are offerings that support VMware-based solutions, enterprises should consider providers with a global presence of data centers, high-speed private network connectivity across the globe and consistent delivery.
Learn more about VMware on IBM Cloud.
Quelle: Thoughts on Cloud

Gaming is serious business

The post Gaming is serious business appeared first on Mirantis | The Pure Play OpenStack Company.

G-Core Labs had a problem.

The Luxembourg-based global IT solutions provider, which offers a wide range of services including hosting, CDN, peering network and different levels of support, focuses on creating custom solutions designed to address the specific needs of its clients. Securing the online gaming company Wargaming (the makers of World of Tanks, World of Warships and World of Warplanes) as one of its first customers led G-Core to build an unprecedented network infrastructure to support the explosive growth of Wargaming's World of Tanks audience.

This expertise led G-Core to become the hosting provider of choice for many online gaming clients, but it soon discovered that the demands are intense. Analysts estimated 2015 worldwide revenue for online gaming at $65 billion (almost twice the total revenue for movie theaters), predicting a 12 percent annual growth rate through 2019.

G-Core knew these demands first-hand. The company’s infrastructure had enabled it to contribute to setting several Guinness World Records for concurrent players online, with a total peak of 1,114,000.

The company knew it had to take a step forward if it was going to continue providing this level of service to its customers. It was going to have to start looking at the cloud.

Online gaming and OpenStack cloud
The game development industry is a very complex market, with multiple use cases for cloud environments such as OpenStack. For example:

Cyber-sports need video broadcasting
Game development is extremely resource-heavy; in order to enable developers to exercise their creativity (and get to market fast) their environment must be as agile as possible
In most cases, the game itself is only a portion of the computing resources needed; most games also require a significant web presence for player management, signups, and other marketing   
And of course, game processing, especially for Massively Multiplayer Online Role-Playing Games (MMORPGs)

“We are convinced the key to success in these projects lies not only in the quality of the game itself, but the infrastructure’s ability to evolve together with the project,” said Andre Reitenbach, G-Core Director. “Mirantis OpenStack gives us the flexibility to be even more creative and adapt even faster to players’ wants and needs.”

In G-Core’s case, their biggest concern at the start was securing resources for engineer creativity and marketing.

The challenges of moving forward
G-Core's infrastructure, initially developed to support online gaming, is driven by three main KPIs: low latency, high availability and cost optimization. These parameters pervade many online industries, such as banking or streaming services, expanding the potential client base far beyond gaming.

But moving forward with a cloud architecture is about more than just technical issues. The company would have to overcome business and cultural issues as well, and that meant that the restructuring of business processes had to be planned, together with the technological shift. After all, it’s not enough to implement a new tool; it’s important to understand — and communicate — how it can most effectively solve specific business problems.

For example, consider the change in the internal SLA between DevOps at Wargaming and the Admins at G-Core. The two teams had to communicate and understand what each expected of the other, and how to best take advantage of the new capabilities the cloud environment would bring.

All of this required a partner who could not only provide G-Core with the technology, but also guide it down what would initially be an unfamiliar road.

“On the whole, OpenStack satisfied the requirements,” Andre Reitenbach said, “but we needed a partner that would help build and customize the solution. Mirantis is one of the top contributors to OpenStack, which immediately attracted our attention.  Whenever we dealt with a new vendor, we thought that their experience left much to be desired. We conducted independent testing of Mirantis OpenStack in our datacenter, and the results we got were key.”

G-Core’s journey
Although it’s common to focus on the actual deployment of OpenStack, perhaps the most important part of a cloud project comes long before you ever deploy a single bit. The first step is to figure out just what it is you need, and how to get there.

G-Core was coming from an architecture that had been built on other virtualization environments, and was suffering from high capital expenses and long production times due to manual operations. Even without touching the production game engine, they knew they could do better.

Ultimately the plan was to create four OpenStack private clouds: Staging Trunk for personal virtual sandboxes that developers could use to debug/troubleshoot the code, Staging Stable for pre-release testing of the new software versions, Prod Test for pre-production small scale public A/B testing, and finally Production for publicly available production workloads.  G-Core and Wargaming would also create their own Murano application to provide basic integration of the OpenStack environment with their existing CMDB system so IP addresses and hostnames of the instances would comply with Wargaming policies and get registered in the database.
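The CMDB integration described above can be pictured as a small policy gate: validate the instance's hostname against the naming convention, then record it. The regex and CMDB shape below are illustrative assumptions, not Wargaming's actual policy or the real Murano application code.

```python
import re

# Assumed naming convention: environment prefix, role, two-digit index.
# This pattern is a placeholder, not Wargaming's real policy.
HOSTNAME_RE = re.compile(r"^(stg|prodtest|prod)-[a-z0-9]+-\d{2}$")

def register_instance(cmdb, hostname, ip):
    """Validate hostname against the naming policy, then record it in the CMDB."""
    if not HOSTNAME_RE.match(hostname):
        raise ValueError("hostname %r violates naming policy" % hostname)
    if hostname in cmdb:
        raise ValueError("%r is already registered" % hostname)
    cmdb[hostname] = {"ip": ip}
    return cmdb[hostname]
```

In the real deployment, the Murano application would perform an equivalent check during provisioning so that every instance reaches the database already compliant.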

To start, G-Core and Wargaming have moved the user-facing website and forums for World of Warplanes to this architecture. Ultimately their goal is to make all of G-Core's infrastructure more flexible, but already the project has cut capital expense by half, increased server utilization by 50 percent and reduced time-to-production by replacing manual operations with APIs and automation. The migration to Mirantis OpenStack helped G-Core eliminate routine, labor-intensive operations and allowed the company to fully automate internal processes, giving engineers the resources to focus on implementing new, innovative technologies.

“This use case is yet another demonstration of an acute need for open cloud in the media and entertainment sector,” said Mirantis CMO, Boris Renski. “Big names in the sector, like Disney and Sony Entertainment, have already been vocal about their OpenStack usage as a means of embracing Agile IT, but having legacy-free and natively agile clients of G-Core, such as Wargaming and Hitbox, embrace the technology is a true testament to the disruptive value of OpenStack.”
Looking to the future
Overall, Mirantis OpenStack cost-effectively addresses G-Core’s current pain points and offers a flexible solution to their Online Presence team without compromising on performance and scale, but there was one more piece to the puzzle.

Because G-Core would be operating the cloud themselves, Mirantis offered not only the technical part of the solution, but also training. Company engineers and operators were able to take Mirantis’ OS100 (to gain a detailed understanding of the steps necessary to operate an OpenStack environment) and FUEL100 (for both the theoretical knowledge and hands-on skill set required to operate Fuel to enable the deployment of OpenStack) classes in order to deepen their expertise.

Looking for more?
G-Core, together with its key client, Wargaming, has been building a worldwide solution that meets the highest industry standards. The complete service provided by G-Core, from hosting and CDN to network connectivity and protection from DDoS attacks, enables companies to entrust their IT tasks to trustworthy experts and focus on their core expertise — in this case, game development.

Interested in more information about how OpenStack enabled G-Core to provide the environment Wargaming needed? Read the full case study today.

Tank by Conal Gallagher is licensed under CC BY 2.0.
Quelle: Mirantis

Introducing patches to RDO CloudSIG packages

RDO infrastructure and tooling have been changing and improving with each
OpenStack release, and we now have our own packaging workflow powered by RPM
Factory at review.rdoproject.org, designed
to keep up with the supersonic speed of upstream development.

Let’s see what it takes to land a patch in RDO CloudSIG repos with the new
workflow!

The Quest

This is a short story about backporting an upstream OpenStack Swift patch into
the RDO Mitaka openstack-swift package.

Please consult
RDO Packaging docs
for additional information.

First Things First

Make sure you have the latest
rdopkg from the
jruzicka/rdopkg
copr.
This is new code added alongside existing functionality, and it isn't well
tested yet; bugs need to be ironed out. If you encounter an rdopkg bug, please
report how it broke.

Inspect the rdoinfo package
metadata, including various URLs, using rdopkg info:

$ rdopkg info openstack-swift

name: openstack-swift
project: swift
conf: rpmfactory-core
upstream: git://git.openstack.org/openstack/swift
patches: http://review.rdoproject.org/r/p/openstack/swift.git
distgit: http://review.rdoproject.org/r/p/openstack/swift-distgit.git
master-distgit: http://review.rdoproject.org/r/p/openstack/swift-distgit.git
review-origin: ssh://review.rdoproject.org:29418/openstack/swift-distgit.git
review-patches: ssh://review.rdoproject.org:29418/openstack/swift.git
tags:
  liberty: null
  mitaka: null
  newton: null
  newton-uc: null
maintainers:
- zaitcev@redhat.com

Yeah, that’s the Swift we want. Let’s use rdopkg clone to clone the distgit
and also set up remotes according to the rdoinfo entry above:

$ rdopkg clone [-u githubnick] openstack-swift

Which results in the following remotes:

* origin: http://review.rdoproject.org/r/p/openstack/swift-distgit.git
* patches: http://review.rdoproject.org/r/p/openstack/swift.git
* review-origin: ssh://githubnick@review.rdoproject.org:29418/openstack/swift-distgit.git
* review-patches: ssh://githubnick@review.rdoproject.org:29418/openstack/swift.git
* upstream: git://git.openstack.org/openstack/swift

Send patch for review

Patches are now stored as open gerrit review chains on top of upstream
version tags, so the patches remote is now legacy.

Start with inspecting distgit:

$ git checkout mitaka-rdo
$ rdopkg pkgenv

Package: openstack-swift
NVR: 2.7.0-1
Version: 2.7.0
Upstream: 2.9.0
Tag style: X.Y.Z

Patches style: review
Dist-git branch: mitaka-rdo
Local patches branch: mitaka-patches
Remote patches branch: patches/mitaka-patches
Remote upstream branch: upstream/master
Patches chain: unknown

OS dist: RDO
RDO release/dist guess: mitaka/el7

rdopkg patchlog doesn’t support review workflow yet, sorry.

Next, use rdopkg get-patches to create a local patches branch from the
associated gerrit patches chain and switch to it:

$ rdopkg get-patches

Cherry-pick the patch into the newly created mitaka-patches branch. The
upstream source is available in the upstream remote.

$ git cherry-pick -x deadbeef

Finally, send the patch for review with rdopkg review-patch, which is
just a convenience shortcut for git review -r review-origin $BRANCH:

$ rdopkg review-patch

This will print a URL to the patch review, such as

https://review.rdoproject.org/r/#/c/1145/.

Get +2 +1V on the patch review

Patches are never merged; they are kept as open review chains in order to
preserve full patch history.

You need to get +2 from a reviewer and +1 Verified from the CI.

Update .spec and send it for review

Once the patch has been reviewed, update the .spec file in mitaka-rdo:

$ git checkout mitaka-rdo
$ rdopkg patch

You can also select a specific patches chain by review number with
-g/--gerrit-patches-chain:

$ rdopkg patch -g 1337

Inspect the newly created commit, which should contain all necessary changes.
If you need to adjust something, do so and use rdopkg amend to git commit
-a --amend with a nice commit message generated from the changelog.

Finally, submit the distgit change for review with

$ rdopkg review-spec

The review URL is printed. This is a regular review; once it’s merged, you’re
done.

Happy packaging!
Quelle: RDO

GUEST BLOG: Forrester’s Total Economic Impact Study of Red Hat CloudForms

The Red Hat Management portfolio has seen significant upgrades, including the latest release of Red Hat CloudForms 4.1, and has delivered efficiency and cost savings to customers worldwide. This is confirmed by analyst research and testimonials from customers like Cox Automotive, which saved almost 10 years of time and almost $5 million in soft savings. Moreover, because CloudForms supports a wide variety of platforms, including three of the largest public clouds (Amazon Web Services, Microsoft Azure and Google Cloud Platform), CloudForms is well suited for enterprises seeking to manage various stages of virtualization and cloud deployments, as well as containers. The following blog, written by Forrester Consulting, provides an example of the economic impact CloudForms could have on your business.
Author’s note: The following blog details a recent Total Economic Impact (TEI) study conducted by Forrester Consulting centered around Red Hat CloudForms. The results of that study can be viewed here.

GUEST BLOG: Forrester’s Total Economic Impact Study of Red Hat CloudForms
By: Forrester Consulting
Red Hat commissioned Forrester Consulting to conduct a Total Economic Impact (TEI) study to examine the value that Red Hat CloudForms customers could achieve by deploying Red Hat CloudForms. We spoke with a large US software company about the benefits, costs, risks, and flexibility of Red Hat CloudForms and cloud management platforms. This company has roughly $4B in annual revenue, 7,500 staff, and sells its software to both businesses and consumers.
Prior to Red Hat CloudForms, this customer had developed and maintained an internal, "homegrown" solution. Forty-five people from different teams regularly worked on this solution both to "keep the lights on" and to make incremental improvements. However, with growing and more frequent requests from the business and staff turnover challenges, the company found it difficult to continue using its homegrown solution.
The company then embarked on a selection process to evaluate different vendor technologies that could replace the internal solution. After researching 10 vendors and narrowing the field to six for technical assessments and conceptual discussions, the customer offered a two-week POC opportunity to two vendors. The POC consisted of proving out each vendor's performance and capability in completing the company's 140-use-case test. While the alternative vendor was unable to complete all the use cases even given an extended six-week timeline, Red Hat completed more than 140 use cases in 1.5 weeks.
Readers should also consider running similar tests that relate a conceptual POC to scenarios that are more realistic and frequent in their environments. Some example use cases that the company mentioned were:

Due to the company’s heavy security constraints, apply different provisioning restrictions to different groups and adjust them uniquely and under a single tenant.
Integrate with Active Directory (AD) to have the same message groups.
Apply single sign-on (SSO).

After setting deployment goals, the company began to introduce Red Hat CloudForms into its environment. The two main benefits that the company experienced were unified service management efficiency and unified service delivery efficiency.
Unified service management efficiency focuses on the reduction in labor and effort to develop, maintain, and upgrade the internally built solution. The company was able to reduce 45 allocated resources to 10 resources in the first year of deploying Red Hat CloudForms. This 10-person team was reduced to 8 in Year 2 and 7 by Year 3. This allowed the customer to reallocate resources to other business enabling and future-thinking custom projects. This can be interpreted as either an approximately 80% efficiency improvement or that the previous state was 4.5x less efficient.
Unified service delivery efficiency centers on the reduction in labor and effort to provision and answer business user requests during the organization’s three-month peak season. In past peak seasons, a group of 100 internal resources from different departments and 30 contractors would be co-located for three months to answer all business unit requests. After the first year of deploying Red Hat CloudForms, the customer was able to provision 50% quicker with the same volume of staff.
By the second year, the customer was able to provision 91.7% quicker and no longer needed any of the 30 contractors. After accounting for initial and recurring costs, risk-adjusting for realistic and conservative estimates, and the future value and scalability of Red Hat CloudForms, we found that the interviewed company experienced a 97% ROI, $5.95M net present value and a 6.8-month payback period over a three-year model. In addition to reallocating 35-38 resources to more value-add activities, the company avoided $900K-$1.8M in peak-season contractor costs by engaging a more efficient platform.
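For readers who want to sanity-check TEI-style figures, the underlying arithmetic is straightforward. The Python sketch below computes net present value and ROI from an initial cost and annual net benefits; the discount rate and cash flows used to exercise it are invented placeholders, not Forrester's actual model inputs.

```python
# TEI-style arithmetic sketch. All inputs here are illustrative placeholders.

def npv(rate, initial_cost, yearly_net_benefits):
    """Net present value: discount each year's net benefit, subtract upfront cost."""
    discounted = sum(benefit / (1 + rate) ** (year + 1)
                     for year, benefit in enumerate(yearly_net_benefits))
    return discounted - initial_cost

def roi(rate, initial_cost, yearly_net_benefits):
    """ROI as (present value of benefits - cost) / cost, over the model horizon."""
    return npv(rate, initial_cost, yearly_net_benefits) / initial_cost
```

For example, a $100 upfront cost returning $110 one year later breaks even exactly at a 10% discount rate, which is why TEI studies report both NPV and the payback period rather than raw savings.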
For more information on the full June 2016 case study, The Forrester Total Economic Impact of Red Hat CloudForms, please reach out to your Red Hat representative.
Source: CloudForms

Infrastructure software is dead. Long live infrastructure software.

The post Infrastructure software is dead. Long live infrastructure software. appeared first on Mirantis | The Pure Play OpenStack Company.
Mirantis Co-Founder and CMO Boris Renski recently stirred discussion with his blog post arguing that infrastructure software is dead. At this year's OpenStack Days Silicon Valley, he sat down with Battery Ventures Technology Fellow Adrian Cockcroft to talk about the changing paradigms in software and in delivery models, and the results were not what you might think.
In general, there are two different methods for deploying software. Traditionally, in the pre-cloud paradigm, software is deployed as a monolithic package. You deploy it, and 6 months, 12 months, or 7 years later, when a new version comes out, you essentially throw it out and start again, hoping your data and processes will still be compatible with the new version.
But those days are over, Boris argued in his blog post. They simply aren't sustainable. Things move too fast; improvements are available for months or years before you can take advantage of them under this model. So what do you do instead?
That question was on the mind of most of the audience for Boris and Adrian's discussion.
OpenStack and the old way
In the early days of Mirantis, Boris explained, the company used the pre-cloud paradigm, where the product is packaged as a whole, delivered, and then periodically updated. They quickly learned — and as anyone who has attempted to upgrade OpenStack knows — this isn’t feasible for OpenStack, which itself uses the Infrastructure as Code (IaC) model.
What's more, as cloud technology proliferates, the shift in paradigm away from traditional, pre-cloud views has become less about software and more about the delivery model.
So what do you do?
You abstract. Boris illustrated this shift in paradigm with AWS as an example. AWS users aren't given the infrastructure software itself, but rather an API to the services it provides. That way, AWS can change whatever it needs to in the underlying infrastructure software without disrupting clients and users.
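The abstraction argument can be sketched in a few lines. This is an illustrative toy, not AWS code; the class and method names are invented. The point is that clients depend only on a stable published interface, so the provider can swap the implementation underneath without breaking anyone:

```python
from abc import ABC, abstractmethod

class ComputeAPI(ABC):
    """The stable, published interface that clients code against."""
    @abstractmethod
    def launch_instance(self, image: str) -> str: ...

class BackendV1(ComputeAPI):
    def launch_instance(self, image: str) -> str:
        return f"v1-instance({image})"

class BackendV2(ComputeAPI):
    # The provider can rewrite the infrastructure software entirely...
    def launch_instance(self, image: str) -> str:
        return f"v2-instance({image})"

def client_workflow(api: ComputeAPI) -> str:
    # ...but client code never changes: it only sees ComputeAPI.
    return api.launch_instance("ubuntu-16.04")

print(client_workflow(BackendV1()))
print(client_workflow(BackendV2()))
```

The same workflow runs unmodified against both backends, which is exactly the property that lets a provider evolve its infrastructure continuously.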
But it's more than that, Adrian explained. People initially want something that works without change, at least until they need a new feature. Such project-based thinking was built on the fact that coding was expensive and slow, which is why bundling a package periodically was the norm. Now, with procuring hardware and downloading software from places like GitHub taking minutes, the purchasing and deployment cycle has collapsed. A deployment can take seconds simply by firing up a Docker container.
Basically, the entire reason for bundling has gone away.
Taking advantage of the new software paradigm
To adapt, the software community has learned to break everything into microservices that can deploy independently, resulting in lots of versions of things constantly changing.
But doesn't that break a lot, then? Of course, Boris explained, but because you end up with a series of very small steps, it's actually easier to detect problems and roll back to the previous version. As programmers will recognize, this is the same process used to debug, one step at a time, and it allows continuous change.
This process also solves the issues that arise regarding operations when updates need to be made. Previously, you’d have to wonder if you needed to bring all or part of your system down to make the updates. With containerized OpenStack services, you could upgrade each one independently.
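That roll-forward/roll-back loop can be sketched minimally. This is illustrative only; the service names happen to match OpenStack components, but the health check is a stand-in. Each service is upgraded independently in a small step, and a failed step is rolled back without touching the others:

```python
# Current deployed version of each independently shipped microservice.
deployed = {"keystone": "1.0", "nova": "1.0", "neutron": "1.0"}

def healthy(service: str, version: str) -> bool:
    # Stand-in health check; imagine this probes the upgraded service.
    # Here we pretend the neutron 1.1 rollout fails its check.
    return not (service == "neutron" and version == "1.1")

def upgrade(service: str, new_version: str) -> bool:
    previous = deployed[service]
    deployed[service] = new_version
    if healthy(service, new_version):
        return True
    # Small step failed: roll back just this service, leave the rest alone.
    deployed[service] = previous
    return False

for svc in list(deployed):
    upgrade(svc, "1.1")

print(deployed)  # keystone and nova upgraded; neutron rolled back to 1.0
```

Because each step touches one service, a failure is easy to localize and the rollback is cheap, which is what makes continuous change tractable.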
And don’t forget the security benefits of updating in place.
Exploits of exposed software are proliferating, and as Adrian noted, people are still downloading the same old vulnerable applications. He advised building around good source components that you can verify with services like JFrog Xray, and using security scanners (Docker has one) to check your products.
Looking at the future
There are still a lot of issues that need solutions, of course.
Adrian pointed out that managing a multi-vendor dependency tree is a complex problem with no good fix. “You have to figure out how to keep everything going while trying to change everything,” he explained.
The goal is to keep the “northbound” components, that is, the APIs and so on, that developers want to use, evolving, but remember that the “southbound,” or hardware-facing components, act as constraints. This problem requires collaboration and partnerships to support these devices and to work out ways to get all the versions of hardware and software to work together.
Missed this year's OpenStack Days Silicon Valley? You can see the whole panel: just head over to the OpenStack Days Silicon Valley 2016 videos page and scroll down to "Infrastructure Software Is Dead…Or Is It?"
Source: Mirantis