New data center brings security, integration and choice to Nordic region

Demand for turnkey cloud solutions is on the rise in the Nordic countries. By 2019, the cloud market is expected to reach a whopping $18 billion.
Enterprises in Norway, Sweden, Denmark, Finland and Iceland aren’t just looking for cloud capabilities, though. They’re also looking for ways to ensure security, perform hybrid integration and choose the absolute best cloud services for them.

IBM has opened its brand new IBM Cloud Nordic Data Center — the 48th to open worldwide and the 12th in Europe — in Fetsund, Norway, to address just those challenges. The data center, located about 30 km outside of Oslo, enables local organizations to access the full IBM portfolio of cloud services, including Bluemix and Watson, to run critical workloads close to home.
Though the center itself is in Norway, Swedish customers will have a point of presence in their home country to ensure high-speed connections to the IBM Cloud network.
In addition to ensuring network speed, the local data center will help companies keep up with regulatory requirements. The data center is compliant with European Union data and privacy regulations and the upcoming 2018 General Data Protection Regulation.
Bare metal cloud capabilities will help clients extend onto the cloud securely, an open source approach and innovative cloud platform will offer them choice, and a hybrid cloud approach will help them make the best possible choices on their cloud journeys.
“We are committed to providing global and local clients the fastest and easiest onramp to the IBM Cloud to accelerate their digital transformation,” said Robert LeBlanc, senior vice president of IBM Cloud. “This investment will provide Nordic customers, especially those in regulated industries, with more flexibility to manage and gain insight into data within the region.”
The new cloud data center demonstrates IBM's commitment to fostering the growth of cognitive and artificial intelligence technologies throughout the region. Developers will have access to Bluemix, IBM's Cloud innovation platform, and more than 150 APIs and services spanning key areas such as cognitive, blockchain, Internet of Things and big data.
Learn more about the new IBM Cloud Nordic Data Center.
Read the press release.
The post New data center brings security, integration and choice to Nordic region appeared first on news.
Source: Thoughts on Cloud

Notify VM Owner of Upcoming Retirement

One side effect of quick and easy provisioning of virtual machines (VMs) is VM sprawl. To keep the number of VMs manageable, administrators set retirement dates to automatically retire the VM and free the hardware resources.
The risk with setting a retirement date is that the VM owner may not know (or may forget) that an active VM will be automatically retired. CloudForms has the ability to warn the VM owner that retirement of a VM is approaching. Customers want to be able to send multiple retirement warning emails to the VM owner. This can be achieved by modifying the retirement email methods in the Automate model.
For a CloudForms administrator with both Cloud and Infrastructure providers in the environment, it is most effective if Cloud Instances and Infrastructure VMs send the same email for retirement warnings. Imagine the request is to send warnings 30 days, 7 days, and 1 day before the retirement date.
First, create a namespace to place our Email class in. Create the System / CommonMethods namespace in your domain.
Next, modify the ManageIQ / Infrastructure / VM / Retirement / vm_retirement_emails instance and method. To modify the instance and method, they must first be copied to the System / CommonMethods namespace in your domain. (As a side effect of the copy, the Email class will be created.) With the vm_retirement_emails method selected, click the Configuration → Copy this Method button. In the Copy Automate Method dialog, uncheck the “Copy to same path” checkbox, select the System / CommonMethods namespace in the “Namespace” box, and press the Copy button. Repeat this process for the vm_retirement_emails instance.

To allow for easily changing the number of days for each warning we will modify the schema for the Email class to add warn_days_1, warn_days_2, and warn_days_3 integer attributes. Navigate to the Email class and select the Schema tab. With the schema displayed select Configuration → Edit selected Schema. Click the plus icon to add the new fields. The picture below shows the schema with all of the new attributes. After the new fields have been added click Save.

The newly added attributes are added to the end of the schema, but they need to be moved to the top. To adjust the order select Configuration → Edit sequence to move the newly added fields to the top using the arrows and then click Save.

Next, modify the vm_retirement_emails instance to set the number of days for each warning. Edit the instance, setting warn_days_1 to 30, warn_days_2 to 7, and warn_days_3 to 1. While editing the instance you may want to also change the from_email_address, to_email_address, and signature.
The vm_retirement_emails method is modified to add the update_retirement_warning function. This function changes the number of warning days for the VM. A call to the function is added to the VM Retirement Warning Email and VM Retirement Extended Email sections. The modified method code can be found at https://github.com/branic/cloudforms/blob/master/multiple_retirement_emails/vm_retirement_emails.rb
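The full method code lives in the linked repository; the core selection logic can be sketched as a standalone Ruby method. This is a simplified, illustrative version only: in the real Automate method the thresholds come from the warn_days_1 through warn_days_3 schema attributes via $evm.object, and the result is written back to the VM's retirement_warn attribute.

```ruby
# Simplified, standalone sketch of the multi-warning selection logic.
# Given the days remaining until retirement and the configured warning
# thresholds (e.g. [30, 7, 1]), return the next threshold at which a
# warning email should fire, or nil if no warnings remain.
def next_retirement_warning(days_until_retirement, warn_days)
  # Pick the largest configured threshold that is still in the future.
  warn_days.sort.reverse.find { |d| d < days_until_retirement }
end

thresholds = [30, 7, 1]
next_retirement_warning(25, thresholds)  # => 7
next_retirement_warning(5,  thresholds)  # => 1
next_retirement_warning(0,  thresholds)  # => nil (no warnings remain)
```

After each warning email is sent, the update_retirement_warning function would move the VM's warning setting down to the next threshold, so the 30-day, 7-day and 1-day emails each fire exactly once.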
Now that the modified code is in place, plumb in our new method by copying the ManageIQ / System / Event / MiqEvent / Policy Event / vm_retire_warn instance to your domain. This time when copying, leave the “Copy to same path” option selected.
After the vm_retire_warn instance has been copied, edit the instance and change the rel5 field to point to the modified vm_retirement_emails instance.

With these changes CloudForms will now send warning emails to VM owners at the defined intervals. This can be further extended to ensure that the number of warning days is reset when the retirement date is modified, by creating a custom button and supporting automation code.
Source: CloudForms

Red Hat OpenStack Platform and Tesora Database-as-a-Service Platform: What’s New

As OpenStack users build or migrate more applications and services for private cloud deployment, they are expanding their plans for how these deployments will be serviced by non-core, emerging components. Based on the April 2016 OpenStack User Survey (see page 35), Trove is among the top “as a service” non-core components that OpenStack users are deploying or plan to deploy on top of the core components. This comes as no surprise: every application requires a database, and Trove provides OpenStack with an integrated Database-as-a-Service option that works smoothly with the core OpenStack services.
Recently, Red Hat and Tesora jointly announced that we have collaborated to certify Tesora Database as a Service (“DBaaS”) Platform on the Red Hat OpenStack Platform. When we at Red Hat announced our strategic decision to focus our development and contribution efforts on the core OpenStack services, we did so with confidence, due in large part to our expanded relationship with Tesora. Tesora is a recognized thought leader and the top contributor to upstream OpenStack Trove. They understand the needs of the Trove community, but more importantly they have a reputation for understanding, and focusing on, the needs of those developing and supporting applications running in a heterogeneous database environment. Adding Tesora DBaaS Platform as a certified workload on top of Red Hat OpenStack Platform addresses our customer requirements and provides an immediate, production-ready DBaaS option that can be deployed within their current Red Hat OpenStack Platform 8 and higher environments.
What’s New for Red Hat OpenStack Platform Users?  

From a technical and operational standpoint, this move offers users the following benefits:

The Tesora DBaaS Platform is now tested and certified as a supported workload on top of Red Hat OpenStack Platform 8 and higher. From a technical standpoint, the Tesora solution is fully compatible with the core OpenStack service APIs and includes all upstream features and bug fixes. The Tesora Trove controller is a drop-in replacement for the upstream Trove controller, so users can move to the Tesora solution with minimal effort.
Tesora has addressed the database “golden” image building and maintenance problem. Most organizations use multiple database platforms depending on their specific business needs. Building, tuning, securing and maintaining each database platform requires a specific DBA skillset and some level of administrative time and effort. Cloud-enabled DBaaS lets DBAs and developers be more productive in these areas, but there is still a gap when it comes to creating, testing and maintaining “golden” images for each database platform used in a heterogeneous environment. The OpenStack Trove community provides tooling for this in the form of diskimage-builder (“DIB”). While DIB is functional and enables the development of standardized database images, it is verbose and complex to maintain, specifically for images created for operating systems that are commercially supported and maintained, such as Red Hat Enterprise Linux (RHEL); in short, updating DIB-rendered images is often difficult and time consuming. Tesora has addressed this problem by providing production-ready solutions, pre-built database images, and support for the SQL, NoSQL, open source, and proprietary database platforms most commonly used in heterogeneous environments. Red Hat OpenStack Platform users can now deploy Tesora DBaaS Platform on top of their existing environments and provide their users with DBaaS across 15 different database platforms, using out-of-the-box, pre-built, certified and easily updatable images running on Red Hat Enterprise Linux and other commercially supported and maintained operating systems.
Tesora DBaaS Platform CLI and Dashboard make database creation and administration quicker and easier across different database platforms. The biggest challenge most DBAs and developers face is managing and monitoring the databases under their care. Tesora DBaaS Platform provides a rich set of CLI commands, APIs and a web-based GUI that offer a common interface across heterogeneous platforms. Users are shielded from syntax and platform specifics, so they can focus on what they need to do versus how they will do it for tasks like backup and recovery, database creation, configuration, replication and clustering.
Tesora DBaaS Platform can be deployed on top of an existing Red Hat OpenStack Platform 8 and higher deployment. Users can gain the immediate benefits of DBaaS with little to no change to their existing Red Hat OpenStack Platform deployment. As mentioned earlier, the Tesora solution is fully compatible with the core OpenStack (and Red Hat OpenStack Platform) service APIs. Red Hat users can easily install and configure the Trove DBaaS controller, dashboard, and database images with minimal disruption to the other OpenStack services. For new and existing Red Hat OpenStack Platform deployments, there are some things that should be taken into consideration:

Red Hat OpenStack Platform director is used to deploy and upgrade the OpenStack core components. Users who choose not to use director to deploy and upgrade Red Hat OpenStack Platform will have a different support and upgrade path from those who do.
Tesora deployment and upgrades have yet to be integrated with Red Hat OpenStack Platform director. Tesora provides installation and upgrade guides for manually deploying and upgrading Tesora DBaaS Platform on top of an existing Red Hat OpenStack Platform installation. Director integration is in the works, but Tesora has provided no delivery timeframe on when this will happen.
Red Hat and Tesora are collaborating to support our joint customers. While you will need a support agreement with each of us, the support experience should be pleasant.

The certified Tesora DBaaS Platform is available for immediate evaluation; you can get a free trial here.

I speak for everyone at Red Hat and Tesora when I say we believe this certification and ongoing collaboration will bring tremendous benefit and value to our customer base, from both a technical and business standpoint, and we look forward to your feedback.
For more information:
* Tesora Press Release http://www.tesora.com/press-releases/tesora-collaborates-red-hat-deliver-certified-openstack-based-database-service-platform/
 
Source: RedHat Stack

Go agile to slash costs, reduce failure rates and speed deployments

Continuous deployment is the mainstay of a DevOps practice. Because accelerated delivery of software services is linked to business growth and innovation, it’s becoming the de facto standard as companies pursue digital transformation.
But if you’re somewhat new to agile-based continuous deployment, you’re probably wondering about things such as app quality, speed and delivery, not to mention how much eliminating manual processes could save your organization so you can reinvest in innovation.
The cost of failed apps
For a great example of how a major client reaped benefits from the IBM deployment automation tool UrbanCode Deploy, read this case study to learn how Fidelity cut release times from days to hours, saving $2.3 million annually by reducing manual labor, resource wait time and rework.
You can find out how much your organization might be throwing away on fixing deployment errors versus saving with automation by trying this new savings calculator. Looking to justify your department’s investments? The calculator comes in very handy.
What automation can do
If you want to win in today’s marketplace, you’re likely focused on delivering great apps that make your brand and product shine, but how do you do that?
IBM supports full automation features in UrbanCode Deploy for continuous app deployments anytime and anywhere, helping companies get to market faster, simplify their release processes and cut costs while reducing failure rates. UrbanCode Deploy can also deliver your applications to OpenStack, IBM SoftLayer, Amazon, and VMware with a consistent and portable infrastructure-as-a-service approach.
To see how UrbanCode Deploy actually does all that, check out the seven-part, animated guided tour. These short, narrated video segments cover all the standout features that allow you to test and optimize as well as configure and provision clouds.
What’s even better is that UrbanCode supports a wide variety of application types, including distributed, mobile, mainframe and databases. This is especially good news for companies running legacy systems.

The case study, guided tour and the calculator can be found here.
Source: Thoughts on Cloud

A practical guide to platform as a service: Acquiring and using PaaS

Fully embracing platform as a service (PaaS) implies a number of technical and organizational changes for customers. While it may be possible to port an existing, non-cloud-based application to use a PaaS environment, this is not likely to realize major benefits. A certain degree of change is required at the technological level, and perhaps a larger degree of change is necessary at the organizational level.
Here are eight critical technological changes that cloud service customers should carefully consider:

Understand PaaS end-to-end application architecture. A key architectural element of cloud applications is the principle that they are stateless and use separately deployed services to build their functionality.
Understand how containers enable applications. Many PaaS platforms use containers to run applications. Applications are deployed to the PaaS as complete container stacks, combining the application code with its underlying software stack.
Understand how services and microservices are used. Services and microservices simplify the development and operation of applications in a PaaS system. The term “microservices” describes a design approach for building the application itself, dividing the application into a series of separate processes that are deployed independently and connected via service interfaces.
Address integration between PaaS applications and existing systems. Seldom are applications developed and run on a PaaS platform completely standalone. Commonly, these applications must integrate with existing enterprise systems – applications, services and data – some of which likely reside in an existing, enterprise environment outside the cloud system.
Ensure appropriate security components. Security and data protection for personal data are key elements of any information system, so it is important that the PaaS offering provides appropriate capabilities to enable end-to-end security for deployed applications.
Consider development tools and PaaS. PaaS systems typically support a variety of tooling: code editing, code repositories, build functions and test capabilities. Such tools can either be provided as part of a PaaS offering, or alternatively, the PaaS offering can provide open interfaces that enable third-party tools to be plugged in by the developers.
Expect support for agile development and DevOps. One aspect of the agile approach is the concept of continuous integration: an emphasis on getting a working application quickly and incrementally adding functionality frequently with rapid deployment to production. This can allow for early user testing of new functionality and rapid feedback to the developers.
Consider the deployment aspects of PaaS. In many cases, it is best to consider a hybrid deployment of applications, services and data in which each element is placed in a private cloud environment, placed in a public cloud environment, or left “as-is” in an on-premises environment based on the risk profile and costs associated with each element.
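The first point above (stateless applications that delegate state to separately deployed services) can be illustrated with a small Ruby sketch. SessionStore and CartHandler are invented names for illustration, not part of any PaaS API; the in-memory store stands in for a real cache or database service:

```ruby
# A backing store modeled as a plain in-memory class, standing in for
# a separately deployed service such as a cache or database.
class SessionStore
  def initialize
    @data = {}
  end

  def get(key)
    @data[key]
  end

  def put(key, value)
    @data[key] = value
  end
end

# Because CartHandler holds no per-user state of its own, any instance
# on any node can serve any request; the platform can scale it out or
# replace it freely.
class CartHandler
  def initialize(store)
    @store = store
  end

  def add_item(user_id, item)
    cart = @store.get(user_id) || []
    @store.put(user_id, cart + [item])
  end

  def items(user_id)
    @store.get(user_id) || []
  end
end

store = SessionStore.new
CartHandler.new(store).add_item("u1", "widget")
# A brand-new handler instance sees the same state:
CartHandler.new(store).items("u1")  # => ["widget"]
```

This is what lets a PaaS kill, restart or multiply application instances without losing user data: the state survives in the backing service, not in the process.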

Keys to success
Cloud service customers should take the following actions when considering the move to PaaS and contrasting PaaS offerings from different cloud service providers:

Perform cost/benefit analysis
Assess your security risk versus innovation imperatives
Assess value provided by a given PaaS versus alternatives
Consider the long-term viability of the PaaS provider ecosystem
Consider portability both in terms of lock-in and deployment model (public, private, hybrid) as needs evolve
Ensure integration between PaaS environment and on-premises systems is straightforward and secure
Consider transitioning to Agile methods and DevOps processes

Interested in learning more about PaaS and getting a better picture of its implementation best practices? Check out my posts on PaaS basics and PaaS benefits, and be sure to download the Cloud Standards Customer Council’s “Practical Guide to Platform as a Service.”
Source: Thoughts on Cloud

IBM Cloud SVP: Cloud is moving into its next phase of innovation

During the first wave of cloud, many organizations viewed it as little more than a way to cut costs, but now, they’re seeing it as something far more versatile.
In an interview with Network World, IBM Cloud Senior Vice President Robert LeBlanc said, “I think we’re moving to the next phase of cloud and that’s where the cloud is becoming a platform for innovation.”
LeBlanc said the cloud is enabling clients to develop new business models and processes, as well as dig into mobile technology much faster than they could in the past. He explained:
The reason is they now have access to capability and technology that before would have required a level of investment. They would have to procure servers, configure them, get them all ready, buy software and that literally can take months when in most cases now, with little or no investment, they can get access to newer technologies. That’s what I call the value and where the cloud is now becoming a platform for innovation.
He cited the 30 cognitive services now available on IBM Cloud through Watson and 150 services available on the Bluemix platform. “Instead of having to build everything from scratch, I now have a real set of building blocks on which to build those next-generation applications,” he said.
LeBlanc also noted that cloud is “everything we do.” IBM is building new solutions on cloud, as well as bringing traditional offerings, such as middleware, onto the cloud.
For a deeper dive into the next phase of cloud, cloud revenue, private vs. public cloud, the CIO perspective and more, read the full interview at Network World.
Source: Thoughts on Cloud

Install your OpenStack Cloud before lunchtime

Figure 1. The inner workings of QuickStart Cloud Installer
What if I told you that you can have your OpenStack Cloud environment set up before you have to stop for lunch?
Would you be surprised?
Could you do that today?
In most cases, I am betting your answer would be “not possible, not even on your best day.” Not to worry: a solution is here, and it’s called the QuickStart Cloud Installer (QCI).
Let’s take a look at the background of where this Cloud tool came from, how it evolved and where it is headed.
 
Born from need
As products like Red Hat Cloud Suite emerge onto the technology scene, they exemplify the need for companies to be able to support infrastructure and application development use cases such as the following:

Optimize IT
Accelerate Service Delivery
Modernize Development and Operations
Scalable Infrastructure

The problem: how do you streamline the setup of such intricate and complex solutions?
Figure 2. Getting the installation of complex infrastructure solutions down from a month, to days, to just hours based on testing by Red Hat.
It started in 2013 with research into how Red Hat customers were deploying Red Hat Cloud Infrastructure (RHCI). That information seeded an effort to create several simple, reproducible installation guides that could cut down the time needed to install the following products:

Red Hat Virtualization (RHV)
OpenStack Platform (OSP)
CloudForms

The final product installation documentation brought the deployment time for this infrastructure solution down to just several days, instead of a month. Figure 2 shows the progress made across these successive RHCI installation efforts.
The next evolution included Satellite and OpenShift offerings that you now find in the Red Hat Cloud Suite solution. This brought more complexity into the installation process and a push was made to go beyond just documentation. An installation effort commenced that had to bring together all the products, deal with their configurations and manage it all to a full deployment in a faster time frame than several days.
 
How it works
QCI progressed and expanded by functioning as an extension (plugin) of Satellite, with intentional roadmap alignment. It uses specific product plugins that interface with each product’s individual APIs, so they can be used both for individual product installations and for complete solution installs.
Figure 1 shows you the architectural layout of QCI as it relates to Satellite. See the online documentation for the versions supported by QCI at the time of this writing; we expect to update the documentation on a regular basis as products that QCI supports are released.
The installer, when first started, spins up the Fusor Installer. This is a plugin to Foreman and is used to perform the initial setup such as networking and provisioning within Satellite to be used later in the deployment.
Some of the deployment steps depend on the path you have chosen when specifying the products you wish to install:

If an RHV with CloudForms deployment is chosen, QCI calls Puppet modules to configure and set up the RHV environment. It installs RHV-M and runs Python scripts that set up the RHV Datacenter.
CloudForms management engine is deployed as a Satellite resource and as such can be launched on top of RHV.
Most of the OpenShift product deployment uses Ansible to facilitate the installation and setup of the environment.
OpenStack uses what is known as the TripleO installation, meaning OpenStack installed on OpenStack (hence the three O’s). It uses an all-in-one ISO image containing OpenStack, which then deploys a customized version configured through the QCI user interface.

The two deployment patterns supported by QCI are:

Red Hat Cloud Infrastructure

Satellite, RHV, OpenStack and CloudForms

Red Hat Cloud Suite 

Satellite, RHV, OpenStack, CloudForms and OpenShift product

Now here is the unbelievable part we suggested in the title: both deployment patterns can be installed in under four hours.
Figure 3. The timeline from pushing the deploy button to completion of your OpenStack deployment.
Yes, you can arrive at work in the morning and have your OpenStack Cloud infrastructure set up by the time you break for lunch!
Figure 3 shows you a condensed timeline of our testing of the RHCI installation as an example, but the same is possible with Red Hat Cloud Suite.
 
The future is bright
We can’t think of anything brighter for you than a future where you can reduce deployment times for your complex Cloud infrastructure, but there are more positive points to take note of when you leverage QCI:

Simple fully integrated deployments of RHCI and Red Hat Cloud Suite requiring only minimal documentation.
Easy to use, single graphical web-based user interface for deploying all products.
Leverages existing Red Hat Storage (Ceph and Gluster) deployments for Red Hat Virtualization, Red Hat OpenStack, and OpenShift product installations.
Integrated with Red Hat’s Customer Portal for automated subscription management.
Avoid the need for costly consultants when deploying proof-of-concept environments.

With this in mind the team behind this technology is busy looking at expanding into more products and solutions within the Red Hat portfolio. Who knows, maybe the next step could be including partner technologies or other third-party solutions?

There’s no time like the present to dive right in and take QCI for a spin, and be sure to let us know what you think of it.

(This article was written together with Red Hat software engineer Nenad Peric.)
Source: RedHat Stack

Red Hat Announces Schedule and Speaker Line-Up for OpenShift Commons Gathering November 7th in Seattle

The OpenShift Commons Gathering will bring together the brightest technical minds to discuss the future of OpenShift and its related upstream open source projects. The 2016 event will gather developers, DevOps professionals and sysadmins to explore the next steps in making container technologies successful and secure.
Source: OpenShift

One cloud to rule them all — or is it?

The post One cloud to rule them all – or is it? appeared first on Mirantis | The Pure Play OpenStack Company.
So you’ve sold your organization on private cloud.  Wonderful!  But to get that ROI you’re looking for, you need to scale quickly and get paying customers from your organization to fund your growing cloud offerings.
It’s the typical Catch-22 situation when trying to do something on the scale of private cloud: You can’t afford to build it without paying customers, but you can’t get paying customers without a functional offering.
In the rush to break the cycle, you onboard more and more customers.  You want to reach critical mass and become the de-facto choice within your organization.  Maybe you even have some competition within your organization you have to edge out.  Before long you end up taking anyone with money.  
And who has money?  In the enterprise, more often than not it’s the bread and butter of the organization: the legacy workloads.
Promises are made.  Assurances are given.  Anything to onboard the customer.  “Sure, come as you are, you won’t have to rewrite your application; there will be no/minimal impact to your legacy workloads!”
But there’s a problem here. Legacy workloads, those large, vertically scaled behemoths that don’t lend themselves to “cloud native” principles, present both a risk and an opportunity when growing your private cloud, depending on how they are handled.
(Note: Just because a workload has been virtualized does not make it “cloud-native”. In fact, many virtualized workloads, even those implemented using SOA (service-oriented architecture), will not be cloud native. We’ll talk more about classifying, categorizing and onboarding different workloads in a future article.)
“Legacy” cloud vs “Agile” cloud
The term “legacy cloud” may seem like a bit of an oxymoron, but hear me out. For years, surveys asking people about their cloud use have had to include responses from people who considered vSphere a cloud, because the line between cloud and virtualization is largely irrelevant to most people.
Or at least it was, when there wasn’t anything else.
But now there’s a clear difference. Legacy cloud is geared towards these legacy workloads, while agile cloud is geared toward more “cloud native” workloads.
Let’s consider some example distinctions between a “Legacy Cloud” and an “Agile Cloud”. This table shows some of the design trade-offs between environments built to support legacy workloads versus those built without those restrictions:

Legacy: No new features/updates (platform stability emphasis), or very infrequent, limited & controlled.
Agile: Regular/continuous deployment of the latest and greatest features (platform agility emphasis).

Legacy: Live Migration support (redundancy in the platform instead of in the app), DRS (in the case of ESXi hypervisors managed by VMware).
Agile: Highly scalable and performant local storage, with the ability to support other performance-enhancing features like huge pages; no live migration security and operational burdens.

Legacy: VRRP for Neutron L3 router redundancy.
Agile: DVR for network performance & scalability; apps built to handle failure of individual nodes.

Legacy: LACP bonding for compute node network redundancy.
Agile: SR-IOV for network performance; apps built to handle failure of individual nodes.

Legacy: Bring your own (specific) hardware.
Agile: Shared, standard hardware (white boxes) defrayed with tenant chargeback policies.

Legacy: ESXi hypervisor or bare metal as a service (Ironic) to insulate the data plane, and/or separate controllers to insulate the control plane.
Agile: OpenStack reference KVM deployment.

A common theme here is features that force you to choose between designing for performance and scalability (such as Neutron DVR) and designing for HA and resiliency (such as VRRP for Neutron L3 agents).
It’s one or the other, so introducing legacy workloads into your existing cloud can conflict with other objectives, such as increasing development velocity.
So what do you do about it?
If you find yourself in this situation, you basically have three choices:

Onboard tenants with legacy workloads and force them to potentially rewrite their entire application stack for cloud
Onboard tenants with legacy workloads into the cloud and hope everything works
Decline to onboard tenants/applications that are not cloud-ready

None of these are great options.  You want workloads to run reliably, but you also want to make the onboarding process easy, without imposing large barriers to entry for tenants’ applications.
Fortunately, there’s one more option: split your cloud infrastructure according to the types of workloads, and engineer a platform offering for each. Now, that doesn’t necessarily mean a separate cloud.
The main idea is to architect your cloud so that you can provide a legacy-type environment for legacy workloads without compromising your vision for cloud-aware applications. There are two ways to do that:

Set up a separate cloud with an entirely new control plane and its associated compute capacity.  This option completely decouples the workloads, and lets you isolate changes, updates, and upgrades to one environment without exposing legacy workloads to that risk.
Use compute nodes such as ESXi hypervisor or bare metal (e.g., Ironic) for legacy workloads. This option maintains a single OpenStack control plane while still helping isolate workloads from OpenStack upgrades, disruptions, and maintenance activities in your cloud.  For example, ESXi networking is separate from Neutron, and bare metal is your ticket out of being the bad guy for rebooting hypervisors to apply kernel security updates.

Keep in mind that these aren’t mutually exclusive options; it is possible to do both.  
Of course, each option comes with its own downsides as well: an additional control plane involves additional overhead (to build and operate), and running a mixed-hypervisor environment has its own set of engineering challenges, complications, and limitations.  Both options also add overhead when it comes to repurposing hardware.
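In the single-control-plane case, the usual way to keep legacy workloads on their designated compute nodes is host aggregates matched against flavor extra specs. A minimal sketch (all names here are hypothetical, and the AggregateInstanceExtraSpecsFilter must be enabled in the Nova scheduler):

```shell
# Group the legacy-capable hosts into an aggregate and tag it
openstack aggregate create legacy-aggregate
openstack aggregate set --property workload=legacy legacy-aggregate
openstack aggregate add host legacy-aggregate legacy-compute-01

# Create a flavor that only schedules onto that aggregate
openstack flavor create --ram 8192 --vcpus 4 --disk 80 legacy.medium
openstack flavor set \
  --property aggregate_instance_extra_specs:workload=legacy legacy.medium
```

Tenants with legacy applications then simply boot from the `legacy.medium` flavor, and the scheduler keeps their instances off your cloud-native capacity.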
There's no instant transition
Many organizations get caught up in the “One Cloud To Rule Them All” mentality, trying to make everything work with a single architecture to achieve the needed economies of scale, but ultimately the decision should be made according to your situation.
It's important to remember that no matter what you do, you will have to deal with a transition period, which means you need to provide a viable path for your legacy tenants and apps to gradually make the switch.  But first, assess your situation:

If your workloads are all of the same type, then there’s not a strong case to offer separate platforms out of the gate.  Or, if you’re just getting started with cloud in your organization, it may be premature to do so; you may not yet have the required scale, or you may be happy with onboarding only those applications which are cloud ready.
When you have different types of workloads with different needs (for example, Telco/NFV vs. Enterprise/IT vs. BigData/IoT workloads), you may want to consider different availability zones inside the same cloud, so the specific nuances of each type can be addressed inside its own zone while maintaining a single cloud from a configuration, lifecycle management, and service assurance perspective, including having similar hardware. (Having similar hardware makes it easier to keep spares on hand.)
If you find yourself in a situation where you want to innovate with your cloud platform but still need to deal with legacy workloads that have conflicting requirements, then workload segmentation is highly advisable.  In this case, you'll probably want to break from the “One Cloud” mentality in favor of the flexibility of multiple clouds.  If you try to satisfy both your “innovation” mindset and your legacy workload holders on one cloud, you'll likely disappoint both.

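The availability-zone approach from the second point can be sketched with host aggregates, since in Nova an availability zone is just an aggregate with a zone name attached (all names below are hypothetical, and the commands assume admin credentials):

```shell
# Each workload type gets its own AZ backed by a host aggregate
openstack aggregate create --zone nfv-az nfv-aggregate
openstack aggregate add host nfv-aggregate nfv-compute-01

openstack aggregate create --zone enterprise-az enterprise-aggregate
openstack aggregate add host enterprise-aggregate ent-compute-01

# Tenants then target a zone explicitly at boot time
openstack server create --availability-zone nfv-az \
  --flavor m1.large --image ubuntu-image --network tenant-net vnf-01
```

This keeps one control plane and one lifecycle-management story while letting each zone carry the tuning (SR-IOV, huge pages, and so on) its workload type needs.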
After making this choice, you may then plan your transition path accordingly.
Moving forward
Even if you do create a separate legacy cloud, you probably don't want to maintain it in perpetuity.  Think about your transition strategy; a basic and effective carrot-and-stick approach is to limit new features and cloud-native functionality to your agile cloud, and to bill or charge back at higher rates in your legacy cloud (rates which are, at any rate, justified by the costs of providing and supporting that option).
Whatever you ultimately decide, the most important thing is to make sure you've planned it out appropriately, rather than just going with the flow, so to speak. If you need to, contact a vendor such as Mirantis; they can help you do your planning and get to production as quickly as possible.
The post One cloud to rule them all — or is it? appeared first on Mirantis | The Pure Play OpenStack Company.