Introducing Mirantis Cloud Platform — Webinar Q&A

The post Introducing Mirantis Cloud Platform — Webinar Q&A appeared first on Mirantis | Pure Play Open Cloud.
Just before the OpenStack Summit we gave a webinar introducing Mirantis Cloud Platform. Here are the answers to your questions.
Why does Mirantis Cloud Platform have two SDNs?
The Calico SDN is used only for Kubernetes networking, providing container-to-container connectivity. For the OpenStack SDN, MCP 1.0 supports both Neutron ML2 OVS and OpenContrail. In general, most applications require only L3 connectivity and do not need the same overlay network between containers and OpenStack. There are exceptions, however, such as NFV workloads that demand L2 connectivity between a container in K8s and a VM inside OpenStack. For that specific NFV case, we’ve shown a demo using OpenContrail as the SDN for both OpenStack and K8s. You can view it here:
Can existing MOS environments be upgraded to MCP 1.0? If yes, which version(s) of MOS can be upgraded?
Transition from MOS to MCP for particular customers will require engagement with the Mirantis Services organization. The migration path will largely depend on the configuration of the original cloud and installed extensions/plugins, and in most cases will require additional engineering effort to develop. Clouds running on versions of MOS earlier than 9.0 are unlikely to transition to MCP in-place or in-service due to the extreme complexity of the transition path. Customers can still go from MOS to MCP by installing MCP separately and gradually moving data and workloads to the MCP cloud.
Why isn’t Mirantis Cloud Platform available for download like MOS 9?
As we transition from Mirantis OpenStack to MCP, which includes DriveTrain, a new operations-centric deployment and lifecycle management capability based on a CI/CD pipeline, a number of new considerations need to be in place for successful access, download and evaluation of the technology.  Please be patient as we work towards a new download capability that will enable the evaluation of MCP.
Are there any plans to support plugins, similar to MOS version 9.x ?
MCP is highly customizable, so it can be extended to enable deployment of a wide range of infrastructure (such as SDNs, storage back-ends, and so on) beyond the default choices. For the time being, however, Mirantis intends to avoid making such customizations except as required by customers, or where otherwise justified in terms of closing engineering gaps or satisfying business objectives. We do not anticipate creating frameworks (like Fuel Plugin SDK) enabling third parties to engineer infrastructure-level integrations of their products with MCP.
So are you going to continue to develop MOS? Will we see MOS 10?
We truly believe that the MCP model is the right way to deliver OpenStack and other private cloud components, so we feel like we’d be doing our customers a disservice by providing it any other way.
That said, MOS is part of MCP, so we’ll continue to develop it. As far as a standalone OpenStack that’s deployed via Fuel, we’ll continue to support MOS 9.2 through at least mid-2019 to give our customers a chance to migrate to MCP at their own convenience, rather than on an arbitrary schedule.
Are containers deployed on bare metal or on VMs? Or both?
MCP currently incorporates two independent cloud frameworks: OpenStack, which is an IaaS framework used to host virtual machines, and Kubernetes, an orchestration framework for (Docker) containers and containerized applications. At present, in MCP, the OpenStack and Kubernetes frameworks are deployed separately on aggregations of bare metal nodes provisioned with Linux. In the simplest case, an MCP cloud operator could deploy container workloads on bare-metal Linux-provisioned nodes in the Kubernetes cluster (which have nothing to do with OpenStack). The operator could also start VMs on OpenStack, provisioned with a Linux guest OS, and deploy a container host (such as Kubernetes, Docker Swarm, and so on) on these VM nodes, using these to host containers. Finally, the operator could use MaaS to provision bare-metal nodes separate from either the OpenStack or Kubernetes clusters, deploy Linux and a container host on these nodes (such as Docker Swarm), and deploy container workloads to them directly. Other layerings are also possible.
So is MCP a bundle of pieces with OpenStack and Kubernetes working together for OpenStack and container platforms or is OpenStack deployed on Kubernetes? Please clarify.
Mirantis Cloud Platform currently incorporates two independent cloud frameworks: OpenStack, which is an IaaS framework used to host virtual machines; and Kubernetes, an orchestration framework for (Docker) containers and containerized applications. At present, in MCP, the OpenStack and Kubernetes frameworks are deployed separately on aggregations of bare metal nodes provisioned with Linux. Mirantis has also demonstrated deployment of a containerized OpenStack control plane on Kubernetes: an architecture that enables agile scale-out of OpenStack capacity on demand and facilitates in-place-updating of OpenStack components with minimal downtime. No release date for availability of this architecture in production MCP has yet been provided.
Is a specific version of OpenContrail and Kubernetes used, or will it be continuously integrated by the CI/CD chain more or less live from upstream?
OpenContrail, Kubernetes, and all of the other components of Mirantis Cloud Platform will be updated as appropriate. For example, a needed driver might lag behind, delaying the release of a component, but as soon as it’s ready, the component will be released into the toolchain with the new version of the driver. All components are tested and hardened before being added to or upgraded in MCP.
Is DriveTrain free?
DriveTrain is included in the overall pricing of MCP and MMO.
Is DriveTrain only for MCP, or can it be used for MOS?
DriveTrain is a core component of MCP. It is used as the LCM tool for all open cloud software within MCP, and can also be utilized for deployment and LCM of other software. That open cloud software includes the latest OpenStack from Mirantis, Mirantis OpenStack; MOS 9.x and earlier versions used Fuel as the means of deployment. DriveTrain is not for use with earlier versions of MOS.
What is the difference between Mirantis Cloud Platform and Red Hat’s Openshift?
MCP is a complete cloud platform that includes OpenStack, Kubernetes, and various other services. OpenShift is basically a Kubernetes distribution and PaaS.
Does MOS still exist, or it is completely obsoleted by MCP?
MOS is a component of Mirantis Cloud Platform, so it is not obsoleted in any sense. With the release of MCP 1.0, the importance of MOS as a stand-alone product deployed by Fuel is reduced, though Mirantis will continue supporting and evolving MOS to serve the requirements of customers who cannot, or prefer not to, adopt MCP.
How is Mirantis prepared to support their customers across all the components of the private cloud ecosystem, such as the host and guest OS, hypervisor, and virtual switch?
Mirantis — historically a top 3 contributor to OpenStack — also contributes to Kubernetes, OpenContrail and many other open source projects encompassed by Mirantis Cloud Platform, and is a member of key industry groups such as OPNFV. Mirantis works closely with providers of Linux distributions, platform and network hardware, and other MCP components.
Are acceleration technologies such as SR-IOV, DPDK supported? Are Smart-NICs supported?
Yes, SR-IOV and DPDK environments are supported, depending on the NIC being used. Smart-NICs may be supported after validation on a case-by-case basis, per customer request.
What is Jenkins’ role?
Jenkins serves as the pipeline automation tooling within DriveTrain for delivering LCM features to MCP.
Can Cisco ACI be used instead of OpenContrail?
Not currently, but features and additional support are being integrated continually.
Will it ever be possible for smaller operators with order of 10-15 OpenStack compute nodes to “self-consume” MCP (or perhaps a subset thereof) the way they could with MOS? Otherwise, where do you suggest they turn to so they can adopt MCP once they reach critical mass?
The best way to prepare for MCP is to transform your company culture into a cloud-native organization and to start using DevOps principles such as CI/CD. These changes will benefit your organization by giving you greater agility and development speed regardless of the cloud platform you’re using. MOS 9.2 will be available for the next few years, so you can gradually ramp up to the point where you’re ready for MCP.
How does MCP differ from MOS ?
MCP is an ecosystem of around 100 cloud services, including OpenStack and Kubernetes environments deployed with DriveTrain (SaltStack/Reclass) lifecycle management capabilities. MOS is strictly OpenStack with Fuel as a deployment tool and plugin framework.
DriveTrain is the new Fuel?
In a manner of speaking, yes. DriveTrain is the new way we are deploying both Mirantis Cloud Platform OpenStack and Kubernetes environments, as well as managing their lifecycles. Fuel is the deployment and LCM tool strictly for MOS.
What is the timeline for release for MCP?
MCP 1.0 became generally available in mid-April, 2017.
How do I get the details from StackLight? What is the client using?
StackLight is a toolchain of logging, metering, and alerting applications, including but not limited to Kibana, Elasticsearch, Grafana, InfluxDB, Sensu, and Uchiwa, wrapped around a Heka framework. Please visit the StackLight page for additional details.
With MCP, do you first build a Kubernetes cluster as a base and then run OpenStack services in Kubernetes?
MCP deploys both OpenStack and Kubernetes as separate environments under the same lifecycle management architecture. Currently, both environments are parallel in nature, and not installed on top of each other.
In the managed solution, will Mirantis deploy MCP on the customer’s own infrastructure (servers/storage) or 3rd party clouds (AWS, Azure, etc)?
MMO on-premises delivery is onto customer-managed hardware; this can be in the customer’s own physically owned datacenter or a colocation facility. However, we do not deliver MMO onto public cloud infrastructures such as AWS or Azure.
Can MCP span private and public clouds?
One of the great things about MCP is that it stays open to the latest technology. Mirantis delivers hybrid cloud solutions through MCP technology and professional services; one of those technologies is Kubernetes Federation. Once federation is in place, your applications will be able to easily span Kubernetes across both public and private clouds.
Did you miss the original webinar?  Check out the video and the slides.
Source: Mirantis

Securing Containers on OpenShift with Aqua

Red Hat OpenShift Container Platform is one of the most popular and mature platforms for developing and managing container deployments. While it has many built-in security features, Aqua provides an additional layer of security, both in development and for protecting containerized applications at runtime.
Source: OpenShift

Kubernetes: State and Storage

I oftentimes hear folks stating that Kubernetes is great for stateless applications, but when it comes to stateful applications, questions like “Can it be done?” or even “Should it be done?” come up frequently. In this post, I’d like to offer a slightly more differentiated point of view and provide you with some resources that might help you deal with stateful applications.
Source: OpenShift

Business agility powered by decision management

Agility is the ability to move quickly and act fast. Business agility is the ability to adapt and respond to change rapidly. Notice the difference? In business, the emphasis is on flexibility.
Of course, I am borrowing this concept from software development, where agile development is the new norm. But think about this: if businesses are built on systems that are, in turn, built using agile methods, then does that make your business agile?
No. An emphatic no.
You still need to engineer the systems that run your business to be accommodating to changes, to be flexible and without rigidity.
All businesses run on a set of business processes and rules. Systems are engineered—or over-engineered, if you will—to automate these complex processes and business rules. IT takes the business rules and hard-codes them into the application in a language business people can’t always understand. But business rules change frequently. When there’s a new regulation, they change. When there’s a new competitor in the market, they change. They can change over the course of a meeting.
When sales at ControlExpert, the German services provider for the insurance, automotive and car leasing industries, started skyrocketing, the company transformed its processes. It didn’t just automate claims management; it infused it with intelligence to identify patterns and discrepancies in claims. As a result, agents could make the right payout decisions faster. This enabled ControlExpert to minimize overhead costs and exceed customer service expectations.
Business rules are not simply the rules you play by. In fact, they are very specific statements that are the foundation of how business decisions get made.  They answer questions like:

Who’s eligible for loans and what are the terms of repayment?
Should the pricing of the product vary by state?
At what purchase value should a customer become eligible for a loyalty bonus?

Each gives well-formed, practical guidance focused on making a specific decision. Each uses terms and facts about business concepts, which should be well-defined.  Each is declarative, rather than procedural.
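To make the declarative/procedural distinction concrete, here is a minimal, purely illustrative Python sketch (the rule names, facts and thresholds are invented, not taken from any product): the business rules live as data that a generic evaluator applies, rather than as hard-coded branches buried in application logic.

```python
# Each rule is a declarative statement: a condition over well-defined
# business facts, plus the decision it supports. The rules are data,
# so they can change without rewriting the evaluation code.
RULES = [
    {"name": "loan_eligibility",
     "if": lambda c: c["credit_score"] >= 650 and c["annual_income"] > 30000,
     "then": {"eligible_for_loan": True, "repayment_term_months": 60}},
    {"name": "loyalty_bonus",
     "if": lambda c: c["purchase_value"] >= 500,
     "then": {"loyalty_bonus": True}},
]

def decide(customer_facts):
    """Apply every matching rule and merge the resulting decisions."""
    decision = {}
    for rule in RULES:
        if rule["if"](customer_facts):
            decision.update(rule["then"])
    return decision

print(decide({"credit_score": 700, "annual_income": 45000, "purchase_value": 520}))
```

Because the rules are data, a business user could change a threshold without touching the evaluation logic, which is the essence of the flexibility described above.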
This requires business and IT to work together to make improvements, and it requires a new architecture for the systems that drive the business.

“Agility increases when companies use the expertise of all of the major stakeholders to identify, understand and respond to accelerating change and disruption as it occurs.” – IBM Business Agility Study, April 2011

Operational Decision Management provides this flexibility, allowing the line of business to make changes to the decision logic as and when needed, in natural language. It is a strategic advantage and perhaps a vital missing link in your quest to define your organization as agile.
I encourage you to read more in this whitepaper that articulates how decision management provides the critical link to business agility. You can also follow the conversation on @BPMfromIBM. And feel free to comment below about how your organization is embracing agile methodology to define your business.
The post Business agility powered by decision management appeared first on Cloud computing news.
Source: Thoughts on Cloud

Running (and recording) fully automated GUI tests in the cloud

The problem

Software Factory is a
full-stack software development platform: it hosts repositories, a bug tracker and
CI/CD pipelines. It is the engine behind RDO’s CI pipeline,
but it is also very versatile and suited for all kinds of software projects. Also,
I happen to be one of Software Factory’s main contributors. :)

Software Factory has many cool features that I won’t list here, but among these
is a unified web interface that helps navigate through its components. Obviously
we want this interface thoroughly tested; ideally within Software Factory’s
own CI system, which runs on test nodes being provisioned on demand on an OpenStack
cloud (If you have read Tristan’s previous article,
you might already know that Software Factory’s nodes are managed and built
by Nodepool).

When it comes to testing web GUIs, Selenium is
quite ubiquitous because of its many features, among which:

* it works with most major browsers, on every operating system
* it has bindings for every major language, making it easy to write GUI tests in your language of choice.¹

¹ Our language of choice, today, will be python.

Due to the very nature of GUI tests, however, it is not easy to fully automate
Selenium tests into a CI pipeline:

* usually these tests are run on dedicated physical machines for each operating system to test, making them choke points and sacrificing resources that could be used somewhere else.
* a failing test usually means that there is a problem of a graphical nature; if the developer or the QA engineer does not see what happens, it is difficult to qualify and solve the problem. Therefore human eyes and validation are still needed to an extent.

Legal issues preventing running Mac OS-based virtual machines on non-Apple
hardware aside, it is
possible to run Selenium tests on virtual machines without need for a physical
display (aka “headless”) and also capture what is going on during these tests for
later human analysis.

This article will explain how to achieve this on linux-based distributions,
more specifically on CentOS.

Running headless (or “Look Ma! No screen!”)

The secret here is to install Xvfb (X virtual framebuffer) to emulate a display
in memory on our headless machine …

My fellow Software Factory dev team and I have configured Nodepool to provide us
with customized images based on CentOS on which to run any kind of
jobs. This makes sure that our test nodes are always “fresh”, in other words that
our test environments are well defined, reproducible at will and not tainted by
repeated tests.

The customization occurs through post-install scripts: if you look at our
configuration repository,
you will find the image we use for our CI tests is sfstack-centos-7 and its
customization script is sfstack_centos_setup.sh.

We added the following commands to this script in order to install
the dependencies we need:

```bash
sudo yum install -y firefox Xvfb libXfont Xorg jre
sudo mkdir /usr/lib/selenium /var/log/selenium /var/log/Xvfb
sudo wget -O /usr/lib/selenium/selenium-server.jar http://selenium-release.storage.googleapis.com/3.4/selenium-server-standalone-3.4.0.jar
sudo pip install selenium
```

The dependencies are:

* __Firefox__, the browser on which we will run the GUI tests
* __libXfont__ and __Xorg__ to manage displays
* __Xvfb__
* __JRE__ to run the __selenium server__
* the __python selenium bindings__

Then when the test environment is set up, we start the selenium server and Xvfb
in the background:

```bash
/usr/bin/java -jar /usr/lib/selenium/selenium-server.jar -host 127.0.0.1 >/var/log/selenium/selenium.log 2>/var/log/selenium/error.log &
Xvfb :99 -ac -screen 0 1920x1080x24 >/var/log/Xvfb/Xvfb.log 2>/var/log/Xvfb/error.log &
```

Finally, set the display environment variable to :99 (the Xvfb display) and run your tests:

```bash
export DISPLAY=:99
./path/to/seleniumtests
```

The tests will run as if the VM were plugged into a display.

Taking screenshots

With this headless setup, we can now run GUI tests on virtual machines within our
automated CI; but we need a way to visualize what happens in the GUI if a test
fails.

It turns out that the selenium bindings have a screenshot feature that we can use
for that. Here is how to define a decorator in python that will save a screenshot
if a test fails.

```python
import functools
import os
import unittest
from selenium import webdriver

# [...]

def snapshot_if_failure(func):
    @functools.wraps(func)
    def f(self, *args, **kwargs):
        try:
            func(self, *args, **kwargs)
        except Exception:
            path = '/tmp/gui/'
            if not os.path.isdir(path):
                os.makedirs(path)
            screenshot = os.path.join(path, '%s.png' % func.__name__)
            self.driver.save_screenshot(screenshot)
            raise
    return f

class MyGUITests(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.maximize_window()
        self.driver.implicitly_wait(20)

    @snapshot_if_failure
    def test_login_page(self):
        # (test steps elided)
        ...
```

If test_login_page fails, a screenshot of the browser at the time of the exception
will be saved under /tmp/gui/test_login_page.png.

Video recording

We can go even further and record a video of the whole testing session, as it
turns out that ffmpeg can capture X sessions with the “x11grab” option. This
is interesting beyond simply test debugging, as the video can be used to illustrate
the use cases that you are testing, for demos or fancy video documentations.

In order to have ffmpeg on your test node, you can either add
compilation steps to the
node’s post-install script or go the easy way and use an external repository:

```bash
# install ffmpeg
sudo rpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro
sudo rpm -Uvh http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-1.el7.nux.noarch.rpm
sudo yum update
sudo yum install -y ffmpeg
```

To record the Xvfb buffer, you’d simply run:

```bash
export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1$DISPLAY -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi
```

The catch is that ffmpeg expects the user to press q to stop the recording
and save the video (killing the process will corrupt the video). We can use
tmux to save the day; run your GUI tests like so:

```bash
export DISPLAY=:99
tmux new-session -d -s guiTestRecording 'export FFREPORT=file=/tmp/gui/ffmpeg-$(date +%Y%m%s).log && ffmpeg -f x11grab -video_size 1920x1080 -i 127.0.0.1'$DISPLAY' -codec:v mpeg4 -r 16 -vtag xvid -q:v 8 /tmp/gui/tests.avi && sleep 5'
./path/to/seleniumtests
tmux send-keys -t guiTestRecording q
```

Accessing the artifacts

Nodepool destroys VMs when their job is done in order to free resources (that is,
after all, the spirit of the cloud). That means that our pictures and videos will
be lost unless they’re uploaded to an external storage.

Fortunately Software Factory handles this: predefined publishers can be appended to our job definitions, one of which allows pushing any artifact to a Swift object store. We can then retrieve our videos and screenshots easily.

Conclusion

With little effort, you can now run your selenium tests on virtual hardware as
well to further automate your CI pipeline, while still ensuring human supervision.

Further reading

This article
helped a lot in setting up our selenium environment.
If you want to run your tests on docker containers rather than VMs, this article
explains how to configure Xvfb for
that.
Apparently Selenium can run on headless Windows VMs as well,
although I have not tested this.

Source: RDO

Boost IBM WebSphere with IBM Cloud Product Insights

Rapid deployment is the name of the game for IBM WebSphere Application Server (WAS). Speed is critical for companies that spin WebSphere instances up and down to accommodate the agility required by many types of projects. So how can an IT team manage, support, and keep track of all the different ways they’re using WAS?
You can use IBM Cloud Product Insights, a new software as a service (SaaS) offering that can help support product inventory management and show some of the essential usage metrics for each WebSphere instance.
Let’s say you are a product inventory controller or capacity planner responsible for keeping track of which products are used across your company. What types of reports do you get from your IT team? Are they automated and accurate, or is there a worrying margin of error that can be introduced with manual tracking? Are you aware of the WebSphere versions being used? Have the latest fixpacks been installed to ensure the reliability and security of your environment?
With the latest support for IBM WebSphere, there is now built-in functionality that connects to IBM Cloud Product Insights. You can track the inventory of all your WebSphere instances in a single dashboard.
IBM Cloud Product Insights was built to help alleviate inventory and tracking issues for rapidly changing IT infrastructure. In this way, it also facilitates extending WebSphere products to a hybrid cloud infrastructure. You can take advantage of the flexibility and resiliency of the Cloud and potentially forego buying additional licenses or acquiring new hardware. After deploying WebSphere to the cloud, IBM Cloud Product Insights will automatically update its dashboard view of WebSphere deployments, providing a dynamic and accurate view of your WebSphere environment.
Beyond inventory, there are also several key usage metrics that provide a high-level view of how these WebSphere instances are being used across your company. The intent is not to replace robust monitoring products, but to provide essential metrics at no cost.
You can see CPU and memory use by hardware, either real or virtual, as well as the servlet requests handled, giving you a view of WAS usage. These metrics provide enough of an indicator to understand how deployed WebSphere instances are being used, along with an indication of whether you might need to take a deeper look at performance issues.
Of course, none of this would be of value if you couldn’t continue to guarantee the security of your environment and the privacy of your data. IBM Cloud Product Insights provides gateway support for infrastructure running behind your company firewall. You also get the ability to audit all deployment and usage data sent to IBM Cloud Product Insights.
IBM Cloud Product Insights is available with WAS V8.5.5 or V9.0 and supports both traditional WAS and Liberty application servers. We encourage you to explore the possibilities.
The post Boost IBM WebSphere with IBM Cloud Product Insights appeared first on Cloud computing news.
Source: Thoughts on Cloud

Mobile app breathes new life into ancient cathedral

Preserving one of Europe’s most popular and iconic cathedrals, the Duomo di Milano, along with its associated Duomo Museum, Archaeological Area and Terraces, is no small task. Federal budget cuts have threatened to make things even harder for Veneranda Fabbrica del Duomo di Milano (VFD), the historic organization responsible for preservation and restoration of the cathedral.
The cathedral site dates back to 355 AD. VFD was founded in 1387 and has maintained contact with the arts, science and technologies used to build the monument, ensuring it retains the distinctive attributes of a continuing history.
Catalyst for change
A citywide event, Expo 2015, meant VFD would have the opportunity to reach more people than ever before. With more than 140 participating countries, the city welcomed in excess of 21 million visitors to its landmarks over the course of the six-month summit. The VFD knew it would need to upgrade its digital technologies to enhance the visitor experience.
VFD wanted to share the cathedral’s heritage with the world, especially with young people. Today’s tourists expect more than following a tour guide around or renting an audio device. Most people carry mobile devices and are accustomed to having nearly limitless information at their fingertips.
App transforms the tourist experience
In line with its technology strategy, VFD developed a mobile app, Duomo Milano, built on the secure and flexible IBM Bluemix virtual server platform.
The organization worked with IBM to develop a new infrastructure-as-a-service (IaaS) solution that was designed for reliability and provided a scalable platform for developing a feature-rich application for visitors to the Duomo and its ancillary attractions. The solution is hosted in an IBM Cloud Data Center, and VFD can easily manage unpredictable download volumes and support a high number of connections because private, public and management traffic travel on separate network interfaces.
The app transforms the tourist experience, providing visitors with interactive tours of the cathedral and access to enhanced content, including a 360-degree interactive map of the Milan skyline. During the first week of Expo 2015, users downloaded the app more than 800 times, and it maintained an average rating of five stars, exceeding even our expectations. It set the stage for an extremely successful first three months with a total of 15,000 downloads.
The app uses real-time location information to make sales suggestions and drives potential donors to our Get Your Spire fund-raising campaign. Recently, we incorporated The Weather Company data for IBM Bluemix API into the app. This added functionality collects weather data and sends a push notification to app users whenever they are near the cathedral and conditions are favorable for an excursion to the Duomo Terraces.
Seeing the Duomo di Milano in a modern way
For VFD, digital transformation mainly means new ways of communicating with tourists from all over the world, not only about the Duomo, but also about fund-raising initiatives to support the sites. Cloud represents a total paradigm shift in IT. Its potential goes beyond technological innovation: it provides extraordinary leverage, enabling new ways of doing business, creating value and enhancing existing services.
VFD wanted to share this incredible experience with the world, and IBM created the structure that enables it. The app has improved the way the Duomo di Milano is perceived among young and tech-savvy audiences by providing them with an interesting and modern experience as they tour this ancient treasure. With help from IBM, today we can tell a real story made by the Milanese.
Learn more about the Duomo di Milano and download the app.
The post Mobile app breathes new life into ancient cathedral appeared first on Cloud computing news.
Source: Thoughts on Cloud

What’s new in Red Hat OpenStack Platform 11?

We are happy to announce that Red Hat OpenStack Platform 11 is now Generally Available (GA).
Version 11 is based on the upstream OpenStack release, Ocata, the 15th release of OpenStack. It brings a plethora of features, enhancements, bugfixes, documentation improvements and security updates. Red Hat OpenStack Platform 11 contains the additional usability, hardening and support that all Red Hat releases are known for. And with key enhancements to Red Hat OpenStack Platform’s deployment tool, Red Hat OpenStack Director, deploying and upgrading enterprise, production-ready private clouds has never been easier. 
So grab a nice cup of coffee or other tasty beverage and sit back as we introduce some of the most exciting new features in Red Hat OpenStack Platform 11!

Composable Upgrades
By far, the most exciting addition brought by Red Hat OpenStack Platform 11 is the extension of composable roles to now include composable upgrades.
But first, composable roles
As a refresher, a composable role is a collection of services that are grouped together to deploy the Overcloud’s main components. There are five default roles (Controller, Compute, BlockStorage, ObjectStorage, and CephStorage) allowing most common architectural scenarios to be achieved out of the box. Each service in a composable role is defined by an individual Heat template following a standardised approach that ensures services implement a basic set of input parameters and output values. With this approach these service templates can be more easily moved around, or composed, into a custom role. This creates greater flexibility around service placement and management.
And now, composable upgrades …
Before composable roles, upgrades were managed via a large set of complex code to ensure all steps were executed properly. By decomposing the services into smaller, standardized modules, the upgrade logic can be moved out of the monolithic and complex script and into the service template directly. This is done by a complete refactoring of the upgrade procedure into modular snippets of Ansible code which can then be integrated and orchestrated by Heat. To do this, each service’s template has a collection of Ansible plays to handle the upgrade steps and actions. Each Ansible play has a tagged value to allow Heat to step through the code and execute in a precise and controlled order. This is the same methodology used by Puppet and the “step_config” parameter already found in the “outputs” section of each service template.
Heat iterates through the roles and services and joins the services’ upgrade plays together into a larger playbook. It then executes the plays, by tag, moving through the upgrade procedure.
For example, take a look at Pacemaker’s upgrade_tasks section (from tripleo-heat-templates/puppet/services/pacemaker.yaml):
     upgrade_tasks:
       - name: Check pacemaker cluster running before upgrade
         tags: step0,validation
         pacemaker_cluster: state=online check_and_fail=true
         async: 30
         poll: 4
       - name: Stop pacemaker cluster
         tags: step2
         pacemaker_cluster: state=offline
       - name: Start pacemaker cluster
         tags: step4
         pacemaker_cluster: state=online
       - name: Check pacemaker resource
         tags: step4
         pacemaker_is_active:
           resource: "{{ item }}"
           max_wait: 500
         with_items: {get_param: PacemakerResources}
       - name: Check pacemaker haproxy resource
         tags: step4
         pacemaker_is_active:
           resource: haproxy
           max_wait: 500
         when: {get_param: EnableLoadBalancer}

Heat executes the play for step0, then step1, then step2 and so on. This is just like running ansible-playbook with the -t or --tags option to only run plays tagged with these values.
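The ordering model can be illustrated with a small Python sketch. This is an illustration of the step/tag iteration only, not actual Heat or Ansible code; the task names are taken from the Pacemaker example above.

```python
# Sketch of tag-stepped orchestration: every service contributes tagged
# tasks, and the orchestrator runs all tasks matching "step0" across all
# services before moving on to "step1", and so on.
def run_upgrade(services, max_step=5):
    executed = []
    for step in range(max_step + 1):
        tag = f"step{step}"
        for service, tasks in services.items():
            for task in tasks:
                if tag in task["tags"]:
                    executed.append((tag, service, task["name"]))
    return executed

services = {
    "pacemaker": [
        {"name": "check cluster", "tags": {"step0", "validation"}},
        {"name": "stop cluster", "tags": {"step2"}},
        {"name": "start cluster", "tags": {"step4"}},
    ],
}
```

The key property is that ordering is global across services: every service finishes a given step before any service begins the next one.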
Composable upgrades help to support trustworthy lifecycle management of deployments by providing a stable upgrade path between supported releases. They offer simplicity and reliability to the upgrade process and the ability to easily control, run and customize upgrade logic in a modular and straightforward way.
Increased “Day 0” HA (Pacemaker) Service placement flexibility
New in version 11, deployments can use composable roles for all services. This means the remaining pacemaker-managed services, such as RabbitMQ and Galera, which traditionally had to be co-located on a single controller node, can now be deployed as custom roles to any nodes. This allows operators to move core service layers to dedicated nodes, increasing security, scale, and service design flexibility.
Please note: Due to the complex nature of changing the pacemaker-managed services in an already running Overcloud, we recommend consulting Red Hat support services before attempting to do so.
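For example, a roles_data.yaml customization might pull the messaging and database layers out onto dedicated roles. The role names below are hypothetical sketches; the service entries follow the tripleo-heat-templates naming scheme, and a real role would carry additional supporting services.

```yaml
# Hypothetical roles: run RabbitMQ and Galera on their own
# pacemaker-managed nodes instead of on the Controller.
- name: Messaging
  ServicesDefault:
    - OS::TripleO::Services::Pacemaker
    - OS::TripleO::Services::RabbitMQ
- name: Database
  ServicesDefault:
    - OS::TripleO::Services::Pacemaker
    - OS::TripleO::Services::MySQL
```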
Improvements for NFV
Co-location of Ceph on Compute now supported in production (GA)
Co-locating Ceph on Nova is done by placing the Ceph Object Storage Daemons (OSDs) directly on the compute nodes. Co-location lowers many cost and complexity barriers for workloads that have minimal and/or predictable storage I/O requirements by reducing the number of total nodes required for an OpenStack deployment. Hardware previously dedicated for storage-specific requirements can now be utilized by the compute footprint for increased scale. With version 11 co-located storage is also now fully supported for deployment by director as a composable role. Operators can more easily perform detailed and targeted deployments of co-located storage, including technologies such as SR-IOV, all from a custom role. The process is fully supported with comprehensive documentation and tuning support (track this BZ for version 11 specifics).
For Telcos, support for co-locating storage can be helpful for optimizing workloads and deployment architectures on a varied range of hardware and networking technologies within a single OpenStack deployment.
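As a sketch, co-location is expressed by adding the Ceph OSD service to the Compute role's service list in roles_data.yaml. This excerpt is abbreviated; a real Compute role contains many more services.

```yaml
# Co-locate Ceph OSDs on compute nodes by composing the CephOSD
# service into the Compute role.
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::CephOSD
    # ... remaining default Compute services ...
```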
VLAN-Aware VMs now supported in production (GA)
A VLAN-aware VM, or more specifically a "Neutron trunk port," is how an OpenStack instance can support VLAN-tagged frames across a single vNIC. This allows an operator to use fewer vNICs to access many separate networks, significantly reducing complexity by removing the need for one vNIC per network. Neutron does this by allowing subports off the original parent port, effectively turning the parent port into a virtual trunk. Each subport can have its own segmentation ID assigned directly to it, allowing an operator to give each port its own VLAN.

(Image courtesy of https://wiki.openstack.org/wiki/Neutron/TrunkPort; used under Creative Commons)
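Against a running cloud, creating a trunk might look like the following sketch using the openstack CLI (the network, port, and trunk names are hypothetical, and the commands assume a live deployment with the trunk extension enabled):

```shell
# Parent port on the untagged network; the instance boots with this vNIC.
openstack port create --network project-net parent0
# Turn the parent port into a trunk.
openstack network trunk create --parent-port parent0 trunk0
# Add a subport carrying VLAN 100 traffic for a second network.
openstack port create --network vlan-net subport100
openstack network trunk set trunk0 \
  --subport port=subport100,segmentation-type=vlan,segmentation-id=100
```

Inside the guest, traffic for the subport then arrives tagged with VLAN 100 on the same vNIC as the untagged parent traffic.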
Version bumps for key virtual networking technologies
DPDK now version 16.11
DPDK 16.11 brings non-uniform memory access (NUMA) awareness to openvswitch-dpdk deployments. Virtual host devices comprise multiple types of memory, all of which should be allocated on the same physical NUMA node. 16.11 uses NUMA awareness to achieve this in some of the following ways:

16.11 removes the requirement for a single device-tracking node which often creates performance issues by splitting memory allocations when VMs are not on that node
NUMA IDs can now be dynamically derived, and that information is used by DPDK to correctly place all memory types on the same node
DPDK now sends NUMA node information for a guest directly to Open vSwitch (OVS) allowing OVS to allocate memory more easily on the correct node
16.11 removes the requirement for poll mode driver (PMD) threads to be on cores of the same NUMA node. PMDs can now be on the same node as a device’s memory allocations

Open vSwitch now version 2.6
OVS 2.6 lays the groundwork for future performance and virtual network requirements required for NFV deployments, specifically in the ovs-dpdk deployment space. Immediate benefits are gained by currency of features and initial, basic OVN support. See the upstream release notes for full details.
CloudForms Integration
Red Hat OpenStack Platform 11 remains tightly integrated with CloudForms. It has been fully tested and supports features such as:

Tenant Mapping: finds and lists all OpenStack tenants as CloudForms tenants and keeps them in sync. Creation, update, and deletion of CloudForms tenants are reflected in OpenStack and vice versa
Multisite support where one OpenStack region is represented as one cloud provider in CloudForms
Multiple domains support where one domain is represented as one cloud provider in CloudForms
Cinder Volume Snapshot Management can be done at the volume or instance level. A snapshot is a whole new volume, and you can instantiate a new instance from it, all from CloudForms

OpenStack Lifecycle: Our First “Sequential” Release
Long Life review …
With OSP 10 we introduced the concept of the Long Life release. Long Life releases allow customers who are happy with their current release and without any pressing need for specific feature updates to remain supported for up to five years. We have designated every 3rd release as Long Life. For instance, versions 10, 13, and 16 are Long Life, while versions 11, 12, 14 and 15 are sequential. Long Life releases allow for upgrades to subsequent Long Life releases (for example, 10 to 13 without stepping through 11 and 12). Long Life releases generally have an 18 month cadence (three upstream cycles) and do require additional hardware for the upgrade process. Also, while procedures and tooling will be provided for this type of upgrade, it is important to note that some outages will occur.
Now, Introducing … Sequential!
Red Hat OpenStack Platform 11 is the first "sequential" release. It is supported for one year and is released immediately into the "Production Phase 2" release classification. All upgrades for this type of release must be done sequentially (i.e. N+1). Sequential releases feature tighter integration with upstream projects and allow customers to quickly test new features and to deploy using their own knowledge of continuous integration and agile principles. Upgrades are generally done without major workload interruption; these releases suit customers who typically run multiple datacenters and/or have highly demanding performance requirements. For more details see Red Hat OpenStack Platform Lifecycle (detailed FAQ as pdf) and Red Hat OpenStack Platform Director Life Cycle.
Additional notable new features of version 11
A new Ironic inspector plugin can process Link Layer Discovery Protocol (LLDP) packets received from network switches during deployment. This can significantly help deployers to understand the existing network topology during a deployment and reduces trial-and-error by helping to validate the actual physical network setup presented to a deployment. All data is collected automatically and stored in an accessible format in the Undercloud’s Swift install.
There is now full support for collectd agents to be deployed to the Overcloud from director using composable roles. Performance monitoring is now easier to do as collectd joins the other fully supported OpsTools services for availability monitoring (sensu) and log management (fluentd) present starting with version 10.
And please remember, these are agents, not the full server-side implementations. Check out how to implement the server components easily with Ansible by going to the CentOS OpsTools Special Interest Group for all the details.
Additional features landing as Tech Preview
Tech Preview Features should not be implemented in production. For full details please see: https://access.redhat.com/support/offerings/techpreview/
Octavia
Octavia brings a robust and mature LBaaS v2 API driver to OpenStack and will eventually replace the legacy HAProxy namespace driver currently found in Newton. It will become not only a load balancing driver but also the load balancing API hosting all the other drivers. Octavia is now a top-level project outside of Neutron; for more details see this excellent update talk from the recent OpenStack Summit in Boston.
Octavia implements load balancing via a group of virtual machines (or containers or bare metal servers) controlled via a controller called "Amphora." It manages, among other things, the images used for the balancing engine. In Ocata, Amphora introduces image support for Red Hat Enterprise Linux, CentOS and Fedora. Amphora images (collectively known as amphorae) utilize HAProxy to implement load balancing. For full details of the design, consult the Component Design document.
To allow Red Hat OpenStack Platform users to try out this new implementation in a non-production environment operators can deploy a Technology Preview with director starting with version 11.
Please Note: Octavia’s director-based implementation is currently scheduled for a z-stream release for Red Hat OpenStack Platform Version 11. This means that while it won’t be available on the day of the release it will be added to it shortly. However, please track the following bugzilla, as things may change at the last moment and affect this timing.
OpenDaylight
Red Hat OpenStack Platform 11 builds on the OpenDaylight (ODL) support introduced in version 10 by adding deployment of the OpenDaylight Boron SR2 release through director using a composable role.
Ceph block storage replication
The Cinder RADOS Block Device (RBD) driver was updated to support RBD mirroring (promoting and demoting a volume's location), allowing customers to more easily manage and replicate their data via the Cinder API. This supports essential concepts in disaster recovery.
Cinder Service HA 
Until now the cinder-volume service could run only in Active/Passive HA fashion. In version 11, the Cinder service received numerous internal fixes around locks, job distribution, cleanup, and data corruption protection to allow for an Active/Active implementation. Having a highly available Cinder implementation may be useful for uptime reliability and throughput requirements.
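Active/active operation is configured (for drivers that support it) by giving each cinder-volume service the same cluster name in cinder.conf. A sketch, with hypothetical host and cluster values:

```ini
[DEFAULT]
# Same value on every node that should share backends active/active.
cluster = volume-cluster-1
# Still unique per node.
host = volume-node-1
```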
To sum it all up
Red Hat OpenStack Platform 11 brings important enhancements to all facets of cloud deployment, operations, and management. With solid and reliable upgrade logic enterprises will find moving to the next version of OpenStack is easier and smoother with a lower chance for disruption. The promotion of important features to full production support (GA) keeps installs current and supported while the introduction of new Technology Preview features gives an accessible glimpse into the immediate future of the Red Hat OpenStack Platform.
More info
For more information about Red Hat OpenStack Platform please visit the technology overview page, product documentation, release notes and release announcement.
To see what others are doing with Red Hat OpenStack Platform check out these use cases. 
And don’t forget you can evaluate Red Hat OpenStack Platform for free for 60 days to see all these features in action.
Source: Red Hat Stack