RDO Train Released

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Train for RPM-based distributions, CentOS Linux and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Train is the 20th release from the OpenStack project, which is the work of more than 1115 contributors from around the world.
The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/7/cloud/x86_64/openstack-train/. While we normally also make the release available via http://mirror.centos.org/altarch/7/cloud/ppc64le/ and http://mirror.centos.org/altarch/7/cloud/aarch64/, there have been issues with the mirror network which are currently being addressed via https://bugs.centos.org/view.php?id=16590.
The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Linux and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS Linux users looking to build and maintain their own on-premises, public or hybrid clouds.
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first.
PLEASE NOTE: At this time, RDO Train provides packages for CentOS 7 only. We plan to move RDO to CentOS 8 as soon as possible during the Ussuri development cycle, so Train will be the last release that works on CentOS 7.

Interesting things in the Train release include:

OpenStack-Ansible, which provides Ansible playbooks and roles for deployment, added Murano support and fully migrated to systemd-journald from rsyslog. This project makes it possible to deploy OpenStack from source in a way that is scalable while also being simple to operate, upgrade, and grow.
Ironic, the Bare Metal service, aims to produce an OpenStack service and associated libraries capable of managing and provisioning physical machines in a security-aware and fault-tolerant manner. Among a myriad of other highlights, this project added basic support for building software RAID and now offers a new tool for building ramdisk images, ironic-python-agent-builder.

Other improvements include:

Tobiko is now available within RDO! This project is an OpenStack testing framework focusing on areas mostly complementary to Tempest. While Tempest's main focus has been testing the OpenStack REST APIs, Tobiko's main focus is testing OpenStack system operations while “simulating” the use of the cloud as a final user would. Tobiko's test cases populate the cloud with workloads such as instances, allow the CI workflow to perform an operation such as an update or upgrade, and then run test cases to validate that the cloud workloads are still functional.
Other highlights of the broader upstream OpenStack project can be read at https://releases.openstack.org/train/highlights.html.

Contributors
During the Train cycle, we saw the following new RDO contributors:

Joel Capitao
Zoltan Caplovic
Sorin Sbarnea
Sławek Kapłoński
Damien Ciabrini
Beagles
Soniya Vyas
Kevin Carter (cloudnull)
fpantano
Michał Dulko
Stephen Finucane
Sofer Athlan-Guyot
Gauvain Pocentek
John Fulton
Pete Zaitcev

Welcome to all of you and Thank You So Much for participating!
But we wouldn’t want to overlook anyone. A super massive Thank You to all 65 contributors who participated in producing this release. This list includes commits to rdo-packages and rdo-infra repositories:

Adam Kimball
Alan Bishop
Alex Schultz
Alfredo Moralejo
Arx Cruz
Beagles
Bernard Cafarelli
Bogdan Dobrelya
Brian Rosmaita
Carlos Goncalves
Cédric Jeanneret
Chandan Kumar
Damien Ciabrini
Daniel Alvarez
David Moreau Simard
Dmitry Tantsur
Emilien Macchi
Eric Harney
fpantano
Gael Chamoulaud
Gauvain Pocentek
Jakub Libosvar
James Slagle
Javier Peña
Joel Capitao
John Fulton
Jon Schlueter
Kashyap Chamarthy
Kevin Carter (cloudnull)
Lee Yarwood
Lon Hohberger
Luigi Toscano
Luka Peschke
marios
Martin Kopec
Martin Mágr
Matthias Runge
Michael Turek
Michał Dulko
Michele Baldessari
Natal Ngétal
Nicolas Hicher
Nir Magnezi
Otherwiseguy
Gabriele Cerami
Pete Zaitcev
Quique Llorente
Radomiropieralski
Rafael Folco
Rlandy
Sagi Shnaidman
shrjoshi
Sławek Kapłoński
Sofer Athlan-Guyot
Soniya Vyas
Sorin Sbarnea
Stephen Finucane
Steve Baker
Steve Linabery
Tobias Urdin
Tony Breeds
Tristan de Cacqueray
Victoria Martinez de la Cruz
Wes Hayutin
Yatin Karel
Zoltan Caplovic

The Next Release Cycle
At the end of one release, focus shifts immediately to the next, Ussuri, which has an estimated GA the week of 11-15 May 2020. The full schedule is available at https://releases.openstack.org/ussuri/schedule.html.
Twice during each release cycle, RDO hosts official Test Days shortly after the first and third milestones; therefore, the upcoming test days are 19-20 December 2019 for Milestone One and 16-17 April 2020 for Milestone Three.
Get Started
There are three ways to get started with RDO.
To quickly spin up a proof-of-concept cloud on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works; a minimal sketch of the quickstart follows.
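As a sketch, assuming a fresh, up-to-date CentOS 7 node (see the RDO quickstart documentation for the full prerequisites, such as network configuration), the installation boils down to a few commands:

```
# Enable the RDO Train repository from CentOS Extras
sudo yum install -y centos-release-openstack-train
# Bring the system fully up to date
sudo yum update -y
# Install the Packstack installer
sudo yum install -y openstack-packstack
# Deploy an all-in-one OpenStack cloud on this node
sudo packstack --allinone
```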
For a production deployment of RDO, use the TripleO Quickstart and you’ll be running a production cloud in short order.
Finally, for those who don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world.
Get Help
The RDO Project participates in a Q&A service at https://ask.openstack.org. We also have the users@lists.rdoproject.org mailing list for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev@lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing list archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org.
The #rdo channel on Freenode IRC is also an excellent place to find and give help.
We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net); however, we have a more focused audience within the RDO venues.
Get Involved
To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation.
Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube.
Source: RDO

Installing Service Mesh is Only the Beginning

A microservices architecture breaks the monolithic application up into many smaller pieces and introduces new communication patterns between services, such as fault tolerance and dynamic routing. One of the major challenges in managing a microservices architecture is understanding how services are composed, how they are connected and how all the individual components operate, from a global perspective down to a particular detail.
Besides the advantages of breaking services down into microservices (such as agility, scalability, increased reusability, better testability, and easy upgrades and versioning), this paradigm also increases the complexity of securing them, because in-process method calls turn into many separate network requests that all need to be secured. Every new service you introduce needs to be protected from man-in-the-middle attacks and data leaks, and you need to manage access control and audit who is using which resources and when. Add to this the fact that each service can be written in a different programming language. A Service Mesh like Istio provides traffic control and communication security capabilities at the platform level and frees the application writers from those tasks, allowing them to focus on business logic.
But even though the Service Mesh helps to offload that extra coding, developers still need to observe and manage how the services are communicating as they deploy an application. With OpenShift Service Mesh, Kiali is packaged along with Istio to make that task easier. In this post we will show how to use Kiali's capabilities to observe and manage an Istio Service Mesh. We will use a reference demo application to demonstrate how Kiali can compare different service versions and how you can configure traffic routing using Istio config resources. Then we will add mutual TLS to all the demo components in order to make our deployment communications more secure. Kiali will assist in this process, helping to spot misconfigurations and unprotected communications.
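As a preview of that last step, here is a rough sketch of what enabling namespace-wide mutual TLS looks like with the Istio APIs current at the time of this release. The travel-agency namespace comes from the demo described below; the resource names are illustrative:

```yaml
# Require mTLS for workloads in the namespace ...
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: travel-agency
spec:
  peers:
  - mtls: {}
---
# ... and tell clients to use mTLS when calling them.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: enable-mtls
  namespace: travel-agency
spec:
  host: "*.travel-agency.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```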
How Does Kiali Work? Using A/B Testing as an Example
One pretty common exercise that a Service Mesh is perfect for is A/B testing to compare application versions, and with a microservices application this can be more complex than with a single monolith. Let's walk through it with a reference demo application.
Travel Agency Demo
Travel Agency is a demo microservices application that simulates a travel portal scenario. It creates two namespaces:

The first namespace, travel-portal, deploys an application per city, representing a personalized portal where users search for and book travel; in our example we have three applications: paris-portal, rome-portal and london-portal. Every portal application has two services: a web service that handles users from the web channel, and a vip service that takes requests from priority channels such as special travel brokers. All portals consume a travel quote service hosted in the second namespace.
The second namespace, travel-agency, hosts the services that calculate travel quotes. A main travels service is the business entry point for the travel agency. It receives a city and a user as parameters and calculates all the elements that make up a travel budget: airfare, lodging, car reservation and travel insurance. Several services calculate the separate prices, and the travels service is responsible for aggregating them into a single response. Additionally, some users, such as vip users, have access to special discounts, managed by a dedicated discounts service.

The interaction between all the services in the example is shown in the following picture:

In the next steps we are going to deploy a new version of every service in the travel-agency namespace, running in parallel with the first version. Let's imagine that the new version adds features that we want to test with live users so we can compare the results. Obviously, in the real world this could be complex and highly dependent on the domain, but for our example we will focus on the response time the portals see, assuming that a slower portal will cause our users to lose interest.
One of the first things we can do in Kiali is enable Response time labels on the Graph:

The graph helps us identify services that might have problems. In our example everything is green and healthy, but the response times raise the suspicion that the new version 2 has some slower features compared with version 1.
Our next stop is to take a closer look at the travels application metrics:

Under the Inbound Metrics tab we have data about the portal calls; Kiali can show metrics split by several criteria. Grouping by app shows that all portals have seen increased response times since the moment version 2 was deployed.

If we show Inbound Metrics grouped by app and version, we spot an interesting difference: response time has increased in general, but the portals that handle vip users behave worse.
Also, we can continue using Kiali to investigate and correlate these results with traces:

And also with logs from the workloads, if more information is needed:

Taking Action with Kiali
From our investigation phase we have spotted slower response times from version 2, and even slower ones for vip user requests.
There are multiple possible strategies from here, such as undeploying the whole version 2, partially deploying version 2 service by service, limiting which users can access the new version, or a combination of all of those.
In our case, we are going to show how we can use Kiali Actions to add Istio traffic routing to our example, which can help to implement some of the previous strategies.
Matching routing
The first action we can perform is to add Istio resources that route traffic coming from vip users to version 2 and the rest of the requests to version 1.
Kiali allows you to create Istio configuration from a high-level user interface. From the actions located in the service details we can open the Matching Routing Wizard and discriminate requests using headers, as shown in the picture:
Kiali will create the specific VirtualService and DestinationRule resources under the service. As part of our strategy we will add similar rules for the suspected services: travels, flights, hotels, cars and insurances. A rough sketch of the kind of resources Kiali generates is shown below.
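For reference, the generated configuration looks roughly like this sketch for the travels service. The header name (portal) is illustrative, since the exact header to match depends on what the portals send:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: travels
  namespace: travel-agency
spec:
  hosts:
  - travels
  http:
  # Requests carrying the vip header go to version 2 ...
  - match:
    - headers:
        portal:
          exact: vip
    route:
    - destination:
        host: travels
        subset: v2
  # ... and everything else stays on version 1.
  - route:
    - destination:
        host: travels
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: travels
  namespace: travel-agency
spec:
  host: travels
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```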
When we have finished creating matching routing for our version 2 services, we can check that Kiali has created the correct Istio configuration using the “Istio Config” section:

Once this routing configuration is applied we can see the results in the Response time edges of the Graph:

Now in our example, all traffic coming from vip portals is routed to version 2, while the rest of the traffic uses the previous version 1, which has returned to its normal response time. The graph also shows that vip user requests carry extra load, as they need to access the discounts service.
Weighted routing
If we examine the discounts service, we can see big differences in response time between version 1 and version 2:

Once we have spotted a clear cause of the slower responses, we can decide to move most of the traffic to version 1 but keep some of the traffic on version 2 to gather more data and observe the differences. This helps avoid too much impact on the overall performance of the app.
We can use the Weighted Routing Wizard to send 90% of the traffic to version 1 and keep only 10% for version 2; a sketch of the resulting configuration follows:
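The resulting VirtualService would look roughly like this sketch (resource and subset names are illustrative, and the matching DestinationRule subsets are assumed to exist as in the earlier example):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: discounts
  namespace: travel-agency
spec:
  hosts:
  - discounts
  http:
  - route:
    # Keep most traffic on the known-good version ...
    - destination:
        host: discounts
        subset: v1
      weight: 90
    # ... while still collecting data from version 2.
    - destination:
        host: discounts
        subset: v2
      weight: 10
```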
Once the Istio configuration is created we can enable Request percentage in the graph and examine the discounts service:

Suspend traffic
Kiali also allows you to suspend traffic partially or totally for a specific destination using the Suspend Traffic Wizard:

This action allows you to stop traffic for a specific workload, in other words, to distribute the traffic among the rest of the connected workloads. The user can also stop the whole service by returning a fast HTTP 503 error, implementing a “fail sooner” strategy that lets the application recover rather than letting slow requests flood it. A sketch of the total-suspension case follows.
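For the whole-service case, the generated configuration is roughly equivalent to a VirtualService that injects an HTTP 503 abort fault for all requests; this sketch is illustrative:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: discounts
  namespace: travel-agency
spec:
  hosts:
  - discounts
  http:
  # Abort 100% of requests with a fast 503 instead of
  # letting slow calls pile up in the callers.
  - fault:
      abort:
        httpStatus: 503
        percentage:
          value: 100
    route:
    - destination:
        host: discounts
        subset: v1
```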
Make the Service Mesh Work for You
Microservices scenarios demand good observability tooling and practices. In this post we have shown how to combine Kiali's capabilities to observe a mesh, correlate traces via the Jaeger integration, define strategies, and perform actions on an Istio-based microservices application.
The Service Mesh is a great tool that solves complex problems introduced by the microservices paradigm. As a component of the OpenShift Service Mesh, Kiali provides the key observability features you need to truly understand all the telemetry and distributed tracing coming out of the service mesh. Kiali does the work of correlating and processing the status of the Service Mesh, which makes it easy to check on the mesh at a glance.
No longer do you have to deal with separate consoles or understand how to configure special dashboards. You don't have to learn how to fetch traces for each service or understand which rules are needed to apply an A/B test. Kiali builds a topology from traffic metrics and combines multiple namespaces in a single view. Using animations, users can identify a bottleneck, mapped to a slow animation in the graph. And it is all seamlessly integrated into the OpenShift Service Mesh console.
With OpenShift Service Mesh and Kiali, developers have the tools they need to offload the complexity of creating and managing the intricate interservice communications that are the glue of a microservices-based application.
Source: OpenShift

5 key elements for a successful cloud migration

As organizations continue to increase their cloud investment to drive business forward, cloud adoption has become integral to IT optimization. Cloud migration allows businesses to be more agile, fix inefficiencies and provide better customer experiences. Today, stability and flexibility have never been more necessary.
An optimized IT infrastructure is different for each organization, but it often consists of a combination of public cloud, private cloud and traditional IT environments. In an IDG cloud computing survey, 73 percent of key IT decision-makers reported having already adopted this combination of cloud technology, and another 17 percent intended to do so in the next 12 months.
However, for businesses that are worried about disruption to their operations, adopting a cloud infrastructure doesn’t have to be an all-or-nothing proposition. Companies can start reaping the benefits from cloud technologies while continuing to run assets on existing on-premises environments by incorporating applications into a hybrid cloud model.
5 steps to successfully migrate applications to the cloud
Migrating to a cloud environment can help improve operational performance and agility, workload scalability, and security. From virtually any source, businesses can migrate workloads and quickly begin capitalizing on the following hybrid cloud benefits:

Greater agility with IT resources on demand, which enables companies to scale during unexpected surges or seasonal usage patterns.
Reduced capital expenditure by shifting from a capital expense model to a pay-as-you-go approach.
Enhanced security with various options throughout the stack—from physical hardware and networking to software and people.

Before embarking on the cloud migration process, use the steps below to gain a clear understanding of what’s involved.
1. Develop a strategy.
This should be done early and in a way that prioritizes business objectives over technology.
2. Identify the right applications.
Not all apps are cloud friendly. Some perform better on private or hybrid clouds than on a public cloud. Some may need minor tweaking while others need in-depth code changes. A full analysis of architecture, complexity and implementation is easier to do before the migration than after.
3. Secure the right cloud provider.
A key aspect of optimization will involve selecting a cloud provider that can help guide the cloud migration process during the transition and beyond. Some key questions to ask include: What tools, including third-party, does the cloud provider have available to help make the process easier? Can it support public, private and multicloud environments at any scale? How can it help businesses deal with complex interdependencies, inflexible architectures or redundant and out-of-date technology?
4. Maintain data integrity and operational continuity.
Managing risk is critical, and sensitive data can be exposed during a cloud migration. Post-migration validation of business processes is crucial to ensure that automated controls are producing the same outcomes without disrupting normal operations.
5. Adopt an end-to-end approach.
Service providers should have a robust and proven methodology to address every aspect of the migration process. This should include the framework to manage complex transactions on a consistent basis and on a global scale. Make sure to spell all of this out in the service-level agreement (SLA) with agreed-upon milestones for progress and results.
IT optimization drives digital transformation
The results of IT optimization, including accelerated adoption, cost-effectiveness, and scalability, will help drive business innovation and digital transformation. Adopting cloud in a phased approach, carefully considering which applications and workloads to migrate, can help companies obtain these benefits without disrupting business operations.
Learn more about improving IT flexibility and IBM services for extending workloads to cloud by reading the smart paper “Optimize IT: Accelerate digital transformation”.
Source: Thoughts on Cloud