Community Blog Round Up 13 August 2019

Making Host and OpenStack iSCSI devices play nice together by geguileo

OpenStack services assume that they are the sole owners of the iSCSI connections to the iSCSI portal-targets generated by the Cinder driver, and that is fine 98% of the time, but what happens when we also want to have other non-OpenStack iSCSI volumes from that same storage system present on boot? In OpenStack the OS-Brick […]

Read more at https://gorka.eguileor.com/host-iscsi-devices/
Service Assurance on small OpenShift Cluster by mrunge

This article is intended to give an overview on how to test the […]

Read more at http://www.matthias-runge.de/2019/07/09/Service-Assurance-on-ocp/
Notes on testing a tripleo-common mistral patch by JohnLikesOpenStack

I recently ran into bug 1834094 and wanted to test the proposed fix. These are my notes if I have to do this again.

Read more at http://blog.johnlikesopenstack.com/2019/07/notes-on-testing-tripleo-common-mistral.html
Developer workflow with TripleO by Emilien

In this post we’ll see how one can use TripleO for developing & testing changes into OpenStack Python-based projects (e.g. Keystone).

Read more at https://my1.fr/blog/developer-workflow-with-tripleo/
Avoid rebase hell: squashing without rebasing by OddBit

You’re working on a pull request. You’ve been working on a pull request for a while, and due to lack of sleep or inebriation you’ve been merging changes into your feature branch rather than rebasing. You now have a pull request that looks like this (I’ve marked merge commits with the text [merge]):

Read more at https://blog.oddbit.com/post/2019-06-17-avoid-rebase-hell-squashing-wi/
Git Etiquette: Commit messages and pull requests by OddBit

Always work on a branch (never commit on master) When working with an upstream codebase, always make your changes on a feature branch rather than your local master branch. This will make it easier to keep your local master branch current with respect to upstream, and can help avoid situations in which you accidentally overwrite your local changes or introduce unnecessary merge commits into your history.

Read more at https://blog.oddbit.com/post/2019-06-14-git-etiquette-commit-messages/
Running Keystone with Docker Compose by OddBit

In this article, we will look at what is necessary to run OpenStack’s Keystone service (and the requisite database server) in containers using Docker Compose.

Read more at https://blog.oddbit.com/post/2019-06-07-running-keystone-with-docker-c/
The Kubernetes in a box project by Carlos Camacho

Implementing cloud computing solutions that run in hybrid environments might be the final answer when it comes to finding the best benefit/cost ratio.

Read more at https://www.anstack.com/blog/2019/05/21/kubebox.html
Running Relax-and-Recover to save your OpenStack deployment by Carlos Camacho

ReaR is a pretty impressive disaster recovery solution for Linux. Relax-and-Recover creates both a bootable rescue image and a backup of the associated files you choose.

Read more at https://www.anstack.com/blog/2019/05/20/relax-and-recover-backups.html
Source: RDO

A Quick Reader Survey

If you have the time, we’d appreciate it if you could answer 4 simple questions for us. We’re working to improve the OpenShift blog overall, with the end goal of best serving our readers what they are interested in. Much like a magazine, the more we know who our readers are, the easier it is […]
The post A Quick Reader Survey appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Disaster Recovery Strategies for Red Hat OpenShift

This is a guest post written by Gou Rao, CTO of Portworx As increasingly complex applications move to the Red Hat OpenShift platform, IT teams should have disaster recovery (DR) processes in place for business continuity in the face of widespread outages. These are not theoretical concerns. Many industries are subject to regulations that require […]
The post Disaster Recovery Strategies for Red Hat OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift

IBM is bringing Red Hat OpenShift to Its Platforms

IBM is fully embracing Red Hat OpenShift. The company recently announced that it will use Red Hat OpenShift as the primary container environment for all its hybrid cloud offerings. This includes IBM Cloud, IBM Cloud Paks running on OpenShift, an entire field of IBM consultants and services people being trained on OpenShift, and OpenShift on […]
The post IBM is bringing Red Hat OpenShift to Its Platforms appeared first on Red Hat OpenShift Blog.
Source: OpenShift

Chatmantics improves customer experiences by replacing call center IVR with AI on IBM Cloud

We’ve all called a company’s customer service number only to be greeted by an interactive voice response (IVR) system.
The IVR will say to press one for this and two for that and so on. You must wait for the correct number to enter your account information “so that we may better serve you” but “please do not press zero as that will only delay your call.”
If you already know the correct button to press, some IVRs don’t even let you skip ahead. You’re instead forced to listen to the recording explain what company function is aligned with all 12 buttons on your telephone keypad.
People are more likely to interact with a brand or talk to a representative when they feel like there’s a conversation happening. Robotic IVR systems are the opposite of the genuine interactions customers want.
Chatmantics is a conversational automation platform that leaves IVRs in the dust. Chatmantics is helping companies integrate artificial intelligence (AI) assistants into their call centers to handle initial interactions with customers. AI assistants can take payments over the phone, qualify people for a specific service, help them with support issues and more.
Creating a call center IVR alternative with IBM Cloud
We had a cloud-first strategy out the gate because we knew that it would allow us to be nimble and dynamic in how we built our platform. We looked at various cloud providers, but IBM was always at the top of the list because of Watson, which I view as having brought on the whole AI age. After looking at the features of the IBM Watson platform, we discovered more of the IBM Cloud capabilities that we really liked, including a full set of plug-and-play API options.
We also liked the hybrid cloud options that IBM offers. These enable us to stay disciplined in the DevOps methodology. We can maintain not only private and public clouds, but also clouds on different platforms. This means we can develop, test and perform quality assurance in separate clouds before we release our solution into production.
 
We use IBM Watson Assistant and IBM Watson Voice Agent for two-way communication, and IBM Watson Studio to analyze how our AI assistants are performing with users. We store and transform data in a data warehouse and then visualize it to show our clients.
The IBM Cloud Kubernetes Service helps us virtualize, scale and maintain the reliability of our platform, especially when there are large spikes in traffic or lots of clients onboarding at the same time.
Without IBM Cloud, it likely would have taken at least a year and a half to get a live AI assistant platform and a live analytics product. We were able to orchestrate multiple services from the IBM Cloud Catalog, which helped us to go live within three months of planning the initial platform.
Improving customer experience and call center productivity with AI
Using a Chatmantics AI assistant is far more efficient than IVR. People are always going to try to go around an IVR to get to a live person. Our AI assistant knows that and simply talks to callers, asking them to answer some questions without pressing any buttons. We can create a customized intent to pull up many different nuances about why people want to talk to a customer service rep before they are routed to the correct specialist.
Our AI assistant can also stop ill-intent calls with customers who swear or just come across as hostile. We use IBM Watson Natural Language Understanding to understand sentiment and determine if a call should be ended without unleashing an insulting, angry customer on a call center agent.
Anybody can use an AI assistant to talk to customers, but without any real insight as to what is actually happening or what it all means, we don’t know how it’s working or how to optimize it if we need to make adjustments. This is where our analytics capabilities help our clients better evaluate call center customer engagements. We also use analytics to help our clients determine if an AI assistant is even justified for their call center in the first place.
Chatmantics is an AI-driven company, and that’s the future we anticipate for the call center services industry as a whole. AI assistants eliminate human error, human deviation, fraud and privacy issues. An AI-first strategy is the most cost-effective and reliable strategy to scale for growth and serve customers.
Read the case study for more details.
The post Chatmantics improves customer experiences by replacing call center IVR with AI on IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

Comprehensive strategy for enterprise cloud: Where power meets flexibility

Enterprises have embraced the cloud: according to the 2018 IDG Cloud Computing study, 73 percent of enterprises surveyed now host at least a portion of their enterprise computing infrastructure on the cloud, and many have lifted and shifted their simplest workloads to the cloud. But recent survey data also reveals multiple concerns about security, vendor lock-in, compliance and in-house integration.
The message is clear: Cloud strength alone may not be enough to compel substantial movement. Creating an enterprise cloud capable of handling current needs and tackling future challenges requires organizations to do more than heavy lifting. Instead, a comprehensive cloud strategy — designed to help maximize both power and flexibility — is an important step towards realizing lasting cloud gains.
Application modernization
Heritage services and applications account for the vast majority of enterprise IT workloads. Many of them are mission-critical, built from the ground up to empower business-specific strategies or to maintain essential compliance. They can also be costly. As a recent IBM-sponsored Forrester study notes, outdated enterprise applications require organizations to shell out extortionate sums of money to keep them running.
Simply moving to the cloud may not be enough to offset the digital debt. The more pressing question is whether existing applications are ready or right for the cloud. What if you’re running apps that require significant modernization before they’re cloud-ready? What if they simply need to last another year or two while new systems are developed, and there’s no need to move them?
To use cloud power where it makes the most sense, organizations need the flexibility of modernization road maps that help identify what needs to move, what needs to be modified and what isn’t worth lifting or shifting to the cloud. This means using solutions like the Red Hat OpenShift platform, which offers a cloud-agnostic staging ground.
Stretching the limits
Cloud lock-in remains a concern for organizations. Lock-in is the polar opposite of the cloud’s vaunted flexibility: What good are powerful, scalable computing services if enterprises are locked to single-cloud models? What happens if you want to move your application to another cloud platform that provides a better, more optimized environment? What if the proprietary code used to develop these apps can’t be used elsewhere? From our experience with clients, pure power plays offer up-front benefits, but long-term inflexibility can harm organizations over time.
According to IBM research, 85 percent of enterprises surveyed run multicloud environments to maximize their IT impact. What’s more, they’re shifting toward a mix of private and public clouds to create robust hybrid cloud environments rather than going all-in on either type. To meet this growing demand, industry tech leaders are developing multicloud management solutions that empower enterprise IT to move and monitor container-based applications and services anywhere and at any time.
Owning the outcome
Data drives success. Some experts call data the new oil, though others argue that’s an oversimplification. Some worry about its volume; others prioritize accuracy. The common thread? Without data, organizations won’t succeed in the enterprise cloud.
The challenge, though, is that app development and deployment often rely on proprietary coding technologies that offer low bars for entry but are difficult to exit. This is especially critical as organizations move toward cognitive enterprise IT models that prioritize identifying and resolving key business issues by maximizing big data. While proprietary tools may empower app development, enterprises may not be able to uncode apps if provider terms change or separate from providers when business strategy shifts.
Flexibility means owning the outcome by using industry-standard containers to develop plug-and-play microservices and applications that can be moved or modified on demand. For many enterprises, this isn’t something that can be tackled in-house. Instead, it’s worth partnering with providers that prioritize open-source, cloud-agnostic development over data ownership.
Security first
As multicloud and hybrid cloud services evolve, there’s a push for organizations to become cloud-native, which is predicated on flexibility and efficiency. But the rush to the cloud comes with commensurate concerns. Without preparation, data and application security could be at risk, and as organizations look to move more of their critical workloads to the cloud, this becomes a priority.
How do you address this issue? Find a provider capable of supplying end-to-end, layered security support that allows businesses to develop cloud-native strategies while reducing overall risk.
Sheer computing power is no longer enough. Organizations need modernization road maps that empower flexibility with multicloud and hybrid cloud deployments powered by open and secure application development.
Learn more about IBM multicloud services that offer open and secured multicloud strategies for application development, migration, modernization and management.
The post Comprehensive strategy for enterprise cloud: Where power meets flexibility appeared first on Cloud computing news.
Source: Thoughts on Cloud

CloudForms 5.0 Beta

The CloudForms team is proud to announce the release of CloudForms 5.0 Beta1. Based on the ManageIQ Ivanchuk release, this release contains several enhancements and bug fixes and is the result of many months of upstream development.
Some of the notable enhancements include:

Ansible improvements, making it easier to exchange parameters between CloudForms and Ansible.
Reduced appliance resource footprint.
Lots of improvements to user experience (UX) and scalability.

For more details and a list of all other improvements see the release notes.
We’d like to encourage all of our customers to download the beta from access.redhat.com and try it out. Any issues can be reported by opening a case with Red Hat support.
Source: CloudForms

How an AI application is helping improve quality control in the egg farming industry

People typically open a carton of eggs before buying it to be sure none of the eggs are cracked. No one wants to deal with the mess of cracked eggs. For egg farmers, one “bad egg” in a carton means the retail store must be credited for the entire carton. That could potentially waste as many as 23 good eggs, depending on carton size. It’s in farmers’ best interest to detect egg cracks before the eggs enter the supply chain.
Plus, if an egg is cracked, there’s a risk that the egg is in bad condition for consumers. Egg farmers typically perform egg inspection manually, which is time consuming. Pixelabs, a digital solutions provider and IBM Business Partner in Spain, has developed an innovative artificial intelligence (AI) quality control application that optimizes and automates the egg inspection process.
Creating the quality control app at the Watson Build Challenge
Pixelabs developed its “Deteggtor” egg quality control application on IBM Cloud with IBM Watson Visual Recognition technology. The solution uses machine learning to detect egg cracks and provide visual feedback. We gathered thousands of images of cracked and uncracked eggs to train Deteggtor. Following this training, cracked egg detection with the Deteggtor application was 97 percent accurate in a fraction of the time.
We developed Deteggtor during participation in the Watson Build Challenge. Our close collaboration with IBM on both the technology and business fronts helped us understand that beyond just participating in a competition, we could actually develop a viable product and bring it to market. We have appreciated the feedback and support from IBM. We’ve always had someone by our side to help us, not only learn how to use the Watson technology, but also to discover what is possible with Watson.
Helping deliver faster, more accurate egg quality control
The benefits we have seen in our initial pilot are related to quality control and speed of production. Egg crack detection is more accurate and quicker with Deteggtor than with a human.
Another benefit I envision is that egg farmers won’t have to teach new employees how to distinguish which egg is broken or not. Deteggtor has modernized that process.
Additionally, by not introducing cracked eggs into the supply chain, egg farmers have less waste, reduced worry about selling cracked eggs and more confidence that they’re complying with industry regulations. The system is scalable and replicable, and can thus adjust to industry demand.
Read the case study for more details.
The post How an AI application is helping improve quality control in the egg farming industry appeared first on Cloud computing news.
Source: Thoughts on Cloud

How to utilize Ansible Tower Jobs in Catalog Bundles

Red Hat CloudForms allows users to put both VM provisioning and Ansible Tower jobs in the same catalog bundle, with the intention of having that Tower job customize the VM that was just provisioned. However, it’s not as simple as adding a VM catalog item and then an Ansible Tower catalog item. This post will guide you through the nuances of utilizing Tower jobs in CloudForms step by step.
 
Why can’t I just use Update on Launch when CloudForms is a source in the Ansible Tower inventory?
 
You can, as long as you don’t mind losing job concurrency for that inventory. If you have this option checked, any concurrent jobs will wait, since Tower does not update the inventory while a job is being executed.
 
However, if you wish to have concurrent jobs utilizing CloudForms, please continue reading this blog post. This is the most efficient method utilizing CloudForms I’ve run across, and we are currently using it in our North America lab environment for CloudForms CEE team reproducers. One caveat: you cannot have two VMs with the same name in your CloudForms environment; if you do, the limit could potentially be set to all of the VMs that match that name.
 
So how do I update the inventory for the newly provisioned host to Tower if we’re not updating on launch?
 
We will be utilizing an Ansible playbook that makes an API call to add the host, ad hoc, to the CloudForms-sourced inventory in Ansible Tower.
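The actual playbook lives in the repository linked below. Purely as an illustration of the same ad hoc registration, here is a Ruby sketch against Tower's v2 REST API (the `register_host` helper and its parameters are assumptions for this example, not part of the upstream repository):

```ruby
require 'json'
require 'net/http'
require 'uri'

# Build the JSON body that Tower's POST /api/v2/hosts/ endpoint expects.
# `inventory_id` is the numeric inventory ID visible in the Tower UI URL.
def host_payload(vm_name, inventory_id)
  { name: vm_name, inventory: inventory_id, enabled: true }.to_json
end

# Register the freshly provisioned VM with Tower ad hoc, instead of
# relying on Update on Launch. tower_user/tower_password are the same
# extra vars described later in this post.
def register_host(tower_url, tower_user, tower_password, vm_name, inventory_id)
  uri = URI("#{tower_url}/api/v2/hosts/")
  request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
  request.basic_auth(tower_user, tower_password)
  request.body = host_payload(vm_name, inventory_id)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(request)
  end
end
```

Because the host is added explicitly per job, Tower never has to pause other jobs to refresh the whole inventory.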
 
What roles do I need to have enabled on the worker appliance for this to work?
 

Automate
Embedded Ansible
Provider Inventory
Provider Operations

 
You will also need to add the appropriate VM provider and Ansible Tower provider.
 
Setting up the repository for embedded ansible
 
Go to Automation > Ansible > Repositories > Add New Repository, and use the following URL:

https://github.com/rh-dluong/cloudforms-tower-example.git

 
You will need to set the tower_user and tower_password extra vars yourself for this to work. This user will need modification rights on the inventory you’re utilizing for the Tower job. You can do this through automate, a service dialog, a vault/vars file, or by modifying the playbook directly.
 
Customizing automate
 
You will need to create a new automate domain; for this example (matching my GitHub repository), it will be lab_maintenance.
 

Copy and modify AutomationManagement/AnsibleTower/Operations/StateMachines/Job/wait_for_ip method

 

Original line: ip_list = vm.ipaddresses
Changed line: ip_list = vm.ipaddresses.grep(/\./)

 
The reason for this modification is to wait for an IPv4 address; by default, CloudForms just waits for the first IP it discovers, which is usually an IPv6 address.
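Note the backslash in the grep regex: it must match a literal dot to single out IPv4 addresses (the backslash is easy to lose in blog formatting). A quick illustration with made-up addresses:

```ruby
# vm.ipaddresses often lists the IPv6 address first; grep(/\./) keeps
# only strings containing a literal '.', i.e. IPv4 addresses, so the
# wait_for_ip state machine retries until one appears.
addresses = ["fe80::5054:ff:fe12:3456", "192.168.122.50"]
ip_list = addresses.grep(/\./)
# ip_list == ["192.168.122.50"]
```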
 

Then create an Ansible playbook method, picking the playbook found in the synced repository, with the following modifications:

Hosts is set to ${/#miq_provision.destination.ipaddresses.sort.first}
Input parameters are set as follows:

You will need to change the inventory number to whatever inventory number from Tower you want to utilize with the job template. You can find the inventory number by navigating to the inventory in the Ansible Tower UI and inspecting the URL.

For example, 3 is the inventory number according to this URL: https://tower.example.com/#/inventories/inventory/3?inventory_search=page_size:20;order_by:name
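If you prefer to script that lookup, the ID can be pulled out of the URL with a one-liner (a hypothetical convenience, not part of the upstream repository):

```ruby
# Extract the numeric inventory ID from a Tower inventory UI URL.
url = "https://tower.example.com/#/inventories/inventory/3?" \
      "inventory_search=page_size:20;order_by:name"
inventory_id = url[%r{/inventory/(\d+)}, 1].to_i
# inventory_id == 3
```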

 

Create an instance called vm_name_limit and set it to the following:

Create a method in the same class and have it filled out like so:
 
vm_name = $evm.root['dialog_option_1_vm_name']
unless vm_name.nil? || vm_name.length.zero?
  limit = $evm.object
  limit["data_type"] = "string"
  limit["required"] = "true"
  limit["value"] = vm_name
end
 
This sets the limit to the dialog_option_1_vm_name dialog field, which Ansible Tower uses as the host if you’re using CloudForms as a source for the inventory.
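If you want to sanity-check that guard logic outside an appliance (where there is no real $evm), the same behavior can be sketched as a plain method; the hash standing in for $evm.object is an assumption for illustration only:

```ruby
# Mirror of the vm_name_limit method's logic: only set the limit
# attributes when a non-empty VM name came in from the dialog.
def compute_limit(vm_name)
  limit = {}
  unless vm_name.nil? || vm_name.length.zero?
    limit["data_type"] = "string"
    limit["required"]  = "true"
    limit["value"]     = vm_name
  end
  limit
end
```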
 

Copy and modify the instance /Infrastructure/VM/Provisioning/StateMachines/VMProvision_VM/template

 
Add the WaitForIP and Playbook steps to the VMProvision_VM class after PostProvision and fill out the values like so:

Creating the Service Dialog
 
We will need to create a service dialog with two necessary fields:
 

option_1_vm_name
limit

 
The limit field is dynamic, and its entry point is set to the vm_name_limit instance we created earlier. This is so we can set the limit to the VM that was just provisioned. Create this field first.
 
Create the option_1_vm_name field second, as it’s static, and you’ll need to select “limit” under Fields to Refresh under the Options tab.
 
Creating the Catalog bundle
 
Create a VM catalog item and then an Ansible Tower job catalog item. Add both items to the bundle, with the VM catalog item first and the Tower job second. In the following example, the CloudForms 4.7 catalog item is the VM catalog item, and Quick Setup is the Ansible Tower job item.

Wrapping up
 
After all that, you should be good to go! I put these steps together because provisioning a CloudForms appliance and applying the post-provision modifications from Tower took around 30 minutes each. If I had just used the Update on Launch option on the inventory, we would only be able to provision a reproducer every 30 minutes. Now, with concurrent jobs, we can provision any number of reproducers in the same 30-minute window. I have examples of an automate domain and service dialog you can utilize in the same GitHub repository as the one mentioned for the playbook, so you can skip all of this and import them as you wish.
Source: CloudForms

Portworx Enterprise Operator on Red Hat OpenShift

This is a guest blog by Vick Kelkar from Portworx. Vick is a Product person at Portworx. With over 15 years in the technology and software industry, Vick’s focus is on developing new data infrastructure products for platforms like PCF, PKS, Docker, and Kubernetes. Kubernetes adoption initially was powered by stateless applications. As the project […]
The post Portworx Enterprise Operator on Red Hat OpenShift appeared first on Red Hat OpenShift Blog.
Source: OpenShift