OpenShift 3.3 Pipelines – Deep Dive

Continuous Integration and Continuous Deployment have been hot topics in our industry for the last few years. From the initial release of OpenShift 3.0, we’ve included features to let you build automated workflows to consume changes and redeploy applications. This article dives into CI/CD and pipeline management on OpenShift 3.3.
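To give a flavor of what that looks like on a 3.3 cluster, a minimal pipeline flow typically comes down to a few oc commands; the BuildConfig file name below is a hypothetical example and would contain a JenkinsPipeline build strategy with an inline Jenkinsfile:

$ oc new-app jenkins-ephemeral          # stand up a Jenkins instance to drive pipeline builds
$ oc create -f sample-pipeline.yaml     # hypothetical BuildConfig using the JenkinsPipeline strategy
$ oc start-build sample-pipeline        # trigger the pipeline; stages execute in Jenkins and report back to OpenShift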
Source: OpenShift

Blog posts last week

With OpenStack Summit last week, we have a lot of summit-focused blog posts today, and expect more to come in the next few days.

Attending OpenStack Summit Ocata by Julien Danjou

For the last time in 2016, I flew out to the OpenStack Summit in Barcelona, where I had the chance to meet (again) a lot of my fellow OpenStack contributors.

Read more at http://tm3.org/bu

OpenStack Summit, Barcelona, 2 of n by rbowen

Tuesday, the first day of the main event, was, as always, very busy. I spent most of the day working the Red Hat booth. We started at 10 setting up, and the mob came in around 10:45.

Read more at http://tm3.org/bx

OpenStack Summit, Barcelona, 1 of n by rbowen

I have the best intentions of blogging every day of an event. But every day is always so full, from morning until the time I drop into bed exhausted.

Read more at http://tm3.org/by

TripleO composable/custom roles by Steve Hardy

This is a follow-up to my previous post outlining the new composable services interfaces, which covered the basics of the composable services model that is new in Newton.

Read more at http://tm3.org/bo

Integrating Red Hat OpenStack 9 Cinder Service With Multiple External Red Hat Ceph Storage Clusters by Keith Schincke

This post describes how to manually integrate Red Hat OpenStack 9 (RHOSP9) Cinder service with multiple pre-existing external Red Hat Ceph Storage 2 (RHCS2) clusters. The final configuration goals are to have a Cinder configuration with multiple storage backends and support …

Read more at http://tm3.org/bz

On communities: Sometimes it’s better to over-communicate by Flavio Percoco

Communities, regardless of their size, rely mainly on the communication between their members to operate. The existing processes, the current discussions, and the future growth depend heavily on how well communication throughout the community has been established. The channels used for these conversations play a critical role in the health of the communication (and the community) as well.

Read more at http://tm3.org/c0

Full Stack Automation with Ansible and OpenStack by Marcos Garcia – Principal Technical Marketing Manager

Ansible offers great flexibility. Because of this, the community has figured out many useful ways to leverage Ansible modules and playbook structures to automate frequent operations across multiple layers, including with OpenStack.

Read more at http://tm3.org/bs
Source: RDO

Cognitive capabilities revolutionize business operations

External forces make today’s business world increasingly challenging.
On the one hand, due to a continuing global financial crisis, businesses must deal with pressure to reduce costs while increasing visibility and control. They also must abide by new, stringent regulatory compliance rules.
On the other hand, client expectations are changing radically. The high-touch, expert-driven models that older generations were so used to must move aside for a fast, innovative, personalized and digital approach to interacting and transacting with clients.
Business process management as a discipline is, more than ever, pivotal to the success of any enterprise, large or small. Business processes underpin any product or service. Core services such as opening an account, fulfilling an order or registering a tax filing are a repeatable and structured set of activities that constitute a process.
Improving business processes has a profound impact on financial outcomes. It drives cost reduction by automating procedures, eliminating paper processing and reducing error. It can increase revenue by helping launch new products faster.
IBM has been a leader working with clients to improve their business processes. IBM solutions are used to:

Model and document business processes for regulatory compliance
Execute and automate processes to reduce reliance on paper
Monitor business processes through real-time dashboards

These dashboards are the springboards for enhancing or improving processes.
For the past several years, IBM has been perfecting capabilities and tools for cognitive APIs. Ever since Watson won on Jeopardy, IBM has made strides in artificial intelligence in several industries, including health care and financial services. Today, a variety of cognitive services, such as tone analyzer, personality insight, speech recognition and natural language processors, are available to developers on the IBM cloud platform.
When these cognitive tools are combined with a business process management (BPM) approach, immense value can be unleashed: scaling human expertise while ensuring cost reductions and enhancing customer experience.

Banks are often challenged with thousands of emails each month: complaints, requests for changes to personal information or inquiries about services. Handling these communications requires a huge amount of manpower. It is a source of customer frustration and is one of the main causes of churn.
Cognitive capabilities can help solve this complex problem. A cognitive system can scan through email communication and, using a natural language classifier, identify the intent of the client’s email. The request is then automatically routed by triggering the appropriate business process, from complaint management to the department that handles credit card transaction disputes.
Plus, the tone analyzer can gauge the client’s state of mind. In the case of perceived dissatisfaction, it can trigger a churn-prevention process to mitigate the risk of losing a customer to a competitor. These cognitive capabilities can result in millions of dollars in cost savings and can improve response times by up to 50 percent.
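As a rough sketch of the mechanics (not a prescription from the article), classifying an email's intent with the Bluemix-era Natural Language Classifier REST API looked roughly like the call below; the credentials, classifier ID and sample text are placeholders:

$ curl -u "{username}:{password}" \
  "https://gateway.watsonplatform.net/natural-language-classifier/api/v1/classifiers/{classifier_id}/classify?text=I%20was%20charged%20twice%20this%20month"

The JSON response ranks the likely intents (for example, a billing-dispute class) with confidence scores, which a BPM rule can use to trigger the appropriate process.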
BPM and cognitive computing can also help with the early detection of flu outbreaks. Hospitals and concerned government agencies all have processes in place to respond to such epidemics, but they incur additional costs when relying on admission data, which comes late in the process.
A cognitive API can help identify a flu epidemic much earlier. It can scan through social feeds to detect and then correlate tweets from people reporting fevers, coughing or dizziness. This correlation across a massive number of Twitter feeds can indicate a flu outbreak up to 24 hours earlier than admission-based reporting. It can then trigger business rules and processes that enable hospitals and government agencies to be proactive and ready to handle the influx of sick people. It also helps epidemiologists produce better drugs.
Human resource departments can augment the hiring process using cognitive computing. Traditional hiring processes using resumes, interviews and references are often insufficient to hire the best candidate for a job. The hiring process can be transformed with a personality insight cognitive service that scans through the candidate’s social profile, posts, likes and mentions. HR pros can better assess whether the candidate is a fit for the job by taking these social characteristics into consideration.
This is by no means an exhaustive list of what can be done. The possibilities are endless. BPM is already a process-improvement platform, but when combined with cognitive capabilities, process improvements can be achieved at scale.
Learn more about how Watson and the IBM Cloud platform support cognitive solutions.
Source: Thoughts on Cloud

Corporate training is about to get a whole lot smarter

Video’s power to engage makes it a go-to tool for employee onboarding and training, especially among companies looking to reach large groups through a single stream.
A majority of organizations surveyed by Wainhouse Research use online video for one-to-many training scenarios. But companies that merely accumulate volumes of training video risk diminishing returns: without a deeper understanding of the content, they are not deriving maximum value from it.
Generating and cataloging training videos, after all, does little good if companies have only a superficial understanding of what’s inside. IBM Watson’s machine learning and advanced analytics capabilities help companies build and maintain a searchable, easily accessible library of video content.
Here’s how:
Less searching, more learning
The ability to provide effective video training is important not only for employee retention and development, but also for preserving companies&8217; bottom lines. U.S. employers spent more than $70 billion on workforce training last year, and video was a top technology investment.
Some companies are going a step further to ensure that investment pays off. Using Watson cognitive capabilities — including facial recognition, audio recognition, and speech-to-text — companies can better index and classify new video content along with their existing videos. A benefit of Watson’s analysis is the ability for HR teams and employees to easily search the library for specific topics or information.
If an employee wants to review a particular aspect of safety guidelines, for example, Watson could serve up a video clip along with instructions to tune in at the 15-minute mark to find the exact information requested. The payoff? More time spent learning and staying productive.
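As a hypothetical illustration of the speech-to-text piece (the credentials and audio file below are placeholders, and the endpoint reflects the Watson Speech to Text REST API of that period), requesting word timestamps is what makes the "tune in at the 15-minute mark" experience possible:

$ curl -u "{username}:{password}" \
  --header "Content-Type: audio/flac" \
  --data-binary @safety-training.flac \
  "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?timestamps=true"

The resulting transcript and per-word timestamps can be indexed so that a search for a phrase returns both the video and the exact offset to seek to.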
Longer live content shelf life
Companies can also extend the reach of their live content with Watson.
With speech-to-text capabilities, Watson can transcribe live-stream learning sessions and classify that information so employees can access it from the content library. Watson can also automatically clip specific highlights from a live event, whether it’s a training session or the CEO’s company-wide address. It can then post those clips to corporate channels such as the company’s intranet.
Smart, relevant recommendations
Over time, as Watson learns more about the learning needs and preferences of individuals or teams, it will be able to automatically recommend relevant videos, or even specific clips, to meet any corporate training scenario. Whether an inexperienced employee needs advice on how to fix the latest Internet of Things (IoT) enabled machine, or a customer asks an unexpected question during a demo, the answer is a simple search query away.
Training the trainers
Training isn’t a static problem, of course, given evolving technological, demographic and market changes. Ineffective training can have severe consequences: research shows that businesses lose millions of dollars annually because of it. But Watson can help companies refine their approach. Soon, HR teams could gauge the social sentiment of participants in a live training event by analyzing their Q&A and social activity, for instance.
The insight Watson provides today and the advanced analytics on the horizon will help businesses develop more compelling video content for their workforce while avoiding investing in material that won’t resonate. Companies will be able to efficiently identify gaps in their training programs and keep their content library fresh, ensuring it remains relevant to an evolving workforce.
Learn more about IBM Cloud Video solutions for the workforce.
Source: Thoughts on Cloud

Why OpenShift Picked Ansible

Configuration management is a competitive field. Prior to OpenShift 3.0, OpenShift (and largely Red Hat as a whole) had mostly been in Puppet's camp, with Chef being the other major competitor. When OpenShift started working on its install/configuration for 3.0, it very quickly became clear that Puppet was no longer the obvious choice. So after a large amount of investment in our 2.x Puppet-based installer and operational tooling, we decided to start over with Ansible. I won’t claim this route is correct for everyone, but I’ll try to explain our thinking behind the switch.
Source: OpenShift

Fuel plugins: Getting Started with Tesora DBaaS Enterprise Edition and Mirantis OpenStack

The Tesora Database as a Service platform is an enterprise-hardened version of OpenStack Trove, offering secure private cloud access to the most popular open source and commercial databases through a single consistent interface.
In this guide we will show you how to install Tesora in a Mirantis OpenStack environment.
Prerequisites
In order to deploy Tesora DBaaS, you will need a Fuel server with the Tesora plugin installed. Start by making sure you have:

A Fuel server up and running. (See the Quick Start for instructions if necessary.)
Discovered nodes for controllers, compute and storage
A discovered node dedicated to the Tesora controller.

Now let’s go ahead and add the plugin to Fuel.
Step 1 Adding the Tesora Plugin to Fuel
To add the Tesora plugin to Fuel, follow these steps:

Download the Tesora plugin from the Mirantis Plugin page, located at:

https://www.mirantis.com/validated-solution-integrations/fuel-plugins/

Once you have downloaded the plugin, copy the plugin file to your Fuel Server using the scp command, as in:
$scp tesora-dbaas-1.7-1.7.7-1.noarch.rpm root@[fuel server ip]:/tmp

After copying the plugin to the Fuel server, add it to the Fuel plugin list. First, SSH to the Fuel server:
$ssh root@[fuel server ip]

Next, add the plugin to Fuel:
[root@fuel ~]# fuel plugins --install tesora-dbaas-1.7-1.7.7-1.noarch.rpm

Finally, verify that the plugin has been added to Fuel:
[root@fuel ~]# fuel plugins
id | name                     | version | package_version
---|--------------------------|---------|----------------
1  | fuel-plugin-tesora-dbaas | 1.7.7   | 4.0.0

If the plugin was successfully added, you should see it listed in the output.
Step 2 Add Tesora DBaaS to an OpenStack Environment
From here, it’s a matter of creating an OpenStack cluster that uses the new plugin. You can do that by following these steps:

Connect to the Fuel UI and log in with the admin credentials using your browser.
Create a new OpenStack environment. Follow the prompts and either leave the defaults or alter them to suit your environment.
Before adding new nodes, enter the environment, select the Settings tab, and then choose Other on the left-hand side of the window.
Select Tesora DBaaS Platform and enter the username and password supplied to you by Tesora. The username and password will be used to download the database images provided by Tesora to the Tesora DBaaS controller. Finish by typing "I Agree" to show that you agree to the Terms of Use and click Save Settings.
Now create your environment by assigning nodes to the roles for:

Compute
Storage
Controller
Tesora DBaaS Controller

(The original post includes a screenshot of the role assignment at this point.)

After you have finished adding the roles, go ahead and deploy the environment.
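If you prefer the command line over the UI, the same deployment can usually be kicked off from the Fuel master with the Fuel CLI; the environment ID below is an example value:

[root@fuel ~]# fuel --env 1 deploy-changes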

Step 3 Importing the Database image files to the Tesora DBaaS Controller
Once the environment is built, it’s time to import the database images.

From the Fuel server, SSH to the Tesora DBaaS controller node. You can find the IP address of the Tesora DBaaS controller by entering the following command:
[root@fuel ~]# sudo fuel node list | grep tesora
9  | ready  | Untitled (61:ef) | 4       | 10.20.0.6 | 08:00:27:a3:61:ef | tesora-dbaas    |               | True   | 4

After identifying the IP address, SSH from the Fuel server to the Tesora DBaaS controller:
[root@fuel ~]# sudo ssh root@10.20.0.6
Warning: Permanently added ‘10.20.0.6’ (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-93-generic x86_64)
* Documentation:  https://help.ubuntu.com/
Last login: Wed Aug 10 23:51:29 2016 from 10.20.0.2

Next, load the pre-built database images. After logging into the DBaaS controller, change your working directory to /opt/tesora/dbaas/bin:
root@node-9:~# cd /opt/tesora/dbaas/bin

Now source the Tesora environment variables:
root@node-9:/opt/tesora/dbaas/bin# source openrc.sh

After setting your variables, you can now import your database images with the following command:
root@node-9:/opt/tesora/dbaas/bin# ./add-datastore.sh mysql 5.6
Installing guest 'tesora-ubuntu-trusty-mysql-5.6-EE-1.7'

Above is an example of loading MySQL version 5.6. The format of the command is:
add-datastore.sh DBtype version

To get a list of the databases and versions that are available, please see the link below:

https://tesoradocs.atlassian.net/wiki/display/EE17CE16/Import+Datastore

Once you have imported your database images, it’s time to go to Horizon.
Step 4 Create and Access a Database Instance
Now you can go ahead and create the actual database. Log into your Horizon dashboard from within Fuel. On the left-hand side, click Tesora Databases.

From here, you have the following options:

Instances: This option enables you to create, delete and display any database instances that are currently running.
Clusters: This option enables you to create and manage a clustered database environment.
Backups: Create or view backups of any currently running databases.
Datastores: List all datastores that have been imported.
Configuration Groups: This option enables you to manage database configuration tasks by using configuration groups, which make it possible to set configuration parameters, in bulk, on one or more databases.

At this point Tesora DBaaS should be up and running, enabling you to deploy, configure and manage databases in your environment.
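Because Tesora DBaaS is built on OpenStack Trove, you should also be able to drive it from the command line with the standard trove client; the flavor ID, volume size and names below are illustrative values rather than ones prescribed by Tesora:

$ source openrc.sh                 # load your OpenStack credentials first
$ trove create my-mysql 2 --size 5 --datastore mysql --datastore_version 5.6
$ trove list                       # the new instance moves from BUILD to ACTIVE
$ trove show my-mysql              # shows connection details once the instance is ACTIVE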
Source: Mirantis

The difference between open source and open governance

When Sun, and then Oracle, bought MySQL AB, the company behind the original development, governance of MySQL’s open source development gradually closed. Now, only Oracle writes updates. Updates from other sources — individuals or other companies — are ignored. MySQL is still open source, but it has closed governance.
MySQL is one of the most popular databases in the world. Virtually every WordPress and Drupal website runs on top of MySQL, as do the majority of generic Ruby, Django, Flask and PHP apps that have MySQL as their database of choice.
When an open source project becomes this popular and essential, we say it is gaining momentum. MySQL is so popular that it is bigger than its creators. In practical terms, that means its creators can disappear and the community will take over the project and continue its evolution. It also means the software is solid and that support is abundant and local, sometimes a commodity or even free.
In the case of MySQL, the source code was forked by the community, and the MariaDB project started from there. Nowadays, when somebody says they are “using MySQL”, they are in fact probably using MariaDB, which has evolved from the point where MySQL’s open development stopped.
Open source vs. open governance
Open source software’s momentum serves as a powerful insurance policy for the investment of time and resources an individual or enterprise user will put into it. This is the true benefit behind Linux as an operating system, Samba as a file server, Apache HTTPD as a web server, Hadoop, Docker, MongoDB, PHP, Python, JQuery, Bootstrap and other hyper-essential open source projects, each on its own level of the stack. Open source momentum is the safe antidote to technology lock-in. Having learned that lesson over the last decade, enterprises are now looking for the new functionalities that are gaining momentum: cloud management software, big data, analytics, integration middleware and application frameworks.
In the open source domain, the only two non-functional things that matter in the long term are whether the software is open source and whether it has attained momentum in the community and industry. None of this is related to how the software is being written, but that is exactly what open governance is concerned with: the how.
Open source governance is the policy that promotes a democratic approach to participating in the development and strategic direction of a specific open source project. It is an effective strategy to attract developers and IT industry players to a single open source project with the objective of attaining momentum faster. It aims to avoid community fragmentation and ensure the commitment of IT industry players.
The value of momentum
Open governance alone does not guarantee that the software will be good, popular or useful (though formal open governance tends to happen only on projects that have already captured some attention from IT industry leaders). A few examples of open source projects that have formal open governance are CloudFoundry, OpenStack, JQuery and all the projects under the Apache Software Foundation umbrella.
For users, the indirect benefit of open governance is only related to how quickly the open source project reaches momentum and high popularity.
Open governance is important only for the people looking to govern or contribute. If you just want to use the software, open source momentum is far more important.
IBM Cloud is open by design. Find out more.
Source: Thoughts on Cloud

OPNFV Functional Testing, TOSCA Orchestration, and vIMS Use Cases

The entire purpose of OPNFV, an open source project from the Linux Foundation that brings together the work of the various standards bodies and open source NFV projects into a single platform, is to provide a way for carriers and vendors to easily test and release virtual network functions (VNFs), and for users to understand which components will work together, so it’s especially important that the Functest team can provide appropriate test coverage.
This week Cloudify Director of Product, Arthur Berezin, together with OPNFV’s Morgan Richomme and Valentin Boucher of Orange Labs, spoke at the OpenStack Summit in a session titled “Project: OPNFV – Base System Functionality Testing (Functest) of a vIMS on OpenStack,” so we thought we’d take a moment to look at what that means.
About Functest
OPNFV puts a lot of emphasis on ensuring all components are fully tested and ready for production. The Functest group, specifically, is the team that tests and verifies all OPNFV Platform functionality, which covers the VIM and NFVI components.
The key objectives of the Functest project in OPNFV are to:

Define tooling for tests
Define test suites (SLA)
Install and configure the tools
Automate tests with CI
Provide API and dashboard functions for Functest and other test projects

But doing all that involves orchestration, and that involves having an appropriate tool.
Choosing an Orchestrator for Testing
The Functest team, as part of their use case testing, sought an orchestration tool based on certain criteria. They were looking for an open source orchestrator and VNF Manager.  The tool had to satisfy a number of different requirements:
“To manage a complex VNF, it’s necessary to use an orchestrator and we selected Cloudify because it fits all the vIMS test-case requirements (open source solution, workflow, TOSCA modeling, good integration with OpenStack components, openness with plugins…).”
To satisfy these requirements, the team chose the open source Cloudify tool.
The second OPNFV release, Brahmaputra, includes test cases for more complete checks of the OPNFV platform’s capacity to host complex VNFs. In order to truly verify that everything is working properly, however, the tests needed a use case that was sufficiently complex.
The team needed a VNF that:

Includes various components
Requires component configuration for communication between VMs
Involves a basic workflow in order to properly complete setup

The team chose Clearwater, an open source vIMS from MetaSwitch.
But what did they actually test?
vIMS Test Cases
The Functest team runs a number of different vIMS test cases, including:

Environment preparation, such as creating a user/tenant, choosing a flavor, and uploading OS images (see the sketch after this list)
Orchestrator deployment, including creating the Cloudify manager router, network and VM
VNF deployment with Cloudify, including creating 7 VMs and installing and configuring software
VNF tests, including creating users and launching more than 100 tests
Pushing deployment duration and test results
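
As a rough, hypothetical illustration of what the environment-preparation step involves (these are ordinary OpenStack CLI calls, not the actual Functest scripts; the project, flavor and image values are placeholders):

$ openstack project create vims-test                                # tenant that will hold the vIMS deployment
$ openstack user create --project vims-test --password secret vims-user
$ openstack flavor create --ram 2048 --disk 20 --vcpus 2 m1.vims    # flavor sized for the Clearwater VMs
$ openstack image create --disk-format qcow2 --file ubuntu-trusty.qcow2 ubuntu-trusty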

If you’re interested in more details about the test cases, you can read about them on the Cloudify blog in this post contributed by the OPNFV team.

Joint Talk at OpenStack Summit
Cloudify Director of Product, Arthur Berezin, together with OPNFV’s Morgan Richomme and Valentin Boucher of Orange Labs, will be speaking at the OpenStack Summit in a session titled “Project: OPNFV – Base System Functionality Testing (Functest) of a vIMS on OpenStack.” The session, taking place on Wednesday, October 26 from 3:05pm-3:45pm, will include a lot more technical information about how Functest uses Cloudify within the vIMS use case from OPNFV.
The OPNFV team will be at booth D15 and Cloudify at booth C4 in the marketplace at the OpenStack Summit in Barcelona.

Source: Mirantis

Improving video engagement with an assist from Watson

Living in the golden age of video is a blessing and a curse.
There’s more video content than ever to choose from, but we can spend more time deciding than actually watching. It’s a frustrating experience for consumers, but it’s especially worrisome for video content providers.
As a result, media and entertainment companies are shifting their focus from merely providing customers with more video to serving up just the right type of content at the right time. While many companies already account for elements like genre or cast, more sophisticated analysis is possible thanks to machine learning technologies like IBM Watson that bring previously unstructured data — objects and faces in a particular scene, for example — into the open. Armed with this new, rich layer of insights, media and entertainment companies can identify ways to serve up the most relevant content to viewers, increasing engagement and reducing churn.
Finding hidden gems
When Watson watches a video, it uses tools such as facial recognition, audio recognition, speech-to-text and tone analytics. This advanced metadata gives companies a more specific, accurate understanding of both their video content and what customers truly want to watch.
For example, Watson could identify that users enjoy sports movies with rousing motivational speeches. Companies can enable viewers to search the video catalogue with these specifications in mind or highlight gaps in the company’s catalogue. The next time, say, a soccer team gets together for movie night before a playoff game, Watson could recommend a movie with a moving locker-room speech.
Next-gen recommendations
Watson can also find patterns in the way people interact with video content, from the selections they make to how often they fast-forward. Insight into viewers’ watching habits can help companies make personalized recommendations that will keep them engaged and coming back for more. It could even find commonalities between the romantic comedies and action movies a viewer enjoys to serve up a surprising, though spot-on, recommendation.
By generating deep, conceptual metadata on what’s happening in specific videos, Watson may one day be able to make recommendations based on everything from the local weather to what a person’s recent tweets suggest about his mood.
Auto-generated, instant highlight reels
Watson’s ability to index, categorize and clip video content has already been put to use in developing a recent horror film trailer. Watson’s AI-enabled clipping capabilities could also soon help broadcasters that stream, rather than create, video content.
An example that’s currently in development is Watson’s ability to “watch” live sports and automatically clip highlights. While watching Sunday Night Football, for example, Watson could clip a wide receiver’s spectacular catch and instantly post the highlight to social media.
Improving the all-important engagement metric
This ability to catalogue, organize and distribute video is essential to today’s video products. U.S. adults already spend 5.5 hours per day watching video programming, and research suggests that all video formats — TV, video on demand (VoD), and internet — will represent 80 to 90 percent of global consumer internet traffic in 2019.
It’s no longer enough to just give people more, or even higher quality, video content. It’s all about engagement. Even the most expansive and diverse library of video content on the planet is useless unless people are able to access the content they’re interested in, now.
Learn how IBM Cloud optimizes content for media and entertainment.
Source: Thoughts on Cloud