Recent blog posts

It’s been a few weeks since I did one of these blog wrapups, and there’s been a lot of great content by the RDO community recently.

Here’s some of what we’ve been talking about:

Project Teams Gathering (PTG) report – Zuul by tristanC

The OpenStack infrastructure team gathered in Denver (September 2017). This article reports some of Zuul’s topics that were discussed at the PTG.

Read more at http://rdoproject.org/blog/2017/09/PTG-report-zuul/

Evaluating Total Cost of Ownership of the Identity Management Solution by Dmitri Pal

Increasing Interest in Identity Management: During the last several months, I’ve seen a rapid growth of interest in Red Hat’s Identity Management (IdM) solution. This might be due to a number of reasons.

Read more at http://rhelblog.redhat.com/2017/09/18/evaluating-total-cost-of-ownership-of-the-identity-management-solution/

Debugging TripleO Ceph-Ansible Deployments by John

Starting in Pike it is possible to use TripleO to deploy Ceph in containers using ceph-ansible. This is a guide to help you if there is a problem. It asks questions, somewhat rhetorically, to help you track it down.

Read more at http://blog.johnlikesopenstack.com/2017/09/debug-tripleo-ceph-ansible.html

Make a NUMA-aware VM with virsh by John

Grégory showed me how he uses virsh edit on a VM to add something like the following:
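
For context, NUMA pinning in libvirt is done by editing the guest XML with virsh edit. Something like the following defines a two-cell guest topology (a hedged sketch, not the exact snippet from the article; the vCPU ranges and memory sizes are illustrative):

    <cpu>
      <topology sockets='2' cores='2' threads='1'/>
      <numa>
        <!-- two guest NUMA cells, each with 2 vCPUs and 2 GiB of RAM -->
        <cell id='0' cpus='0-1' memory='2097152' unit='KiB'/>
        <cell id='1' cpus='2-3' memory='2097152' unit='KiB'/>
      </numa>
    </cpu>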

Read more at http://blog.johnlikesopenstack.com/2017/09/make-numa-aware-vm-with-virsh.html

Writing a SELinux policy from the ground up by tristanC

SELinux is a mechanism that implements mandatory access controls in Linux systems. This article shows how to create a SELinux policy that confines a standard service:

Read more at http://rdoproject.org/blog/2017/09/SELinux-policy-from-the-ground-up/

Trick to test external ceph clusters using only tripleo-quickstart by John

TripleO can stand up a Ceph cluster as part of an overcloud. However, if all you have is a tripleo-quickstart env and want to test an overcloud feature which uses an external Ceph cluster, then you can have quickstart stand up two heat stacks: one to make a separate ceph cluster, and the other to stand up an overcloud which uses that ceph cluster.

Read more at http://blog.johnlikesopenstack.com/2017/09/trick-to-test-external-ceph-clusters.html

RDO Pike released by Rich Bowen

The RDO community is pleased to announce the general availability of the RDO build for OpenStack Pike for RPM-based distributions, CentOS Linux 7 and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Pike is the 16th release from the OpenStack project, which is the work of more than 2300 contributors from around the world (source).

Read more at http://rdoproject.org/blog/2017/09/rdo-pike-released/

OpenStack Summit Sydney preview: Red Hat to present at more than 40 sessions by Peter Pawelski, Product Marketing Manager, Red Hat OpenStack Platform

The next OpenStack Summit will take place in Sydney, Australia, November 6-8. And despite the fact that the conference will only run three days instead of the usual four, there will be plenty of opportunities to learn about OpenStack from Red Hat’s thought leaders.

Read more at http://redhatstackblog.redhat.com/2017/08/31/openstack-summit-fall2017-preview/

Scheduled snapshots by Tim Bell

While most of the machines on the CERN cloud are configured using Puppet with state stored in external databases or file stores, there are a few machines where this has been difficult, especially for legacy applications. Doing a regular snapshot of these machines would be a way of protecting against failure scenarios such as hypervisor failure or disk corruptions.

Read more at http://openstack-in-production.blogspot.com/2017/08/scheduled-snapshots.html

Ada Lee: OpenStack Security, Barbican, Novajoin, TLS Everywhere in Ocata by Rich Bowen

Ada Lee talks about OpenStack Security, Barbican, Novajoin, and TLS Everywhere in Ocata, at the OpenStack PTG in Atlanta, 2017.

Read more at http://rdoproject.org/blog/2017/08/ada-lee-openstack-security-barbican-novajoin-tls-everywhere-in-ocata/

Octavia Developer Wanted by assafmuller

I’m looking for a Software Engineer to join the Red Hat OpenStack Networking team. I am presently looking to hire in Europe, Israel and US East. The candidate may work from home or from one of the Red Hat offices. The team is globally distributed and comprised of talented, autonomous, empowered and passionate individuals with a healthy work/life balance. The candidate will work on OpenStack Octavia and LBaaS. The candidate will write and review code while working with upstream community members and fellow Red Hatters. If you want to do open source, Red Hat is objectively where it’s at. We have an institutional culture of open source at all levels and this has a ripple effect on your day to day and your career at the company.

Read more at https://assafmuller.com/2017/08/18/octavia-developer-wanted/
Source: RDO

5 ways to unlock the value of video with the help of IBM Watson Media

Understanding video content is a significant challenge for media companies.
The biggest obstacle? Data within a video is largely unstructured and requires complex analysis. In a crowded landscape, it’s becoming essential for media and entertainment companies to extract new insights from video to meet consumers’ and advertisers’ needs.
AI technology can give streaming services a competitive edge. IBM Watson Media, a new set of packaged services available through IBM Cloud that companies can scale and use across their video assets, provides a valuable new resource for clients who need to solve key industry problems.
Here are a few ways cognitive capabilities can increase impact and efficiency across all aspects of streaming video:
1. Finding a needle in a haystack: Content search and discovery
One of the biggest resource drains for production teams is manually scanning stockpiles of footage to find relevant content. By tapping into rich metadata, IBM Watson Media expedites this process to make video more searchable, thereby enabling editors to discover and use archival assets faster.
2. Personalized recommendations: More detailed data for better content matches
In today’s streaming world, it’s essential to deliver the right content to the right viewer at the right time. To meet consumer demand for relevant programming, streaming services must provide highly specific content recommendations.
With deep insights into video content through enhanced metadata, IBM Watson Media provides media companies with a better understanding of what’s inside a video. This enables streaming services to improve their recommendation engines for viewers by analyzing this detailed data to find better matches. With increased personalization, streaming services can optimize the viewer experience and reduce churn.
3. Intelligent closed captioning: What’s love got to do with it?
Media companies rely on speech-to-text technology to deliver a near real-time transcript of commentary. However, closed captioning can be inaccurate, especially during sporting events that require an understanding of specific terminology.
Cognitive capabilities solve those challenges by unlocking what’s inside a video. For instance, IBM Watson understands the difference between romantic “love” and “love” as a score in tennis. This enables Watson to provide more accurate, intelligent captioning to live-streamed events such as the US Open, since it understands the context of the video.
4. Don’t let video go awry: AI helps media companies comply
Service providers and advertisers that encounter regulations around explicit content or product placement may use AI technology as a resource to support their compliance efforts.
Rather than manually digging through footage to flag violence or objectionable language, production teams can use AI to help meet their compliance obligations. By sourcing rich metadata to understand elements within a video, IBM Watson Media can help identify specific content that should be screened for approval.
5. Take another look: Highlight clipping
IBM Watson Media can identify the most exciting parts of video footage in near real time. This functionality can be crucial for action-packed sporting events such as the US Open, where IBM Watson Media helped video editors quickly package and distribute highlight reels.
Cognitive technology can automatically identify exciting moments based on player movement, match data and crowd noise. By streamlining the process to create highlight reels, IBM Watson Media helps ensure fans won’t miss the action.
In a competitive landscape, media companies need AI technology to help solve pressing industry challenges. By bringing the cognitive power of Watson to video, IBM Watson Media empowers companies to unlock new value from their content.
Learn more about new Watson-powered video services.
The post 5 ways to unlock the value of video with the help of IBM Watson Media appeared first on Cloud computing news.
Source: Thoughts on Cloud

Project Teams Gathering (PTG) report – Zuul

The OpenStack infrastructure team gathered in Denver (September 2017).
This article reports some of Zuul’s topics that were discussed at the PTG.

For your reference, I highlighted some of the new features coming in Zuul version 3
in this article.

Cutover and jobs migration

Over the past several years, the OpenStack community has grown a complex set of CI jobs
that need to be migrated.
A zuul-migrate script has been created to automate the migration from the
Jenkins-Jobs-Builder format to the new Ansible based job definition.
The migrated jobs are prefixed with “legacy-” to indicate they still need
to be manually refactored to fully benefit from the ZuulV3 features.

The team couldn’t finish the migration and disable the current ZuulV2 services
at the PTG because the jobs migration took longer than expected.
However, a new cutover attempt will occur in the next few weeks.

Ansible devstack job

The devstack job has been completely rewritten as a fully fledged Ansible job.
This is a good example of what a job looks like in the new Zuul:

The devstack job definition
The devstack pre playbook
The devstack’s roles
A devstack job added to shade

A project that needs a devstack CI job can use this new job definition:

- job:
    name: shade-functional-devstack-base
    parent: devstack
    description: |
      Base job for devstack-based functional tests
    pre-run: playbooks/devstack/pre
    run: playbooks/devstack/run
    post-run: playbooks/devstack/post
    required-projects:
      # These jobs will DTRT when shade triggers them, but we want to make
      # sure stable branches of shade never get cloned by other people,
      # since stable branches of shade are, well, not actually things.
      - name: openstack-infra/shade
        override-branch: master
      - name: openstack/heat
      - name: openstack/swift
    roles:
      - zuul: openstack-infra/devstack-gate
    timeout: 9000
    vars:
      devstack_localrc:
        SWIFT_HASH: "1234123412341234"
      devstack_local_conf:
        post-config:
          "$CINDER_CONF":
            DEFAULT:
              osapi_max_limit: 6
      devstack_services:
        ceilometer-acentral: False
        ceilometer-acompute: False
        ceilometer-alarm-evaluator: False
        ceilometer-alarm-notifier: False
        ceilometer-anotification: False
        ceilometer-api: False
        ceilometer-collector: False
        horizon: False
        s-account: True
        s-container: True
        s-object: True
        s-proxy: True
      devstack_plugins:
        heat: https://git.openstack.org/openstack/heat
      shade_environment:
        # Do we really need to set this? It's cargo culted
        PYTHONUNBUFFERED: 'true'
        # Is there a way we can query the localconf variable to get these
        # rather than setting them explicitly?
        SHADE_HAS_DESIGNATE: 0
        SHADE_HAS_HEAT: 1
        SHADE_HAS_MAGNUM: 0
        SHADE_HAS_NEUTRON: 1
        SHADE_HAS_SWIFT: 1
      tox_install_siblings: False
      tox_envlist: functional
      zuul_work_dir: src/git.openstack.org/openstack-infra/shade

This new job definition greatly simplifies the devstack integration tests,
and projects now have much finer-grained control over their integration
with the other OpenStack projects.

Dashboard

I have been working on the new zuul-web interfaces to replace the scheduler webapp
so that we can scale out the REST endpoints and prevent direct connections
to the scheduler. Here is a summary of the new interfaces:

/tenants.json : return the list of tenants,
/{tenant}/status.json : return the status of the pipelines,
/{tenant}/jobs.json : return the list of jobs defined, and
/{tenant}/builds.json : return the list of builds from the sql reporter.

Moreover, the new interfaces enable new use cases, for example, users can now:

Get the list of available jobs and their description,
Check the results of post and periodic jobs, and
Dynamically list jobs’ results using filters, for example,
the last tripleo periodic jobs can be obtained using:

$ curl "${TENANT_URL}/builds.json?project=tripleo&pipeline=periodic" | python -mjson.tool
[
    {
        "change": 0,
        "patchset": 0,
        "id": 16,
        "job_name": "periodic-tripleo-ci-centos-7-ovb-ha-oooq",
        "log_url": "https://logs.openstack.org/periodic-tripleo-ci-centos-7-ovb-ha-oooq/2cde3fd/",
        "pipeline": "periodic",
        ...
    },
    ...
]

OpenStack health

The openstack-health service is likely
to be modified to better interface with the new Zuul design.
It is currently connected to an internal gearman bus to receive job completion
events before running the subunit2sql process.

This processing could be rewritten as a post playbook to do the subunit processing
as part of the job. Then the data could be pushed to the SQL server with the credentials
stored in a Zuul secret.
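
Sketched as Zuul v3 configuration, that might look something like the following (a hypothetical example: the job name, playbook path, and secret name are assumptions, not actual openstack-infra code, and the secret would only be available to a playbook in a trusted config repository):

    # zuul.yaml: attach a secret holding the DB connection string
    - job:
        name: process-subunit2sql
        post-run: playbooks/subunit2sql/post
        secrets:
          - name: subunit_db
            secret: subunit2sql-db-credentials

    # playbooks/subunit2sql/post.yaml: load the job's results
    - hosts: all
      tasks:
        - name: Load test results into the subunit2sql database
          command: >
            subunit2sql
            --database-connection {{ subunit_db.connection }}
            testrepository.subunit
          args:
            chdir: "{{ zuul.project.src_dir }}"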

Roadmap

On the last day, even though most of us were exhausted, we spent some time discussing
the roadmap for the upcoming months. While the roadmap is still being defined,
here are some highlights:

Based on new users’ walkthroughs, the documentation will be greatly improved;
for example, see this nodepool contribution.
Jobs will be able to return structured data to improve the reporting.
For example, a PyPI publisher may return the published URL.
Similarly, an rpm-build job may return the repository URL.
A dashboard web interface and JavaScript tooling,
An admin interface to manually trigger a unique build or cancel a buildset,
Nodepool quotas to improve performance,
Cross-source dependencies; for example, a GitHub change in Ansible could depend on a Gerrit change in shade,
More Nodepool drivers, such as Kubernetes or AWS, and
Fedmsg and MQTT Zuul drivers for message-bus reporting and trigger sources.

In conclusion, the ZuulV3 efforts were extremely fruitful, and this article only covers
a few of the design sessions. Once again, we have made great progress and I’m looking forward to further
developments. Thank you all for the great team gathering event!
Source: RDO

[Podcast] PodCTL #6 – What’s included with Kubernetes?

This week we discuss a topic that often comes up with companies that want to build a DIY platform using Kubernetes. How much is included in the Kubernetes open source project, and how many other things have to be integrated to create a functional platform for deploying applications into production? We explore: What’s included? What’s […]
Source: OpenShift

Sincrolab counts on IBM Bluemix to support cognitive therapy app

For parents of children with cognitive difficulties such as autism or ADHD, the time, travel and expense associated with cognitive therapy can be barriers to getting needed treatment.
Some families have incurred significant debt to secure treatment for their children. Some have even been forced to sell their homes to afford care or move to an area where needed services are available.
Developing and recovering cognitive abilities
Sincrolab, a Spanish provider of technology tools for neuropsychological rehabilitation, has developed a web platform and two mobile applications (one for children, the other for adults) that mental health professionals can use to treat their patients’ cognitive disabilities.
The company’s training platform, consisting of a system of personalized cognitive stimulation, helps with the development and recovery of cognitive abilities for children with neurodevelopmental disorders and adults with learning disabilities or neurodegenerative disorders.
The application enables health professionals to remotely manage and supplement ongoing treatment for their patients.
Realizing success with IBM Bluemix
Because the application’s focus is personalized training, the infrastructure must be security-rich and enable around-the-clock availability so therapists can have anytime, anywhere access for their patients with cognitive disabilities. Sincrolab counts on IBM Bluemix bare metal servers to develop and support its cognitive therapy application.
Currently, there are more than 200 active patients, and Sincrolab has already worked with more than one thousand since the platform’s inception three years ago.
In a current project, teachers have been working for the past year with 20 children ages 7 to 11 who have autism and other cognitive diseases.
They are using the Sincrolab app to exercise memory functions through video games based on cognitive neuroscience. The app stimulates memory, attention, executive functions, language, and calculation.

The teachers found that the cognitive stimulation through Sincrolab increased the children’s cognitive performance as well as improved their moods and behavior. The children have been training every day for nine months. One of the children has even begun to speak a bit after the Sincrolab program intervention. Parents have reported that the children are more focused on tasks and instructions.
Thanks to the Sincrolab app, teachers are able to offer a new cognitive therapy program in school and continue the project for at least another two years.
Going forward, Sincrolab is investigating ways to implement IBM Watson technology into its solution. Currently, the therapy is available in Spanish for the local market in Spain.
Read the case study to learn more.
The post Sincrolab counts on IBM Bluemix to support cognitive therapy app appeared first on Cloud computing news.
Source: Thoughts on Cloud

DevOps Insights: 9 Lessons from our Customers

As we wind down our Containers, Culture and DevOps roadshow with Gene Kim, I’ve had the opportunity to look back at some of the really valuable insight that we’ve learned from the speakers and customer panels. I wrote about a few of them in: Most Important DevOps Measurement and 7 Habits of Highly Effective DevOps, but […]
Source: OpenShift

A boost to advanced networking on IBM Cloud

In September, IBM acquired a high-performance team focused on advanced networking technology that moves the networking function from the server to the edge, increasing data center efficiency. The Cloudigo, Ltd. team brings talent and technology that closely aligns with IBM investments in advanced network processing, as part of its cloud platform. The team will work in the Cloud Innovation Lab, which is part of the IBM Cloud Infrastructure group.
The post A boost to advanced networking on IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

Taking digital marketing to the next level with AI and cloud

Let’s talk about how far digital marketing has come and be proud of that for a moment.
Twenty years ago, it was just an idea. Then, 10 years later, programmatic marketing heralded a necessary move toward a better marketing future fueled by data, powered by technology and driven by math.
Today, advertising can be found across connected screens, all controllable with the touch of a button. Ads get billions of impressions a day, accessible through APIs and UIs, which is something that was impossible just five years ago.
Marketers are breaking down organizational silos where collaboration across brand, agency, tech, media and data is finally seen as not simply necessary, but right. Real-time machine learning is used for more than half of every dollar spent in digital, where a 100 percent programmatic future is on our doorstep. Data-driven marketing is moving from one way to do marketing to the way marketing is done.
Still, the reality is we need to go much further.
If you listened only to the press, you’d hear a cacophonous cry of “fake news, fake traffic, fake metrics.” You hear that the infrastructure that manages the now billions of dollars flowing through digital marketing pipes isn’t up to the task anymore. Pixels, redirects, JavaScript and headers are the stuff of a startup industry, not the foundation for mature marketing at scale. Perhaps worst of all, experiencing that moment when kids see an ad and exclaim, “Ugh, I hate advertising.” This, above all, is a daily reminder that we can and must do more.
If marketers really want to pay off the promise of marketing as an engine of business, the connection of thought and deed — the 3 percent of the gross domestic product that powers the other 97 percent, that enables the free internet, that consumers don’t hate and could even learn to love — and to move from rendering banner ads to driving business, they know they need to change.
MediaMath and IBM saw in each other something important: a shared worldview, a desire to do better and the will and capability to make it happen. So we’ve partnered to take the next evolutionary steps together. What does this mean? It means we’ll work to:

Develop infrastructure that connects brands, consumers and all of the companies in between in a way that is enterprise-class, open and smart.
Infuse AI into real-time marketing decisions across all channels, arming the marketer to do her job better with insights as opposed to reports.
Delight the human behind the screen with advertising people don’t just tolerate, but appreciate as entertaining, informative and meaningful.

By providing marketers with a neutral, security-rich computing environment and giving them the ability to maintain ownership of their data through the IBM Cloud, marketers will have the insights they need to deliver the campaigns consumers want.
MediaMath and IBM are building the foundation that makes great marketing that moves at the speed of human beings possible, and we are incredibly excited to see what you make of it.
Learn more about IBM Watson Marketing.
The post Taking digital marketing to the next level with AI and cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

Managing mountains of media monitoring data with IBM Cloud

Over the past 30 years, the media industry has undergone a digital revolution.
In the “old days,” there was traditional media such as television, print, and radio. Today, social and digital channels are just as much a concern as traditional media.
PRIME Research performs media monitoring services for more than 500 global customers including 35 of the top 100 brands in the world. That means the company integrates, examines and evaluates as many as 5 million documents a day to keep tabs on public sentiment for its global customers.
Huge amounts of data
PRIME Research customers use a dashboard to understand public sentiment about their organizations’ brands, competitors and peers as well as the larger business landscape. Content is monitored across all media channels, and the dashboard provides real-time analytics and instantaneous translations in more than 100 languages.
The company crawls online media, including social media, e-papers, TV, radio and digitized print content, and tells its customers who is talking about them. PRIME Research teams are spread from China to Europe to the US so they can cover customers anywhere at any time.
Sometimes something comes up that increases the amount of text or media information enormously. It’s nothing that can be predicted or calculated: a crisis happens, then suddenly information grows like crazy. That’s the challenge.
Additionally, data must be stored for monthly or quarterly analysis, which means that there is an ever-growing body of unstructured data.
The need for a reliable partner
PRIME Research found that it didn’t have enough performance to serve peak usage on both sides of the portal. On the incoming side, its readers and crawlers couldn’t cope with the amount of data suddenly appearing. On the customer side, when there was an incident and lots of customers wanted to see what the media coverage was, they were all accessing the portals simultaneously. The portal and the readers were okay, but the back-end systems were sometimes overwhelmed by the flood of requests.
The solution for PRIME Research was to move its database servers to IBM Bluemix bare metal servers using an IBM Bluemix platform provisioned through IBM Cloud Services. The move increased performance, scalability, flexibility and global reach. Additionally, the company extended its use of IBM Cloud Services to support analytics and mobile services. Further, PRIME Research uses IBM Cloud Object Storage for the timely storage, availability and provisioning of media data.
Better media monitoring service
PRIME Research can now handle media monitoring data peaks. It can perform analytics for and deliver insights to customers worldwide in real time. With Cloud Object Storage, data is accessible, safe and can be provisioned quickly worldwide. This is important in case a customer asks for a custom report and PRIME Research must retrieve older data for easy customer comparison and to provide greater context on the crisis at hand.
The company is also evaluating IBM Watson Explorer to improve its current capabilities and services, as well as to introduce new ones, including a multi-dimensional visualization tool showing how words are interrelated. IBM Cloud Object Storage is a foundational part of the Watson Data Platform. With data firmly established in object storage, Watson analytic and visualization tools can be an easy next step in providing a visual assessment of the media landscape when time is of the essence.
Learn more about IBM Cloud Object Storage.
The post Managing mountains of media monitoring data with IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud