Cognitive for the greater good at the Watson Developer Conference

Three years ago, I decided to learn how to code. A large part of the reason why I decided to embark on a career in tech was to empower myself with the ability to create an application, thereby providing value to society.
I was reminded once again why I chose to go down this route during IBM Chairman, President and CEO Ginni Rometty’s opening presentation at the Watson Developer Conference in San Francisco this November. Rometty invited Joshua Browder, a 19-year-old student at Stanford and co-founder of DoNotPay, and Ashok Goel, professor of computer science at Georgia Institute of Technology, on stage with her.

Browder was there to talk about the DoNotPay application he created. With it, users can appeal parking tickets automatically and — get this — for free. Browder said he was inspired to create his savvy legal bot after receiving a hefty number of parking tickets. I think it’s safe to say we can all relate to this dilemma.
According to Browder, his bot has successfully appealed $4 million in tickets in New York and London. This is, quite frankly, mind-boggling. Legal services are some of the most sought-after and costly services that most people use at some point in their lives. The problem is that most of us can barely afford legal fees. This application saves not only money, but also the time and headaches involved in finding legal counsel.
What’s even more impressive — and touching — was the direction in which Browder announced he is taking his application development efforts. He intends to use Watson Language Translator to create an application with an Arabic language feature to aid Syrian refugees through the legal technicalities of seeking asylum in the United Kingdom. Cognitive technology is not limited to the business domain; it can be applied to many others, including the advancement of human rights.
Goel discussed how he secretly created a teaching assistant (TA) bot using the Conversation API for his online artificial intelligence course. With only nine teaching assistants and 300 students, Goel’s bot, “Jill Watson,” reduced some of the TA workload in the class-wide forum, answering questions about the class, assignments and the subject matter.
Using the Conversation API and natural language training, student teams at Georgia Tech built 1,200 question-answer pairs that enabled them to train Watson and chat with it to answer engineering, systems, architecture and computing questions. Comically, it was only at the end of the semester that Professor Goel revealed Jill Watson was not an actual person. Professor Goel’s approach scores an A in my book.
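Watson's actual training pipeline isn't described here, but the core idea of answering from a bank of question-answer pairs can be sketched with a toy keyword-overlap matcher. Everything below is an illustrative assumption, not the Conversation API:

```python
# Toy illustration of answering from question-answer pairs by keyword
# overlap -- NOT the Watson Conversation API, just the underlying idea.

def best_answer(question, qa_pairs):
    """Return the answer whose stored question shares the most words."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(q.lower().split())), a) for q, a in qa_pairs]
    score, answer = max(scored)
    return answer if score > 0 else "Sorry, I don't know."

qa_pairs = [
    ("When is assignment 2 due?", "Assignment 2 is due Friday at midnight."),
    ("What topics does the midterm cover?", "The midterm covers search and logic."),
]
print(best_answer("when is assignment 2 due", qa_pairs))
```

A real assistant like Jill Watson layers intent classification and confidence thresholds on top of this basic retrieval idea, only answering when it is sufficiently sure.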
As I reflected throughout the day, I concluded that the benefits and democratization of cognitive and technology in 2016 are truly a privilege. The types of problems we can solve are not unique to one domain, and the technology is easily and quickly accessible through cloud computing and the API economy.
It is an unprecedented time to live in a society in which the tools of modern technology merge with creative minds to solve business and social issues spanning the fields of healthcare, law, education and international relations. With that in mind, my question to you is: what will you create? As John F. Kennedy once said, “Ask not what your country can do for you; ask what you can do for your country.”
Let Watson help you change the world. Learn how.
The post Cognitive for the greater good at the Watson Developer Conference appeared first on Cloud computing news.
Source: Thoughts on Cloud

Surviving Black Friday and holiday shopping with cloud

Just as steel is strengthened through the application of heat, human character is “strengthened” through surviving the crucible of the holiday shopping season.
My family has attempted to take me shopping in the wee hours of the morning on “Black Friday,” the day after the US Thanksgiving holiday, to no avail.  I’ve seen enough news reports about Black Friday to cure me of any desire to experience the phenomenon. I heard recently that there have been four deaths and 76 injuries over the last seven years of Black Friday shopping.
Thank goodness, then, for online shopping. Last year, between Black Friday and Cyber Monday, online orders topped $5.5 billion. While online shopping has been of great benefit to mankind by helping us avoid crowds, it also has introduced another experience that “strengthens” our character: coping with slow responsiveness.
Website response time while shopping correlates directly with shopper satisfaction and, ultimately, with bottom-line sales. I’ve seen reports that indicate more than 40 percent of online buyers abandon sites that take longer than three seconds to load. More than 80 percent of online buyers who are dissatisfied with their online shopping experience are unlikely to buy again from that site.
Now imagine a retailer that delivers its online shopping experience via the cloud. A key cloud design point and benefit for retailers is elasticity, delivered through something referred to as “auto-scaling.”
Auto-scaling is fairly straightforward to implement. Cloud service providers offer auto-scaling policies that are created when these workloads are deployed on cloud. These policies are defined and triggered based on environment resource usage (such as CPU, memory usage, or network traffic) or time-based settings (for example, Friday at 6 AM until Sunday at 7 AM). Once the workload is deployed, additional instances are spun up automatically when these thresholds are exceeded.
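As a rough illustration of such a policy, here is a provider-agnostic sketch of the threshold logic; the parameter names and threshold values are assumptions for illustration, not any specific vendor's API:

```python
# Provider-agnostic sketch of a threshold-based auto-scaling decision.
# Real cloud platforms express this as a policy object; the names and
# thresholds below are illustrative assumptions.

def scale_decision(cpu_percent, instances, min_instances=2, max_instances=20,
                   scale_up_at=75, scale_down_at=25):
    """Return the new instance count after applying the policy."""
    if cpu_percent > scale_up_at and instances < max_instances:
        return instances + 1          # spin up one more instance
    if cpu_percent < scale_down_at and instances > min_instances:
        return instances - 1          # release an idle instance
    return instances                  # within thresholds: no change

print(scale_decision(cpu_percent=90, instances=4))   # load spike -> 5
print(scale_decision(cpu_percent=10, instances=4))   # quiet period -> 3
```

A Black Friday traffic spike pushes CPU past the upper threshold and adds capacity; when the rush subsides, the same policy winds the fleet back down so the retailer isn't paying for idle instances.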
Of course there are a few other things that go with auto-scaling that must be considered. Load balancing, a properly architected online shopping application and having a disaster recovery / high availability solution as part of the overall online shopping solution will help provide a delightful shopping experience, regardless of the shopping season.
With these things in place, retailers may rejoice that they can scale to support all their online users (which means more revenue for them).
Online buyers can rejoice because not only can they purchase their gifts efficiently, but they also don’t have to take part in any further character-“strengthening” exercises born of frustration with a slow online shopping experience.
Create seamless retail solutions on the cloud.
Source: Thoughts on Cloud

Great mobile applications come from a great mobile development platform

Users increasingly expect more and more from their mobile applications.
Where mobile apps once simply supplemented a larger application with useful-but-limited functionality, they now — in many cases — have become the key interface that drives sales or user adoption of a service because they offer an “any time, any place” experience.
Increased functionality has led to increased complexity. Today’s mobile apps must interface with multiple data sources. They need to be secure. They must be responsive, and they have to evolve quickly to meet the changing demands of the people who use them.
Mobile apps must also deliver analytics, which help with future development of both the application and the services it delivers.
To meet these challenges and others, a feature-rich development environment is required. A good development environment, in which complex applications are easier to create and maintain, will lead to faster-to-market applications and better developer and user experiences.
The IBM MobileFirst Platform Foundation, available through IBM Bluemix, has recently been named a leader in “The Forrester Wave: Mobile Development Platforms, Q4 2016,” Forrester Research Inc., 24 October 2016.
Forrester “evaluated vendors against 32 criteria, which we grouped into three high-level buckets”: current offering, strategy and market presence. Each of the criteria assessed is important to application development and delivery professionals. Strengths and weaknesses in those criteria can determine the suitability of a particular tool.
Because the IBM MobileFirst Platform Foundation is delivered on IBM Bluemix, developers have a rich, cloud-enabled environment at their fingertips, which is recognized in the Forrester report. Data access and delivery from multiple sources is an important aspect of modern, mobile systems of engagement.
Underpinned by IBM Bluemix, IBM MobileFirst provides a strong API ecosystem, giving developers secure access to systems of record through integration services, along with powerful analytics and DevOps capabilities. This further enriches and accelerates the development experience, enabling a two-speed IT model in which born-on-the-cloud mobile systems of engagement can rapidly evolve and change according to consumer and market need, while traditional, on-premises systems of record can remain on their typically slower change cycle.
Like many other professionals, developers like to work with tools that make their jobs easier: tools that are feature-rich, yet simple; that enable building common and complex functionality, such as interfaces and analytics at the click of a mouse; that help them get on with the important job of coding, and that make their apps stand out from the rest. To develop the best mobile applications, developers need a versatile, intuitive mobile development platform.
Download a copy of “The Forrester Wave: Mobile Development Platforms, Q4 2016” here and find out why Forrester says the IBM MobileFirst Platform Foundation is a leader in its evaluation.
Source: Thoughts on Cloud

Blog posts in the past week

Here’s what RDO enthusiasts have been blogging about over the last week or so:

Chasing the trunk, but not too fast by amoralej

As explained in a previous post, in RDO Trunk repositories we try to provide packages for new commits in OpenStack projects as soon as possible after they are merged upstream. This has a number of advantages:

Read more at http://tm3.org/c9

Enabling nested KVM support for a instack-virt-setup deployment. by Carlos Camacho

The following bash snippet will enable nested KVM support in the host when deploying TripleO using instack-virt-setup.

Read more at http://tm3.org/ca

Ocata OpenStack summit 2016 – Barcelona by Carlos Camacho

A few weeks ago I had the opportunity to attend the Barcelona OpenStack summit ‘Ocata design session’, and this post collects some overall information about it. In order to do that, I’m crawling through my paper notes to highlight the aspects that, IMHO, are relevant.

Read more at http://tm3.org/cb

On communities: Emotions matter by Flavio Percoco

Technology is social before it’s technical – Gilles Deleuze

Read more at http://tm3.org/cc

New TLS algorithm priority config for libvirt with gnutls on Fedora >= 25 by Daniel Berrange

Libvirt has long supported use of TLS for its remote API service, using the gnutls library as its backend. When negotiating a TLS session, there are a huge number of possible algorithms that could be used and the client & server need to decide on the best one, where “best” is commonly some notion of “most secure”. The preference for negotiation is expressed by simply having a list of possible algorithms, sorted best to worst, and the client & server choose the first matching entry in their respective lists. Historically libvirt has not expressed any interest in the handshake priority configuration, simply delegating the decision to the gnutls library on the basis that its developers knew better than libvirt developers which algorithms are best. In gnutls terminology, this means that libvirt has historically used the “DEFAULT” priority string.
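The "first matching entry" negotiation described in the excerpt can be sketched in a few lines; the cipher-suite names and the client-preference ordering below are illustrative, not gnutls's actual internal lists:

```python
# Illustration of priority-list negotiation: each side lists algorithms
# best-to-worst, and the first client preference that the server also
# supports wins. Names here are illustrative examples only.

def negotiate(client_prefs, server_prefs):
    server_supported = set(server_prefs)
    for algo in client_prefs:          # client's list is ordered best-first
        if algo in server_supported:
            return algo
    raise ValueError("no common algorithm: handshake fails")

client = ["TLS_AES_256_GCM_SHA384", "TLS_CHACHA20_POLY1305_SHA256"]
server = ["TLS_CHACHA20_POLY1305_SHA256", "TLS_AES_256_GCM_SHA384"]
print(negotiate(client, server))
```

A priority string like "DEFAULT" is essentially a compact recipe for generating one of these ordered lists, which is why changing the string changes what gets negotiated.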

Read more at http://tm3.org/cd

New libvirt website design by Daniel Berrange

The previous libvirt website design dated from circa 2008, just a few years after the libvirt project started. We have grown a lot of content since that time, but the overall styling and layout of the libvirt website had not substantially changed. Compared to websites for more recently launched projects, libvirt was starting to look rather outdated. So I spent a little time coming up with a new design for the libvirt website to bring it into the modern era. There were two core aspects to the new design: simplify the layout and navigation, and implement new branding.

Read more at http://tm3.org/ce

Quick Guide: How to Plan Your Red Hat Virtualization 4.0 Deployment by Eric D. Schabell

On August 24th of this year Red Hat announced the newest release of Red Hat Virtualization (RHV) 4.0.

Read more at http://tm3.org/cf

Visualizing Kolla’s Ansible playbooks with ARA by dmsimard

Kolla is an OpenStack deployment tool that’s growing in popularity right now.

Read more at http://tm3.org/cg

Recapping OpenStack Summit Barcelona by Peter Pawelski, Product Marketing Manager, Red Hat OpenStack Platform

More than 5,200 OpenStack professionals and enthusiasts gathered in Barcelona, Spain to attend the 2016 OpenStack Summit. From the keynotes to the break-out sessions to the marketplace to the evening events and the project work sessions on Friday, there was plenty to keep attendees busy throughout the week. In fact, if you were one of the lucky ones who attended OpenStack Summit, there were probably many sessions and activities you wanted to make it to but couldn’t.

Read more at http://tm3.org/ch
Source: RDO

OpenShift Commons Briefing #54: DevSecOps: Security Injection with SecurePaaS on OpenShift

In this briefing, Derrick Sutherland of Shadow-Soft addresses cyber security concerns in a DevOps world and demonstrates how SecurePaaS on OpenShift, automatically and without developer intervention, introspects, federates, and injects identity, authentication, authorization, and auditing (IAAA) into an application’s source code, uniquely protecting IT assets.
Source: OpenShift

Chasing the trunk, but not too fast

As explained in a previous post,
in RDO Trunk repositories we try to provide packages for new commits in OpenStack
projects as soon as possible after they are merged upstream. This has a number of advantages:

It allows packagers to identify packaging issues just after they are introduced.
For project developers, their changes are tested in non-devstack environments,
so test coverage is extended.
For deployment tools projects, they can use these repos to identify problems
with new versions of packages and to start integrating any enhancement added to
projects as soon as it’s merged.
For operators, they can use these packages as hot-fixes to install in their RDO
clouds before patches are included in official packages.

This means that, for every merged commit, a new package is created and a yum repository
is published on the RDO Trunk server. This repo includes
the just-built package and the latest builds for the rest of the packages in the same
release.

Initially, we applied this approach to every package included in RDO. However,
while testing these repos during the Newton cycle we observed that jobs failed
with errors that didn’t affect OpenStack upstream gates. The reason behind this
is that commits in OpenStack gate jobs are tested with versions of libraries
and clients defined in upper-constraints.txt files in requirements project
for the branch where the change is proposed. Typically these are the last tag-released
versions. As RDO was testing using libraries from the last commit instead of the last
release, we were effectively ahead of upstream tests, running too fast.
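For readers unfamiliar with upper-constraints.txt: it pins each project to an exact version using one name===version line per project. A minimal sketch of reading such pins follows; the file contents below are made-up examples, not a real OpenStack constraints file:

```python
# Minimal illustration of version pinning via an upper-constraints file.
# The file uses one "name===version" line per project; contents here are
# made-up examples, not a real OpenStack constraints file.

def parse_constraints(text):
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            name, _, version = line.partition("===")
            pins[name] = version
    return pins

constraints = """\
oslo.config===3.17.0
python-novaclient===6.0.0
"""
pins = parse_constraints(constraints)
print(pins["oslo.config"])   # the tag-released version upstream gates test with
```

Testing against the pinned versions reproduces what upstream gates see; building every library from its latest commit instead is what put RDO "ahead" of those gates.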

While this provided some interesting information and we could identify issues very
early, it made it very difficult to get stable repositories that could be promoted and
used. After some discussions in the RDO weekly meeting, it was decided to change
the way libraries are managed to leverage the work done in upstream gates
while still trying to catch issues as soon as possible:

For the master branch, it was decided to pin libraries and clients to the versions
included in upper-constraints for the repositories served at http://trunk.rdoproject.org/centos7.
These repositories are used by RDO CI promotion jobs and marked as current-passed-ci when
all tests succeed.
Additionally, a new builder was created that chases master in all packages,
including libraries, clients, etc. This builder is able to catch issues based
on unit tests executed when packages are created. The produced repos are available at
http://trunk.rdoproject.org/centos7-master-head, but promotion jobs are not
executed using them.

The differences between master and master-head are shown in the following diagram:

For releases in the maintenance phase, we pin libraries to what’s found in the
upper-constraints.txt file in the corresponding branch.

Implementation

In order to manage library versions properly, RDO uses a peer-reviewed
workflow of gerrit reviews proposed to the rdoinfo project at
http://review.rdoproject.org. You can see an example here.

A job is executed periodically that automatically creates gerrit reviews when
versions are updated in upper-constraints files. Manual approval is needed to
get the changes merged and the new versions built by DLRN builders.
Source: RDO

Why IBM is tripling its cloud data center capacity in the UK

The need for cloud data centers in Europe continues to grow.
UK cloud adoption rates have increased to 84 percent over the last five years, according to Cloud Industry Forum.
That’s why I am thrilled to announce a major expansion of IBM UK cloud data centers, tripling the capacity in the United Kingdom to meet this growing customer demand. The investment expands the number of IBM cloud data centers in the country from two to six.

It is the largest commitment IBM Cloud has made to one country at one time. Expanding a cloud data center footprint in the UK that began over five years ago, IBM will have more UK data centers than any other vendor.
Meeting demand in highly regulated industries
Highly regulated industries, such as the public sector and financial services, have nuanced and sensitive infrastructure and security needs.
The UK government’s Digital Transformation Plan to boost productivity has put digital technologies at the heart of the UK’s economic future.
The Government Digital Service (GDS), leading the digital transformation of government, runs GOV.UK, helping millions of people find the government services and information they need every day. To make public services simpler, better and safer, the UK’s national infrastructure and digital services require innovative solutions, strong cyber security defenses and high-availability platforms. It is thus essential to embrace the digital intelligence that will deliver outstanding services to UK citizens.
In response, IBM is further building out its capabilities through its partnership with Ark Data Centres, the majority owner in a joint venture with the UK government. Together, we’re delivering public data center services that are already being used at scale by high-profile, public-sector agencies.
It is all about choice
The IBM point of view is to design a cloud that brings greater flexibility, transparency and control over how clients manage data, run businesses and deploy IT operations.
Hybrid is the reality of cloud migration. Clients don’t want to move everything to the public cloud or keep everything in the private cloud. They want to have a choice.
For example, IBM offers enterprises with concerns about data residency and regulatory compliance for sensitive workloads the option to keep data local, in client locations. Data locality is certainly a factor for European businesses, but even more businesses want the ability to move existing workloads to the cloud and to adopt cognitive tools and services that allow them to fuel new cloud innovations.
From cost savings to innovation platform
Data is the game changer in cloud.
IBM is optimizing its cloud for data and analytics, infused with services including Watson, blockchain and Internet of Things (IoT) so that clients can take advantage of higher-value services in the cloud. This is not just about storage and compute. If clients can’t analyze and gain deeper insights from the data they have in the cloud, they are not using cloud technology to its full potential.
Besides, our customers are focusing more and more on value creation and innovation. That’s why travel innovators are adopting IBM Cloud, fueled by Watson’s cognitive intelligence, to transform interactions with customers and speed the delivery of new services.
Thomson, part of TUI UK & Ireland, one of the UK’s largest travel operators, taps into one of IBM’s UK cloud data centers to run its new tool developed in IBM’s London Bluemix Garage. The app uses Watson APIs such as Conversation, Natural Language Classifier and Elasticsearch on Bluemix to enable customers to receive holiday destination matches based on natural language requests like “I want to visit local markets” or “I want to see exotic animals.”
Other major brands, including Dixons Carphone, National Express, National Grid, Shop Direct, Travis Perkins PLC, Wimbledon, Finnair, EVRY and Lufthansa, are entrusting IBM Cloud to transform their business to create more seamless, personalized experiences for customers and accelerate their digital transformation.
By the end of 2017, IBM will have 16 fully operational cloud data centers across Europe, representing the largest and most comprehensive European cloud data center network. Overall, IBM now has the largest cloud data center footprint globally, with more than 50.
These new IBM Cloud data centers will help businesses in industries such as retail, banking, government and healthcare meet customer needs.
Source: Thoughts on Cloud

Best practices for running RabbitMQ in OpenStack

OpenStack is dependent on message queues, so it’s crucial that you have the best possible setup. Most deployments include RabbitMQ, so let’s take a few minutes to look at best practices for making certain it runs as efficiently as possible.
Deploy RabbitMQ on dedicated nodes
With dedicated nodes, RabbitMQ is isolated from other CPU-hungry processes, and hence can sustain more stress.
This isolation option is available in Mirantis OpenStack starting from version 8.0. For more information, do a search for ‘Detach RabbitMQ’ on the validated plugins page.
Run RabbitMQ with HiPE
HiPE stands for High Performance Erlang. When HiPE is enabled, the Erlang application is pre-compiled into machine code before being executed. Our benchmark showed that this gives RabbitMQ a performance boost of up to 30%. (If you’re into that sort of thing, you can find the benchmark details here and the results here.)
The drawback with doing things this way is that the application’s initial start time increases considerably while the Erlang application is compiled. With HiPE, the first RabbitMQ start takes around 2 minutes.
Another subtle drawback we have discovered is that if HiPE is enabled, debugging RabbitMQ might be hard as HiPE can spoil error tracebacks, rendering them unreadable.
HiPE is enabled in Mirantis OpenStack starting with version 9.0.
Do not use queue mirroring for RPC queues
Our research shows that enabling queue mirroring on a 3-node cluster cuts message throughput in half. You can see this effect in the publicly available test reports produced by the Mirantis Scale team.
On the other hand, RPC messages become obsolete pretty quickly (within a minute), and if messages are lost, only the operations currently in progress fail, so leaving RPC queues unmirrored seems to be a good tradeoff overall.
At Mirantis, we generally enable queue mirroring only for Ceilometer queues, where messages must be preserved. You can see how we define such a RabbitMQ policy here.
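As an illustration of such a selective policy, one might build the JSON body for RabbitMQ's management API (PUT /api/policies/&lt;vhost&gt;/&lt;name&gt;) like this; the policy name, queue-name pattern, and definition shown are assumptions for illustration, not the exact Mirantis definition:

```python
import json

# Illustrative mirroring policy: mirror only queues whose names match a
# Ceilometer-style pattern, leaving RPC queues unmirrored. The pattern and
# policy name are assumptions, not the exact Mirantis definition.
policy = {
    "pattern": "^(notifications|metering)\\.",  # assumed Ceilometer queue prefix
    "definition": {"ha-mode": "all"},           # mirror to all cluster nodes
    "apply-to": "queues",
}
body = json.dumps(policy)
print(body)
```

The same policy can also be expressed with rabbitmqctl set_policy; the point is that the pattern scopes mirroring to the queues that actually need durability, so RPC traffic keeps its full throughput.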
The option to turn off queue mirroring is available in MOS starting in Mirantis OpenStack 8.0 and is enabled by default for RPC queues starting in version 9.0.
Use a separate RabbitMQ cluster for Ceilometer
In general, Ceilometer doesn’t send many messages through RabbitMQ. But if Ceilometer gets stuck, its queues overflow. That leads to RabbitMQ crashing, which in turn causes outages for other OpenStack services.
The ability to use a separate RabbitMQ cluster for notifications is available starting with OpenStack Mitaka (MOS 9.0) and is not supported in MOS out of the box. The feature is not documented yet, but you can find the implementation here.
Reduce Ceilometer metrics volume
Another best practice when it comes to running RabbitMQ beneath OpenStack is to reduce the number of metrics sent and/or their frequency. Obviously that reduces the stress put on RabbitMQ, Ceilometer and MongoDB, but it also reduces the chance of messages piling up in RabbitMQ if Ceilometer/MongoDB can’t cope with their volume. In turn, messages piling up in a queue reduce overall RabbitMQ performance.
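To see why polling frequency matters, here is a quick back-of-the-envelope calculation; all numbers are illustrative, not Mirantis sizing guidance:

```python
# Back-of-the-envelope message volume for Ceilometer polling; the instance
# counts, metric counts, and intervals are illustrative assumptions.

def messages_per_hour(instances, metrics_per_instance, interval_seconds):
    polls_per_hour = 3600 / interval_seconds
    return int(instances * metrics_per_instance * polls_per_hour)

before = messages_per_hour(1000, 20, 60)    # 1-minute polling, 20 metrics
after = messages_per_hour(1000, 10, 600)    # half the metrics, 10-minute polling
print(before, after)   # 1,200,000 vs 60,000 messages per hour
```

A 20x reduction in message volume like this is exactly the kind of headroom that keeps queues from piling up when the metering backend briefly falls behind.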
You can also mitigate the effect of messages piling up by using RabbitMQ’s lazy queues feature (available starting with RabbitMQ 3.6.0), but as of this writing, MOS does not make use of lazy queues.
(Carefully) consider disabling queue mirroring for Ceilometer queues
In the Mirantis OpenStack architecture, queue mirroring is the only ‘persistence’ measure used. We do not use durable queues, so do not disable queue mirroring if losing Ceilometer notifications will hurt you. For example, if notification data is used for billing, you can’t afford to lose those notifications.
The ability to disable mirroring for Ceilometer queues is available in Mirantis OpenStack starting with version 8.0, but it is disabled by default.
So what do you think?  Did we leave out any of your favorite tips? Let us know in the comments!
Source: Mirantis

Solving infrastructure management in the transition to strategic IT

Has your department started its shift toward more strategic IT? If you have, you’ve probably also faced the question, “How will I manage my business infrastructure to optimize performance?”
Some of your peers have found an effective solution: managed cloud services.
In a recent Frost & Sullivan survey, 52 percent of IT decision makers said that a lack of in-house expertise hampers their cloud implementations. Of those using cloud today, 11 percent said finding qualified staff is an issue, and 91 percent are seeking outside assistance to deploy their clouds. Of those seeking help, some turn to managed service providers, and rightly so. Managed cloud services can offer big benefits as you balance strategic projects with the need to manage infrastructure.
What are managed cloud services?
Managed cloud services create a partnership between your business and a cloud service provider that extends beyond service provisioning.
In a managed services relationship, the provider contributes cloud technology, infrastructure and expertise while you retain control and oversight of application performance. It’s a best-of-both-worlds scenario. You gain expert assistance to deploy and manage your infrastructure without having to devote internal resources to physically deploying, managing and optimizing it. The beauty is that you don’t relinquish workload control; you work with the provider to ensure that the infrastructure supports the performance you need for optimal service operation.
For many CIOs, the benefits of managed cloud services are clear. In an atmosphere where your focus is on fast response to business needs, managed service providers offer experts who optimize your infrastructure to ensure application performance with high levels of security and compliance and guaranteed service-level agreements (SLAs).
Above all, businesses are seeking managed cloud providers that will work to align their services with business outcomes, and provide SLAs that guarantee it. Need infrastructure that bursts to accommodate a specific throughput to enable online sales? Your managed service provider should be able to accommodate such a request. If not, you should be looking elsewhere.
What should you look for in providers?
Frost & Sullivan has identified seven key criteria that you should look for when considering managed cloud providers who will meet your business needs, today and in the future. These include:

Core expertise in infrastructure and workload
Support for multiple hardware types and hypervisors
Hybrid management expertise
Customizable services and SLAs
Robust security features
Compliance assurance and related reporting
A robust portfolio of managed services

As your IT department shifts from the role of asset manager to strategic IT business driver, it’s critical to focus your resources wisely. For many CIOs, this means turning to managed cloud service experts. Managed cloud partners handle routine management tasks through experts with the know-how to tailor configurations to ensure optimal function, giving your business the best platform on which to succeed.
For more information on how to use managed cloud services as you shift to a strategic IT model, read our whitepaper titled “Cloud-Based Managed Services: Tips for Selecting a Provider that Can Help You Re-Tool Your IT Department.”
Source: Thoughts on Cloud