Cloud accounts for more than 20 percent of record-breaking IBM patents in 2017

IBM was issued 9,043 US patents in 2017, making it the top company for US patents granted for the 25th straight year.
Of those, more than 1,900 — about 21 percent of the total — were cloud-related. For example, one patent was for a system that monitors data sources including weather reports, social networks, newsfeeds and network statistics to determine the best uses of cloud resources to meet demand. It’s one of numerous examples of how using unstructured data can help organizations work more efficiently.
Other areas with large numbers of patents included AI (1,400) and cybersecurity (1,200).
Some 8,500 IBM engineers, researchers, scientists and designers from 47 different countries were granted patents in 2017. The more than 9,000 patents issued to IBM in 2017 bring the company’s total number of patents since 1993, the first year of the 25-year streak, to more than 105,000. The 9,043 patents in 2017 set a record for patents in a single year.

Learn more about IBM patents.
The post Cloud accounts for more than 20 percent of record-breaking IBM patents in 2017 appeared first on Cloud computing news.
Source: Thoughts on Cloud

Multi-tier Application Deployment using Ansible and CloudForms (Video)

This article is a follow-up to our previous blog post, VMware provisioning example using Ansible, where we deployed a simple virtual machine on VMware using Ansible from the CloudForms service catalog. In this week’s demonstration, we go a step further and provision a multi-tier application on Amazon Web Services (AWS). Once provisioned, the application lifecycle and all day-2 operations are managed from Red Hat CloudForms.

In our example, we deploy the Ticket Monster application on JBoss EAP servers with a PostgreSQL back-end database. We then register our EAP servers to an Amazon Elastic Load Balancer (ELB). The Ansible playbook for this example can be found on this GitHub repository.
In the demonstration video, we show how executing this playbook:

Deploys an instance for hosting our database
Deploys 2 instances for hosting our application servers
Installs PostgreSQL on the database instance
Configures the database (e.g. schema, users, connections, etc)
Deploys JBoss EAP on both application server instances
Configures JBoss EAP, the database driver and connection, and deploys the Ticket Monster web application
Links both JBoss EAP servers to our Amazon Elastic Load Balancer
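The steps above can be sketched in playbook form. The following is only an illustrative skeleton under assumed conventions, not the actual playbook from the GitHub repository: the AMI IDs, tag names, ELB name, and the `appserver_instance_ids` variable are all placeholders, and the classic `ec2` and `ec2_elb` modules are used for illustration.

```yaml
# Hypothetical skeleton of the multi-tier deployment described above.
- name: Provision EC2 instances
  hosts: localhost
  connection: local
  tasks:
    - name: Launch the database instance
      ec2:
        image: ami-xxxxxxxx          # placeholder AMI
        instance_type: t2.medium
        count: 1
        instance_tags: { role: database }
    - name: Launch two application server instances
      ec2:
        image: ami-xxxxxxxx          # placeholder AMI
        instance_type: t2.medium
        count: 2
        instance_tags: { role: appserver }

- name: Configure the database
  hosts: tag_role_database
  become: true
  tasks:
    - name: Install PostgreSQL
      yum:
        name: postgresql-server
        state: present
    # ... initialize the database, create the schema, users and connections ...

- name: Configure the application servers
  hosts: tag_role_appserver
  become: true
  tasks:
    - name: Install a Java runtime for JBoss EAP
      yum:
        name: java-1.8.0-openjdk
        state: present
    # ... install JBoss EAP, configure the database driver and
    #     connection, deploy the Ticket Monster WAR ...

- name: Register application servers with the ELB
  hosts: localhost
  connection: local
  tasks:
    - name: Add each application server instance to the load balancer
      ec2_elb:
        instance_id: "{{ item }}"
        ec2_elbs: ticket-monster-elb   # placeholder ELB name
        state: present
      loop: "{{ appserver_instance_ids }}"  # assumed to be registered earlier
```

The real playbook in the repository wires these phases together with registered instance facts; the skeleton only shows how the bullet points map onto plays.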

 
The Amazon EC2 instances created by this playbook are linked to the CloudForms service. We can find all detailed information about the instances, as well as the load balancer, in the Red Hat CloudForms user interface.
The Red Hat Knowledge Base article, including the necessary playbooks to implement this example, is available on the Red Hat Customer Portal.
Source: CloudForms

RDO Community Blogposts

If you’ve missed out on some of the great RDO Community content over the past few weeks while you were on holiday, not to worry. I’ve gathered the recent blogposts right here for you. Without further ado…

New TripleO quickstart cheatsheet by Carlos Camacho

I have created some cheatsheets for people starting to work on TripleO, mostly to help them to bootstrap a development environment as soon as possible.

Read more at http://anstack.github.io/blog/2018/01/05/tripleo-quickstart-cheatsheet.html

Using Ansible for Fernet Key Rotation on Red Hat OpenStack Platform 11 by Ken Savich, Senior OpenStack Solution Architect

In our first blog post on the topic of Fernet tokens, we explored what they are and why you should think about enabling them in your OpenStack cloud. In our second post, we looked at the method for enabling these.

Read more at https://redhatstackblog.redhat.com/2017/12/20/using-ansible-for-fernet-key-rotation-on-red-hat-openstack-platform-11/

Automating Undercloud backups and a Mistral introduction for creating workbooks, workflows and actions by Carlos Camacho

The goal of this developer documentation is to address the automated process of backing up a TripleO Undercloud and to give developers a complete description about how to integrate Mistral workbooks, workflows and actions to the Python TripleO client.

Read more at http://anstack.github.io/blog/2017/12/18/automating-the-undercloud-backup-and-mistral-workflows-intro.html

Know of other bloggers that we should be including in these round-ups? Point us to the articles on Twitter or IRC and we’ll get them added to our regular cadence.
Source: RDO

Debugging Ansible Automation inside Red Hat CloudForms

Debugging might not be one of your favorite things to do, but when your automation fails it is good to know where to look to find information and troubleshoot. In this blog post, we investigate how to make sure Ansible Automation is correctly configured inside CloudForms, and how to troubleshoot issues that might occur when running Ansible Automation. Content for this blog post is based on the knowledge base article published on the Red Hat Customer Portal.
Before you start, make sure Ansible Automation is correctly configured and running
 
First of all, before you start performing a deep debugging, make sure these steps are in place:

The Embedded Ansible role is enabled. Note that only one appliance per region running the role is supported today.
The appliance with the Embedded Ansible role has internet connectivity before the role is enabled.
Ansible Worker is properly running (you can check its status under Configuration > Diagnostics).
You can restart the Embedded Ansible process by executing ansible-tower-service restart from the appliance command line.

 
If the role is not properly configured and running, you will not be able to add playbooks or credentials.
 
OK, Ansible Automation is up and running, but our playbook launches are not successful
 
Well, in that case, you need to dig a bit more to figure out what is wrong:

Verify, under Services > Requests, that the playbook was executed. Every playbook is executed as a service, and its status can be tracked here.
If the playbook does not execute as expected, check that the version of the playbook is correct and that there are no unmet dependencies (such as Ansible role or module requirements).
Re-sync the repository from the CloudForms UI under Automation > Ansible > Repositories. On the appliance, playbooks can be found under /var/lib/awx/projects/.
Check the standard output of the service to verify the execution of the playbook itself. This can be found under Services > My Services > Selected Services > Provisioning > Standard Output.
Make sure all required variables are provided to the playbook for execution. When defining a new service, CloudForms can automatically create a service dialog to provide these variables. The dialog element name must match the playbook variable.
The real-time execution of the playbook is written to a new file for each run in the /var/lib/awx/job_status directory. Note that the file name is a hash and bears no relation to the playbook name. This is currently the only way to track the status of a run when the playbook is executed as a policy or alert action.
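To illustrate the dialog-to-variable matching mentioned in the checklist above, here is a minimal sketch. The variable name `vm_name` is only an example: whatever variable the playbook consumes, the service dialog must contain an element with exactly that name for CloudForms to pass the value in as an extra variable.

```yaml
# Hypothetical playbook driven by a CloudForms service dialog.
# The dialog must contain an element named "vm_name" — same
# spelling and case as the variable used below.
- name: Example playbook receiving a dialog value
  hosts: localhost
  connection: local
  tasks:
    - name: Show the value received from the service dialog
      debug:
        msg: "Provisioning VM {{ vm_name }}"
```

If the dialog element were named anything else, `vm_name` would be undefined at run time and the play would fail, which is one of the most common causes of the launch failures described above.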

 
Where else can I find additional information?
 
If all the above steps are validated and you still have trouble, a good place to continue troubleshooting is to look at the CloudForms standard log files under /var/www/miq/vmdb on the appliance:

production.log – Operations UI & Service UI
automation.log – Service Ordering, Automation
policy.log – Events, Policy
evm.log – Everything else

Source: CloudForms

Ansible Automation inside Red Hat CloudForms (Summary)

This blog post concludes our series on Ansible Automation inside Red Hat CloudForms. We hope that the content and demo videos helped you grasp how Ansible Automation, the leading simple, powerful, and agentless open source IT automation framework, adds value to Red Hat CloudForms and extends its capabilities.
 
Red Hat CloudForms natively supports Ansible Automation and eases the deployment of infrastructure and IT services across clouds. Users can automate multi-cloud management by defining a wide range of policies and processes with no coding or scripting required.

 
Using Ansible Automation, users now have access to a large number of modules that facilitate operational actions on data center elements such as monitoring, networking, and storage.
 
In our series, we explored how Ansible Automation included as part of Red Hat CloudForms can be used to create services and policies based on Ansible Playbooks to provision new environments (e.g. VMs and instances) and control their lifecycle over time by associating resources to CloudForms services. We also covered how to monitor and troubleshoot Ansible Automation inside CloudForms.
 
The following is a list of all articles published as part of the series:

My First Ansible Service (Video)
My First Ansible Control Action (Video)
Launch Ansible Playbooks from CloudForms REST API (Video)
My First Ansible Playbook Button (Video)
VMware Provisioning Example using Ansible (Video)
Debugging Ansible Automation inside Red Hat CloudForms

 
Each post contains a link to its associated Red Hat Knowledge Base article where you can find additional information.
 
The CloudForms team is currently working on the next release of the product which will include enhancements on the integration of Ansible Automation inside Red Hat CloudForms. Stay tuned for more details on the topic on the CloudForms Blog.
Source: CloudForms

Your line of business can “go fast” with hybrid integration

(This post is part of a series. Read part one, part two, part three, part four and part five to learn more about the urgent need for Hybrid Integration.)
Just like Ricky Bobby in Talladega Nights, in today’s ultra-competitive marketplace there is only one rule: “If you ain’t first, you’re last.” First movers have a distinct advantage in the marketplace. They set the tone, define the space and create name recognition. Take Uber, for example. Uber and Lyft offer almost the same service, but most people, no matter which car service they use, will “call an Uber.” Unless you are offering something substantially different, (like the experience of driving with a live cougar in the car) first movers are hard to displace.
That is why the line of business (LOB) is constantly driving to develop faster, get to market faster and adapt faster. Business leaders understand the implications — it’s a race, and they can’t wait months or even weeks to complete implementations. As a result, business functions are going around Central IT and solving their own problems by acquiring and implementing their own solutions. And while this might work in a sprint over the equivalent of 500 miles, in the long-term it is going to cause a wreck, since many of these solutions don’t integrate, don’t scale and are not secure. So, now IT faces the challenge of trying to support and govern what could be dozens of unsanctioned, un-architected, standalone LOB solutions. How do you team these two together?
What many winning IT departments are doing instead of fighting the line of business is working with it (just like Shake and Bake). By being the source of lightweight business solutions they not only help the business work more effectively but also retain greater control over the applications. One of the key areas where we see a major focus is self-service integration. Self-service integration can encompass a variety of scenarios, including the following:

Tactical automation by end users of a process that involves connecting to two or more applications
Situational integration associated with a project or short-term need (With this type of integration, the ease of integration provides an opportunity to automate workflow, distribution of tasks and delivery of data across two or more applications.)
Use by developers for rapid integration or workflow-oriented use cases
End-user data prep to automate activities requiring creation of data sets which can then be used for analytics, monitoring or reporting.

The result is an influx of no-code integration solutions, targeted at the business, entering the market. However, this subset of solutions has its own set of requirements that you’ll need to consider. According to a report from IDC, “Organizations that are faced with end users demanding self-service integration and automation capabilities need to:

Consider standardizing on an approved solution for self-service integration that provides a level of control and governance in keeping with user abilities and corporate policies.
Develop technology acquisition policies that prefer services that offer open APIs accessible by self-service integration solutions.
Monitor integration solutions implemented by end users to ensure that compliance, security, and technology policies are not being violated.
Factor the near-term eventuality of end-user self-service into integration adoption decisions.”

The benefits are twofold. The business users are happy because they feel empowered to move at the speed of the market, and IT reduces the burden on its resources that comes from having to respond to every application integration request that arises. Wave the checkered flag. It is a win-win. (OK, it is just a win because you can’t have two number ones. That would be eleven.)
If your line of business is coming at you like a spider monkey, learn how to help them get to market faster by downloading the IDC Report, The Urgent Need for Hybrid Integration or going to the IBM Integration website.
SHAKE AND BAKE
(Does that just blow your mind?)
 
The post Your line of business can “go fast” with hybrid integration appeared first on Cloud computing news.
Source: Thoughts on Cloud