Announcing IBM integration capabilities for blockchain

Everyone loves making year-end predictions, and this one was a popular pick from last year: blockchain would disrupt everything in 2017, including integration. Those making that claim are right on the money. Blockchain—a shared, immutable ledger for recording the history of transactions—needs secure integration right now. Here’s why.
Blockchain to serve customers quickly and securely
One example of how blockchain can improve the customer experience comes from InfoQ. In many European countries, when a customer’s flight is delayed or cancelled, the airline must pay a specified amount of compensation. The catch is that airlines don’t have to pay out automatically: customers must claim the compensation, often with an attorney’s help.
With blockchain, however, a smart contract could eliminate this time-consuming, lawyer-driven process. In fact, smart contracts have long been touted as a replacement for lawyers. As described on Blockgeeks, “smart contracts help you exchange money, property, shares, or anything of value in a transparent, conflict-free way, while avoiding the services of a middleman.” From a technical perspective, this works because blockchain enables automation of business processes that transcend organizational boundaries in a secure and decentralized manner.
Why blockchain needs integration
To make the blockchain revolution happen quickly, companies won’t be building their own blockchain infrastructure; they will leverage cloud services. Because blockchain infrastructure is a peer-to-peer network, every participating company must be part of it, and integration is essential to meet service and governance requirements. Integration also ensures that blockchain will work with your applications on private or public clouds.
Another reason integration is becoming so important to blockchain is that blockchain has no central database, yet new events carrying technical and business information are created constantly. These events must be analyzed and acted upon quickly, and integration makes that possible. Furthermore, you’ll need to be able to visualize these events and even interact with data from other, non-blockchain systems. We’re even seeing the advent of blockchain IoT. But how do you connect to those other non-blockchain systems?
Unveiling new integration capabilities for blockchain
IBM Integration is part of this convergence of integration and blockchain. Most recently, IBM announced a blockchain connector for IBM MQ. The new blockchain connectivity lets you perform a message-driven query into the IBM Blockchain for Bluemix service to gain insight into activity within the blockchain.
To see all the new integration capabilities, head here.
This is just a sample of things to come this year. When 2017 ends, no one will predict that 2018 will be the year blockchain meets integration because that unification is already here.
Source: Thoughts on Cloud

Fluentd Enterprise

Fluentd Enterprise is a secure, scalable, and reliable unified logging layer built around Fluentd, the open source data collector hosted by the Cloud Native Computing Foundation (CNCF). Fluentd Enterprise allows you to unify data streams from network devices, firewalls, applications, syslog, and infrastructure, then process and route them to the analytic backends that power your enterprise.
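As a rough sketch of what that routing can look like (all hosts, ports, and file paths below are placeholders, and the Elasticsearch output assumes the fluent-plugin-elasticsearch plugin is installed), a minimal Fluentd configuration might collect syslog and an application log and forward both to an analytics backend:

# Hypothetical minimal Fluentd configuration: collect syslog and an application
# log, then route everything to an Elasticsearch backend. Hosts, ports and
# paths are placeholders.
cat << 'EOF' > fluent.conf
<source>
  @type syslog
  port 5140
  tag infra.syslog
</source>

<source>
  @type tail
  path /var/log/myapp/app.log
  pos_file /tmp/myapp.pos
  format none
  tag app.myapp
</source>

<match **>
  @type elasticsearch
  host analytics.example.com
  port 9200
  logstash_format true
</match>
EOF

# Check the configuration without starting the daemon
fluentd --dry-run -c fluent.conf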
Source: OpenShift

Recent blog posts: June 12

Experiences with Cinder in Production by Arne Wiebalck

The CERN OpenStack cloud service has been providing block storage via Cinder since the Havana days in early 2014. Users can choose from seven different volume types, which offer different physical locations, different power feeds, and different performance characteristics. All volumes are backed by Ceph, deployed in three separate clusters across two data centres.

Read more at http://openstack-in-production.blogspot.com/2017/06/experiences-with-cinder-in-production.html

Using Ansible Validations With Red Hat OpenStack Platform – Part 1 by August Simonelli, Technical Marketing Manager, Cloud

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, utilizing thousands of available modules, providing everything from server management to network switch configuration.

Read more at http://redhatstackblog.redhat.com/2017/06/08/using-ansible-validations-with-red-hat-openstack-platform-part-1/

Upstream First…or Second? by Adam Young

From December 2011 until December 2016, my professional life was driven by OpenStack Keystone development. As I’ve made an effort to diversify myself a bit since then, I’ve also had the opportunity to reflect on our approach, and perhaps see some things I would like to do differently in the future.

Read more at http://adam.younglogic.com/2017/06/upstream-first-or-second/

Accessing a Mistral Environment in a CLI workflow by John

Recently, with some help from the Mistral devs in #openstack-mistral on freenode, I was able to create a simple environment and then write a workflow to access it. I will share my example below.

Read more at http://blog.johnlikesopenstack.com/2017/06/accessing-mistral-environment-in-cli.html

OpenStack papers community on Zenodo by Tim Bell

At the recent summit in Boston, Doug Hellmann and I were discussing research around OpenStack, both the software itself and how it is used by applications. There are many papers being published in conference proceedings and PhD theses, but finding out about them can be difficult. While these papers may not necessarily lead to open source code contributions, the results of this research are a valuable resource for the community.

Read more at http://openstack-in-production.blogspot.com/2017/06/openstack-papers-community-on-zenodo.html

Event report: Red Hat Summit, OpenStack Summit by rbowen

During the first two weeks of May, I attended Red Hat Summit, followed by OpenStack Summit. Since both events were in Boston (although not at the same venue), many aspects of them have run together.

Read more at http://drbacchus.com/event-report-red-hat-summit-openstack-summit/
Source: RDO

Ottawa utility embraces cloud, aligning IT with business

When I joined Hydro Ottawa just over two years ago, it was my first foray into the energy and utility industry. My background was software and high tech, and coming from that industry, I was very familiar with cloud platforms.
Business challenges in the public energy and utility industry are similar to those the private sector faces, in the sense that the technology model needs to be more agile. I saw there were opportunities at Hydro Ottawa to help IT better respond to business needs. That was one reason Hydro Ottawa began looking at cloud.
Leaning toward the cloud
Many businesses use a variable-cost model with cloud computing, but the utility industry has a different model. It is capital intensive and can recoup costs as part of the utility rate. Typically, utility companies make huge investments in everything from poles and transformers to the general plant technology systems that allow the business to function efficiently.
Because of this CapEx-oriented business model, it is a challenge for utilities to move to an OpEx-focused cloud model. In other words, Hydro Ottawa earns a return, built into its rates, on the capital it spends.
Running an IT group with a conventional model of in-house data centers and procured infrastructure was not efficient. We wanted to ensure that our group was focused on what was important for the business. Moving parts of the company’s infrastructure to the cloud was about focusing resources on initiatives with higher value than racking and stacking servers.
Capitalizing cloud workloads in the utility industry
Hydro Ottawa looked at several cloud providers, but found that IBM Cloud was unique in its flexibility to meet the utility’s business objectives. With IBM dedicated bare metal servers, the utility can capitalize hardware and maintenance costs in line with the CapEx model.
When Hydro Ottawa moved its customer care and billing application from its previous environment to two IBM Cloud data centers in Canada, it used its substantial savings to pay for migration services for its enterprise resource planning (ERP) workload.
Hydro Ottawa plans to adopt a hybrid cloud IT model so that its critical supervisory control and data acquisition (SCADA) system, which runs its electricity grid, can remain in house. Plus, there are legacy systems that just don’t lend themselves well to cloud.
Transforming IT and collaboration
Now that Hydro Ottawa’s IT team is focused on strategic projects and business cases, it is taking a technology leadership role. The infrastructure-as-a-service (IaaS) solution from IBM frees up the team to take on more of an analyst and consultant role to business users.
The team provides advice to the business and implements applications that are important to the organization. Because the team is spending less time on operational tasks, Hydro Ottawa can tackle strategic voice, telecommunications, and infrastructure projects that previously lacked resources.
At the same time, the company leans less heavily on outside resources. The team is motivated and enthusiastic and has embraced the Bluemix platform. They now get to work on a variety of projects, whereas before they were handling rather standard, operational tasks. They were keeping the lights on, so to speak.
As Hydro Ottawa adopts more cloud platforms, our teams will be transformed from system administrators to cloud service specialists.
This is a paradigm shift in how the company thinks of IT versus the more conventional model.
Find out more about IBM Cloud bare metal servers.
Source: Thoughts on Cloud

Why banks need application performance management and DevOps

In my last blog post, we looked at why application performance management (APM) is a critical component to success in the transition to DevOps. In this post, I’ll examine specifics for the banking and financial industry.
The banking industry relies heavily on business-critical customer-facing applications, many of which are accessed on a mobile platform. Customers expect these apps to respond quickly and to have extremely minimal downtime.
Because it needs to deliver experiences that meet customer expectations, the banking industry has been ahead of the general market in adopting DevOps practices. In particular, many banks have focused on implementing a process to obtain regular feedback from customers as well as continuous application monitoring. These tools support overall DevOps goals of faster time to market, quicker development and release cycles and reduced defects in the application.
Some of the priorities for implementation within the next 12 to 18 months include improving alignment between app developers and IT operations, applying agile and lean principles and continuously integrating source code updates from all developers on the team.
You can see the full comparison of DevOps practices between the banking industry and the general market in the table below.

To learn more, join our APM and DevOps webinar on June 14. You can also learn more about IBM Application Performance Management and read this whitepaper for more industry survey results.
Source: Thoughts on Cloud

Mizuho Bank builds connections with cloud APIs

One of Japan’s largest banks is embracing application programming interfaces (APIs) to connect with customers and partners, and it’s doing it on IBM Cloud.
Mizuho Bank’s new API banking initiative will run on the IBM Japan FinTech API solution, IBM API Connect on the IBM Cloud, and IBM DataPower Gateway. The goal of the initiative is to enhance customer experiences by offering customers personal financial management services and Internet of Things (IoT) payment options.
“API banking becomes a crucial step that helps drive open innovations by new technologies for customers, business partners, and Mizuho Bank,” said Masahiko Kato, senior technical officer at Mizuho Bank.
Masao Sanbe, managing director of industry sales at IBM Japan, added, “IBM is delighted to provide IBM’s cloud and cognitive technologies with our expertise to help Mizuho provide quality services to the customers.”
Find out more about Mizuho Bank’s API initiative with IBM Cloud.
Source: Thoughts on Cloud

A Clojure S2I Builder for OpenShift

In this quickstart, learn how to run Clojure on OpenShift with a custom Source-to-Image (S2I) builder image that uses OpenShift’s incremental build capability, in an environment where every action except editing the Clojure code takes place inside OpenShift.
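As a rough sketch of that workflow (the image name and repository URLs below are hypothetical placeholders, not the quickstart’s actual examples):

# Build the custom Clojure S2I builder image from its source repository
# (repository URLs and names below are hypothetical)
oc new-build https://github.com/example/clojure-s2i.git --name=clojure-s2i

# Use the builder image to build and deploy an application from source
oc new-app clojure-s2i~https://github.com/example/my-clojure-app.git --name=my-clojure-app

# Enable incremental builds so artifacts saved by the builder's
# save-artifacts script (for example, cached dependencies) are reused
oc patch bc/my-clojure-app -p '{"spec":{"strategy":{"sourceStrategy":{"incremental":true}}}}'

# Follow the build output
oc logs -f bc/my-clojure-app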
Source: OpenShift

Application performance management and DevOps: A winning combination

One of the biggest trends in IT organizations is the shift to a DevOps approach. By increasing collaboration between operations and development, DevOps has the power to help your business achieve faster time to market, decrease downtime and reduce defects. In fact, a recent IBM survey cited faster time-to-market as the single greatest driver for DevOps adoption.
Everyone has a DevOps strategy
The survey included results from over 500 companies across multiple industries. The vast majority of respondents had already adopted several key DevOps practices, such as increasing alignment between developers and operations. Of the companies that hadn’t started these practices, most planned to adopt DevOps within the next 12 to 18 months.
While many companies are implementing some components of DevOps to increase collaboration and speed the release cycle, the most advanced practitioners have already implemented continuous delivery—often pushing code updates multiple times a day. Users are beginning to expect this level of reliability, and they’re becoming less tolerant of application slow-downs. In other words, downtime is a thing of the past. If your organization is just beginning to adopt DevOps practices, how can you keep up?
Finding an edge with APM
Application Performance Management (APM) can be a critical tool for successfully adopting a DevOps approach at your organization. APM was once exclusively the domain of the operations team, but with DevOps, each side now has visibility into the processes and capabilities of the other. This allows development to take advantage of APM capabilities that were previously only used in production environments.
APM solutions critically support many of the goals of implementing DevOps. For example, to achieve the continuous release cycles that support the uptime and response time their users expect, DevOps teams need to know about potential issues before they affect the application. APM systems can help by providing predictive analytics to identify anomalies and alert the DevOps teams before service is impacted. In general, the faster an APM solution can identify a potential problem and the root cause, the faster the DevOps team can mitigate the impact. This supports the overall DevOps goals of faster development, deployment and updates. The chart below illustrates the overlap in APM and DevOps objectives.

The transition to DevOps depends on many factors. Implementing an end-to-end APM solution is one way to help ensure your transition is successful.
Register for our June 14th APM and DevOps webinar here: ibm.co/2rG6j1Q
To learn more, visit the Application Performance Management website or read the white paper for more survey results.
Source: Thoughts on Cloud

Do you have a cognitive business?

Cloud and cognitive technologies are driving a revolution throughout enterprises. They’re “disruptive enablers” that produce deeper customer engagement, scale expertise and transform how organizations uncover new opportunities.
Ideally, the combination of cloud and cognitive joins digital business with a new level of digital intelligence, resulting in an organization that creates knowledge from data and expands virtually everyone’s expertise. This enables an enterprise to continually learn and adapt, as well as better anticipate the needs of the marketplace.
To know whether you have a cognitive business, consider these questions:

Do you rely on traditional computing — rules-based, logic-driven, dependent on organized information — or cognitive systems that learn systematically, aren’t dependent on rules, and handle disparate and varied data?
Do your systems understand unstructured information such as the natural language, imagery and sound found in books, emails, tweets, journals, blogs, images, audio and video?
Can your systems learn continually, honing your organization’s expertise to immediately take more informed actions?
Can you and your customers interact with your systems, dissolving barriers and fueling unique, essential user experiences?

As I work with clients, I see that those who adopt cloud and cognitive solutions gain an entirely different set of advantages over their competitors. Cognitive organizations set their sights on going deeper and wider into their own data as well as third-party data. They shorten the cycles between what they can learn from data and the game-changing actions they take. The result is a cognitive business that can think collectively and respond in whole new ways to the marketplace.
Look at how financial technology company Alpha Modus is using cloud and cognitive to solve a common problem of the financial marketplace: analyzing data before the prime moments for investment opportunities have passed. Alpha Modus created a solution with the IBM Bluemix platform to leverage Watson technology. With these capabilities, it can now unlock a variety of unstructured data to evaluate market sentiment and predict market direction.
Cloud and cognitive are also a natural fit with retailers. To integrate its diverse array of order management systems, 1-800-Flowers.com migrated to an IBM Commerce on Cloud platform running in an IBM Bluemix cloud environment. This provides seamless service delivery across the retailer’s 10 brands, increases efficiency, reduces costs and enhances scalability with the IBM Cloud solution.
Then, the retailer created “GWYN,” a cognitive “concierge” that helps tailor responses to each customer by offering personalized feedback and service. It’s based on the IBM Expert Personal Shopper software, which uses the IBM Watson cognitive technology system. When customers inform GWYN that they are looking for a gift for their mothers, for example, GWYN will follow up with a series of questions such as type of occasion and sentiment to ensure that the right product suggestion is given.
Here are a few other cloud and cognitive actions organizations can take:

Build cognitive apps that see, hear, talk and learn and can exceed a user’s highest expectations for experiences and connection to your organization. This can be easy and quick to do. Developers can use IBM Watson application programming interfaces (APIs) now through the open source developer cloud (see the sketch after this list).

Transform how work gets done and what expertise is shared by giving business processes or workflows cognitive capabilities. Think across the organization by picking one or two places to start.
Collaborate with an expert provider such as IBM. Together, create a comprehensive cognitive system to uncover opportunities to reinvent your industry and give your organization a new view of what’s possible.
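As a purely illustrative sketch (the service endpoint, version date, and credentials below are assumptions; use the values from your own Bluemix service instance), calling a Watson API such as Tone Analyzer can be as simple as an authenticated HTTP request:

# Hypothetical call to the Watson Tone Analyzer API; replace the username,
# password, endpoint and version date with the values from your own Bluemix
# service instance.
curl -u "myusername:mypassword" \
  -H "Content-Type: application/json" \
  -d '{"text": "Our customers love the new mobile experience."}' \
  "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone?version=2016-05-19"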

Cloud and cognitive systems will put your organization in the here and now. More than anything, they’ll answer the most important question all enterprises grapple with: “How do we create new value?”
With digital systems and intelligence, it will not be one answer, but many, as exciting opportunities await.
Find out how to build your business into a cloud and cognitive enterprise.
Source: Thoughts on Cloud

Using Ansible Validations With Red Hat OpenStack Platform – Part 1

Ansible is helping to change the way admins look after their infrastructure. It is flexible, simple to use, and powerful. Ansible uses a modular structure to deploy controlled pieces of code against infrastructure, utilizing thousands of available modules, providing everything from server management to network switch configuration.
With recent releases of Red Hat OpenStack Platform, access to Ansible is included directly within the Red Hat OpenStack Platform subscription and installed by default with Red Hat OpenStack Platform director.
In this three-part series you’ll learn ways to use Ansible to perform powerful pre- and post-deployment validations against your Red Hat OpenStack environment, utilizing the special validation scripts that ship with recent Red Hat OpenStack Platform releases.

Ansible, briefly …
Ansible modules are commonly grouped into concise, targeted actions called playbooks. Playbooks allow you to create complex orchestrations using simple syntax and execute them against a targeted set of hosts. Operations use SSH, which removes the need for agents or complicated client installations. Ansible is easy to learn and allows you to replace most of your existing shell loops and one-off scripts with a structured language that is extensible and reusable.
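For example, a minimal playbook (the hosts and tasks here are purely illustrative, not part of the validations discussed below) can verify SSH connectivity and report uptime across an inventory:

# Write a small illustrative playbook to a file
cat << 'EOF' > uptime.yml
---
- name: Check connectivity and report uptime
  hosts: all
  gather_facts: false
  tasks:
    - name: Verify connectivity
      ping:

    - name: Gather uptime
      command: uptime
      register: result

    - name: Show the result
      debug:
        var: result.stdout
EOF

# Run it against an inventory of hosts
ansible-playbook -i inventory uptime.yml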
Introducing … OpenStack TripleO Validations
Red Hat ships a collection of pre-written Ansible playbooks to make cloud validation easier. These playbooks come from the OpenStack TripleO Validations project (upstream, github). The project was created out of a desire to share a standard set of validations for TripleO-based OpenStack installs. Since most operators already have many of their own infrastructure tests, sharing them with the community in a uniform way was the next logical step.
On Red Hat OpenStack Platform director, the validations are provided by the openstack-tripleo-validations RPM, installed during a director install. There are many different tests for all parts of a deployment: prep, pre-introspection, pre-deployment, post-deployment and so on. Validations can be run in three different ways: directly with ansible-playbook, via Mistral workflow execution, and through the director UI.
Let’s Get Started!
Red Hat OpenStack Platform ships with an Ansible dynamic inventory creation script called tripleo-ansible-inventory. With it you can dynamically include all Undercloud and Overcloud hosts in your Ansible inventory. A dynamic inventory of hosts makes it easier to carry out administrative and troubleshooting tasks against infrastructure in a repeatable way, which helps with things like server restarts, log gathering and environment validation. Here’s an example script, run on the director node, to get Ansible’s dynamic inventory set up quickly.
#!/bin/bash

pushd /home/stack
# Create a directory for ansible
mkdir -p ansible/inventory
pushd ansible

# create ansible.cfg
cat << EOF > ansible.cfg
[defaults]
inventory = inventory
library = /usr/share/openstack-tripleo-validations/validations/library
EOF

# Create a dynamic inventory script
cat << 'EOF' > inventory/hosts
#!/bin/bash
# Unset some things in case someone has a V3 environment loaded
unset OS_IDENTITY_API_VERSION
unset OS_PROJECT_ID
unset OS_PROJECT_NAME
unset OS_USER_DOMAIN_NAME
unset OS_IDENTITY_API_VERSION
source ~/stackrc
DEFPLAN=overcloud
PLAN_NAME=$(openstack stack list -f csv -c 'Stack Name' | tail -n 1 | sed -e 's/"//g')
export TRIPLEO_PLAN_NAME=${PLAN_NAME:-$DEFPLAN}
/usr/bin/tripleo-ansible-inventory $*
EOF

chmod 755 inventory/hosts
# run inventory/hosts --list for example output

cat << EOF >> ~/.ssh/config
Host *
StrictHostKeyChecking no
EOF
chmod 600 ~/.ssh/config
This script sets up a working directory for your Ansible commands and creates an Ansible configuration file called ansible.cfg, which adds the openstack-tripleo-validations library to Ansible's module path. This makes it easier to run the validation playbooks later. Next, the script creates the dynamic inventory file (~/ansible/inventory/hosts), which runs /usr/bin/tripleo-ansible-inventory against the Overcloud’s Heat stack name.
You can run the inventory file with the --list flag to see what has been discovered:
[stack@undercloud inventory]$ /home/stack/ansible/inventory/hosts --list | jq '.'
{
  "compute": [
    "192.168.0.25",
    "192.168.0.34",
    "192.168.0.39",
    "192.168.0.35"
  ],
  "undercloud": {
    "vars": {
      "ansible_connection": "local",
      "overcloud_admin_password": "AAABBBCCCXXXYYYZZZ",
      "overcloud_horizon_url": "http://10.12.48.100:80/dashboard"
    },
    "hosts": [
      "localhost"
    ]
  },
  "controller": [
    "192.168.0.23",
    "192.168.0.27",
    "192.168.0.33"
  ],
  "overcloud": {
    "vars": {
      "ansible_ssh_user": "heat-admin"
    },
    "children": [
      "controller",
      "compute"
    ]
  }
}
We now have a dynamically generated inventory as required, including groups, using the director’s standard controller and compute node deployment roles.
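As a quick sanity check (run from the ~/ansible directory created above), you can target these groups directly with ad-hoc Ansible commands:

# Ping every Overcloud controller node through the dynamic inventory
ansible -i inventory/hosts controller -m ping

# Gather uptime from the compute nodes
ansible -i inventory/hosts compute -m command -a "uptime"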
We’re now ready to run the validations! 
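For instance (exact playbook names vary between releases, so check the validations directory on your director node), an individual validation can be run directly with ansible-playbook against the dynamic inventory:

# Run a single validation playbook from the openstack-tripleo-validations
# package; undercloud-ram.yaml is an example name and may differ per release
ansible-playbook -i inventory/hosts \
  /usr/share/openstack-tripleo-validations/validations/undercloud-ram.yaml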
Ready to go deeper with Ansible? Check out the latest collection of Ansible eBooks, including free samples from every title!
This is the end of the first part of our series. Check back shortly for Part 2 to learn how you can use this dynamic inventory file with the included validations playbooks!
The “Operationalizing OpenStack” series features real-world tips, advice, and experiences from experts running and deploying OpenStack.
Source: RedHat Stack