Become a Blueworks Live ninja: Process made simple in the cloud

Business processes and decisions are the backbone of every company and the source of its competitive advantage. Understanding processes and decisions allows companies to increase efficiency and customer satisfaction. Blueworks Live gives you the ability to discover and document process knowledge more effectively.
Blueworks Live ninja skills
I often see customers race from minimal or no process discipline to an overcomplicated approach. This “zero to 100 miles an hour” leap can overwhelm participants and often results in limited project success.
Being a Blueworks Live ninja is about two things: simplicity and discipline. The following best practices focus primarily on implementing a process modeling program that is both easy to use and successful.
Project best practices
Choose a champion. A successful project must have a management champion. The champion supports the project, helps overcome resistance and protects the team from any political interference.
Model with a reason. Why are you modeling? The reason guides the level of detail for the process diagram and documentation.
Measure milestones. Prepare an approach that defines milestones and deadlines. This minimizes risk by simplifying a large project or rollout into smaller measurable steps.
Model with simplicity. Remember that your most important goal is to understand how things work. Others should be able to easily understand what happens in the process.
Be consistent. It is vital to represent processes in a consistent manner with a consistent level of detail, regardless of the project or individual modeling the process.
Use vigilant validation. Ensure that the process models and associated information are validated and approved by stakeholders and participants. Their buy-in is critical.
Take small steps. Take an incremental rather than a big-bang approach to modeling processes. You will produce results that help create momentum when you start small.
Go pro. Use professional services. The main benefit of using IBM Services or IBM Business Partners is the experience they bring.
Modeling best practices
Depict reality. Identify and document how people actually perform the existing process, not how they should perform it. Capture undocumented workarounds.
Leverage expertise. Collaborate with the people who know how the process works and who are responsible for its success—not those who think they know how it works.
Use visual elements. Use colors to visually indicate process issues you need to resolve. Colors can also be used to highlight manual or system-performed activities.
Be a taskmaster. Tasks represent the smallest unit of work in your process. A process groups related activities into one parent activity:

Label tasks using an action verb plus a noun (for example, “Approve Invoice”). This keeps the focus on what is actually done.
Keep the name concise and easy to read.
Capitalize the first letter of each word in the name.

Ready to become a ninja? Learn more about Blueworks Live.
Source: Thoughts on Cloud

Chile’s largest stock exchange teams with IBM for blockchain solution

The Santiago Stock Exchange, the largest stock exchange in Chile, has partnered with IBM to build a blockchain-based securities lending solution intended to speed up back-office processes by 40 percent.
“We spend two-to-three days after trading signing contracts, adding the assets, and the intermediary needs to create the collateral. All of that will now be put in a block that will be queried by different intermediaries,” said Santiago Stock Exchange CIO Andrés Araya.
The tool, which helps securities lenders, banks, stock exchanges, institutional clients and regulators exchange information, is the result of a year-long development process. Araya said the Santiago Stock Exchange wants to prove to the market and regulators that blockchain technology really works.
For more, check out Coindesk’s full article.
Source: Thoughts on Cloud

RDO Contributor Survey

We recently ran a contributor survey in the RDO community, and while the participation was fairly small (21 respondents), there’s a lot of important insight we can glean from it.

First, and unsurprisingly:

Of the 20 people who answered the “corporate affiliation” question, 18 were Red Hat employees. While we are already aware that this is a place where we need to improve, it’s good to know just how much room for improvement there is. The useful next step will be figuring out why people outside of Red Hat are not participating more. This is touched on in later questions.

Next, we have the frequency of contributions:

Here we see that while 14% of our contributors are pretty much working on RDO all the time, the majority of contributors only touch it a few times per release – probably updating a single package, or addressing a single bug, for that particular cycle.

This, too, is mostly in line with what we expected. With most of the RDO pipeline being automated, there’s little that most participants would need to do beyond a handful of updates each release. Meanwhile, a core team works on the infrastructure and the tools every week to keep it all moving.

We asked contributors where they participate:

Most of the contributors – 75% – indicate that they are involved in packaging. (Respondents could choose more than one area in which they participate.) Test day participation was a distant second place (35%), followed by documentation (25%) and end user support (25%).

I’ve personally seen way more people than that participate in end user support, on the IRC channel, mailing list, and ask.openstack.org. Possibly these people don’t think of what they’re doing as support, but it is still a very important way that we grow our user community.

The rest of the survey delves into deeper details about the contribution process.

When asked about the ease of contribution, 80% said that it was ok, with just 10% saying that the contribution process was too hard.

When asked about difficulties encountered in the contribution process:

Answers were split fairly evenly between “Confusing or outdated documentation”, “Complexity of process”, and “Lack of documentation”. Encouragingly, “lack of community support” placed far behind these other responses.

It sounds like we have a need to update the documentation, and greatly clarify it. Having a first-time contributor’s view of the documentation, and what unwarranted assumptions it makes, would be very beneficial in this area.

When asked how these difficulties were overcome, 60% responded that they got help on IRC, 15% indicated that they just kept at it until they figured it out, and another 15% indicated that they gave up and focused their attention elsewhere.

Asked for general comments about the contribution process, almost all comments focused on the documentation – it’s confusing, outdated, and lacks useful examples. A number of people complained about the way that the process seems to change almost every time they come to contribute. Remember: Most of our contributors only touch RDO once or twice a release, and they feel that they have to learn the process from scratch every time. Finally, two people complained that the review process for changes is too slow, perhaps due to the small number of reviewers.

I’ll be sharing the full responses on the RDO-List mailing list later today.

Thank you to everyone who participated in the survey. Your insight is greatly appreciated, and we hope to use it to improve the contributor process in the future.
Source: RDO

Webcast: Forrester and IBM reveal hybrid cloud trends

Hybrid cloud can free your organization by providing complete flexibility in infrastructure. But does your IT strategy involve an enterprise-grade cloud platform that supports both your cloud-enabled and cloud-native application deployments?
Join us on June 7th at 1:00 p.m. EDT for a special webcast. We’ll discuss trends in hybrid cloud deployments and the total economic impact of IBM PureApplication, a platform designed to accelerate and simplify the deployment of your application and middleware environments.
Recently, Forrester Consulting interviewed PureApplication clients and developed a Total Economic Impact (TEI) study. The TEI study can help organizations evaluate hybrid cloud application platforms. This webcast will give you an overview of the TEI results. We’ll also discuss the latest version of PureApplication, V2.2.3.
You can access the full commissioned study: The Total Economic Impact™ Of IBM PureApplication.
Our guest speakers – Forrester VP and Principal Analyst John Rymer, and Forrester Total Economic Impact Principal Consultant Reggie Lau – will join IBM Director of Hybrid Cloud Management Danny Mace to share insights that help you advance your cloud journey.
Hear about the most significant findings in the TEI of PureApplication, and learn how adopting this hybrid cloud solution can help:

Accelerate application time-to-market
Reduce time, effort and errors in provisioning application environments
Reduce IT management, maintenance and issue resolution costs
Improve business capabilities and resiliency

If you haven’t already, register for the webcast and share the event with your peers. Be sure to come with your questions; you don’t want to miss out. Register today.
Source: Thoughts on Cloud

Recent blog posts – May 22nd

Here are some of the recent blog posts from our community:

Some lessons an IT department can learn from OpenStack by jpena

I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.

Read more at http://rdoproject.org/blog/2017/05/some-lessons-an-it-department-can-learn-from-openstack/

When is it not cool to add a new OpenStack configuration option? by assafmuller

Adding new configuration options has a cost, and makes already complex projects (Hi Neutron!) even more so. Doubly so when we speak of architecture choices: it means that we have to test and document all permutations. Of course, we don’t always do that, nor do we test all interactions between deployment options and other advanced features, leaving users with fun surprises. With some projects seeing an increased rotation of contributors, we’re seeing wastelands of unmaintained code left behind, increasing the importance of being strategic about introducing new complexity.

Read more at https://assafmuller.com/2017/05/19/when-is-not-cool-to-add-a-new-openstack-configuration-option/

Running (and recording) fully automated GUI tests in the cloud by Matthieu Huin

Software Factory is a full-stack software development platform: it hosts repositories, a bug tracker and CI/CD pipelines. It is the engine behind RDO’s CI pipeline, but it is also very versatile and suited for all kinds of software projects. Also, I happen to be one of Software Factory’s main contributors. :)

Read more at http://rdoproject.org/blog/2017/05/running-and-recording-fully-automated-GUI-tests-in-the-cloud/
Source: RDO

Some lessons an IT department can learn from OpenStack

I have spent a lot of my professional career working as an IT Consultant/Architect. In those positions, you talk to many customers with different backgrounds, and see companies that run their IT in many different ways. Back in 2014, I joined the OpenStack Engineering team at Red Hat, and started being involved with the OpenStack community. And guess what, I found yet another way of managing IT.

These last three years have taught me a lot about how to efficiently run an IT infrastructure at scale, and better still, they have proved that many of the concepts I had previously been preaching to customers (automate, automate, automate!) are not only viable, but also required to handle ever-growing requirements with a limited team and budget.

So, would you like to know what I have learnt so far in this 3-year journey?

Processes

The OpenStack community relies on several processes to develop a cloud operating system. Most of these processes have evolved over time, and together they allow a very large contributor base to collaborate effectively. Also, we need to manage a complex infrastructure to support these processes.

Infrastructure as code: there are several important servers in the OpenStack infrastructure, providing service to thousands of users every day: the Git repositories, the Gerrit code review infrastructure, the CI bits, etc. The deployment and configuration of all those pieces is automated, as you would expect, and the Puppet modules and Ansible playbooks used to do so are available in their Git repositories. There can be no snowflakes, no “this server requires a very specific configuration, so I have to log on and do it manually” excuses. If it cannot be automated, it is not efficient enough. Also, storing our infrastructure definitions as code allows us to take changes through peer review and CI before applying them in production. More about that later.

Development practices: each OpenStack project follows the same structure:

There is a Project Team Leader (PTL), elected from the project contributors every six months. A PTL acts as a project coordinator, rather than a manager in the traditional sense, and is usually expected to rotate every few cycles.
There are several core reviewers, people with enough knowledge on the project to judge if a change is correct or not.
And then we have multiple project contributors, who can create patches and peer-review other people’s patches.

Whenever a patch is created, it is sent for review using a code review system, and then:

It is checked by multiple CI jobs that ensure the patch does not break any existing functionality.
It is reviewed by other contributors.

Peer review is done by core reviewers and other project contributors. Each of them has the right to cast different votes:

A +2 vote can only be set by a core reviewer, and means that the code looks good to that core reviewer, who believes it can be merged as-is.
Any project contributor can set a +1 or -1 vote. +1 means “code looks ok to me” while -1 means “this code needs some adjustments”. A vote by itself does not provide a lot of feedback, so it is usually accompanied by comments on what should be changed, if needed.
A -2 vote can only be set by a core reviewer, and means that the code cannot be merged until that vote is lifted. -2 votes can be caused by code that goes against some of the project design goals, or just because the project is currently in feature freeze and the patch has to wait for a while.

When the patch passes all CI jobs and has received enough +2 votes from the core reviewers (usually two), it goes through another round of CI jobs and is finally merged into the repository.
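To make these gating rules concrete, here is a minimal Python sketch of the merge logic described above. It is only an illustration of the rules, not how Gerrit actually implements them; the `Vote` class and `can_merge` function are invented for this example.

```python
# Toy model of the review gating described above: a patch merges only
# when CI passes, no core reviewer has cast a -2, and at least two
# core reviewers have cast a +2.
from dataclasses import dataclass
from typing import List

@dataclass
class Vote:
    reviewer: str
    value: int       # one of -2, -1, +1, +2
    is_core: bool    # only core reviewers may cast -2 or +2

def can_merge(votes: List[Vote], ci_passed: bool, required_plus_twos: int = 2) -> bool:
    if not ci_passed:
        return False
    # A -2 from a core reviewer blocks the merge until it is lifted.
    if any(v.is_core and v.value == -2 for v in votes):
        return False
    plus_twos = sum(1 for v in votes if v.is_core and v.value == +2)
    return plus_twos >= required_plus_twos

votes = [Vote("alice", +2, True), Vote("bob", +2, True), Vote("carol", +1, False)]
print(can_merge(votes, ci_passed=True))  # True: two +2s, passing CI, no -2
```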

This may seem like a complex process, but it has several advantages:

It ensures a certain level of quality on the master branch, since CI jobs have to pass before a patch can merge.
It encourages peer reviews, so code should always be checked by more than one person before merging.
It engages core reviewers, because they need to have enough knowledge of the project codebase to decide if a patch deserves a +2 vote.

Use the cloud: it would not make much sense to develop a cloud operating system if we could not use the cloud ourselves, would it? As expected, all the OpenStack infrastructure is hosted in OpenStack-based clouds donated by different companies. Since the infrastructure deployment and configuration is automated, it is quite easy to manage in a cloud environment. And as we will see later, it is also a perfect match for our continuous integration processes.

Automated continuous integration: this is a key part of the development process in the OpenStack community. Each month, 5000 to 8000 commits are reviewed in all the OpenStack projects. This requires a large degree of automation in testing, otherwise it would not be possible to review all those patches manually.

Each project defines a number of CI jobs, covering unit and integration tests. These projects are defined as code using Jenkins Job Builder, and reviewed just like any other code contribution.
For each commit:

Our CI automation tooling will spawn short-lived VMs in one of the OpenStack-based clouds, and add them to the test pool
The CI jobs will be executed on those short-lived VMs, and the test results will be fed back as part of the code review
The VM will be deleted at the end of the CI job execution

This process, together with the requirement for CI jobs to pass before merging any code, minimizes the number of regressions in our codebase.
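The lifecycle is easy to picture in code. Below is a simplified, self-contained Python sketch of it; this models the flow rather than Nodepool’s real API, and every class and function name in it is hypothetical.

```python
# Simplified sketch of the short-lived VM lifecycle described above.
# This models the flow, not Nodepool's real API; all names here are
# hypothetical stand-ins for the actual tooling.

class FakeCloud:
    """Stands in for an OpenStack-based cloud that donates test capacity."""
    def spawn_vm(self, image):
        print(f"spawning VM from image {image}")
        return object()

    def delete_vm(self, vm):
        print("deleting VM")

def run_job(vm, job_name):
    # In reality the job's build steps execute on the VM; here we just pass.
    print(f"running {job_name}")
    return True

def test_commit(commit_id, job_names, cloud):
    """Run each CI job on a fresh VM and feed results back to the review."""
    results = {}
    for name in job_names:
        vm = cloud.spawn_vm(image="ci-image")  # fresh VM per job run
        try:
            results[name] = run_job(vm, name)
        finally:
            cloud.delete_vm(vm)                # always reclaim the VM
    print(f"review feedback for {commit_id}: {results}")
    return all(results.values())

test_commit("I1234abcd", ["unit-tests", "integration-tests"], FakeCloud())
```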

Use (and contribute to) Open Source: one of the “Four Opens” that drive the OpenStack community is Open Source. As such, all of the development and infrastructure processes happen using Open Source software. And not just that, the OpenStack community has created several libraries and applications with great potential for reuse outside the OpenStack use case. Applications like Zuul and nodepool, general-purpose libraries like pbr, or the contributions to the SQLAlchemy library are good examples of this.
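As a small, concrete example of that reuse, pbr can be dropped into any Python project. The snippet below follows pbr’s documented usage: setup.py delegates to pbr, packaging metadata lives in setup.cfg, and the version is derived from Git tags.

```python
# Minimal setup.py for a project using pbr, per pbr's documented usage.
# All packaging metadata (name, author, entry points, ...) goes into
# setup.cfg; pbr derives the package version from Git tags.
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True,
)
```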

Tools

So, which tools do we use to make all of this happen? As stated above, the OpenStack community relies on several open source tools to do its work:

Infrastructure as code

Git to store the infrastructure definitions
Puppet and Ansible as configuration management and orchestration tools

Development

Git as a code repository
Gerrit as a code review and repository management tool
Etherpad as a collaborative editing tool

Continuous integration

Zuul as an orchestrator of the gate checks
Nodepool to automate the creation and deletion of short-lived VMs for CI jobs across multiple clouds
Jenkins to execute CI jobs (actually, it has now been replaced by Zuul itself)
Jenkins Job Builder as a tool to define CI jobs as code

Replicating this outside OpenStack

It is perfectly possible to replicate this model outside the OpenStack community. We use it in RDO, too! Although we are very closely related to OpenStack, we have our own infrastructure and tools, following a very similar process for development and infrastructure maintenance.

We use an integrated solution, Software Factory, which includes most of the common tools described earlier (and some other interesting ones). This allows us to simplify our toolset and have:

Infrastructure as code

https://github.com/rdo-infra contains the definitions of our infrastructure components

Development and continuous integration

https://review.rdoproject.org, our Software Factory instance, to integrate our development and CI workflow
Our own RDO Cloud as an infrastructure provider

You can do it, too

Implementing this way of working in an established organization is not a straightforward task. It requires your IT department and application owners to become as cloud-conscious as possible, reduce the number of micro-managed systems to a minimum, and establish a whole new way of managing your development… But the results speak for themselves, and the OpenStack community (and RDO!) is proof that it works.
Source: RDO

Westpac Bank rolls out first stages of move to cloud

Westpac, a bank based in Sydney, has moved two core banking processes to the cloud since the end of March, and leaders are planning to move 20 more by early 2018, The Australian Financial Review reported.
The bank’s move to platform as a service on IBM hybrid cloud is a big shift, but Westpac Chief Information Officer Dave Curran said he believes moving its infrastructure to the cloud will save money, boost efficiency and increase flexibility. Curran said he hopes that 40 percent of the company’s business applications will be on the cloud by 2020.
The two applications that have already moved to the cloud are Westpac’s business bank “Deal Tracker” app and its risk decision-making engine.
“I believe we’ll be on the public cloud in less than 10 years, I’m pretty happy where we are in relation to that and moving our core business across,” Curran said.
Read the full story of Westpac’s journey onto the cloud at The Australian Financial Review.
Source: Thoughts on Cloud