Is 2017 the year of digital business reinvention in financial services?

Against a backdrop of emerging technologies, shifting customer expectations, regulatory changes and evolving business models, Forrester Research lays out the future of financial services in a provocative report entitled “Predictions 2017: Pioneering Financial Providers Will Partner With Fintech To Build Ecosystems.” It makes the claim that breakaway leaders will focus on digital products delivered as application programming interfaces (APIs), and partner with fintech organizations to create new ways to engage customers.
Indeed, the leaders recognize that nothing short of a complete digital reinvention is needed to compete and win going forward. For some, this means launching more mobile payment apps than the market might need. For the pioneers, however, it means tackling more challenging customer journeys such as mortgage applications and insurance claims, and incorporating predictive analytics into more intelligent, personalized product advisors.
On the emerging technology front, everyone is evaluating blockchain use cases. Forrester predicts that sentiment around blockchain will transition from “irrational exuberance to rational assessment” as the players accept that the challenges of successfully implementing blockchain applications are equal parts market structure, non-technical issues and technology.
The impact of the European Union’s revised Payment Services Directive (PSD2) cannot be overstated. It will spread across world markets as institutions and fintechs monitor the results of the unfolding open banking initiatives and shift to embrace the API economy.
These are just some of the main points I picked up from this report. To dig deeper into these, read the full report.
So is 2017 the year of digital business reinvention for financial services, where institutions partner with fintechs in digital ecosystems that create new ways to engage customers? I think so. I can imagine ecosystems and cloud development platforms that bring financial institutions and fintechs together to deliver a range of new solutions, from payments, lending and insurance claims to investment management and more. This vision is not a distant future.
Learn more about how you can start now to capitalize on digital ecosystems and accelerate the pace of innovation for your organization.

A version of this post originally appeared on the IBM Banking blog.
The post Is 2017 the year of digital business reinvention in financial services? appeared first on Cloud computing news.
Source: Thoughts on Cloud

EasyFix: Getting started contributing to RDO

It can be intimidating trying to get involved in an open source project. Particularly one as huge and complicated as OpenStack. But we want you to join us on the RDO project, so we’re trying to make it as easy as possible to get started.

To that end, we’ve started the EasyFix initiative. EasyFix is a collection of “easy” tickets that you should be able to get started on without having a deep knowledge of the RDO project. And in the process, you’ll learn what you need to know to move on to more complex things.

These tickets cover everything from documentation, to packaging, to helping with events, to writing tools to make things easier. And each ticket comes with a mentor – someone who has volunteered to help you get through that first commit.

There’s also a general mentors list, categorized by area of expertise, if you just have a question and you’re not sure who to ping on IRC.

We’ve been running the EasyFix program for about 3 weeks now, and in that time, four new contributors have started on the RDO project.

We’re very pleased to welcome these new contributors, and hope they’ll be around for a long time to come.

Christopher “snecklifer” Brown – I’m Christopher Brown, an HPC Cloud Engineer based in Sheffield in the UK. I’ve worked on OpenStack since June 2015. I used to work on packaging for the Fedora project, so I am transferring and updating those skills to help out with RDO when I can.

Anthony Chow – I am a software developer for legacy networking equipment who wants to become a Developer Advocate. I have been venturing into other technologies such as cloud, containers, and configuration management tools. I am passionate about learning and sharing technology-related topics.

Treva Williams – T. Nichole Williams is an RHCSA 7 certified Linux and OpenStack engineer, a content author for LinuxAcademy.com, and an active technical contributor to the OpenStack Magnum project. She is actively involved in several OpenStack, OpenShift, RDO, and Ceph communities and groups. When not OpenStacking or Cephing, she enjoys doggos, candy, cartoons, and playing “So You Think You’re a Marine Biologist” on Google.

Ricardo Arguello – I am an open source enthusiast who tries to collaborate as much as I can. I have helped in the past with patches for WildFly and as a Fedora packager. The RDO project is very complex and intimidating, but collaborators are welcome and there are easy-to-fix bugs for newbies to make their first contributions! That makes RDO a great community if you are interested in helping the project while learning about OpenStack internals in the process.

If you want to participate in the RDO project, we encourage you to find something on the EasyFix list and get started. And please consider attending EasyFix office hours, Tuesdays at 13:30 UTC.
Source: RDO

Review and Future Directions of CloudForms State-Machines

This article explains the use of state machines in Red Hat CloudForms for controlling the flow of automation.
State machines are sometimes perceived as rocket science: often taught but rarely used. The first thing to dispel is the notion that state machines are complex; then we can compare how a state machine differs from other process automation approaches, such as workflows.
Finally, the article dispels the myth that state machines are just Ruby, and the myth that if you use Ansible Automation Inside you do not need state machines. Neither is true.

Why State Machine?
Automation flows are typically bigger than first envisaged once they are taken into an enterprise.
Example:
You may wish to provision a database cluster, so the primary task is the installation and configuration of that cluster.

Your compliance officer may instruct that the corporate CMDB must be updated with any artifacts that have been provisioned.
The IT department may also wish to trace the provisioning activity in the corporate help desk system, by opening a ticket at the start of the job and closing it when the job has completed.
Lastly, you may have a new requirement to use IP addresses from a corporate IP Address Management (IPAM) system when provisioning your database cluster.

Enterprises bring with them corporate standards, regulatory compliance requirements and operational patterns to follow. State machines are a great way to combine varying automation requirements.
Handling each requirement as a separate state provides the following value:

You can decide how the next state will behave based on how the current state has exited.
You can re-use states in other state machines.

The last benefit is important: even when each automation group in the enterprise creates its automation in a silo, CloudForms provides a way to re-use the corporate states outside of the primary automation play.
Example:

The Amazon team create an automation play that deploys instances into Amazon EC2.
The VMware team create an automation play that deploys VMs into vSphere Clusters.

Both teams need to update the corporate CMDB with the asset details. With CloudForms you can write the automation play that updates the CMDB once and share it, allowing both the Amazon and VMware teams to leverage the same play. This saves time and also ensures adherence to corporate standards.
 
Why not Workflows?
“What is the difference between a State Machine and a Workflow?” is the common question.
Answer:
A state machine, which is a series of states with transitions between them, allows for loops, as opposed to a sequential workflow, which proceeds down different branches until done.
The key here is that a state machine has re-entrancy. A state can run and, by itself, decide whether it should run again. A state can jump to other states and even go back to a previous state.
Example:

State – Update the CMDB with asset detail.

This state should succeed, but it can also fail.
A failure could be a hard failure, where the response from the CMDB tells the state that an authentication failure occurred.
A softer failure is simply that the CMDB service is unreachable. These failures could be dealt with in different ways, such as:

Authentication Failure EQUALS Fail the State.
Unreachable CMDB EQUALS Retry the state (predefined retry count and interval).

The result is a state that retries when the CMDB service is unavailable or busy, fails the job when authentication fails, and continues to the next state when successful.
This is the state machine for our example;
 

 
So what’s the benefit again? To do this in a workflow, the workflow author would have to write the logic that controls the gates using decision processes, and the re-entrancy would also need to be coded into the workflow. It would look something like this:
 

 
Another benefit of using states in a state machine is the ability to do pre- and post-processing of a state. Before entering a state we can run some logic, then execute the state itself, and finally run more logic on exit. Failure handling is also supported per state. This means that any one state can do the following:

Pre State – Is updating the CMDB enabled for this job?
State – Run the CMDB Update.
Post State – Did the CMDB update successfully?
Error State – Send an email to admin to say CMDB is misconfigured.

A flowchart of the pre-state logic combined with the state would look like this:
 

 
Summary – A state machine is a table of states. Each state has entry, exit and retry logic, so any state can succeed, fail or retry, giving the state machine the flexibility to be traversed in any order.
 
State Machine and Method Types
A state machine transitions through a series of states. The states will call/connect to instances. It is the job of the instance to define the state. What does this mean?
 

 
The instance could define some attributes like;

CMDB Server URL.
CMDB Username.
CMDB Password.

But these are not much use unless you feed them into something that can use them. This brings us to METHODS.
A method is something that runs. You define the method and call the method from an instance. Such as;
 

 
The implementation of state machines in CloudForms is limited by the method types it supports. Here are the supported method types:

Built-in – A number of built-in methods exist for placement, quota, email and other use cases.

 

 

In-line – Ruby scripting language support. Write a Ruby script and it will be executed.

 

 

URI – Point to a Ruby script at a URI resource. It will be executed from that location.

 

 
As you can see, two of the three options use the native Ruby scripting language, but the first, built-in, demonstrates that Ruby is not the only method choice.
 
State Machine using Built-In Method
Some time ago, I wrote a blog on running built-in methods in CloudForms.
It demonstrates how a state machine can call a built-in method, passing parameters to send an email. This requires no Ruby coding and uses simply an instance that calls a method.
 
State Machine using In-Line Method
This is the most common method type; most, if not all, states in the out-of-the-box provisioning state machines use in-line Ruby methods.
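As a minimal sketch, assuming hypothetical attribute names (cmdb_server_url, cmdb_username) defined on the calling instance, an in-line Ruby method for the CMDB-update state discussed earlier might look like this ($evm is the automate service object exposed to in-line methods):

    # In-line Ruby method sketch: consume CMDB connection details defined
    # as attributes on the calling instance (attribute names are hypothetical).
    cmdb_url  = $evm.object['cmdb_server_url']
    cmdb_user = $evm.object['cmdb_username']

    $evm.log('info', "Updating CMDB at #{cmdb_url} as #{cmdb_user}")
    # ... call the CMDB API here and record the provisioned asset ...

Because the connection details live on the instance rather than in the method, the same method can be re-used by different instances pointing at different CMDB endpoints.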
 
State Machine using URI Method
This method type is not used out of the box today, but you certainly can use it in a custom implementation. The main issue with a URI-based location for the method is availability: you need to ensure that ALL CloudForms servers running the Automate role can access the resource. Still, the concept of using an external location for methods is very cool. Maybe you could point the method location at a Git URL; then you would get versioning, branching and availability all from Git – an interesting blog for the future.
 
State Machine and Ansible Automation Inside
The near-future direction for CloudForms is to add Ansible as a method type for state machines. This would allow states to use instances that execute Ansible playbooks as methods, combining the benefits of Ansible’s simple language and powerful module integration with the power of state machines to control process flows. A single-state example would look like this:
 

 
A more complete example would look like;
 

 

You can see in this example how the first state calls a playbook to open a ticket in the corporate help-desk system.
State 2 performs a quota check; in CloudForms this is a Ruby method that takes some input parameters and either fails the state if no quota is available or continues on.
The third state calls a built-in method for provisioning.
The last state is another playbook that closes the ticket, again taking parameters from the instance, such as the connection details and the ticket number to close.

 
State Control
You can control a state in a state machine using re-entrancy and exit codes. For example, you can set the exit code of your method as follows:

Ok – The placement of the VM was successful.
Error – The placement of the VM failed.
Retry – We want to run this placement logic again.

Therefore what determines how the next state will be processed is simply the exit from the previous state.

If the previous state exits with a retry, then the state will be retried for the number of retries the state machine is configured for, and the duration between retries can also be controlled.
If the state exits OK, then the next state is processed.
If the state exits with Error, then the next state is actually the error state of the same state, so it can clean up any failure or back out what was partially done (a minimal method sketch follows the example below). A good example of this would be:

State 1 – Create VM.
State 2 – Configure Firewall.
State 3 – Install Apache.

If State 3 fails, then the error state for State 3 might undo the firewall config and remove the VM.
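Returning to the CMDB example, here is a minimal sketch of how an in-line method might set these exit codes. The helper and exception names are hypothetical; ae_result and ae_retry_interval are the root-object values used to signal the outcome:

    # Illustrative only: set the state's exit code based on how the CMDB call went.
    begin
      update_cmdb                               # hypothetical helper calling the CMDB API
      $evm.root['ae_result'] = 'ok'             # continue to the next state
    rescue CmdbAuthError                        # hypothetical exception classes
      $evm.root['ae_result'] = 'error'          # hard failure: fail the state
    rescue CmdbUnreachableError
      $evm.root['ae_result'] = 'retry'          # soft failure: retry the state later
      $evm.root['ae_retry_interval'] = '1.minute'
    end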
 
Appendix A
Definition of “state machine” from Google/Wikipedia (figure omitted).
 
Assertions, Relationships and Schema
Relationships
A state machine, when written, contains states, but it can also contain other entry types. For example, you may wish to connect one state machine to another, as follows:

State Machine “Create VM in VMware”.
State Machine “Install Apache”.

These two state machines may have many states doing various tasks. The advantage of separating them is that you can re-use either state machine with others, such as:

State Machine “Create VM in RHV”.
State Machine “Install Apache”.

This now uses a new state machine to create a VM in RHV, but the same state machine to install Apache.
To allow state machines to connect to each other we have “relationships”: you can bind from one place in a state machine to another.
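For illustration, a relationship field simply holds the automate URI of the entry point to invoke; the exact path below is hypothetical:

    /Infrastructure/VM/Provisioning/StateMachines/InstallApache/default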
 
Assertions, and more on Relationships
Assertions are very cool and allow you to stop a state machine mid-flow. The first example to look at is where you do not wish to continue based on a condition. Take the following as an example:

State 1 – Create VM.
State 2 – Configure Firewall.
State 3 – Install Apache.

In state 2, you can do the following:

Assertion = “Continue only if the VM was created”.
Method = Configure Firewall.

The assertion is resolved first: if it returns true, processing continues to the next line, which is the method that actually configures the firewall. If the condition returns false, the state stops processing there and does not run the method.
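As a rough sketch, and assuming a hypothetical vm_created attribute on the root object, the assertion for state 2 could be written with CloudForms’ substitution syntax as a boolean expression such as:

    '${/#vm_created}' == 'true'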
You can mix assertions with relationships too. As an example, suppose you wish to install a set of packages onto a VM and have created a method for each package install. Based on what we have discussed so far, you could do the following:

State 1 – Create VM.
State 2 – Install Apache.
State 3 – Install PHP.
State 4 – Install CSS.
State 5 – Install WebSite.

Or, more easily, you can:

State 1 – Create VM.
State 2 – Install Web Components.

The “Install Web Components” state would use a wildcard connection from the state to the methods, for example:

State 1 – Create VM.
State 2 – Install Web Components.

Relationship – WebComponents*

The reason you may wish to use assertions here is to stop the “resolution” of the wildcard from picking up instances (and their methods) that should be excluded. For example, if you wish to pick up only the Linux versions of the web components because the VM template is Linux, you could configure, on each of the Linux instances heading the methods, an assertion that evaluates to true or false based on whether the template OS is Linux. For example:
You have many instances for the web components, some Windows and some Linux:

Linux – Install Apache.
Linux – Install PHP.
Linux – Install CSS.

And

Windows – Install Apache.
Windows – Install PHP.
Windows – Install CSS.

And

Common – Install WebSite.

Therefore, when the state machine resolves the relationship, it picks up the web component instances that match the condition, and always picks up “Common – Install WebSite”.
Source: CloudForms

From HAL to Watson: AI-driven models that boost efficiency

When I think of artificial intelligence (AI), I cannot help but think of the science fiction films that I grew up watching and the fictional AI computers they featured: HAL in “2001: A Space Odyssey” or Skynet in “The Terminator.” In those films people interacted with computers that could understand natural language and make decisions.
Today, in the real world, these fictional AIs have been surpassed by Watson and others – and thankfully are a lot less menacing. The art of the possible has progressed faster than my childhood self could have dreamed.
And yet, in IT operations, many companies are still monitoring their environments in a traditional way, using static thresholds to alert them when something is anomalous. Some monitoring teams are still relying on customer complaints to make them aware of problems.
A better way to do alerts is to let software like Predictive Insights help manage the environment. Predictive Insights helps diminish the need for manual, time-consuming effort so teams can be alerted to the most significant problems first, and become more efficient.
You can’t gain efficiency by simply replacing one manual effort with another. That’s why we made Predictive Insights both configurationless and time-series-data-agnostic, so teams do not have to expend additional effort tuning or configuring the system. Configurationless means that the software can learn automatically, without human intervention. Time-series-data-agnostic means that it can take any time series data, such as key performance indicators, metrics, or measurements of something over time, and show value. It can do this regardless of whether the data came from IBM or a third-party source.
The second reason we do not require configuration is that it does not make sense for a machine-learning-driven product to ask questions it could better answer itself. For example, I have seen competitor products that require someone to select and configure the algorithm used to evaluate a metric. At best a data scientist could take an educated guess; at worst they would create a random alarm generator. The right answer depends on the data itself.
Predictive Insights is different. It has multiple algorithms assess the data, determines which algorithms are best suited to each metric, and attempts to build mathematical models that describe each metric’s normal behavior. The models must then pass a validation phase to ensure that they are accurate and do not overfit or underfit the data. A model that fails validation is sent back for relearning against the data. This validation step can occur many times, and models that pass are used for anomaly detection. The best part is that this happens automatically, without disrupting the environment.
Predictive Insights can typically evaluate millions of models in less than one minute. It performs three types of relationship discovery: correlations, Granger causalities, and metrics that are frequently anomalous at the same time. Any one of these algorithms alone requires trillions of calculations, and yet Predictive Insights runs them automatically, every day, on commodity hardware.
What once existed only in Hollywood’s imagination is now a reality with Predictive Insights. To learn more, register for our webinar on the value Predictive Insights brings to IT operations.
Interested in how APIs can drive business insights for IT operations teams? Check out the first post in our IBM Operations Analytics series. And stay tuned for additional key learnings from our colleagues in coming weeks.
 
The post From HAL to Watson: AI-driven models that boost efficiency appeared first on Cloud computing news.
Source: Thoughts on Cloud

Creating content with the help of AI

The vast majority of marketers use content marketing.
According to The Content Marketing Institute, instead of pitching products or services, brands are providing truly relevant and useful content to prospects and customers to help them solve their problems. Content should be at the core of marketing. The challenge is how to meet the growing demand for fresh, relevant content.
Marketers could turn to a content farm, also known as a “content mill,” where writers are sometimes paid just fractions of pennies, for inexpensive content. However, it will likely soon become apparent that organizations get what they pay for when search engines don’t rank their keyword-stuffed, low-quality content.
A better alternative is to put the power of artificial intelligence (AI) to work.
AI-assisted content creation
Articoolo helps writers create unique, proofread, high-quality content from scratch, simulating a human writer. Users choose the topic and length, and an algorithm does the initial work, helping writers do their jobs more quickly and cost-effectively. Writers can get a head start on their content for as little as $1 per article.
Articoolo joined the IBM AlphaZone Accelerator program, which helps startups build leading solutions for the enterprise market. Using OpenWhisk on the IBM Bluemix platform gives Articoolo the high availability and flexibility it needs to meet changing demands. The Watson AlchemyAPI service provides powerful text analytics and natural language processing capabilities that fuel the algorithm creating fresh, coherent content that simulates a human writer.

A short lesson in content marketing
Articoolo generated some of the following content based on the phrase “content marketing”:
Know your demographic and what your audience cares about. Use keywords to target your audience. Conduct research with keywords. Create “expert” content that reflects the competence of your company. Invite people to write guest posts for your blog. Engage with social media followers. Cross promote your content on multiple social platforms to improve click through. Forty-four percent of online shoppers begin by using an internet search engine. SEO is important because your audience only sees a snippet of content in a search result and may never click past the first page.
See how an Articoolo customer in Japan uses AI to provide content for its blog or check out how this comedian got schooled by a robot.
Overcoming writer’s block
Not everyone wants an algorithm to create content, but content creators still might like a little help to get started. Articoolo offers other content-related tools and services for professional writers that can summarize or rewrite an article, generate a title, or find images or quotations. There’s also an API and a WordPress plug-in to make blogging much easier.
Quality content is not a commodity. Articoolo is not trying to completely replace human writers; it is primarily an ideation tool. Content marketers can try Writer’s Little Helper, a free service that offers inspiring ideas and relevant images to use as a starting point.
Read the case study for more details about the technology behind Articoolo.
The post Creating content with the help of AI appeared first on Cloud computing news.
Source: Thoughts on Cloud

Italian airport operator SEA announces 7-year agreement to use IBM Cloud

SEA (Società Esercizi Aeroportuali), the company that manages the Milan Linate and Malpensa airports in Italy, has announced a seven-year agreement to use a hybrid IT environment that integrates with IBM Cloud.
The agreement includes data center and hosting services, systems and storage managed services, and help desk and security management. It’s expected to generate considerable cost savings for SEA.
SEA is ranked at the top of Italian airport systems for the volume of cargo transported, and is number two in terms of the number of passengers. It’s in the top 10 across Europe in both categories.
The company will use its new hybrid cloud infrastructure for numerous SAP workloads, including SAP Hybris. SEA will also use Cloud VDI for workplace management and employee training.
Data center services will come through the Italian IBM Data Center Campus, which includes the IBM Cloud Data Center in Milan. IBM has nearly 60 cloud data centers in 19 countries.
Learn more in financialnews.co.uk’s full report.
 
The post Italian airport operator SEA announces 7-year agreement to use IBM Cloud appeared first on Cloud computing news.
Source: Thoughts on Cloud

The 7 Habits of Highly Effective DevOps

As I wrote last week, we’ve recently had the opportunity to speak with hundreds of enterprise companies as part of the Culture, Containers and Accelerating DevOps events with Gene Kim (@realgenekim). During those discussions, we’ve been able to learn about lessons since The Phoenix Project and the most important metrics for measuring DevOps success. We’ve also learned […]
Source: OpenShift

rdopkg-0.44 ChangeBlog

I’m happy to announce that version 0.44.2 of rdopkg, the RPM packaging automation
tool, has been released.

While a changelog generated from git commits is available in the
original 0.44 release commit
message,
I think it’s also worth writing a human-readable summary of the work done by the rdopkg
community for this release. I’m not sure about the format yet, so I’ll start
with a blog post about the changes – a ChangeBlog ;)

41 commits from 7 contributors were merged over the course of 4
months since the last release, with an average time to land of 6 days.
More stats

For more information about each change, follow the link to inspect the related
commit on GitHub.

Software Factory migration
Migrate to softwarefactory-project.io

rdopkg now has a
new home under the Software Factory project,
alongside DLRN.
The GitHub repository also
moved from its legacy
location to the softwarefactory-project namespace.
The issue tracker stays on GitHub.

Versioning
Adopt pbr for version and setup.py management
Include minor version 0.44 -> 0.44.0 as pbr dictates

Python 3 compatibility
Add some Python 3 compatibility fixes
More python 3 compatibility fixes

rdopkg now loads under Python 3; the next step is
running the tests under Python 3
using
tox.

Testing
Add BDD feature tests using python-behave

Unit tests sucked for testing high-level behavior, so I tried an alternative.
I’m quite pleased with python-behave; see one of the first new-version scenarios
written in Gherkin:

Scenario: rdopkg new-version with upstream patches
    Given a distgit at Version 0.1 and Release 0.1
    Given a patches branch with 5 patches
    Given a new version 1.0.0 with 2 patches from patches branch
    When I run rdopkg new-version -lntU 1.0.0
    Then .spec file tag Version is 1.0.0
    Then .spec file tag Release is 1%{?dist}
    Then .spec file doesn't contain patches_base
    Then .spec file has 3 patches defined
    Then .spec file contains new changelog entry with 1 lines
    Then new commit was created

It also looks reasonable on the Python side.

Avoid test failure due to git hooks
tests: include pep8 in test-requirements.txt
tests: enable nice py.test diffs for common test code
tests: fix gerrit query related unit tests

New Features
pkgenv: display color coded hashes for branches

You can now easily tell the state of branches just by looking at color:

distgit: new -H/–commit-header-file option
patch: new -B/–no-bump option to only sync patches
Add support for buildsys-tags in info-tags-diff
Add options to specify user and mail in changelog entry
allow patches remote and branch to be set in git config
new-version: handle RHCEPH and RHSCON products
guess: return RH osdist for eng- dist-git branches

Improvements
distgit: Use NVR for commit title for multiple changelog lines
Improve %changelog handling
Improve patches_ignore detection
Avoid prompt on non interactive console
Update yum references to dnf
Switch to pycodestyle (pep8 rename)

Fixes
Use absolute path for repo_path

This caused trouble when using rdopkg info -l.

Use always parse_info_file in get_info
Fix output of info-tags-diff for new packages

Refactoring
refactor: merge legacy rdopkg.utils.exception

There is only one place for exceptions now o/

core: refactor unreasonable default atomic=False
make git.config_get behave more like dict.get
specfile: fix improper naming: get_nvr, get_vr
fixed linting

Documentation
document new-version’s –bug argument
Update doc to reflect output change in info-tags-diff

Happy rdopkging!
Source: RDO

What’s new in ZuulV3

Zuul is a program used to
gate a project’s source code repository so that changes are only
merged if they pass integration tests. This article presents some
of the new features in the next version:
ZuulV3

Distributed configuration

The configuration is distributed across projects’ repositories. For example, here is what the new zuul main.yaml configuration will look like:

- tenant:
    name: downstream
    source:
      gerrit:
        config-projects:
          - config
        untrusted-projects:
          - restfuzz
      openstack.org:
        untrusted-projects:
          - openstack-infra/zuul-jobs:
              include: job
              shadow: config

This configuration describes a downstream tenant with two sources: gerrit is a local Gerrit instance,
and openstack.org is the review.openstack.org service. For each source, there are two types of projects:

config-projects hold configuration information such as logserver access.
Jobs defined in config-projects run with elevated privileges.
untrusted-projects are projects being tested or deployed.

The openstack-infra/zuul-jobs repository has special settings, discussed below.

Default jobs with openstack-infra/zuul-jobs

The openstack-infra/zuul-jobs repository contains common job definitions and
Zuul only imports jobs that are not already defined (shadow) in the local
config.

This is great news for Third Party CIs that will easily be able to re-use
upstream jobs such as tox-docs or tox-py35 with their convenient
post-processing of unittest results.

In-repository configuration

The new distributed configuration enables a more streamlined workflow.
Indeed, pipelines and projects are now defined in the project’s repository
which allows changes to be tested before merging.

Traditionally, a project’s CI needed to be configured in two steps: first the jobs
were defined, then a test change was rechecked until the job was working.
This is no longer needed because the jobs and configuration are set directly in
the repository, and the CI change itself undergoes the CI workflow.

Once the project is registered in the main.yaml file, a project author can submit a
.zuul.yaml file (along with any other changes needed to make the test succeed).
Here is a minimal zuul.yaml setting:

- project:
    name: restfuzz
    check:
      jobs:
        - tox-py35

Zuul will look for a zuul.yaml file or a zuul.d directory as well as hidden
versions prefixed by a ‘.’. The project can also define its own jobs.

Ansible job definition

Jobs are now created in Ansible, which brings many advantages over
the Jenkins Jobs Builder format:

A multi-node architecture where tasks are easily distributed,
The Ansible module ecosystem simplifies complex tasks, and
Manual execution of jobs.

Here is an example:

- job:
    name: restfuzz-rdo
    parent: base
    run: playbooks/rdo
    nodes:
      - name: cloud
        label: centos
      - name: fuzzer
        label: fedora

Then the playbook can be written like this:

- hosts: cloud
  tasks:
    - name: "Deploy rdo"
      command: packstack --allinone
      become: yes
      become_user: root

    - name: "Store openstackrc"
      command: "cat /root/keystonerc_admin"
      register: openstackrc
      become: yes
      become_user: root

- hosts: fuzzer
  tasks:
    - name: "Setup openstackrc"
      copy:
        content: "{{ hostvars['cloud']['openstackrc'].stdout }}"
        dest: "{{ zuul_work_dir }}/.openstackrc"

    - name: "Deploy restfuzz"
      command: python setup.py install
      args:
        chdir: "{{ zuul_work_dir }}"
      become: yes
      become_user: root

    - name: "Run restfuzz"
      command: "restfuzz --target {{ hostvars['cloud']['ansible_eth0']['ipv4']['address'] }}"

The base parent from the config project manages the pre phase to copy
the sources to the instances and the post phase to publish the job logs.

Nodepool drivers

This is still a work in progress,
but it’s worth noting that Nodepool is growing a driver-based design to
support non-OpenStack providers. The primary goal is to support static node
assignments, and the interface can be used to implement new providers.
A driver needs to implement a Provider class to manage access to a new API,
and a RequestHandler to manage resource creation.

As a Proof Of Concept, I wrote an
OpenContainer driver that can spawn
thin instances using RunC:

providers:
  - name: container-host-01
    driver: oci
    hypervisor: fedora.example.com
    pools:
      - name: main
        max-servers: 42
        labels:
          - name: fedora-26-oci
            path: /
          - name: centos-6-oci
            path: /srv/centos6
          - name: centos-7-oci
            path: /srv/centos7
          - name: rhel-7.4-oci
            path: /srv/rhel7.4

This is good news for operators and users who don’t have access to an
OpenStack cloud, since Zuul/Nodepool may be able to use new providers
such as OpenShift.

In conclusion, ZuulV3 brings a lot of new cool features to the table,
and this article only covers a few of them. Check the
documentation
for more information and stay tuned for the upcoming release.
Source: RDO