Does your data deserve a private cloud?

Your business and your data are unique. For that reason, your enterprise architecture must also be tailored to fit the exact needs of your business. When data is involved, you want choices, not tradeoffs. More importantly, you want your solutions to build upon and complement one another.
For most companies, the variety of data sources and where that data should be stored are top priorities. You can’t afford to keep the data in silos or leave certain data untouched just because of its type or where it happens to sit. You need the flexibility to access all your data and place it in the optimal location.
Typically, the solution to this problem has been the public cloud. But what if you also have sensitive data that, due to company mandates or external regulations, needs significant levels of protection? You don’t want to trust the safety of that data to anyone else, preferring instead to take control and choose the level of security for yourself. This is where on-premises solutions have often come into play.
Ultimately, you want choice in flexibility and security without making sacrifices. You want to benefit from the best of both options as well as exceptional performance. At the intersection of these needs is the private cloud. Private clouds offer flexibility, just like public clouds, but they sit behind your firewall, giving you more control. Instead of making a tradeoff, flexibility and security are provided in unison, giving you more choice when it comes to your data.
IBM has embraced the concept of private clouds because we recognize the uniqueness of each data-related business opportunity. IBM Cloud Private delivers the features that many have come to expect from public cloud architectures, including shared-resource efficiency, utility computing and flexible scalability, which together deliver better total cost of ownership (TCO) and simplicity of deployment. Private cloud also gives companies more options in security, compliance and infrastructure customization. In other words, IBM Cloud Private helps provide the flexibility and security you desire in concert with our other data-focused solutions.
For example, running IBM Db2 on IBM Cloud Private offers choice in deployment flexibility and security without sacrifices. At its core, Db2 still maintains the performance that enterprise users have come to expect: it's fast, always available, secure and flexible. The built-in IBM BLU Acceleration MPP architecture provides in-memory speed to get insights to those who need them faster. Its compression technology increases performance while simultaneously reducing storage requirements, giving you the opportunity to reduce storage costs. None of those key features go away when Db2 is running on IBM Cloud Private, but enhancements are made in two areas.
Data management flexibility complemented by container technology
Db2 is extremely flexible on its own. Thanks to the common SQL engine it shares with the entire IBM family of hybrid data management offerings, you can use data of various types sitting in a multitude of on-premises and on-cloud locations. The Db2 deployment is also flexible thanks to its ability to be deployed within a container. This is where IBM Cloud Private’s additive effect comes into play. IBM Cloud Private is built with two of the most popular container technologies at its base: Kubernetes and Cloud Foundry. Deploying Db2 using these technologies opens up the ability to maximize performance and efficiency by more closely aligning usage with company needs.
IBM Cloud Private also opens up the possibility of optimizing your infrastructure costs by offering the right mix of transactional (IBM Db2) and data warehousing solutions (IBM Db2 Warehouse) that adhere to the software-defined architecture. This provides the flexibility of managing your infrastructure needs via simplified deployment and efficient management of the same.
Built-in security bolstered by your own firewall
The built-in security features of Db2 help you deliver the protection that your customers, industry regulations, and other stakeholders demand. It begins with strong encryption capabilities included, so you can address compliance concerns. Then it goes further to provide centralized key management to heighten security and ease of use. IBM Cloud Private enhances this level of security by providing more control. Since IBM Cloud Private sits behind your firewall, it is protected by the security features you have built and integrated over the life of your company. Perhaps most importantly, you get to decide what level of security you need and adjust as necessary. The choice is in your hands.
IBM Db2 running on IBM Cloud Private delivers on the idea of choice without tradeoffs, cloud flexibility with on-premises security. Using IBM Cloud Private takes the extraordinary performance, flexibility, and security customers are accustomed to with Db2 and improves them.
To learn more about IBM Cloud Private and all the ways in which Db2 can benefit from running on that platform, read our solution brief. You can also register to attend our webinar and hear from the experts.
The post Does your data deserve a private cloud? appeared first on Cloud computing news.
Quelle: Thoughts on Cloud

My First Contribution to ManageIQ

In this blog post, I am going to share my experience of making my first contribution to ManageIQ, the upstream open source project for Red Hat CloudForms. The post explains how I encountered and investigated an issue, and finally fixed it, thereby sending my first "Pull Request" to the ManageIQ repository.
 

Issue
When an infrastructure provider like VMware is added to CloudForms/ManageIQ, a user/admin has the option to put host(s) into maintenance mode. The "Enter Maintenance Mode" option is available in a dropdown list when the "Power" button is clicked on the host summary page, as shown in the image below,
 

 
The following image shows a host in maintenance mode in Red Hat CloudForms. The host goes into maintenance mode but never exits it when "Exit Maintenance Mode" is selected.
 

 
As seen below, the request to exit maintenance mode was successfully initiated from the CloudForms user interface.
 

 
However, the host still remains in maintenance mode, and we can validate this state from the VMware vSphere client.
 

 
Now that we have identified the issue, we can look at its possible cause(s) by troubleshooting Red Hat CloudForms.
 
Debugging an issue
A good place to start troubleshooting is the standard log files under /var/www/miq/vmdb/ on the CloudForms appliance. Below is a short description of a few important log files:

production.log: All user interface activity, from the Operations UI as well as the Service UI, is logged here.
automation.log: As the name suggests, all automation logs are collected in this file.
policy.log: This is a good place to look for logs related to events and policies.
evm.log: This file covers automation logs as well as everything else. It can grow large in size and is probably the first log file to check for errors and warning messages.

 
As you can see below, the evm.log file contained a warning message every time the "Exit Maintenance Mode" request was initiated,
 
[----] W, [2017-12-20T16:32:02.557678 #2197:1090af0]  WARN -- : MIQ(ManageIQ::Providers::Vmware::InfraManager::HostEsx#exit_maint_mode) Cannot exit maintenance mode because <The Host is not powered 'on'>
 
The log message clearly shows that the host attempts to exit maintenance mode but fails because it is not powered on. At this point, we can ask ourselves: why is the task failing with this warning? Isn't the host supposed to be in maintenance mode? We suspect something is not right with the logic behind this action. To dig deeper, we can look into the host.rb file in the ManageIQ GitHub repository.
 

 
Looking at the logic in the host.rb file, the method enter_maint_mode() is triggered when an "Enter Maintenance Mode" request is made. This in turn validates the maintenance mode request using the method validate_enter_maint_mode(), which basically checks the power state of the host using the method validate_esx_host_connected_to_vc_with_power_state(). The argument passed to this method is either 'on' or 'maintenance'.
 
 

 
Similar logic should apply to the exit_maint_mode() method. However, that method also calls validate_enter_maint_mode() instead of calling validate_exit_maint_mode(), which causes the issue. The validation fails because the host is in the 'maintenance' power state and not 'on', as we can see below,
 
 

 
A simple fix is to call validate_exit_maint_mode() instead of validate_enter_maint_mode() whenever an "Exit Maintenance Mode" request is made. With this fix, the host passes validation and can exit maintenance mode successfully.
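To make the broken logic concrete, here is a simplified sketch in Ruby. The validation method names mirror those from host.rb, but this Host class and the method bodies are hypothetical, reduced to just enough state to demonstrate the bug and the fix; the real ManageIQ code does considerably more.

```ruby
# Simplified, hypothetical sketch of the host.rb validation logic.
# Method names follow the ManageIQ source; bodies are illustrative only.
class Host
  attr_reader :power_state

  def initialize(power_state)
    @power_state = power_state
  end

  # In ManageIQ this also checks vCenter connectivity; here we only
  # compare the host's power state against the expected state.
  def validate_esx_host_connected_to_vc_with_power_state(state)
    if power_state == state
      { :available => true, :message => nil }
    else
      { :available => false, :message => "The Host is not powered '#{state}'" }
    end
  end

  def validate_enter_maint_mode
    validate_esx_host_connected_to_vc_with_power_state('on')
  end

  def validate_exit_maint_mode
    validate_esx_host_connected_to_vc_with_power_state('maintenance')
  end

  # Buggy behavior: reuses the "enter" validation, so a host whose power
  # state is 'maintenance' (not 'on') always fails the check.
  def exit_maint_mode_buggy
    validate_enter_maint_mode
  end

  # Fixed behavior: validates against the 'maintenance' power state.
  def exit_maint_mode_fixed
    validate_exit_maint_mode
  end
end

host = Host.new('maintenance')
host.exit_maint_mode_buggy  # => {:available=>false, :message=>"The Host is not powered 'on'"}
host.exit_maint_mode_fixed  # => {:available=>true, :message=>nil}
```

Running the sketch against a host in the 'maintenance' state reproduces exactly the warning seen in evm.log for the buggy path, while the fixed path validates successfully.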
 
Test
To verify our analysis, we can replace the validation method call validate_enter_maint_mode() with validate_exit_maint_mode() and restart evmserverd on the appliance using,
 
systemctl restart evmserverd
 
This time the host successfully exits maintenance mode.
 
CloudForms User Interface:
 

 
VMware User Interface:
 

 
Creating a Pull Request
A "Pull Request" is a way to propose a code change on GitHub. For those who don't have a GitHub account, you can create one at https://github.com/join. Once the account is created, we have to fork the repository by clicking the "Fork" button as shown below,
 

 
The next step is to clone the repository to our local machine so that changes can be made. Click the "Clone or download" button to copy the HTTPS URL.
 

 
We can clone the repository using the command,
git clone https://github.com/imaanpreet/manageiq.git
 
Once the clone is complete, we can create a new branch using,
git checkout -b validate_exit_maint_mode
 
Make the required changes and commit them using,
git add app/models/host.rb

git commit

 
Once the changes are committed, it is time to send them back as a "Pull Request". This can be done by pushing the changes to the newly created branch,
 
git push origin validate_exit_maint_mode
 
The process to create a pull request is documented here.
 
Conclusion
The Pull Request has been merged into the manageiq repository, and the bug is currently being worked on. This was a great experience, and I enjoyed the process of debugging, investigating and fixing a bug in ManageIQ. I hope sharing this experience in this article will be useful to other readers and will encourage them to submit more Pull Requests.
 
Quelle: CloudForms

Summary of rdopkg development in 2017

During the year of 2017, 10 contributors managed to merge
146 commits into rdopkg.

3771 lines of code were added and 1975 lines deleted
across 107 files.

54 unit tests were added on top of the existing 32 tests – an increase
of 169% to a total of 86 unit tests.

33 scenarios for 5 core rdopkg features were added in new feature
tests spanning a total of 228 test steps.

3 minor releases increased the version from 0.42 to 0.45.0.

Let’s talk about the most significant improvements.

Stabilisation

rdopkg started as a developers’ tool, basically a central repository to
accumulate RPM packaging automation in a reusable manner. Quickly adding new
features was easy, but making sure existing functionality works consistently
as code is added and changed proved to be a much greater challenge.

As rdopkg started shifting from a developers' power tool to a module used in
other automation systems, inevitable breakages started to become a problem and
prompted me to adapt development accordingly. As a first step, I tried to practice
Test-Driven Development (TDD)
as opposed to writing tests after a breakage to prevent a specific case. Unit
tests helped discover and prevent various bugs introduced by new code, but
testing complex behaviors was a frustrating experience where most of the
development time was spent writing unit tests for cases they weren't meant
to cover.

Sounds like using the wrong tool for the job, right? And so I opened a rather
urgent rdopkg RFE: test actions in a way that doesn't
suck and
started researching what the cool kids use to develop and test Python software
without suffering.

Behavior-Driven Development

It would seem that cucumber started quite a revolution of
Behavior-Driven Development (BDD)
and I really like
Gherkin,
the Business Readable, Domain Specific Language that lets you describe
software’s behaviour without detailing how that behaviour is implemented.
Gherkin serves two purposes: documentation and automated tests.

After some more research on python BDD tools, I liked
behave’s implementation, documentation
and community the most so I integrated it into rdopkg and started using
feature tests. They make it easy to describe and define expected behavior before
writing code. New features now start with a feature scenario, which can be
reviewed before any code is written. Covering existing behavior with
feature tests helps ensure it is both preserved and well
defined/explained/documented. Big thanks go to Jon Schlueter, who
contributed a huge number of initial feature tests for core rdopkg features.

Here is an example of rdopkg fix scenario:

Scenario: rdopkg fix
    Given a distgit
    When I run rdopkg fix
    When I add description to .spec changelog
    When I run rdopkg --continue
    Then spec file contains new changelog entry with 1 lines
    Then new commit was created
    Then rdopkg state file is not present
    Then last commit message is:
        """
        foo-bar-1.2.3-3

        Changelog:
        - Description of a change
        """

Proper CI/gating

Thanks to
Software Factory,
zuul and gerrit, every rdopkg change now needs to pass the following
automatic gate tests before it can be merged:

unit tests (python 2, python 3, Fedora, EPEL, CentOS)
feature tests (python 2, python 3, Fedora, EPEL, CentOS)
integration tests
code style check

In other words, master is now significantly harder to break!

Tests are managed as individual tox targets for convenience.

Paying back the Technical Debt

I tried to write rdopkg code with reusability and future extension in mind,
yet at one point of development, with a big influx of new
features/modifications, rdopkg approached a critical mass of technical debt
where it got into a spiral of new functionality breaking existing functionality,
and with each fix two new bugs surfaced. This kept happening, so I stopped
adding new stuff and focused on ensuring rdopkg keeps doing what people use
it for before extending (breaking) it further. This required quite a few core
code refactors, proper integration of features that were hacked in on the
clock, as well as leveraging new tools like the
software factory CI pipeline
and behave, described above. But I think it was a success: rdopkg paid off
its technical debt in 2017 and is ready to face whatever the community throws at
it in the near and far future.

Integration

Join Software Factory project

rdopkg became a part of Software Factory project
and found a
new home
alongside
DLRN.

Software Factory is an open source, software development forge with an
emphasis on collaboration and ensuring code quality through Continuous
Integration (CI). It is inspired by OpenStack’s development workflow that has
proven to be reliable for fast-changing, interdependent projects driven by
large communities. Read more in
Introducing Software Factory.

Specifically, rdopkg leverages the following Software Factory features:

git repository management
code reviews: gerrit
Continuous Integration: zuulV3 (bye, Jenkins)
code metrics: repoXplorer

The rdopkg repo is still mirrored to github,
and bugs are kept in the Issues
tracker there as
well, because github is an accessible public open space.

Did I mention you can log in to Software Factory using a github account?

Finally, big thanks to Javier Peña, who paved the way towards Software Factory
with DLRN.

Continuous Integration

rdopkg has been using human
code reviews
for quite some time, and it has proved very useful, even though I often +2/+1 my
own reviews due to a lack of reviewers. However, people inevitably make
mistakes. There are decent unit and feature tests now
to detect mistakes, so we fight human error with computing power and
automation.

Each review and thus each code change to rdopkg is gated – all unit tests,
feature tests, integration tests and code style checks need to pass before
human reviewers consider accepting the change.

Instead of setting up machines and testing environments, installing
requirements and waiting for tests to pass, this boring process is now
automated on supported distributions, and humans can focus on the changes
themselves.

Integration with Fedora, EPEL and CentOS

rdopkg is now finally available directly from the Fedora/EPEL repositories, so
the install instructions on Fedora 25+ systems boil down to:

dnf install rdopkg

On CentOS 7+, EPEL is needed:

yum install epel-release
yum install rdopkg

Fun fact: to update the Fedora rdopkg package, I use rdopkg:

fedpkg clone rdopkg
cd rdopkg
rdopkg new-version -bN
fedpkg mockbuild
# testing
fedpkg push
fedpkg build
fedpkg update

So rdopkg is officially packaging itself while also being packaged by
itself.

Please nuke the jruzicka/rdopkg copr if you were using it previously; it is now
obsolete.

Documentation

The rdopkg documentation was cleaned up, proofread, extended with more details
and updated with the latest information and links.

Feature scenarios are now available as man pages, thanks to mhu.

Packaging and Distribution

Python 3 compatibility

By popular demand, rdopkg now supports Python 3. There are Python 3 unit
tests and a python3-rdopkg RPM package.

Adopt pbr for Versioning

Most of the initial patches rdopkg was handling in the very beginning were
related to distutils and pbr, the OpenStack packaging meta-library,
specifically making them work on a distribution with integrated package
management and old, conservative packages.

Amusingly, pbr was integrated into rdopkg (well, it actually does solve
some problems aside from creating new ones), and in order to release the new
rdopkg version with pbr on CentOS/EPEL 7, I had to disable the hardcoded
pbr>=2.1.0 check on update of python-pymod2pkg, because an older version of
pbr is available from EPEL 7. I removed the check (in two different places)
as I did so many times before, and it works fine.

As a tribute to all the fun I had with pbr and distutils, here is a link
to my first
nuke bogus requirements patch
of 2018.

Aside from being consistent with OpenStack-related projects, rdopkg adopted
the strict semantic versioning that pbr uses, which means that releases are
always going to have 3 version numbers from now on:

0.45 -> 0.45.0
1.0 -> 1.0.0

And More!

Aside from the big changes mentioned above, a large number of new feature
tests and numerous not-so-exciting fixes, here is a list of changes that might be
worth mentioning:

unify rdopkg patch and rdopkg update-patches and use alias
rdopkg pkgenv shows more information and better color coding for easily
telling a distgit's state and branch setup
preserve Change-Id when amending a commit
allow fully unattended runs of core actions.
commit messages created by all rdopkg actions are now clearer, more
consistent and can be overridden using -H/--commit-header-file
better error messages on missing patches in all actions
git config can be used to override the patches remote, branch, user name and
email
improved handling of patches_base and patches_ignore including tests
improved handling of %changelog
improved new/old patches detection
improved packaging as suggested in Fedora review
improved naming in git and specfile modules
properly handle state files
linting cleanup and better code style checks
python 3 support
improve unicode support
handle VX.Y.Z tags
split bloated utils.cmd into utils.git module
merge legacy rdopkg.utils.exception so there is only single module for
exceptions now
refactor unreasonable default atomic=False affecting action definitions
remove legacy rdopkg coprbuild action

Thank you, rdopkg community!
Quelle: RDO

OpenFaaS on OpenShift

Learn how to use OpenFaaS, an open source framework and tool that allows you to use the Function-as-a-Service (FaaS) paradigm in a containerized setup, on OpenShift.
Quelle: OpenShift