Don’t confuse OpenStack with infrastructure

The post Don’t confuse OpenStack with infrastructure appeared first on Mirantis | Pure Play Open Cloud.
A word about OpenStack for business people, CIOs, and cloud consumers.
Ask anyone in the IT industry today what they think OpenStack is, and you will probably get a long list of technical jargon describing its components, an explanation that it is a cloud platform, or perhaps an answer that it is a community-driven project. None of these equally complex explanations actually answers the question.
At its most fundamental, OpenStack is a common API abstraction layer for infrastructure. But what does that actually mean, and why should you care?
What it means is that OpenStack is essentially a way of enabling developers to address datacenter infrastructure through a standard set of instructions, regardless of what that actual infrastructure is. The advantages of this for developers and infrastructure providers are manifold:

It reduces development time by removing the need to perform custom integrations for every type of hardware out there.
It enables infrastructure providers to swap out components with less need to worry about compatibility issues.
It enables applications to be portable across different datacenters.
It reduces the need for developers to have a deep understanding of the infrastructure.

But “standard” only takes you so far. It’s important to know when you need to go the extra mile.
OpenStack Architecture
OpenStack is made up of a number of largely independent but related projects, each of which provides the abstraction layer for a particular area of infrastructure: Nova for compute (mostly focused on virtualization), Neutron for networking, Cinder for volume storage, and Keystone for authentication, to name but a few of the projects in the community.

OpenStack Project high level abstraction diagram
The community projects are collectively organized into core projects and a grouping of projects known as ‘Big Tent’. These projects are all expected to follow the same governance model, but this is only really enforced for the core projects.
In general, each project is made up of three layers: the API layer, the driver layer, and the implementation layer.
The API layer: This is just what it says it is: software that provides a clear API describing how to interact with the service and what the expected result should be.
The driver layer: You can consider the driver layer to be a translator that takes the API instruction set and converts it into the appropriate commands for the base infrastructure.
Implementation: this is the actual infrastructure that provides the resources that are consumed by the applications. It is also where the confusion starts.
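As an illustrative sketch (not actual OpenStack code, and with class and method names invented for the example), the three layers can be expressed like this: one stable API function delegates to pluggable drivers, each translating the same request for a different backend implementation.

```python
# Illustrative sketch of the three layers -- not real OpenStack code.
from abc import ABC, abstractmethod

class VolumeDriver(ABC):
    """Driver layer: translates the API's instruction set for one backend."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> dict: ...

class LvmDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        # Implementation layer: here we would shell out to lvcreate, etc.
        return {"backend": "lvm", "name": name, "size_gb": size_gb}

class CephDriver(VolumeDriver):
    def create_volume(self, name, size_gb):
        # Implementation layer: here we would call the Ceph libraries.
        return {"backend": "ceph", "name": name, "size_gb": size_gb}

def api_create_volume(driver: VolumeDriver, name: str, size_gb: int) -> dict:
    """API layer: one stable contract, regardless of the backend."""
    if size_gb <= 0:
        raise ValueError("size_gb must be positive")
    return driver.create_volume(name, size_gb)

# The caller's code is identical for either backend:
print(api_create_volume(LvmDriver(), "vol1", 10))
print(api_create_volume(CephDriver(), "vol1", 10))
```

The point of the sketch is that the caller never changes when the backend does; only the driver and the infrastructure underneath it do.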
Each project typically provides a reference implementation that provides guidance on how to setup the infrastructure to make it consumable through the API and Driver layers.
But these reference implementations come with varying degrees of maturity and completeness.
The problem arises because many implementors are taking the provided reference infrastructure implementations and treating them as if they are fully baked, production-ready solutions that work straight out of the box.
This has led to many failed or flaky implementations of OpenStack-based clouds that paint OpenStack in a bad light.
The fundamental point here is that the implementation layer needs careful design and testing, just like any other complex IT solution, and is no less important than the API layer.
While the advantages of OpenStack to the traditional IT environment are primarily focused on the way infrastructure is consumed, it does have an impact on other areas.

It provides for convergence of IT infrastructure, reducing sprawl and increasing understanding of how interrelated components affect each other.
It drives organizational change in management and development processes.
It provides an open interface, supporting the move away from vendor lock-in.
It enables the rapid development and testing of new infrastructure ideas

But none of these benefits can be achieved without careful design and deployment of the base implementation layer that underpins the benefits of OpenStack.
In conclusion, OpenStack is a collection of community-driven projects that provide an effective abstraction layer to simplify the consumption of datacenter infrastructure, but it does not alleviate the need to design the datacenter infrastructure properly. The implementation of OpenStack takes careful planning and a clear understanding of why it is being deployed, and why it is being deployed in a particular way. We need to focus on what the consumers of the resources provided by the datacenter need, and help simplify their consumption of said resources by ensuring that we’re not taking the easy way out.
Source: Mirantis

Recent blog posts

Here’s what the RDO community has been blogging about recently:

OpenStack 3rd Party CI with Software Factory by jpena

Introduction When developing for an OpenStack project, one of the most important aspects to cover is to ensure proper CI coverage of our code. Each OpenStack project runs a number of CI jobs on each commit to test its validity, so thousands of jobs are run every day in the upstream infrastructure.

Read more at http://rdoproject.org/blog/2017/09/openstack-3rd-party-ci-with-software-factory/

OpenStack Days UK by Steve Hardy

Yesterday I attended the OpenStack Days UK event, held in London. It was a very good day with a number of interesting talks, and it provided a great opportunity to chat with folks about OpenStack. I gave a talk, titled “Deploying OpenStack at scale, with TripleO, Ansible and Containers”, where I gave an update on the recent rework in the TripleO project to make more use of Ansible and enable containerized deployments. I’m planning some future blog posts with more detail on this topic, but for now here’s a copy of the slide deck I used, also available on GitHub.

Read more at http://hardysteven.blogspot.com/2017/09/openstack-days-uk-yesterday-i-attended.html

OpenStack Client in Queens – Notes from the PTG by jpichon

Here are a couple of notes about the OpenStack Client, taken while dropping in and out of the room during the OpenStack PTG in Denver, a couple of weeks ago.

Read more at http://www.jpichon.net/blog/2017/09/openstack-client-queens-notes-ptg/

Event report: OpenStack PTG by rbowen

Last week I attended the second OpenStack PTG, in Denver. The first one was held in Atlanta back in February.

Read more at http://drbacchus.com/event-report-openstack-ptg/
Source: RDO

Calculating the ROI of migrating SAP to cloud managed services

The potential value of cloud managed services extends far beyond reducing infrastructure costs and freeing up IT staff to focus on more innovative tasks.
These solutions can create value across an enterprise by helping to accelerate time to market for new products and services, increase customer reach and retention, reduce labor costs, help improve security, and increase system uptime.
For example, the benefits that actual companies have experienced after implementing cloud managed services include:

Accelerating time to market for new products and services by 60 percent through rapid provisioning of IT resources needed for development
Faster, more accurate reporting by speeding access to enterprise information by 50 percent
Increased customer satisfaction rankings as a result of deploying cutting-edge technologies that help companies better identify and respond to changes in demand
Reducing time spent managing and maintaining infrastructure by 90 percent
Improving recovery time objective (RTO) after a disaster by 98 percent through geographically-dispersed disaster recovery solutions

The decision to implement a cloud managed services solution may require approval from leaders outside the IT department. An IBM internal study conducted by Frost & Sullivan recently found that 72 percent of CIOs who want to adopt a cloud managed services solution for their SAP deployments are seeking help building their business cases.
The IBM Cloud for SAP Benefits Estimator online tool can help leaders across lines of business better understand and communicate the potential value that cloud managed services can deliver for their strategic priorities.
Simply enter a few pieces of general information about your company, IT requirements and strategic priorities. Using more than 20 years of real customer data, the tool calculates and assigns a monetary value to estimate the benefits cloud managed services could deliver for your company across six key categories.
The “self-service” online tool takes about ten minutes to complete and provides a quick view of the projected return on investment (ROI) for implementing cloud managed services into your IT environment. After completing the online tool, users can access a customized report highlighting the projected benefits these solutions can offer their businesses.
For a more personalized, in-depth analysis, including a more detailed five-year ROI projection, contact an IBM sales representative.
Try the IBM Cloud for SAP Benefits Estimator tool.  If you do not currently have an assigned IBM sales representative and want to know more, click the “talk to an expert” button on the IBM Cloud for SAP website.
The post Calculating the ROI of migrating SAP to cloud managed services appeared first on Cloud computing news.
Source: Thoughts on Cloud

Container Management with CloudForms

Containers have rapidly evolved from being used for development and testing to production. Today, many vendors provide container products supporting enterprise IT production workloads. Red Hat offers OpenShift, an open source container platform based on Kubernetes.
 
The promise of containers includes greater cross-cloud workload portability, better support for microservices and faster business innovation through the ability to support CI/CD methodologies for rapidly launching new functionality. By pairing containers and CI/CD with public/private hybrid cloud infrastructure, enterprise IT teams expect to better match infrastructure spending to workload performance while enabling end user developers self-service and agility.
 
Despite these promises, there are still many challenges when it comes to running and managing container workloads efficiently. For example, how can the operations team manage security, patching, and performance of tens of thousands of containers running in multiple locations, on multiple types of infrastructure? How can you guarantee the health and performance of end user applications built on top of dozens of microservices, each updated on its own CI/CD cadence? And finally, how do you manage the cost of running these containers?
 

In this series of articles, we look at how Red Hat CloudForms can optimize cross-cloud container management and workload performance. CloudForms is an enterprise-grade multi-cloud management platform based on the ManageIQ open source project. The tool provides all the functionality required to manage virtualization infrastructure (e.g. VMware, RHV, Hyper-V), private cloud (e.g. OpenStack), and public cloud (e.g. Amazon AWS, Microsoft Azure, Google Compute Engine). Further, CloudForms offers management capabilities for storage, networking and middleware applications.
 
Our focus in these posts is to show how CloudForms can address the challenges of managing enterprise container workloads at scale across multiple public and private cloud infrastructure.
 
We will look at each key management area, investigate how the product can help, and see it in action with a brief demonstration video. Stay tuned!

Part 1: Container Management with CloudForms
Part 2: Container Management with CloudForms – Operational Efficiency
Part 3: Container Management with CloudForms – Service Health
Part 4: Container Management with CloudForms – Security & Compliance
Part 5: Container Management with CloudForms – Financial Management

 
Source: CloudForms

OpenStack 3rd Party CI with Software Factory

Introduction

When developing for an OpenStack project, one of the most important aspects to cover is to ensure
proper CI coverage of our code. Each OpenStack project runs a number of CI jobs on each commit to
test its validity, so thousands of jobs are run every day in the upstream infrastructure.

In some cases, we will want to set up an external CI system, and make it report as a 3rd Party CI
on certain OpenStack projects. This may be because we want to cover specific software/hardware
combinations that are not available in the upstream infrastructure, or want to extend test
coverage beyond what is feasible upstream, or any other reason you can think of.

While the process to set up a 3rd Party CI is documented,
some implementation details are missing. In the RDO Community, we have been using Software Factory
to power our 3rd Party CI for OpenStack, and it has worked very reliably over several release cycles.

The main advantage of Software Factory is that it integrates all the pieces of the OpenStack CI
infrastructure in an easy to consume package, so let’s have a look at how to build a 3rd party CI
from the ground up.

Requirements

You will need the following:

An OpenStack-based cloud, which will be used by Nodepool to create temporary VMs where the CI jobs
will run. It is important to make sure that the default security group in the tenant accepts SSH
connections from the Software Factory instance.
A CentOS 7 system for the Software Factory instance, with at least 8 GB of RAM and 80 GB of disk.
It can run on the OpenStack cloud used for nodepool, just make sure it is running on a separate
project.
DNS resolution for the Software Factory system.
A 3rd Party CI user on review.openstack.org. Follow this guide to configure it.
Some previous knowledge on how Gerrit and Zuul
work is advisable, as it will help during the configuration process.

Basic Software Factory installation

For a detailed installation walkthrough, refer to the Software Factory documentation.
We will highlight here how we set it up on a test VM.

Software installation

On the CentOS 7 instance, run the following commands to install the latest release of Software Factory (2.6 at the time of this article):

$ sudo yum install -y https://softwarefactory-project.io/repos/sf-release-2.6.rpm
$ sudo yum update -y
$ sudo yum install -y sf-config

Define the architecture

Software Factory has several optional components, and can be set up to run them on more than one system.
In our setup, we will install the minimum required components for a 3rd party CI system, all in one.

$ sudo vi /etc/software-factory/arch.yaml

Make sure the nodepool-builder role is included. Our file will look like:


description: "OpenStack 3rd Party CI deployment"
inventory:
  - name: managesf
    ip: 192.168.122.230
    roles:
      - install-server
      - mysql
      - gateway
      - cauth
      - managesf
      - gitweb
      - gerrit
      - logserver
      - zuul-server
      - zuul-launcher
      - zuul-merger
      - nodepool-launcher
      - nodepool-builder
      - jenkins

In this setup, we are using Jenkins to run our jobs, so we need to create an additional file:

$ sudo vi /etc/software-factory/custom-vars.yaml

And add the following content

nodepool_zuul_launcher_target: False

Note: As an alternative, we could use zuul-launcher to run our jobs and drop Jenkins. In that case,
there is no need to create this file. However, later when defining our jobs we will need to use the
jobs-zuul directory instead of jobs in the config repo.

Edit Software Factory configuration

$ sudo vi /etc/software-factory/sfconfig.yaml

This file contains all the configuration data used by the sfconfig script. Make sure you set the
following values:

Password for the default admin user.

authentication:
  admin_password: supersecurepassword

The fully qualified domain name for your system.

fqdn: sftests.com

The OpenStack cloud configuration required by Nodepool.

nodepool:
  providers:
    - auth_url: http://192.168.1.223:5000/v2.0
      name: microservers
      password: cloudsecurepassword
      project_name: mytestci
      region_name: RegionOne
      regions: []
      username: ciuser

The authentication options if you want other users to be able to log into your instance of
Software Factory using OAuth providers like GitHub. This is not mandatory for a 3rd party CI.
See this part of the documentation for details.

If you want to use LetsEncrypt to get a proper SSL certificate, set:

use_letsencrypt: true

Run the configuration script

You are now ready to complete the configuration and get your basic Software Factory installation running.

$ sudo sfconfig

After the script finishes, just point your browser to https://<fqdn> (the FQDN you set in sfconfig.yaml) and you will see the
Software Factory interface.

Configure SF to connect to the OpenStack Gerrit

Once we have a basic Software Factory environment running, and our service account set up in
review.openstack.org, we just need to connect both together. The process is quite simple:

First, make sure the local Zuul user SSH key, found at /var/lib/zuul/.ssh/id_rsa.pub, is added
to the service account at review.openstack.org.

Then, edit /etc/software-factory/sfconfig.yaml again, and edit the zuul section
to look like:

zuul:
  default_log_site: sflogs
  external_logservers: []
  gerrit_connections:
    - name: openstack
      hostname: review.openstack.org
      port: 29418
      puburl: https://review.openstack.org/r/
      username: mythirdpartyciuser

Finally, run sfconfig again. Log information will start flowing in /var/log/zuul/server.log,
and you will see a connection to review.openstack.org port 29418.

Create a test job

In Software Factory 2.6, a special project named config is automatically created on the internal
Gerrit instance. This project holds the user-defined configuration, and changes to the project must
go through Gerrit.

Configure images for nodepool

All CI jobs will use a predefined image, created by Nodepool. Before creating any CI job, we need to
prepare this image.

As a first step, add your SSH public key to the admin user in your Software Factory Gerrit instance.

Then, clone the config repo on your computer and edit the nodepool configuration file:

$ git clone ssh://admin@sftests.com:29418/config sf-config
$ cd sf-config
$ vi nodepool/nodepool.yaml

Define the disk image and assign it to the OpenStack cloud defined previously:


diskimages:
  - name: dib-centos-7
    elements:
      - centos-minimal
      - nodepool-minimal
      - simple-init
      - sf-jenkins-worker
      - sf-zuul-worker
    env-vars:
      DIB_CHECKSUM: '1'
      QEMU_IMG_OPTIONS: compat=0.10
      DIB_GRUB_TIMEOUT: '0'

labels:
  - name: dib-centos-7
    image: dib-centos-7
    min-ready: 1
    providers:
      - name: microservers

providers:
  - name: microservers
    cloud: microservers
    clean-floating-ips: true
    image-type: raw
    max-servers: 10
    boot-timeout: 120
    pool: public
    rate: 2.0
    networks:
      - name: private
    images:
      - name: dib-centos-7
        diskimage: dib-centos-7
        username: jenkins
        min-ram: 1024
        name-filter: m1.medium

First, we are defining the diskimage-builder elements
that will create our image, named dib-centos-7.

Then, we are assigning that image to our microservers cloud provider, and specifying that we want
to have at least 1 VM ready to use.

Finally we define some specific parameters about how Nodepool will use our cloud provider: the
internal (private) and external (public) networks, the flavor for the virtual machines to create
(m1.medium), how many seconds to wait between operations (2.0 seconds), etc.
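Because the labels, diskimages and providers sections reference each other by name, a typo in one of them is easy to miss until Nodepool fails at runtime. As a quick pre-review sanity check (a hypothetical helper, not part of Nodepool), you can verify that every cross-reference in the configuration resolves:

```python
# Hypothetical sanity check for a nodepool.yaml-style configuration.
# The config dict mirrors the structure of the file above.
config = {
    "diskimages": [{"name": "dib-centos-7"}],
    "labels": [
        {"name": "dib-centos-7", "image": "dib-centos-7",
         "min-ready": 1, "providers": [{"name": "microservers"}]},
    ],
    "providers": [
        {"name": "microservers",
         "images": [{"name": "dib-centos-7", "diskimage": "dib-centos-7"}]},
    ],
}

def check_nodepool_config(cfg: dict) -> list:
    """Return a list of referential-integrity problems, empty if none."""
    errors = []
    images = {d["name"] for d in cfg.get("diskimages", [])}
    providers = {p["name"] for p in cfg.get("providers", [])}
    for label in cfg.get("labels", []):
        if label["image"] not in images:
            errors.append(f"label {label['name']}: unknown image {label['image']}")
        for prov in label.get("providers", []):
            if prov["name"] not in providers:
                errors.append(f"label {label['name']}: unknown provider {prov['name']}")
    for prov in cfg.get("providers", []):
        for img in prov.get("images", []):
            if img["diskimage"] not in images:
                errors.append(f"provider {prov['name']}: unknown diskimage {img['diskimage']}")
    return errors

print(check_nodepool_config(config))  # [] means all references line up
```

In practice you would load the real file with a YAML parser instead of embedding the data, but the cross-checks are the same.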

Now we can submit the change for review:

$ git add nodepool/nodepool.yaml
$ git commit -m "Nodepool configuration"
$ git review

In the Software Factory Gerrit interface, we can then check the open change. The config repo has
some predefined CI jobs, so you can check if your syntax was correct. Once the CI jobs show a
Verified +1 vote, you can approve it (Code Review +2, Workflow +1), and the change will be merged in
the repository.

After the change is merged in the repository, you can check the logs at /var/log/nodepool and see
the image being created, then uploaded to your OpenStack cloud.

Define test job

There is a special project in OpenStack meant to be used to test 3rd Party CIs,
openstack-dev/ci-sandbox. We will now define a CI job to “check” any new commit being reviewed there.

Assign the nodepool image to the test job

$ vi jobs/projects.yaml

We are going to use a pre-installed job named demo-job. All we have to do is to ensure it uses the
image we just created in Nodepool.

- job:
    name: 'demo-job'
    defaults: global
    builders:
      - prepare-workspace
      - shell: |
          cd $ZUUL_PROJECT
          echo "This is a demo job"
    triggers:
      - zuul
    node: dib-centos-7

Define a Zuul pipeline and a job for the ci-sandbox project

$ vi zuul/upstream.yaml

We are creating a specific Zuul pipeline
for changes coming from the OpenStack Gerrit, and specifying that we want to run a CI job for commits
to the ci-sandbox project:

pipelines:
  - name: openstack-check
    description: Newly uploaded patchsets enter this pipeline to receive an initial +/-1 Verified vote from Jenkins.
    manager: IndependentPipelineManager
    source: openstack
    precedence: normal
    require:
      open: True
      current-patchset: True
    trigger:
      openstack:
        - event: patchset-created
        - event: change-restored
        - event: comment-added
          comment: (?i)^(Patch Set [0-9]+:)?( [\w+-]*)*(\n\n)?\s*(recheck|reverify)
    success:
      openstack:
        verified: 0
    failure:
      openstack:
        verified: 0

projects:
  - name: openstack-dev/ci-sandbox
    openstack-check:
      - demo-job

Note that we are telling our job not to send a vote for now (verified: 0). We can change that later
if we want to make our job voting.

Apply configuration change

$ git add zuul/upstream.yaml jobs/projects.yaml
$ git commit -m "Zuul configuration for 3rd Party CI"
$ git review

Once the change is merged, Software Factory’s Zuul process will be listening for changes to the
ci-sandbox project. Just try creating a change and see
if everything works as expected!

Troubleshooting

If something does not work as expected, here are some troubleshooting tips:

Log files

You can find the Zuul log files in /var/log/zuul. Zuul has several components, so start by checking server.log
and launcher.log, the log files for the main server and the process that launches CI jobs.

The Nodepool log files are located in /var/log/nodepool. builder.log contains the log from image
builds, while nodepool.log has the log for the main process.

Nodepool commands

You can check the status of the virtual machines created by nodepool with:

$ sudo nodepool list

Also, you can check the status of the disk images with:

$ sudo nodepool image-list

Jenkins status

You can see the Jenkins status from the GUI at https://<fqdn>/jenkins/ when logged in as the admin
user. If no machines show up in the ‘Build Executor Status’ pane, that means that either Nodepool could
not launch a VM, or there was some issue in the connection between Zuul and Jenkins. In that case,
check the Jenkins logs at `/var/log/jenkins`, or restart the service if there are errors.

Next steps

For now, we have only run a test job against a test project. The real power comes when you create
a proper CI job for a project you are interested in. You should now:

Create a file under jobs/ with the JJB
definition for your new job.

Edit zuul/upstream.yaml to add the project(s) you want your 3rd Party CI system to watch.

Source: RDO

What Austin Powers taught me about IT Integration

(This post is part of a series on IT integration. Read the first article on the urgent need for hybrid integration)
Remember when Austin Powers was defrosted back in 1997? He was a man caught between eras — living life in the style of the swinging 60’s but caught in a world that had changed.
Vanessa Kensington: Mr. Powers, my job is to acclimatize you to the nineties. You know, a lot’s changed since 1967.
Austin Powers: No doubt, love, but as long as people are still having promiscuous sex with many anonymous partners without protection while at the same time experimenting with mind-expanding drugs in a consequence-free environment, I’ll be sound as a pound!
The underlying theme of the movie was Austin’s struggle between his conditioning to the freedom of the 60’s while being forced to live with the responsibility of the 90’s.
In many ways, IT professionals face the same struggle in reverse. Traditionally, IT has been centrally controlled, with a dedicated set of resources ensuring that everything was done properly and securely. But the rise of cloud changed that. Now, people across the organization have the freedom to identify a business need and quickly stand up a solution in the cloud, regardless of the long-term impacts. The results have been a little…evil.
The rise of cloud has been swift, but not complete. With IDC’s worldwide 2017 CloudView Survey finding that “nearly 54% of respondents have adopted SaaS for at least one application, and an additional 17% of respondents are planning to adopt SaaS within 12 months,” most organizations run some percentage of their applications on cloud and some percentage on-premises. They have become companies caught between eras.
Like Austin Powers, you need to adapt. You need to consider how you bridge between the freedom of cloud and the responsibility of on-premises applications.
We recently asked our friends at IDC to lend their brain power to the issue of integrating on-premises and cloud environments, and they found that organizations need to consider changing their approach. This is what they determined:

Using ETL or FTP to synchronize application data is not secure enough between a datacenter and the cloud application, and cloud-optimized data integration software is required.
Communication shifts from a high-speed LAN to slower broadband connections, creating higher integration latency. This may require reworking service interfaces to narrow their scope and make them more lightweight, and therefore faster.
There is the potential of lower reliability between the datacenter and the SaaS application, which means there may be a need for reliable messaging and improved error handling.
Web services interacting with the legacy application may need to be extended to include REST APIs to support formats required by SaaS applications.
The integration bus will not be capable of mediating cloud-originating web services requests without use of gateway software or some type of trusted agent.
There may be a need to integrate assets in the cloud, which means the integration capabilities must be extended to support every workload in every cloud, impacting the new major application and relevant business processes.
There may be a decision to co-locate supporting applications by hosting them in the same cloud as the SaaS application. This may remove latency and reliability concerns, but there is still a need for integration. This means there is a need to adopt new integration software.
Data associated with the cloud application may need to be replicated to a cloud or on-premises data repository for reporting and analytics, which may require cloud-resident data integration and movement technologies.
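To make one of these points concrete, consider the item about extending legacy web services with REST APIs. A minimal sketch of that adapter pattern might look like the following (all names and the XML shapes are invented for illustration; a real legacy service would sit behind a network call):

```python
# Hypothetical sketch: wrapping a legacy XML-speaking service with a
# REST/JSON-style adapter. Names and message formats are invented.
import json
import xml.etree.ElementTree as ET

def legacy_lookup_order(xml_request: str) -> str:
    """Stand-in for the legacy web service: takes and returns XML."""
    order_id = ET.fromstring(xml_request).findtext("orderId")
    return f"<order><id>{order_id}</id><status>shipped</status></order>"

def rest_get_order(order_id: str) -> dict:
    """REST-facing adapter: JSON in/out, XML only at the legacy boundary."""
    xml_request = f"<lookup><orderId>{order_id}</orderId></lookup>"
    xml_response = legacy_lookup_order(xml_request)
    root = ET.fromstring(xml_response)
    return {"id": root.findtext("id"), "status": root.findtext("status")}

print(json.dumps(rest_get_order("A-100")))
```

The design point is that the SaaS side only ever sees the JSON contract, so the legacy service can later be replaced without touching cloud-side consumers.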

Integrating your on-premises applications with your cloud applications allows you to put your enterprise data to work in new ways. It provides you the best of both worlds. As Austin said, “Right now we have freedom and responsibility. It’s a very groovy time.”
If you want to learn more about cloud and on-premises integration, download the IDC Report – The Urgent Need for Hybrid Integration or go to the IBM Integration website to learn more about IBM’s view on hybrid cloud integration.
Yeah, baby!
The post What Austin Powers taught me about IT Integration appeared first on Cloud computing news.
Source: Thoughts on Cloud