fedora-review tool for reviewing RDO packages

This tool makes reviews of rpm packages for Fedora easier. It tries to automate most of the process.
Through a bash API, the checks can be extended in any programming language, and they can cover packages written in any programming language.

We can also use it to review RDO packages on CentOS 7 and Fedora 24.

Install fedora-review and DLRN

[1.] Install fedora-review and Mock

On CentOS 7

Enable the EPEL repos on CentOS

$ sudo yum -y install epel-release

Install the fedora-review el7 build from Fedora Koji

$ sudo yum -y install https://kojipkgs.fedoraproject.org//packages/fedora-review/0.5.3/2.el7/noarch/fedora-review-0.5.3-2.el7.noarch.rpm
$ sudo yum -y install mock

On Fedora 24

$ sudo dnf -y install fedora-review mock

[2.] Add the user you intend to run as to the mock group:

$ sudo usermod -a -G mock $USER
$ newgrp mock
$ newgrp $USER

[3.] Install DLRN:

On CentOS 7

$ sudo yum -y install mock rpm-build git createrepo python-virtualenv python-pip openssl-devel gcc libffi-devel

On Fedora 24

$ sudo dnf -y install mock rpm-build git createrepo python-virtualenv python-pip openssl-devel gcc libffi-devel

The steps below work on both distros.

$ virtualenv rdo
$ source rdo/bin/activate
$ git clone https://github.com/openstack-packages/DLRN.git
$ cd DLRN
$ pip install -r requirements.txt
$ python setup.py develop

[4.] Generate dlrn.cfg (RDO trunk mock config)

$ dlrn --config-file projects.ini --package-name python-keystoneclient
$ ls <path to cloned DLRN repo>/data/dlrn.cfg

[5.] Add dlrn.cfg to mock config.

The mock configs live in the /etc/mock directory.

$ sudo cp <path to cloned DLRN repo>/data/dlrn.cfg /etc/mock
$ ls /etc/mock/dlrn.cfg

Now everything is set, and we are ready to review any RDO package using fedora-review.

Run the fedora-review tool

$ fedora-review -b <RH bug number for RDO Package Review> -m <mock config to use>

Let’s review ‘python-osc-lib’ using dlrn.cfg.

$ fedora-review -b 1346412 -m dlrn

Happy Reviewing!
Source: RDO

OpenStack Summit Austin: Day 4

 
Hello again from Austin, Texas where the fourth day of the main OpenStack Summit has come to a close. While there are quite a few working sessions and contributor meet-ups on Friday, Thursday marks the last official day of the main summit event. The exhibition hall closed its doors around lunch time, and the last of the vendor sessions occurred later in the afternoon. As the day concluded, many attendees were already discussing travel plans for OpenStack Summit Barcelona in October!
Before we get ahead of ourselves, however, day 4 was still jam-packed with a busy agenda. Like the first three days of the event, Red Hat speakers led quite a few interesting and well-attended sessions.

To start, Al Kari, Kambiz Aghaiepour, and Will Foster combined to give a talk entitled Deploying Microservices Architecture on OpenStack Using Kubernetes, Docker, Flannel and etcd. The hands-on lab provided a step by step demonstration of how to deploy these services in a variety of environments.
Lars Herrmann, General Manager of Red Hat’s Integrated Solutions Business Unit then led a talk called Orchestrated Containerization with OpenStack. In the session, Lars explored how to leverage container standards, like Kubernetes, in implementing hybrid containerization strategies. He also discussed a variety of architectural designs for hybrid containerization and revealed how to use Ansible in these scenarios.
Ihar Hrachyshka then teamed with Kevin Benton and Sean Collins from Mirantis, as well as Matthew Kassawara from IBM in a presentation entitled The Notorious M.T.U. (Maximum Transmission Unit). The presenters examined impacts of improper MTU parameters on both physical and virtual networks, neutron MTU problems, and how to properly configure neutron MTU in various environments.
Just before lunch, in his presentation on CephFS, Greg Farnum, a long-standing member of the core Ceph development group, detailed why CephFS is more stable and feature-rich than ever. He then summarized which key new functions were introduced in the recent Jewel release, and also provided a glimpse of what’s to come in future iterations.
Later, Sridhar Gaddam joined with Bin Hu from AT&T and Prakash Ramchandran from Huawei Technology to discuss IPv6 capabilities in Telco environments. Among other things, the trio examined scenarios enabled by the IPv6 platform, its current state, and future expectations.
And finally, Miguel Angel Ajo, a Red Hat developer focused on Neutron, collaborated with Victor Howard from Comcast and Sławek Kapłoński from OVH in a presentation called Neutron Quality of Service, New Features, and Future Roadmap. The presenters detailed the Quality of Service (QoS) framework introduced in the Liberty release, and how it serves to provide QoS settings on the Neutron networking API.  They also covered DSCP rules, role based access control (RBAC) for QoS policies, and much more.
As you probably can imagine, it was a busy final day at OpenStack Summit. Like all OpenStack Summits, it was an extremely informative event, and also lots of fun! If you missed our previous daily recaps, we encourage you to read our blog posts from Day 1, Day 2, and Day 3. And for those who were present, we hope you enjoyed the event and found time to visit the Red Hat booth, as well as network with friends and colleagues from around the world. Like you, we’re already counting down the days until the next OpenStack Summit in Barcelona!
For continued Red Hat and industry cloud news, we invite you to follow us on Twitter at @RedHatCloud or @RedHatNews.
Source: RedHat Stack

Who is Testing Your Cloud?

Co-Authored with Dan Sheppard, Product Manager, Rackspace
 
With test-driven development, continuous integration/continuous deployment, and DevOps practices now the norm, most organizations understand the importance of testing their applications.
But what about the cloud those applications are going to live on? Too many companies miss this critical step, leaving gaps in their operations that can lead to production issues, API outages, upgrade failures, and general instability of the cloud.
It all raises the question: "Do you even test?"
At Rackspace, our industry-leading support teams use a proactive approach to operations, and that begins with detailed and comprehensive testing, so that not only your applications but also your cloud is ready for your production workload.
Critical Collaboration
For Rackspace Private Cloud Powered by Red Hat, we collaborate closely with Red Hat; we test the upstream OpenStack code as well as the open source projects we leverage for our deployment, such as Ceph and Red Hat OpenStack Platform Director. This is done in a variety of ways, such as sharing test cases upstream with the community via Tempest, filing and tracking bugs, and contributing bug fixes upstream.

The Rackspace and Red Hat teams also work together on larger-scale and performance tests at the OpenStack Innovation Center, which we launched last year in conjunction with Intel to advance the capabilities of OpenStack.
Recent tests have included performance improvements in relation to offloading VXLAN onto network cards, scaled upgrade testing from Red Hat OpenStack Platform version 7 to version 8, and testing of scaled out Ceph deployments. Data from this testing will be made available to the community as the detailed results are analyzed.
Building on the upstream testing, the Rackspace operations team leverages Rally and Tempest to execute 1,300 test cases before handing the cloud over to the customer. This testing serves as a "1,300 point inspection" of the cloud, giving you confidence that your cloud is production ready; a report of the testing is handed over to you, along with a guide to help you get started with your new cloud. These test cases validate and demonstrate the functionality of the OpenStack APIs, with specific scripts testing things such as (to name a few; a sketch of such a run follows the list):

administration functions
creating instances and cinder volumes
creating software defined networks
testing keystone functions and user management
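
For a concrete feel, the sketch below shows what driving such a validation run with Rally can look like. The commands follow standard Rally CLI usage, but the scenario and report file names are hypothetical, and the actual 1,300-case suite is not shown here.

$ rally task start scenarios/boot-and-delete.yaml    # hypothetical scenario file exercising Nova and Cinder
$ rally task report --out cloud-validation.html      # export the results used for the hand-over report
$ rally verify start --regex tempest.api.identity    # run keystone-related Tempest checks via Rally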

Upgrades Made Easy
One of the key requirements for enterprises is the ability to upgrade software without impacting the business.
These upgrades have been challenging in OpenStack in the past, but thanks to the Rackspace/Red Hat collaboration, we can now make those upgrades with limited downtime to your guests on the Rackspace Private Cloud Powered by Red Hat.
To deliver this, the Rackspace team runs the latest version of OpenStack code through our lab and executes the 1,300 point inspection. When we are satisfied with that, we test upgrading our lab to the latest version and execute our 1,300 point test again, thus confirming that the new version of OpenStack meets your requirements and that the code is safe for your environment.
Our team doesn’t stop there.
To confirm that the code deploys properly to your cloud, our operations team executes a 500-script regression test at the start of a scheduled upgrade window. Then our team upgrades your cloud and executes the regression test again. The final step in the scheduled upgrade window is to compare our pre- and post-upgrade regression test results to validate that the upgrade was successful.
Since the launch of Rackspace Private Cloud Powered by Red Hat, the Red Hat and Rackspace team has been working to refine that process by incorporating Red Hat’s Distributed Continuous Integration project.

Distributed Continuous Integration User Interface
Extended Testing
With Distributed Continuous Integration, Red Hat extends the testing process for building Red Hat OpenStack Platform into Rackspace's in-house environment. Instead of waiting for a general availability release of Red Hat OpenStack Platform to start testing Rackspace scenarios, pre-release versions are delivered and tested following a CI process. Test results are automatically shared with Red Hat's experts, and together with Rackspace, new features are debugged and improved, taking the new scenarios into consideration.
Using DCI to test pre-release versions of Red Hat OpenStack Platform helps ensure we're ready for the new general release just after launch. Why? Because we have been running incremental changes of the software in preparation for general availability.
DCI also helps existing Rackspace private cloud customers by allowing the Rackspace operations team to test code changes from Red Hat while they're being developed, allowing us to shorten the feedback loop back to Red Hat engineering, and giving us a supported CI/CD environment for your cloud at a scale not possible without a considerable investment in talent and resources.
So, if you are one of the 81 percent of senior IT professionals leveraging or planning to leverage OpenStack, ask your team, "How do we test our OpenStack?", then give Rackspace a call to talk about a better way.
Source: RedHat Stack

How connection tracking in Open vSwitch helps OpenStack performance

Written by Jiri Benc, Senior Software Engineer, Networking Services, Linux kernel, and Open vSwitch

By introducing a connection tracking feature in Open vSwitch, enabled by the latest Linux kernel, we greatly simplified the maze of virtual network interfaces on OpenStack compute nodes and improved networking performance. This feature will appear soon in Red Hat OpenStack Platform.
Introduction
It goes without saying that in the modern world, we need firewalling to protect machines from hostile environments. Any non-trivial firewalling requires you to keep track of the connections to and from the machine. This is called "stateful firewalling". Indeed, even such a basic rule as "don't allow machines from the Internet to connect to the machine, while allowing the machine itself to connect to servers on the Internet" requires a stateful firewall. This also applies to virtual machines. And obviously, any serious cloud platform needs such protection.

Stateful Firewall in OpenStack
It's no surprise that OpenStack implements stateful firewalling for guest VMs; it's the core of its Security Groups feature. It allows the hypervisor to protect virtual machines from unwanted traffic.
As mentioned, for stateful firewalling the host (the OpenStack node) needs to keep track of individual connections and be able to match packets to those connections. This is called connection tracking, or "conntrack". Note that connections are a different concept from flows: connections are bidirectional and need to be established, while flows are unidirectional and stateless.
Let's add Open vSwitch to the picture. Open vSwitch is an advanced programmable software switch. Neutron uses it for OpenStack networking: to connect virtual machines together and to create the overlay network connecting the nodes. (For completeness, there are backends other than Open vSwitch available; however, Open vSwitch offers the most features and performance due to its flexibility, and it's considered the "main" backend by many.)
However, packet switching in the Open vSwitch datapath is based on flows, and solely on flows. It has traditionally been stateless; not a good situation when we need a stateful firewall.
Bending iptables to Our Will
There's a way out of this. The Linux kernel contains a connection tracking module, and it can be used to implement a stateful firewall. However, these features had been available only to the Linux kernel firewall at the IP protocol layer (called "iptables"). And that's a problem: Open vSwitch does not operate at the IP protocol layer (also called L3); it's at one layer below (called L2). In other words, not all packets processed by the kernel are subject to iptables processing. In order for a packet to be processed by iptables, it needs either to be destined to an IP address local to the host or routed by the host. Packets which are switched (either by the Linux bridge or Open vSwitch) are not processed by iptables.
OpenStack needs the VMs to be on the same L2 segment, i.e., packets between them are switched. In order to still make use of iptables to implement a stateful firewall, it used a trick.
The Linux bridge (the traditional software switch included in the Linux kernel) contains its own filtering mechanism called ebtables. While connection tracking cannot be used from within ebtables, by setting the appropriate system config parameters it's possible to call iptables chains from ebtables. By using this technique, it's possible to make use of connection tracking even when doing L2 packet switching.
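
For reference, these are the standard bridge-netfilter knobs the trick relies on; a minimal sketch (the actual iptables rules Neutron installs are omitted):

$ sudo sysctl net.bridge.bridge-nf-call-iptables=1     # make iptables see bridged IPv4 traffic
$ sudo sysctl net.bridge.bridge-nf-call-ip6tables=1    # likewise for ip6tables and IPv6 traffic
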
Now, the obvious question is where to put this on the OpenStack packet traversal path.
The heart of every OpenStack node is the so-called "integration bridge", br-int. In a typical deployment, br-int is implemented using Open vSwitch. It's responsible for directing packets between VMs, tunneling them between nodes, and some other tasks. Thus, every VM is connected to an integration bridge.
The stateful firewall needs to be inserted between the VM and the integration bridge. We want to make use of iptables, which means inserting a Linux bridge between the VM and the integration bridge. That bridge needs to have the correct settings applied to call iptables and iptables rules need to be populated to utilize conntrack and do the necessary firewalling.
How It Looks
Looking at the picture below, let's examine how a packet from VM to VM traverses the network stack.
The first VM is connected to the host through the tap1 interface. A packet coming out of the VM is then directed to the Linux bridge qbr1. On that bridge, ebtables call into iptables, where the incoming packet is matched according to the configured rules. If the packet is approved, it passes the bridge and is sent out to the second interface connected to the bridge. That's qvb1, which is one side of the veth pair.
A veth pair is a pair of interfaces that are internally connected to each other. Whatever is sent to one of the interfaces is received by the other one, and vice versa. Why is the veth pair needed here? Because we need something that can interconnect the Linux bridge and the Open vSwitch integration bridge.
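
As an aside, such a pair can be created by hand with iproute2; a minimal sketch using the interface names from the picture:

$ sudo ip link add qvb1 type veth peer name qvo1    # create the two connected endpoints
$ sudo ip link set qvb1 up
$ sudo ip link set qvo1 up
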
Now the packet has reached br-int and is directed to the second VM. It goes out of br-int to qvo2, then through qvb2 it reaches the bridge qbr2. The packet goes through ebtables and iptables and finally reaches tap2, which is the target VM.
This is obviously very complex. All those bridges and interfaces add cost in extra CPU processing and extra latency, and performance suffers.
Connection Tracking in Open vSwitch to the Rescue
All of this can be dramatically simplified. If only we could include the connection tracking directly in Open vSwitch…
And that's exactly what happened. Recently, the connection tracking code in the kernel was decoupled from iptables, and Open vSwitch got support for conntrack. Now it's possible to match not only on flows but also on connections. Jakub Libosvar (Red Hat) made use of this new feature in Neutron.
Now, VMs can connect directly to the integration bridge, and the stateful firewall is implemented using Open vSwitch rules alone.
Let's examine the new, improved situation in the second picture below.

A packet coming out of the first VM (tap1) is directed to br-int. It's examined using the configured rules and either dropped or directly output to the second VM (tap2).
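
To give a feel for what such rules look like, here is a minimal, hand-written sketch of a stateful allow policy using the Open vSwitch conntrack support. This is not Neutron's actual rule set, and the OpenFlow port number is hypothetical:

$ sudo ovs-ofctl add-flow br-int "table=0, priority=10, ip, ct_state=-trk, actions=ct(table=1)"
$ sudo ovs-ofctl add-flow br-int "table=1, priority=10, ip, ct_state=+trk+est, actions=NORMAL"
$ sudo ovs-ofctl add-flow br-int "table=1, priority=10, ip, ct_state=+trk+new, in_port=1, actions=ct(commit),NORMAL"

The first rule sends untracked IP packets through the connection tracker and recirculates them into table 1. There, packets belonging to established connections are forwarded, new connections originating from port 1 (the VM) are committed to the tracker and then forwarded, and packets matching no rule are dropped.
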
This substantially saves packet processing costs and thus increases performance. The following overhead was eliminated:

1. Packet enqueueing on the veth pair: a packet sent to a veth endpoint is put into a queue, then dequeued and processed later.
2. Bridge processing on the per-VM bridge: each packet traversing the bridge is subject to FDB (forwarding database) processing.
3. ebtables overhead: we measured that just enabling ebtables, without any rules configured, has a cost in bridge throughput. Generally, ebtables is considered obsolete and doesn't receive much work, especially not performance work.
4. iptables overhead: there is no concept of per-interface rules in iptables; rules are global. This means that for every packet, the incoming interface needs to be checked and rule execution branched to the set of rules appropriate for that interface, a linear search using interface name matches which is very costly, especially with a high number of VMs.

In contrast, by using Open vSwitch conntrack, items 1-3 are gone instantly. Open vSwitch has only global rules, so we still need to match on the incoming interface, but unlike iptables, the lookup is done using a port number (not a textual interface name) and, more importantly, using a hash table. The overhead in item 4 is thus completely eliminated, too.
The only remaining overhead is of the firewall rules themselves.
In Summary
Without Open vSwitch conntrack:

A Linux bridge needs to be inserted between a VM and the integration bridge.
This bridge is connected to the integration bridge by a veth pair.
Packets traversing the bridge are processed by ebtables and iptables, implementing the stateful firewall.
There's a substantial performance penalty caused by the veth, bridge, ebtables, and iptables overhead.

With Open vSwitch conntrack:

VMs are connected directly to the integration bridge.
The stateful firewall is implemented directly at the integration bridge using hash tables.

Images were captured on a real system using plotnetcfg and simplified to better illustrate the points of this article.
Source: RedHat Stack

TripleO (Director) Components in Detail

In our previous post we introduced Red Hat OpenStack Platform Director. We showed how at the heart of Director is TripleO, short for "OpenStack on OpenStack". TripleO is an OpenStack project that aims to utilise OpenStack itself as the foundation for deploying OpenStack. To clarify, TripleO advocates the use of native OpenStack components, and their respective APIs, to configure, deploy, and manage OpenStack environments.
The major benefit of utilising these existing APIs with Director is that they're well documented, they go through extensive integration testing upstream, and they are among the most mature components in OpenStack. For those already familiar with the way OpenStack works, it's a lot easier to understand how TripleO (and therefore Director) works. Feature enhancements, security patches, and bug fixes are automatically inherited by Director, without us having to play catch-up with the community.
With TripleO, we refer to two clouds. The first to consider is the undercloud: the command-and-control cloud, a smaller OpenStack environment whose sole purpose is to bootstrap a larger production cloud. That production cloud is known as the overcloud, where tenants and their respective workloads reside. Director is sometimes treated as synonymous with the undercloud; Director bootstraps the undercloud OpenStack deployment and provides the necessary tooling to deploy an overcloud.

Ironic+Nova+Glance: baremetal management of overcloud nodes
For proper baremetal management during a deployment, Nova and Ironic need to be in perfect coordination. Nova is responsible for the orchestration, deployment, and lifecycle management of compute resources, for example, virtual machines. Nova relies on a set of plugins and drivers to establish compute resources requested by a tenant, such as the utilisation of the KVM hypervisor.
Ironic started life as an alternative Nova "baremetal driver". Now, Ironic is its own OpenStack project and complements Nova with its own respective API and command line utilities. Once the overcloud is deployed, Ironic can also be offered to customers that want to provide baremetal nodes to their tenants, using dedicated hardware outside of Nova's compute pools. Here, in Director's context, Ironic is a key core component of the undercloud, controlling and deploying the physical nodes that are required for the overcloud deployment.
But first Director has to register the nodes with Ironic. One has to catalog each node's IPMI (out-of-band management) details: its IP address, username, and password; there are also vendor-specific drivers, for example HP iLO, Cisco UCS, and Dell DRAC. Ironic will manage the power state of the bare metal nodes used for the overcloud deployment, as well as the deployment of the operating system (via a PXE-bootable installer image).
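
As a sketch, registration is typically driven by a JSON inventory file; the addresses and credentials below are hypothetical:

{
  "nodes": [
    {
      "pm_type": "pxe_ipmitool",
      "pm_addr": "192.168.24.101",
      "pm_user": "admin",
      "pm_password": "secret",
      "mac": ["52:54:00:aa:bb:cc"]
    }
  ]
}

$ openstack baremetal import --json instackenv.json
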
The disk image used during hardware bootstrap is taken from the undercloud Glance image service. Red Hat provides the required images to be deployed in the overcloud nodes. These disk images typically contain Red Hat Enterprise Linux and all OpenStack components, which minimises any post-deployment software installation. They can, of course, be customised further prior to upload into Glance. For example, customers often want to integrate additional software or configurations as per their requirements.
Neutron: network management of the overcloud
As you may already know, Neutron provides network access to tenants via a self-service interface to define networks, ports, and IP addresses that can be attached to instances. It also provides supporting services for booting instances such as DHCP, DNS, and routing. Within Director, we firstly use Neutron as an API for defining all overcloud networks, any required VLAN isolation, and associated IP addresses for the nodes (IP address management).
Secondly, we use Neutron in the undercloud as a mechanism for managing the network provisioning of the overcloud nodes during deployment. Neutron detects booting nodes and instructs them to PXE boot via a special DHCP offer, and then Ironic takes over responsibility for image deployment. Once the image is deployed, the Ironic deployment image reboots the machine from its hard drive, so this is the first time the node boots by itself. The node then executes os-net-config (from the TripleO project) to statically configure the operating system's network interfaces. Although that IP is managed in the undercloud's Neutron DHCP server, it is actually set as a static IP in the overcloud node's interface configuration. This allows for configuration of VLAN tagging, LACP or failover bonding, MTU settings, and other advanced parameters from the Director network configuration. Visit this tutorial for more information on os-net-config.
Heat: orchestrating the overcloud deployment steps
The most important component in Director is Heat, which is OpenStack’s generic orchestration engine. Users define stack templates using plain YAML text documents, listing the required resources (for example, instances, networks, storage volumes) along with a set of parameters for configuration. Heat deploys the resources based on a given dependency chain, sorting out which resources need to be built before the others. Heat can then monitor such resources for availability, and scale them out where necessary. These templates enable application stacks to become portable and to achieve repeatability and predictability.
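
To make this concrete, here is a toy Heat template; it is not one of Director's actual overcloud templates, and the image and flavor names are illustrative:

heat_template_version: 2015-04-30

parameters:
  flavor:
    type: string
    default: m1.small

resources:
  my_server:
    type: OS::Nova::Server
    properties:
      image: rhel-guest-image    # hypothetical Glance image name
      flavor: { get_param: flavor }

Heat resolves the parameters, orders resources by their dependencies, and creates each one through the corresponding OpenStack API.
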
Heat is used extensively within Director as the core orchestration engine for overcloud deployment. Heat takes care of the provisioning and management of any required resources, including the physical servers and networks, and the deployment and configuration of the dozens of OpenStack software components. Director's Heat stack templates describe the overcloud environment in intimate detail, including quantities and any necessary configuration parameters. This also makes the templates versionable and programmatically understood: a truly software-defined infrastructure.
Deployment templates: customizable reference architectures
Whilst not an OpenStack service, one of the most important components to look at is the actual set of templates that we use for deployment with Heat. The templates come from the upstream TripleO community in a sub-project known as tripleo-heat-templates (read an introduction here). The tripleo-heat-templates repository comprises a directory of Heat templates and the required puppet manifests and scripts to perform certain advanced tasks.
Red Hat relies on these templates with Director and works heavily to enhance them to provide additional features that customers request. This includes working with certified partners to confirm that their value-add technology can be automatically enabled via Director, thus minimising any post-deployment effort (for more information, visit our Partner's instructions to integrate with Director). The default templates will stand up a vanilla Red Hat OpenStack Platform environment, with all default parameters and backends (KVM, OVS, LVM, or Ceph if enabled, etc.).
Director offers customers the ability to set their own configuration by simply overriding the defaults in their own templates, and also provides hooks in the default templates to call additional code that organisations may want to run. This could include installing and configuring additional software, making non-standard configuration changes that the templates aren't aware of, or enabling a plugin not supported by Director.
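
A minimal sketch of such an override, assuming a hypothetical environment file (the parameter names follow tripleo-heat-templates conventions):

# my-overrides.yaml
parameter_defaults:
  ControllerCount: 3
  NeutronTunnelTypes: 'vxlan'

$ openstack overcloud deploy --templates -e my-overrides.yaml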
 
In our next blog post we’ll explain the Reference Architecture that Director provides out of the box, and how to plan for a successful deployment.
Source: RedHat Stack