[Podcast] PodCTL Basics – Understanding Service Meshes

As we get back into 2018, we decided to get back to basics with “PodCTL Basics”. These are our shorter, 10-15 minute shows which cover introductory-level knowledge about various topics related to containers, Kubernetes, or cloud-native applications. We’ve done some “Basics” shows in the past: What is Kubernetes, Linux Containers, and How to Containerize an Application. We […]
Source: OpenShift

OpenShift Commons Briefing #112: Kubernetes 1.9 Release Update with Derek Carr (Red Hat)

Get a walkthrough from Red Hat’s Derek Carr on the recent Kubernetes 1.9 release’s features and functions, and a review of what is in the works for release 1.10. The briefing is also a great guide to the 1.9 release, which went out the door at the very end of 2017. The 1.9 release had a strong focus on fixing bugs and maturing existing features to beta or stable. For Kubernetes 1.9, “stability” is the key feature, with an emphasis on refining, polishing, scaling, and tightening up production readiness.
Source: OpenShift

[Podcast] PodCTL #20 – Gathering Kubernetes Communities

As we enter 2018, we know a few things about the Kubernetes community: It continues to grow larger each year, as evidenced by the growth of KubeCon. More companies are trying to run Kubernetes in production. More technology vendors are making Kubernetes a priority, and providing various levels of integration. More and more companies are […]
Source: OpenShift

Hooroo! Australia bids farewell to incredible OpenStack Summit

We have reached the end of another successful and exciting OpenStack Summit. Sydney did not disappoint, giving attendees a wonderful show of weather ranging from rain and wind to bright, brilliant sunshine. The running joke was that Sydney was, again, just trying to be like Melbourne. Most locals will get that joke, and hopefully now some of our international visitors do, too!
Monty Taylor (Red Hat), Mark Collier (OpenStack Foundation), and Lauren Sell (OpenStack Foundation) open the Sydney Summit. (Photo: Author)

And much like the varied weather, the Summit really reflected the incredible diversity of both technology and community that we in the OpenStack world are so incredibly proud of. With over 2300 attendees from 54 countries, this Summit was noticeably more intimate but no less dynamic. Often having a smaller group of people allows for a more personal experience and increases the opportunities for deep, important interactions.
To my enjoyment I found that, unlike previous Summits, there wasn’t as much of a singularly dominant technological theme. In Boston it was impossible to turn a corner and not bump into a container talk. While containers were still a strong theme here in Sydney, I felt the general impetus moved away from specific technologies and into use cases and solutions. It feels like the OpenStack community has now matured to the point that it’s able to focus less on each specific technology piece and more on the business value those pieces create when working together.
Jonathan Bryce (OpenStack Foundation) (Photo: Author)
It is exciting to see both Red Hat associates and customers following this solution-based thinking with sessions demonstrating the business value that our amazing technology creates. Consider such sessions as SD-WAN – The open source way, where the complex components required for a solution are reviewed, and then live demoed as a complete solution. Truly exceptional. Or perhaps check out an overview of how the many components to an NFV solution come together to form a successful business story in A Telco Story of OpenStack Success.
At this Summit I felt that while the sessions still contained the expected technical content they rarely lost sight of the end goal: that OpenStack is becoming a key, and necessary, component to enabling true enterprise business value from IT systems.
To this end I was also excited to see over 40 sessions from Red Hat associates and our customers covering a wide range of industry solutions and use cases. From telcos to insurance companies, it is really exciting to see our associates and customers, especially those in Australia and New Zealand, sharing their experiences with our solutions with the world.
Mark McLoughlin, Senior Director of Engineering at Red Hat with Paddy Power Betfair’s Steven Armstrong and Thomas Andrew getting ready for a Facebook Live session (Photo: Anna Nathan)
Of course, there were too many sessions to attend in person, and with the wonderfully dynamic and festive air of the Marketplace offering great demos, swag, food, and, most importantly, conversations, I’m grateful for the OpenStack Foundation’s rapid publishing of all session videos. It’s a veritable pirate’s bounty of goodies and I recommend checking it out sooner rather than later on their website.
I was able to attend a few talks from Red Hat customers and associates that really got me thinking and excited. The themes were varied, from the growing world of Edge computing, to virtualizing network operations, to changing company culture; Red Hat and our customers are doing very exciting things.

Digital Transformation
Take for instance Telstra, who are using Red Hat OpenStack Platform as part of a virtual router solution. The journey started two years ago with a virtualized network component delivered as an internal trial, which took a year to complete and was a big success from both a technological and cultural standpoint. As Senior Technology Specialist Andrew Harris from Telstra pointed out during the Q&A of his session, projects like this are not only about implementing new technology but also about “educating … staff in Linux, OpenStack and IT systems.” It was a great session, co-presented with Juniper and Red Hat, that really gets into how Telstra is able to deliver key business requirements such as reliability, redundancy, and scale while still meeting strict cost requirements.

Of course this type of digital transformation story is not limited to telcos. The use of OpenStack as a catalyst for company change as well as advanced solutions was seen strongly in two sessions from Australia’s Insurance Australia Group (IAG).
Eddie Satterly, IAG (Photo: Author)
Product Engineering and DataOps Lead Eddie Satterly recounted the journey IAG took to consolidate data for a better customer experience using open source technologies. IAG uses Red Hat OpenStack Platform as the basis for an internal open source revolution that has not only led to significant cost savings but has even resulted in the IAG team open sourcing some of the tools that made it happen. Check out the full story of how they did it and join TechCrunch reporter Frederic Lardinois, who chats with Eddie about the entire experience. There’s also a Facebook live chat Eddie did with Mark McLoughlin, Senior Director of Engineering at Red Hat, that further tells their story.
Ops!
An area of excitement for those of us with roots in the operational space is the way that OpenStack continues to become easier to install and maintain. The evolution of TripleO, the upstream project for Red Hat OpenStack Platform’s deployment and lifecycle management tool known as director, has really reached a high point in the Pike cycle. With Pike, TripleO has begun utilizing Ansible as the core “engine” for upgrades, container orchestration, and lifecycle management. Check out Senior Principal Software Engineer Steve Hardy’s deep dive into all the cool things TripleO is doing, and learn just how excited the new “openstack overcloud config download” command is going to make you and your Ops team.
Steve Hardy (Red Hat) and Jaromir Coufal (Red Hat) (Photo: Author)
And as a quick companion to Steve’s talk, don’t miss his joint lightning talk with Red Hat Senior Product Manager Jaromir Coufal, lovingly titled OpenStack in Containers: A Deployment Hero’s Story of Love and Hate, for an excellent 10 minute intro to the journey of OpenStack, containers, and deployment.
Want more? Don’t miss these sessions …
Storage and OpenStack:

Delivering OpenStack and Ceph in containers
Panel: Experiences Scaling File Storage with CephFS and OpenStack
CephFS: Now fully awesome (What is the impact of CephFS on OpenStack cloud)

Containers and OpenStack:

Standing Up and Operating a Container Service on top of OpenStack Using OpenShift
Bringing Worlds Together: Designing and Deploying Kubernetes on an OpenStack multi-site environment

Telcos and OpenStack

Will OpenStack be Everywhere in 5G Networks?
“I pity the fool that builds his own cloud!”–Overcoming challenges of OpenStack based telco clouds
Sprint and the Open Telco Transformation

A great event

Although only 3 days long, this Summit really did pack a sizeable amount of content into that time. Being able to have the OpenStack world come to Sydney and enjoy a bit of Australian culture was really wonderful. Whether we were watching the world famous Melbourne Cup horse race with a room full of OpenStack developers and operators, or cruising Sydney’s famous harbour and talking the merits of cloud storage with the community, it really was a unique and exceptional week.
The Melbourne Cup is about to start! (Photo: Author)
The chance to see colleagues from across the globe, immersed in the technical content and environment they love, supporting and learning alongside customers, vendors, and engineers is incredibly exhilarating. In fact, despite the tiredness at the end of each day, I went to bed each night feeling more and more excited about the next day, week, and year in this wonderful community we call OpenStack!

See you in Vancouver!
Photo: Darin Sorrentino
Source: RedHat Stack

An Introduction to Fernet tokens in Red Hat OpenStack Platform

Thank you for joining me to talk about Fernet tokens. In this first of three posts on Fernet tokens, I’d like to go over the definition of OpenStack tokens, the different types and why Fernet tokens should matter to you. This series will conclude with some awesome examples of how to use Red Hat Ansible to manage your Fernet token keys in production.
First, some definitions …
What is a token? OpenStack tokens are bearer tokens, used to authenticate and validate users and processes in your OpenStack environment. Pretty much any time anything happens in OpenStack, a token is involved. The OpenStack Keystone service is the core service that issues and validates tokens. Users and software clients authenticate against Keystone via its API, receive a token, and then present that token when requesting operations ranging from creating compute resources to allocating storage. Services like Nova or Ceph then validate that token with Keystone and continue with or deny the requested operation. The following diagram shows a simplified version of this dance.
Courtesy of the author
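To make this dance concrete, here is a minimal Python sketch using the keystoneauth1 library; the endpoint and credentials are placeholders, not values from any real deployment:

from keystoneauth1.identity import v3
from keystoneauth1 import session

# Placeholder endpoint and credentials, for illustration only
auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                   username='demo', password='secret',
                   project_name='demo',
                   user_domain_name='Default',
                   project_domain_name='Default')
sess = session.Session(auth=auth)

# Keystone issues a token; subsequent API calls present it in the
# X-Auth-Token header, and each service validates it with Keystone.
token = sess.get_token()
print(token)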

Token Types
Tokens come in several types, referred to as “token providers” in Keystone parlance. These types can be set at deployment time, or changed post deployment. Ultimately, you’ll have to decide what works best for your environment, given your organization’s workload in the cloud.
The following types of tokens exist in Keystone:
UUID (Universally Unique Identifier)
The default token provider in Keystone is UUID. This is a 32-character bearer token that must be persisted (stored) across controller nodes, along with its associated metadata, in order to be validated.
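For a sense of what that looks like, a UUID token is essentially a random 128-bit identifier rendered as hex; a rough Python illustration (this mirrors the idea, not Keystone’s exact code):

import uuid

# A UUID token is an opaque random identifier; Keystone must persist
# it (plus metadata) in its database to validate it later.
token = uuid.uuid4().hex
print(token)       # e.g. '3ab4820987d94702a1a7cb67bc2890c5'
print(len(token))  # 32 characters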
PKI & PKIZ (public key infrastructure)
This token format is deprecated as of the OpenStack Ocata release, which means it is deprecated in Red Hat OpenStack Platform 11. This format is also persisted across controller nodes. PKI tokens contain catalog information of the user that bears them, and thus can get quite large, depending on how large your cloud is. PKIZ tokens are simply compressed versions of PKI tokens.
Fernet
Fernet tokens (pronounced fehr:NET) are MessagePack-encoded tokens that contain authentication and authorization data. Fernet tokens are signed and encrypted before being handed out to users. Most importantly, however, Fernet tokens are ephemeral. This means they do not need to be persisted across clustered systems in order to be successfully validated.
Fernet was originally a secure messaging format created by Heroku. The OpenStack implementation of this lightweight and more API-friendly format was developed by the OpenStack Keystone core team.
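You can experiment with the generic Fernet format yourself using the Python cryptography library. Note this illustrates the format only; Keystone’s implementation packs a MessagePack payload and reads its keys from a repository on disk:

from cryptography.fernet import Fernet

# Generate a key, analogous to one entry in /etc/keystone/fernet-keys
key = Fernet.generate_key()
f = Fernet(key)

# The payload is signed and encrypted; anyone holding the key can
# validate the token, so no server-side token storage is needed.
token = f.encrypt(b'authentication and authorization data')
print(f.decrypt(token))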
The Problem
As you may have guessed by now, the real problem solved by Fernet tokens is one of persistence. Imagine, if you will, the following scenario:

A user logs into Horizon (the OpenStack Dashboard)
User creates a compute instance
User requests persistent storage upon instance creation
User assigns a floating IP to the instance

While this is a simplified scenario, you can clearly see that there are multiple calls to different core components being made. Even in the most basic of examples you see at least one authentication, as well as multiple validations along the way. Not only does this require network bandwidth, but when using persistent token providers such as UUID it also requires a lot of storage in Keystone.
Photo by Eugenio Mazzone on Unsplash
Additionally, the token table in the database used by Keystone grows as your cloud gets more usage. When using UUID tokens, operators must implement a detailed and comprehensive strategy to prune this table at periodic intervals (typically with scheduled runs of keystone-manage token_flush) to avoid real trouble down the line. This becomes even more difficult in a clustered environment.
It’s not only backend components which are affected. In fact, all services that are exposed to users require authentication and authorization. This leads to increased bandwidth and storage usage on one of the most critical core components in OpenStack. If Keystone goes down, your users will know it and you no longer have a cloud in any sense of the word.
Now imagine the impact as you scale your cloud; the problems with UUID tokens are dangerously amplified.
Benefits of Fernet tokens
Because Fernet tokens are ephemeral, you have the following immediate benefits:

Tokens do not need to be replicated to other instances of Keystone in your controller cluster
Storage is not affected, as these tokens are not stored

The end result is increased performance overall. This was the design imperative of Fernet tokens, and the OpenStack community has more than delivered.
Show me the numbers
All of these benefits sound good, but what are the real numbers behind the performance differences between UUID and Fernet? One of the core Keystone developers, Dolph Mathews, created a great post about Fernet benchmarks.
Note that these benchmarks are for OpenStack Kilo, so you’ll most likely see even greater performance numbers in newer releases.
The most important benchmarks in Dolph’s post are the ones comparing the various token formats to each other on a globally-distributed Galera cluster. These show the following results using UUID as a baseline:
Token creation performance

Fernet
50.8 ms (85% faster than UUID)
237.1 (42% faster than UUID)

Token validation performance

Fernet
5.55 ms (8% faster than UUID)
1957.8 (14% faster than UUID)

As you can see, these numbers are quite remarkable. More informal benchmarks can be found at the CERN OpenStack blog, OpenStack in Production.
Security Implications
Photo by Praveesh Palakeel on Unsplash
One important aspect of using Fernet tokens is security. As these tokens are signed and encrypted, they are inherently more secure than plain text UUID tokens. One really great aspect of this is the fact that you can invalidate a large number of tokens, either during normal operations or during a security incident, by simply changing the keys used to validate them. This requires a key rotation strategy, which I’ll get into in the third part of this series.
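To see why changing keys invalidates tokens, consider this small sketch with the generic Fernet format again (an illustration, not Keystone’s code):

from cryptography.fernet import Fernet, InvalidToken

old = Fernet(Fernet.generate_key())
token = old.encrypt(b'payload')

new = Fernet(Fernet.generate_key())  # the old key has been rotated away
try:
    new.decrypt(token)
except InvalidToken:
    # Every token issued under the old key is now invalid
    print('token no longer validates')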
While there are security advantages to Fernet tokens, it must be said they are only as secure as the keys that created them. Keystone creates the tokens with a set of keys in your Red Hat OpenStack Platform environment. Using advanced technologies like SELinux, Red Hat Enterprise Linux is a trusted partner in this equation. Remember, the OS matters.
Conclusion
While OpenStack functions just fine with its default UUID token format, I hope that this article shows you some of the benefits of Fernet tokens. I also hope that you find the knowledge you’ve gained here useful once you decide to move forward with implementing them.
In our follow-up blog post in this series, we’ll be looking at how to enable Fernet tokens in your OpenStack environment — both pre and post-deploy. Finally, our last post will show you how to automate key rotation using Red Hat Ansible in a production environment. I hope you’ll join me along the way.
Source: RedHat Stack

Enabling Keystone’s Fernet Tokens in Red Hat OpenStack Platform

As we learned in part one of this series, beginning with the OpenStack Kilo release a new token provider became available as an alternative to PKI and UUID. Fernet tokens are essentially an implementation of ephemeral tokens in Keystone. What this means is that tokens are no longer persisted and hence do not need to be replicated across clusters or regions.
“In short, OpenStack’s authentication and authorization metadata is neatly bundled into a MessagePacked payload, which is then encrypted and signed as a Fernet token. OpenStack Kilo’s implementation supports a three-phase key rotation model that requires zero downtime in a clustered environment.” (from: http://dolphm.com/openstack-keystone-fernet-tokens/)

In our previous post, I covered the different types of tokens, the benefits of Fernet, and a little bit of the technical details. In this part of our three part series we provide a method for enabling Fernet tokens on Red Hat OpenStack Platform 10, during both pre and post deployment of the overcloud stack.
Pre-Overcloud Deployment
Official Red Hat documentation for enabling Fernet tokens in the overcloud can be found here:
Deploy Fernet on the Overcloud
Tools
We’ll be using Red Hat OpenStack Platform here, so this means we’ll be interacting with the director node and Heat templates. Our primary tool is the command-line client keystone-manage, part of the tools provided by the openstack-keystone RPM and used to set up and manage Keystone in the overcloud. We’ll use the director-based deployment to enable Fernet pre and/or post deployment.
Photo by Barn Images on Unsplash
Prepare Fernet keys on the undercloud
This procedure starts with preparation of the Fernet keys, which a default deployment places on each controller in /etc/keystone/fernet-keys. Each controller must have the same keys, as tokens issued on one controller must be able to be validated on all controllers. Stay tuned for part three of this blog for an in-depth explanation of Fernet signing keys.

Source the stackrc file to ensure we are working with the undercloud:

$ source ~/stackrc

From your director, use keystone-manage to generate the Fernet keys as deployment artifacts:

$ sudo keystone-manage fernet_setup \
  --keystone-user keystone \
  --keystone-group keystone

Tar up the keys for upload into a swift container on the undercloud:

$ sudo tar -zcf keystone-fernet-keys.tar.gz /etc/keystone/fernet-keys

Upload the Fernet keys to the undercloud as swift artifacts (we assume your templates exist in ~/templates):

$ upload-swift-artifacts -f keystone-fernet-keys.tar.gz \
  --environment ~/templates/deployment-artifacts.yaml

Verify that your artifact exists in the undercloud:

$ swift list overcloud-artifacts
keystone-fernet-keys.tar.gz
NOTE: These keys should be secured as they can be used to sign and validate tokens that will have access to your cloud.

Let’s verify that deployment-artifacts.yaml exists in ~/templates (NOTE: your URL detail will differ from what you see here – as this is a uniquely generated temporary URL):

$ cat ~/templates/deployment-artifacts.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - 'http://192.0.2.1:8080/v1/AUTH_c9d16242396b4eb1a0f950093fa9464c/overcloud-artifacts/keystone-fernet-keys.tar.gz?temp_url_sig=917bd467e70516581b1db295783205622606e367&temp_url_expires=1520463185'
NOTE: This is the swift URL that your overcloud deployment will use to copy the Fernet keys to your controllers.

Finally, generate the fernet.yaml template to enable Fernet as the default token provider in your overcloud:

$ cat << EOF > ~/templates/fernet.yaml
parameter_defaults:
  controllerExtraConfig:
    keystone::token_provider: 'fernet'
EOF
Deploy and Validate
At this point, you are ready to deploy your overcloud with Fernet enabled as the token provider, and your keys distributed to each controller in /etc/keystone/fernet-keys.
Photo by Glenn Carstens-Peters on Unsplash
NOTE: This is an example deploy command, yours will likely include many more templates. For the purposes of our discussion, it is important that you simply include fernet.yaml as well as deployment-artifacts.yaml.
$ openstack overcloud deploy \
  --templates /home/stack/templates \
  -e /home/stack/templates/environments/deployment-artifacts.yaml \
  -e /home/stack/templates/environments/fernet.yaml \
  --control-scale 3 \
  --compute-scale 4 \
  --control-flavor control \
  --compute-flavor compute \
  --ntp-server pool.ntp.org
Testing
Once the deployment is done you should validate that your overcloud is indeed using Fernet tokens instead of the default UUID token provider. From the director node:
$ source ~/overcloudrc
$ openstack token issue
+------------+-------------------------------------------------+
| Field      | Value                                           |
+------------+-------------------------------------------------+
| expires    | 2017-03-22 19:16:21+00:00                       |
| id         | gAAAAABY0r91iYvMFQtGiRRqgMvetAF5spEZPTvEzCpFWr3 |
|            | 1IB8T8L1MRgf4NlOB6JsfFhhdxenSFob_0vEEHLTT6rs3Rw |
|            | q3-Zm8stCF7sTIlmBVms9CUlwANZOQ4lRMSQ6nTfEPM57kX |
|            | Xw8GBGouWDz8hqDYAeYQCIHtHDWH5BbVs_yC8ICXBk      |
| project_id | f8adc9dea5884d23a30ccbd486fcf4c6                |
| user_id    | 2f6106cef80741c6ae2bfb3f25d70eee                |
+------------+-------------------------------------------------+
Note the length of this token in the “id” field. This is a Fernet token.
Enabling Fernet Post Overcloud Deployment
Part of the power of the Red Hat OpenStack Platform director deployment methodology lies in its ability to easily upgrade and change a running overcloud. Features such as Fernet, scaling, and complex service management, can be managed by running a deployment update directly against a running overcloud.
Updating is really straightforward. If you’ve already deployed your overcloud with UUID tokens, you can change them to Fernet by simply following the pre-deploy example above and running the openstack overcloud deploy command again, with the Heat templates mentioned enabled, against your running deployment. This will change your overcloud token default to Fernet. Be sure to deploy with your original deploy command, as any changes there could affect your overcloud. And of course, standard outage windows apply; production changes should be tested and prepared accordingly.
Conclusion
I hope you’ve enjoyed our discussion and that I was able to shed some light on the process of enabling Fernet tokens in the overcloud. Official documentation on these concepts and the overcloud Fernet process is available.
In our last and final instalment on this topic we’ll look at some of the many methods for rotating your newly enabled Fernet keys on your controller nodes. We’ll be using Red Hat’s awesome IT automation tool, Red Hat Ansible, to do just that.
Source: RedHat Stack

Red Hat OpenStack Platform 12 Is Here!

We are happy to announce that Red Hat OpenStack Platform 12 is now Generally Available (GA).
This is Red Hat OpenStack Platform’s 10th release and is based on the upstream OpenStack release, Pike.
Red Hat OpenStack Platform 12 is focused on the operational aspects of deploying OpenStack. OpenStack has established itself as a solid technology choice, and with this release we are working hard to further improve the usability aspects and bring OpenStack and operators into harmony.

With operationalization in mind, let’s take a quick look at some of the biggest and most exciting features now available.

Containers.
As containers are changing and improving IT operations it only stands to reason that OpenStack operators can also benefit from this important and useful technology concept. In Red Hat OpenStack Platform we have begun the work of containerizing the control plane. This includes some of the main services that run OpenStack, like Nova and Glance, as well as supporting technologies, such as Red Hat Ceph Storage. All these services can be deployed as containerized applications via Red Hat OpenStack Platform’s lifecycle and deployment tool, director.
Photo by frank mckenna on Unsplash
Bringing a containerized control plane to OpenStack is important. Through it we can immediately enhance, among other things, stability and security features through isolation. By design, OpenStack services often have complex, overlapping library dependencies that must be accounted for in every upgrade, rollback, and change. For example, if Glance needs a security patch that affects a library shared by Nova, time must be spent to ensure Nova can survive the change; or even more frustratingly, Nova may need to be updated itself. This makes the change effort and resulting change window and impact, much more challenging. Simply put, it’s an operational headache.
However, when we isolate those dependencies into a container we are able to work with services with much more granularity and separation. An urgent upgrade to Glance can be done alongside Nova without affecting it in any way. With this granularity, operators can more easily quantify and test the changes helping to get them to production more quickly.
We are working closely with our vendors, partners, and customers to move to this containerized approach in a way that is minimally disruptive. Upgrading from a non-containerized control plane to one with most services containerized is fully managed by Red Hat OpenStack Platform director. Indeed, when upgrading from Red Hat OpenStack Platform 11 to Red Hat OpenStack Platform 12 the entire move to containerized services is handled “under the hood” by director. With just a few simple preparatory steps director delivers the biggest change to OpenStack in years direct to your running deployment in an almost invisible, simple to run, upgrade. It’s really cool!
Red Hat Ansible.
Like containers, it’s pretty much impossible to work in operations and not be aware of, or more likely be actively using, Red Hat Ansible. Red Hat Ansible is known to be easier to use for customising and debugging; most operators are more comfortable with it, and it generally provides an overall nicer experience through a straightforward and easy to read format.

Of course, we at Red Hat are excited to include Ansible as a member of our own family. With Red Hat Ansible we are actively integrating this important technology into more and more of our products.
In Red Hat OpenStack Platform 12, Red Hat Ansible takes center stage.
But first, let’s be clear, we have not dropped Heat; there are very real requirements around backward compatibility and operator familiarity that are delivered with the Heat template model.
But we don’t have to compromise because of this requirement. With Ansible we are offering operator and developer access points independent of the Heat templates. We use the same composable services architecture as we had before; the Heat-level flexibility still works the same, we just translate to Ansible under the hood.
Simplistically speaking, before Ansible, our deployments were mostly managed by Heat templates driving Puppet. Now, we use Heat to drive Ansible by default, and then Ansible drives Puppet and other deployment activities as needed. And with the addition of containerized services, we also have positioned Ansible as a key component of the entire container deployment. By adding a thin layer of Ansible, operators can now interact with a deployment in ways they could not previously.
For instance, take the new openstack overcloud config download command. This command allows an operator to generate all the Ansible playbooks being used for a deployment into a local directory for review. And these aren’t mere interpretations of Heat actions, these are the actual, dynamically generated playbooks being run during the deployment. Combine this with Ansible’s cool dynamic inventory feature, which allows an operator to maintain their Ansible inventory file based on a real-time infrastructure query, and you get an incredibly powerful troubleshooting entry point.
Check out this short (1:50) video showing Red Hat Ansible and this new exciting command and concept:

Network composability.
Another major new addition for operators is the extension of the composability concept into networks.
As a reminder, when we speak about composability we are talking about enabling operators to create detailed solutions by giving them basic, simple, defined components from which they can build for their own unique, complex topologies.
With composable networks, operators are no longer only limited to using the predefined networks provided by director. Instead, they can now create additional networks to suit their specific needs. For instance, they might create a network just for NFS filer traffic, or a dedicated SSH network for security reasons.
Photo by Radek Grzybowski on Unsplash
And as expected, composable networks work with composable roles. Operators can create custom roles and apply multiple, custom networks to them as required. The combinations lead to an incredibly powerful way to build complex enterprise network topologies, including an on-ramp to the popular L3 spine-leaf topology.
And to make it even easier to put together we have added automation in director that verifies that resources and Heat templates for each composable network are automatically generated for all roles. Fewer templates to edit can mean less time to deployment!
Telco speed.
Telcos will be excited to know we are now delivering production ready virtualized fast data path technologies. This release includes Open vSwitch 2.7 and the Data Plane Development Kit (DPDK) 16.11 along with improvements to Neutron and Nova allowing for robust virtualized deployments that include support for large MTU sizing (i.e. jumbo frames) and multiple queues per interface. OVS+DPDK is now a viable option alongside SR-IOV and PCI passthrough in offering more choice for fast data in Infrastructure-as-a-Service (IaaS) solutions.
Operators will be pleased to see that these new features can be more easily deployed thanks to new capabilities within Ironic, which store environmental parameters during introspection. These values are then available to the overcloud deployment providing an accurate view of hardware for ideal tuning. Indeed, operators can further reduce the complexity around tuning NFV deployments by allowing director to use the collected values to dynamically derive the correct parameters resulting in truly dynamic, optimized tuning.
Serious about security.

Helping operators, and the companies they work for, focus on delivering business value instead of worrying about their infrastructure is core to Red Hat’s thinking. And one way we make sure everyone sleeps better at night with OpenStack is through a dedicated focus on security.
Starting with Red Hat OpenStack Platform 12 we have more internal services using encryption than in any previous release. This is an important step for OpenStack as a community to help increase adoption in enterprise datacenters, and we are proud to be squarely at the center of that effort. For instance, in this release even more services now feature internal TLS encryption.
Let’s be realistic, though, focusing on security extends beyond just technical implementation. Starting with Red Hat OpenStack Platform 12 we are also releasing a comprehensive security guide, which provides best practices as well as conceptual information on how to make an OpenStack cloud more secure. Our security stance is firmly rooted in meeting global standards from top international agencies such as FedRAMP (USA), ETSI (Europe), and ANSSI (France). With this guide, we are excited to share these efforts with the broader community.
Do you even test?
How many times has someone asked an operations person this question? Too many! “Of course we test,” they will say. And with Red Hat OpenStack Platform 12 we’ve decided to make sure the world knows we do, too.
Through the concept of Distributed Continuous Integration (DCI), we place remote agents on site with customers, partners, and vendors that continuously build our releases at all different stages on all different architectures. By engaging outside resources we are not limited by internal resource restrictions; instead, we gain access to hardware and architecture that could never be tested in any one company’s QA department. With DCI we can fully test our releases to see how they work under an ever-increasing set of environments. We are currently partnered with major industry vendors for this program and are very excited about how it helps us make the entire OpenStack ecosystem better for our customers.
So, do we even test? Oh, you bet we do!
Feel the love!
Photo by grafxart photo on Unsplash
And this is just a small piece of the latest Red Hat OpenStack Platform 12 release. Whether you are looking to try out a new cloud, or thinking about an upgrade, this release brings a level of operational maturity that will really impress!
Now that OpenStack has proven itself an excellent choice for IaaS, it can focus on making itself a loveable one.
Let Red Hat OpenStack Platform 12 reignite the romance between you and your cloud!

Red Hat OpenStack Platform 12 is designated as a “Standard” release with a one-year support window. Click here for more details on the release lifecycle for Red Hat OpenStack Platform.
Find out more about this release at the Red Hat OpenStack Platform Product page. Or visit our vast online documentation.
And if you’re ready to get started now, check out the free 60-day evaluation available on the Red Hat portal.
Looking for even more? Contact your local Red Hat office today.
Source: RedHat Stack

Using Ansible for Fernet Key Rotation on Red Hat OpenStack Platform 11

In our first blog post on the topic of Fernet tokens, we explored what they are and why you should think about enabling them in your OpenStack cloud. In our second post, we looked at the method for enabling these tokens in the overcloud, both pre and post deployment.
Fernet tokens in Keystone are fantastic. Enabling these, instead of UUID or PKI tokens, really does make a difference in your cloud’s performance and overall ease of management. I get asked a lot about how to manage keys on your controller cluster when using Fernet. As you may imagine, getting this wrong could potentially take your cloud down. Let’s review what Fernet keys are, as well as how to manage them in your Red Hat OpenStack Platform cloud.
Photo by Freddy Marschall on Unsplash

Prerequisites

A Red Hat OpenStack Platform 11 director-based deployment
One or more controller nodes
Git command-line client

What are Fernet Keys?
Fernet keys are used to encrypt and decrypt Fernet tokens in OpenStack’s Keystone API. These keys are stored on each controller node, and must be available to authenticate and validate users of the various OpenStack components in your cloud.
Any given implementation of Keystone can have n keys, based on the max_active_keys setting in /etc/keystone/keystone.conf. This number includes all of the key types listed below.
There are essentially three types of keys:
Primary
Primary keys are used for token generation and validation. You can think of this as the active key in your cloud. Any time a user authenticates, or is validated by an OpenStack API, these are the keys that will be used. There can only be one primary key, and it must exist on all nodes (usually controllers) that are running the keystone API. The primary key is always the highest indexed key.
Secondary
Secondary keys are only used for token validation. These keys are rotated out of primary status, and thus are used to validate tokens that may exist after a new primary key has been created. There can be multiple secondary keys, the oldest of which will be deleted based on your max_active_keys setting after each key rotation.
Staged
These keys are always the lowest indexed keys (0). Whenever keys are rotated, this key is promoted to a primary key at the highest index allowable by max_active_keys. These keys exist to allow you to copy them to all nodes in your cluster before they’re promoted to primary status. This avoids the potential issue where keystone fails to validate a token because the key used to encrypt it does not yet exist in /etc/keystone/fernet-keys.
The following example shows the keys that you’d see in /etc/keystone/fernet-keys, with max_active_keys set to 4.
0 (staged: the next primary key)
1 (primary: token generation & validation)

Upon performing a key rotation, our staged key (0) will become the new primary key (2), while our old primary key (1) will be moved to secondary status.
0 (staged: the next primary key)
1 (secondary: token validation)
2 (primary: token generation & validation)

We have three keys here, so yet another key rotation will produce the following result:
0 (staged: the next primary key)
1 (secondary: token validation)
2 (secondary: token validation)
3 (primary: token generation & validation)
Our staged key (0) now becomes our primary key (3). Our old primary key (2) now becomes a secondary key, and (1) remains a secondary key.
We now have four keys, the number we’ve set in max_active_keys. One more final rotation would produce the following:
0 (staged: the next primary key)
1 (deleted)
2 (secondary: token validation)
3 (secondary: token validation)
4 (primary: token generation & validation)
Our oldest key, secondary (1), is deleted. Our previously staged key (0) is moved to primary (4) status. A new staged key (0) is created. And finally, our old primary key (3) is moved to secondary status.
If you haven’t noticed by now, rotating keys will always remove the key with the lowest index, excluding 0, up to your max_active_keys. Additionally, note that you must be careful to set max_active_keys to something that makes sense, given your token lifetime and how often you plan to rotate your keys.
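If the index shuffling is hard to follow, this toy Python model mimics the promotion and pruning behavior described above; it is a simplified sketch, not the actual keystone-manage fernet_rotate implementation:

def rotate(keys, max_active_keys):
    # Promote the staged key (index 0) to the new highest index (primary)
    next_index = max(keys) + 1
    keys[next_index] = keys.pop(0)
    # Create a fresh staged key at index 0
    keys[0] = 'new-staged-key'
    # Prune the oldest secondary keys beyond max_active_keys
    while len(keys) > max_active_keys:
        del keys[min(k for k in keys if k != 0)]
    return keys

keys = {0: 'staged', 1: 'primary'}
for _ in range(3):
    keys = rotate(keys, max_active_keys=4)
    print(sorted(keys))
# Prints [0, 1, 2], then [0, 1, 2, 3], then [0, 2, 3, 4],
# matching the rotations walked through above.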
When to rotate?
Photo by Uroš Jovičić on Unsplash
The answer to this question would probably be different for most organizations. My take on this is simply: if you can do it safely, why not automate it and do it on a regular basis? Your threat model and use-case would normally dictate this or you may need to adhere to certain encryption and key management security controls in a given compliance framework. Whatever the case, I think about regular key rotation as a best-practices security measure. You always want to limit the amount of sensitive data, in this case Fernet tokens, encrypted with a single version of any given encryption key. Rotating your keys on a regular basis creates a smaller exposure surface for your cloud and your users.
How many keys do you need active at one time? This all depends on how often you plan to rotate them, as well as how long your token lifetime is. The answer to this can be expressed in the following equation:
fernet-keys = token-validity(hours) / rotation-time(hours) + 2
Let’s use an example of rotation every 8 hours, with a default token lifetime of 24 hours. This would be:
24 hours / 8 hours + 2 = 5
Five keys on your controllers would ensure that you always had an active set of keys for your cloud.
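As a quick sanity check of that arithmetic, here is a small helper; the ceiling is my own assumption, to round up when the token lifetime is not an even multiple of the rotation interval:

import math

def required_fernet_keys(token_validity_hours, rotation_hours):
    return math.ceil(token_validity_hours / rotation_hours) + 2

print(required_fernet_keys(24, 8))  # 5

With this in mind, let’s look at a way to rotate your keys using Ansible.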
Rotating Fernet keys
So you may be wondering, how does one automate this process? You can imagine that this process can be painful and prone to error if done by hand. While you could use the fernet_rotate command to do this on each node manually, why would you?
Let’s look at how to do this with Ansible, Red Hat’s awesome tool for automation. If you’re new to Ansible, please do yourself a favor and check out this quick-start video.
We’ll be using an Ansible role, created by my fellow Red Hatter Juan Antonio Osorio (Ozz), one of the coolest guys I know. This is just one way of doing this. For a Red Hat OpenStack Platform install you should contact Red Hat support to review your options and support implications. And of course, your results may vary so be sure to test out on a non-production install!
Let’s start by logging into your Red Hat OpenStack director node as the stack user, cloning the role into a roles directory in /home/stack, and creating a playbook, rotate.yml, that calls it:
$ cat << EOF > ~/rotate.yml
- hosts: controller
  become: true
  roles:
    - tripleo-fernet-keys-rotation
EOF
We need to source our stackrc, as we’ll be operating on our controller nodes in the next step:
$ source ~/stackrc
Using a dynamic inventory from /usr/bin/tripleo-ansible-inventory, we’ll run this playbook and rotate the keys on our controllers:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory rotate.yml
Ansible Role Analysis
What happened? Looking at Ansible’s output, you’ll note that several tasks were performed. If you’d like to see these tasks, look no further than /home/stack/roles/tripleo-fernet-keys-rotation/tasks/main.yml:
This task runs a Python script, generate_key_yaml.py, found in the role’s files directory, that creates a new Fernet key:
- name: Generate new key
  script: generate_key_yaml.py
  register: new_key_register
  run_once: true

This task takes the output of the previous task, from stdout, and registers it as the new_key fact.

- name: Set new key fact
  set_fact:
    new_key: "{{ new_key_register.stdout }}"
Next, we find the highest key index currently in /etc/keystone/fernet-keys:
- name: Get current primary key index
  shell: ls /etc/keystone/fernet-keys | sort -r | head -1
  register: current_key_index_register
Let’s set the next primary key index:
- name: Set next key index fact
  set_fact:
    next_key_index: "{{ current_key_index_register.stdout|int + 1 }}"
Now we’ll move the staged key to the new primary index:
- name: Move staged key to new index
  command: mv /etc/keystone/fernet-keys/0 /etc/keystone/fernet-keys/{{ next_key_index }}
Next, let’s set our new_key as the new staged key:
- name: Set new key as staged key
  copy:
    content: "{{ new_key }}"
    dest: /etc/keystone/fernet-keys/0
    owner: keystone
    group: keystone
    mode: 0600
Finally, we’ll reload (not restart) httpd on the controller, allowing Keystone to load the new keys:
- name: Reload httpd
  service:
    name: httpd
    state: reloaded
Scheduling
Now that we have a way to automate rotation of our keys, it’s time to schedule this automation. There are several ways you could do this:
Cron
You could, but why?
Systemd Realtime Timers
Let’s create the systemd service that will run our playbook (the unit file lives under /etc/systemd/system, hence the sudo tee):
$ sudo tee /etc/systemd/system/fernet-rotate.service << EOF
[Unit]
Description=Run an Ansible playbook to rotate fernet keys on the overcloud

[Service]
User=stack
Group=stack
ExecStart=/usr/bin/ansible-playbook -i /usr/bin/tripleo-ansible-inventory /home/stack/rotate.yml
EOF
Now we’ll create a timer with the same name, only with .timer as the suffix, in /etc/systemd/system on the director node:
$ sudo tee /etc/systemd/system/fernet-rotate.timer << EOF
[Unit]
Description=Timer to rotate our overcloud Fernet keys weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF
Ansible Tower
I like how you’re thinking! But that’s a topic for another day.
Red Hat OpenStack Platform 12
Red Hat OpenStack Platform 12 provides support for key rotation via Mistral. Learn all about Red Hat OpenStack Platform 12 here.
What about logging?
Ansible to the rescue!
Ansible will use the log_path configuration option, set under the [defaults] section of /etc/ansible/ansible.cfg, an ansible.cfg in the playbook’s directory, or $HOME/.ansible.cfg. You just need to set this and forget it.
So let’s reload systemd, then enable and start the timer, and we’re off to the races:
$ sudo systemctl daemon-reload
$ sudo systemctl enable fernet-rotate.timer
$ sudo systemctl start fernet-rotate.timer
The timer activates the service on schedule, so the service itself does not need to be enabled.
Credit: Many thanks to Lance Bragstad and Dolph Mathews for the key rotation methodology.
Source: RedHat Stack