Enabling Keystone’s Fernet Tokens in Red Hat OpenStack Platform

As we learned in part one of this blog post, beginning with the OpenStack Kilo release, a new token provider is now available as an alternative to PKI and UUID. Fernet tokens are essentially an implementation of ephemeral tokens in Keystone. What this means is that tokens are no longer persisted and hence do not need to be replicated across clusters or regions.
“In short, OpenStack’s authentication and authorization metadata is neatly bundled into a MessagePacked payload, which is then encrypted and signed as a Fernet token. OpenStack Kilo’s implementation supports a three-phase key rotation model that requires zero downtime in a clustered environment.” (from: http://dolphm.com/openstack-keystone-fernet-tokens/)
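To make the quoted description concrete, here is a minimal sketch of the Fernet primitive itself, using the cryptography Python library that keystone builds on. The payload below is a placeholder rather than the real MessagePack bundle keystone produces, and the one-liner assumes the cryptography package is installed:

$ python -c '
from cryptography.fernet import Fernet          # same primitive keystone uses
key = Fernet.generate_key()                     # analogous to a key file in /etc/keystone/fernet-keys
f = Fernet(key)
token = f.encrypt(b"authorization metadata")    # stand-in payload
print(token)                                    # opaque, URL-safe blob, like a keystone Fernet token
print(f.decrypt(token))                         # only holders of the key can read it back
'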

In our previous post, I covered the different types of tokens, the benefits of Fernet, and a bit of the technical detail. In this part of our three-part series, we provide a method for enabling Fernet tokens on Red Hat OpenStack Platform 10, both pre- and post-deployment of the overcloud stack.
Pre-Overcloud Deployment
Official Red Hat documentation for enabling Fernet tokens in the overcloud can be found here:
Deploy Fernet on the Overcloud
Tools
We’ll be using Red Hat OpenStack Platform here, which means we’ll be interacting with the director node and Heat templates. Our primary tool is the command-line client keystone-manage, part of the tools provided by the openstack-keystone RPM and used to set up and manage keystone in the overcloud. Of course, we’ll be using the director-based deployment of Red Hat OpenStack Platform to enable Fernet pre- and/or post-deployment.
Prepare Fernet keys on the undercloud
This procedure will start with preparation of the Fernet keys, which a default deployment places on each controller in /etc/keystone/fernet-keys. Each controller must have the same keys, as tokens issued on one controller must be able to be validated on all controllers. Stay tuned to part three of this blog for an in-depth explanation of Fernet signing keys.

Source the stackrc file to ensure we are working with the undercloud:

$ source ~/stackrc

From your director, use keystone-manage to generate the Fernet keys as deployment artifacts:

$ sudo keystone-manage fernet_setup \
  --keystone-user keystone \
  --keystone-group keystone
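If the setup succeeded, the key repository should now contain a staged key (0) and a primary key (1). A quick check (a sketch; the two indexes shown assume a freshly initialized repository):

$ sudo ls /etc/keystone/fernet-keys
0  1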

Tar up the keys for upload into a swift container on the undercloud:

$ sudo tar -zcf keystone-fernet-keys.tar.gz /etc/keystone/fernet-keys

Upload the Fernet keys to the undercloud as swift artifacts (we assume your templates exist in ~/templates):

$ upload-swift-artifacts -f keystone-fernet-keys.tar.gz \
  --environment ~/templates/deployment-artifacts.yaml

Verify that your artifact exists in the undercloud:

$ swift list overcloud-artifacts
keystone-fernet-keys.tar.gz
NOTE: These keys should be secured as they can be used to sign and validate tokens that will have access to your cloud.

Let’s verify that deployment-artifacts.yaml exists in ~/templates (NOTE: your URL will differ from the one shown here, as it is a uniquely generated temporary URL):

$ cat ~/templates/deployment-artifacts.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - 'http://192.0.2.1:8080/v1/AUTH_c9d16242396b4eb1a0f950093fa9464c/overcloud-artifacts/keystone-fernet-keys.tar.gz?temp_url_sig=917bd467e70516581b1db295783205622606e367&temp_url_expires=1520463185'
NOTE: This is the swift URL that your overcloud deployment will use to copy the Fernet keys to your controllers.

Finally, generate the fernet.yaml template to enable Fernet as the default token provider in your overcloud:

$ cat << EOF > ~/templates/fernet.yaml
parameter_defaults:
  controllerExtraConfig:
    keystone::token_provider: 'fernet'
EOF
Deploy and Validate
At this point, you are ready to deploy your overcloud with Fernet enabled as the token provider, and your keys distributed to each controller in /etc/keystone/fernet-keys.
NOTE: This is an example deploy command; yours will likely include many more templates. For the purposes of our discussion, it is important simply that you include fernet.yaml as well as deployment-artifacts.yaml.
$ openstack overcloud deploy \
  --templates /home/stack/templates \
  -e /home/stack/templates/environments/deployment-artifacts.yaml \
  -e /home/stack/templates/environments/fernet.yaml \
  --control-scale 3 \
  --compute-scale 4 \
  --control-flavor control \
  --compute-flavor compute \
  --ntp-server pool.ntp.org
Testing
Once the deployment is done you should validate that your overcloud is indeed using Fernet tokens instead of the default UUID token provider. From the director node:
$ source ~/overcloudrc
$ openstack token issue
+------------+--------------------------------------------------+
| Field      | Value                                            |
+------------+--------------------------------------------------+
| expires    | 2017-03-22 19:16:21+00:00                        |
| id         | gAAAAABY0r91iYvMFQtGiRRqgMvetAF5spEZPTvEzCpFWr3  |
|            | 1IB8T8L1MRgf4NlOB6JsfFhhdxenSFob_0vEEHLTT6rs3Rw  |
|            | q3-Zm8stCF7sTIlmBVms9CUlwANZOQ4lRMSQ6nTfEPM57kX  |
|            | Xw8GBGouWDz8hqDYAeYQCIHtHDWH5BbVs_yC8ICXBk       |
| project_id | f8adc9dea5884d23a30ccbd486fcf4c6                 |
| user_id    | 2f6106cef80741c6ae2bfb3f25d70eee                 |
+------------+--------------------------------------------------+
Note the length of this token in the “id” field. This is a Fernet token.
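Beyond eyeballing the output, the token length itself is a useful tell: a UUID token id is 32 hexadecimal characters, while a Fernet token is typically a couple of hundred characters. A quick, hedged check (assumes the python-openstackclient formatting flags shown here):

$ openstack token issue -f value -c id | wc -c

A count in the hundreds indicates Fernet; 33 (32 characters plus a trailing newline) indicates UUID.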
Enabling Fernet Post Overcloud Deployment
Part of the power of the Red Hat OpenStack Platform director deployment methodology lies in its ability to easily upgrade and change a running overcloud. Features such as Fernet, scaling, and complex service management can be managed by running a deployment update directly against a running overcloud.
Updating is really straightforward. If you’ve already deployed your overcloud with UUID tokens, you can change them to Fernet by following the pre-deploy example above and running the openstack overcloud deploy command again, with the Heat templates mentioned above included, against your running deployment. This will change your overcloud’s default token provider to Fernet. Be sure to deploy with your original deploy command, as any changes there could affect your overcloud. And of course, standard outage windows apply: production changes should be tested and prepared accordingly.
Conclusion
I hope you’ve enjoyed our discussion on enabling Fernet tokens in the overcloud and that it has shed some light on the process. Official documentation on these concepts and on enabling Fernet tokens in the overcloud is available.
In our final installment on this topic we’ll look at some of the many methods for rotating your newly enabled Fernet keys on your controller nodes. We’ll be using Red Hat’s awesome IT automation tool, Red Hat Ansible, to do just that.
Source: RedHat Stack

Red Hat OpenStack Platform 12 Is Here!

We are happy to announce that Red Hat OpenStack Platform 12 is now Generally Available (GA).
This is Red Hat OpenStack Platform’s 10th release and is based on the upstream OpenStack release, Pike.
Red Hat OpenStack Platform 12 is focused on the operational aspects of deploying OpenStack. OpenStack has established itself as a solid technology choice, and with this release we are working hard to further improve the usability aspects and bring OpenStack and operators into harmony.

With operationalization in mind, let’s take a quick look at some of the biggest and most exciting features now available.

Containers.
As containers are changing and improving IT operations it only stands to reason that OpenStack operators can also benefit from this important and useful technology concept. In Red Hat OpenStack Platform we have begun the work of containerizing the control plane. This includes some of the main services that run OpenStack, like Nova and Glance, as well as supporting technologies, such as Red Hat Ceph Storage. All these services can be deployed as containerized applications via Red Hat OpenStack Platform’s lifecycle and deployment tool, director.
Bringing a containerized control plane to OpenStack is important. Through it we can immediately enhance, among other things, stability and security features through isolation. By design, OpenStack services often have complex, overlapping library dependencies that must be accounted for in every upgrade, rollback, and change. For example, if Glance needs a security patch that affects a library shared by Nova, time must be spent to ensure Nova can survive the change; or even more frustratingly, Nova may need to be updated itself. This makes the change effort, and the resulting change window and impact, much more challenging. Simply put, it’s an operational headache.
However, when we isolate those dependencies into a container we are able to work with services with much more granularity and separation. An urgent upgrade to Glance can be done alongside Nova without affecting it in any way. With this granularity, operators can more easily quantify and test the changes helping to get them to production more quickly.
We are working closely with our vendors, partners, and customers to move to this containerized approach in a way that is minimally disruptive. Upgrading from a non-containerized control plane to one with most services containerized is fully managed by Red Hat OpenStack Platform director. Indeed, when upgrading from Red Hat OpenStack Platform 11 to Red Hat OpenStack Platform 12 the entire move to containerized services is handled “under the hood” by director. With just a few simple preparatory steps director delivers the biggest change to OpenStack in years direct to your running deployment in an almost invisible, simple to run, upgrade. It’s really cool!
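As a rough illustration of what this looks like on a deployed system, you can list the service containers on a controller node. This is a sketch rather than an official verification step, and the exact container names vary by release and role:

$ ssh heat-admin@<controller-ip>
$ sudo docker ps --format '{{.Names}}' | grep -E 'keystone|glance|nova'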
Red Hat Ansible.
Like containers, it’s pretty much impossible to work in operations and not be aware of, or more likely be actively using, Red Hat Ansible. Red Hat Ansible is known to be easier to use for customising and debugging; most operators are more comfortable with it, and it generally provides an overall nicer experience through a straightforward and easy to read format.

Of course, we at Red Hat are excited to include Ansible as a member of our own family. With Red Hat Ansible we are actively integrating this important technology into more and more of our products.
In Red Hat OpenStack Platform 12, Red Hat Ansible takes center stage.
But first, let’s be clear, we have not dropped Heat; there are very real requirements around backward compatibility and operator familiarity that are delivered with the Heat template model.
But we don’t have to compromise because of this requirement. With Ansible we are offering operator and developer access points independent of the Heat templates. We use the same composable services architecture as we had before; the Heat-level flexibility still works the same, we just translate to Ansible under the hood.
Simplistically speaking, before Ansible, our deployments were mostly managed by Heat templates driving Puppet. Now, we use Heat to drive Ansible by default, and then Ansible drives Puppet and other deployment activities as needed. And with the addition of containerized services, we also have positioned Ansible as a key component of the entire container deployment. By adding a thin layer of Ansible, operators can now interact with a deployment in ways they could not previously.
For instance, take the new openstack overcloud config download command. This command allows an operator to generate all the Ansible playbooks being used for a deployment into a local directory for review. And these aren’t mere interpretations of Heat actions, these are the actual, dynamically generated playbooks being run during the deployment. Combine this with Ansible’s cool dynamic inventory feature, which allows an operator to maintain their Ansible inventory file based on a real-time infrastructure query, and you get an incredibly powerful troubleshooting entry point.
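For illustration, a rough sketch of what this might look like from the undercloud (the exact flag names may differ slightly between releases, so treat this as an outline rather than a recipe):

$ source ~/stackrc
$ openstack overcloud config download --config-dir ~/config-download
$ ls ~/config-download
$ ansible -i /usr/bin/tripleo-ansible-inventory all --list-hosts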
Check out this short (1:50) video showing Red Hat Ansible and this new exciting command and concept:

Network composability.
Another major new addition for operators is the extension of the composability concept into networks.
As a reminder, when we speak about composability we are talking about enabling operators to create detailed solutions by giving them basic, simple, defined components from which they can build for their own unique, complex topologies.
With composable networks, operators are no longer only limited to using the predefined networks provided by director. Instead, they can now create additional networks to suit their specific needs. For instance, they might create a network just for NFS filer traffic, or a dedicated SSH network for security reasons.
And as expected, composable networks work with composable roles. Operators can create custom roles and apply multiple, custom networks to them as required. The combinations lead to an incredibly powerful way to build complex enterprise network topologies, including an on-ramp to the popular L3 spine-leaf topology.
And to make it even easier to put together we have added automation in director that verifies that resources and Heat templates for each composable network are automatically generated for all roles. Fewer templates to edit can mean less time to deployment!
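As a rough sketch of the idea, a dedicated NFS network might be declared in a networks data file and passed to the deploy command with the networks file option. The file name, field names, and subnet values here are illustrative; check the product documentation for the exact syntax supported by your release:

$ cat << EOF >> ~/templates/network_data.yaml
- name: StorageNFS
  name_lower: storage_nfs
  enabled: true
  vip: false
  ip_subnet: '172.17.5.0/24'
  allocation_pools: [{'start': '172.17.5.10', 'end': '172.17.5.250'}]
EOF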
Telco speed.
Telcos will be excited to know we are now delivering production ready virtualized fast data path technologies. This release includes Open vSwitch 2.7 and the Data Plane Development Kit (DPDK) 16.11 along with improvements to Neutron and Nova allowing for robust virtualized deployments that include support for large MTU sizing (i.e. jumbo frames) and multiple queues per interface. OVS+DPDK is now a viable option alongside SR-IOV and PCI passthrough in offering more choice for fast data in Infrastructure-as-a-Service (IaaS) solutions.
Operators will be pleased to see that these new features can be more easily deployed thanks to new capabilities within Ironic, which store environmental parameters during introspection. These values are then available to the overcloud deployment providing an accurate view of hardware for ideal tuning. Indeed, operators can further reduce the complexity around tuning NFV deployments by allowing director to use the collected values to dynamically derive the correct parameters resulting in truly dynamic, optimized tuning.
Serious about security.

Helping operators, and the companies they work for, focus on delivering business value instead of worrying about their infrastructure is core to Red Hat’s thinking. And one way we make sure everyone sleeps better at night with OpenStack is through a dedicated focus on security.
Starting with Red Hat OpenStack Platform 12 we have more internal services using encryption than in any previous release. This is an important step for OpenStack as a community to help increase adoption in enterprise datacenters, and we are proud to be squarely at the center of that effort. For instance, in this release even more services now feature internal TLS encryption.
Let’s be realistic, though, focusing on security extends beyond just technical implementation. Starting with Red Hat OpenStack Platform 12 we are also releasing a comprehensive security guide, which provides best practices as well as conceptual information on how to make an OpenStack cloud more secure. Our security stance is firmly rooted in meeting global standards from top international agencies such as FedRAMP (USA), ETSI (Europe), and ANSSI (France). With this guide, we are excited to share these efforts with the broader community.
Do you even test?
How many times has someone asked an operations person this question? Too many! “Of course we test,” they will say. And with Red Hat OpenStack Platform 12 we’ve decided to make sure the world knows we do, too.
Through the concept of Distributed Continuous Integration (DCI), we place remote agents on site with customers, partners, and vendors that continuously build our releases at all different stages on all different architectures. By engaging outside resources we are not limited by internal resource restrictions; instead, we gain access to hardware and architecture that could never be tested in any one company’s QA department. With DCI we can fully test our releases to see how they work under an ever-increasing set of environments. We are currently partnered with major industry vendors for this program and are very excited about how it helps us make the entire OpenStack ecosystem better for our customers.
So, do we even test? Oh, you bet we do!
Feel the love!
And this is just a small piece of the latest Red Hat OpenStack Platform 12 release. Whether you are looking to try out a new cloud, or thinking about an upgrade, this release brings a level of operational maturity that will really impress!
Now that OpenStack has proven itself an excellent choice for IaaS, it can focus on making itself a loveable one.
Let Red Hat OpenStack Platform 12 reignite the romance between you and your cloud!

Red Hat OpenStack Platform 12 is designated as a “Standard” release with a one-year support window. Click here for more details on the release lifecycle for Red Hat OpenStack Platform.
Find out more about this release at the Red Hat OpenStack Platform Product page. Or visit our vast online documentation.
And if you’re ready to get started now, check out the free 60-day evaluation available on the Red Hat portal.
Looking for even more? Contact your local Red Hat office today.
 
Source: RedHat Stack

Using Ansible for Fernet Key Rotation on Red Hat OpenStack Platform 11

In our first blog post on the topic of Fernet tokens, we explored what they are and why you should think about enabling them in your OpenStack cloud. In our second post, we looked at the method for enabling Fernet tokens in a Red Hat OpenStack Platform overcloud, both before and after deployment.
Fernet tokens in Keystone are fantastic. Enabling these, instead of UUID or PKI tokens, really does make a difference in your cloud’s performance and overall ease of management. I get asked a lot about how to manage keys on your controller cluster when using Fernet. As you may imagine, this could potentially take your cloud down if you do it wrong. Let’s review what Fernet keys are, as well as how to manage them in your Red Hat OpenStack Platform cloud.

Prerequisites

A Red Hat OpenStack Platform 11 director-based deployment
One or more controller nodes
Git command-line client

What are Fernet Keys?
Fernet keys are used to encrypt and decrypt Fernet tokens in OpenStack’s Keystone API. These keys are stored on each controller node, and must be available to authenticate and validate users of the various OpenStack components in your cloud.
Any given implementation of keystone can have n keys, based on the max_active_keys setting in /etc/keystone/keystone.conf. This number includes all of the key types listed below.
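You can check the current value on a controller node; a minimal sketch, assuming the crudini utility is available (it usually is on director-deployed nodes), with the value shown here purely as an example:

$ sudo crudini --get /etc/keystone/keystone.conf fernet_tokens max_active_keys
5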
There are essentially three types of keys:
Primary
Primary keys are used for token generation and validation. You can think of this as the active key in your cloud. Any time a user authenticates, or is validated by an OpenStack API, these are the keys that will be used. There can only be one primary key, and it must exist on all nodes (usually controllers) that are running the keystone API. The primary key is always the highest indexed key.
Secondary
Secondary keys are only used for token validation. These keys are rotated out of primary status, and thus are used to validate tokens that may exist after a new primary key has been created. There can be multiple secondary keys, the oldest of which will be deleted based on your max_active_keys setting after each key rotation.
Staged
The staged key is always the lowest indexed key (0). Whenever keys are rotated, the staged key is promoted to primary status at the next highest index. The staged key exists so that you can copy it to all nodes in your cluster before it is promoted to primary status. This avoids the potential issue where keystone fails to validate a token because the key used to encrypt it does not yet exist in /etc/keystone/fernet-keys.
The following example shows the keys that you’d see in /etc/keystone/fernet-keys, with max_active_keys set to 4.
0 (staged: the next primary key)
1 (primary: token generation & validation)

Upon performing a key rotation, our staged key (0) becomes the new primary key (2), while our old primary key (1) moves to secondary status (remaining at index 1).
0 (staged: the next primary key)
1 (secondary: token validation)
2 (primary: token generation & validation)

We have three keys here, so yet another key rotation will produce the following result:
0 (staged: the next primary key)
1 (secondary: token validation)
2 (secondary: token validation)
3 (primary: token generation & validation)
Our staged key (0) now becomes our primary key (3). Our old primary key (2) now becomes a secondary key (2), and (1) remains a secondary key.
We now have four keys, the number we’ve set in max_active_keys. One more final rotation would produce the following:
0 (staged: the next primary key)
1 (deleted)
2 (secondary: token validation)
3 (secondary: token validation)
4 (primary: token generation & validation)
Our oldest secondary key (1) is deleted. Our previously staged key (0) is moved to primary status (4). A new staged key (0) is created. And finally, our old primary key (3) is moved to secondary status.
If you haven’t noticed by now, rotating keys will always remove the key with the lowest index (excluding 0) once the number of keys exceeds max_active_keys. Additionally, be careful to set max_active_keys to a value that makes sense given your token lifetime and how often you plan to rotate your keys.
When to rotate?
The answer to this question would probably be different for most organizations. My take on this is simply: if you can do it safely, why not automate it and do it on a regular basis? Your threat model and use-case would normally dictate this or you may need to adhere to certain encryption and key management security controls in a given compliance framework. Whatever the case, I think about regular key rotation as a best-practices security measure. You always want to limit the amount of sensitive data, in this case Fernet tokens, encrypted with a single version of any given encryption key. Rotating your keys on a regular basis creates a smaller exposure surface for your cloud and your users.
How many keys do you need active at one time? This all depends on how often you plan to rotate them, as well as how long your token lifetime is. The answer to this can be expressed in the following equation:
fernet-keys = token-validity(hours) / rotation-time(hours) + 2
Let’s use an example of rotation every 8 hours, with a default token lifetime of 24 hours. This gives us:
24 hours / 8 hours + 2 = 5
Five keys on your controllers would ensure that you always had an active set of keys for your cloud. With this in mind, let’s look at a way to rotate your keys using Ansible.
Rotating Fernet keys
So you may be wondering, how does one automate this process? You can imagine that this process can be painful and prone to error if done by hand. While you could use the keystone-manage fernet_rotate command to do this on each node manually, why would you?
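For reference, the manual per-node approach is a single keystone-manage call (run on each controller, typically after distributing the staged key first):

$ sudo keystone-manage fernet_rotate \
  --keystone-user keystone \
  --keystone-group keystone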
Let’s look at how to do this with Ansible, Red Hat’s awesome tool for automation. If you’re new to Ansible, please do yourself a favor and check out this quick-start video.
We’ll be using an Ansible role, created by my fellow Red Hatter Juan Antonio Osorio (Ozz), one of the coolest guys I know. This is just one way of doing this. For a Red Hat OpenStack Platform install you should contact Red Hat support to review your options and support implications. And of course, your results may vary so be sure to test out on a non-production install!
Let’s start by logging into your Red Hat OpenStack director node as the stack user and placing the tripleo-fernet-keys-rotation role in a roles directory in /home/stack (this is where the Git client from the prerequisites comes in). Then create a small playbook that applies the role to your controllers:
$ cat << EOF > ~/rotate.yml
- hosts: controller
  become: true
  roles:
    - tripleo-fernet-keys-rotation
EOF
We need to source our stackrc, as we’ll be operating on our controller nodes in the next step:
$ source ~/stackrc
Using the dynamic inventory from /usr/bin/tripleo-ansible-inventory, we’ll run this playbook and rotate the keys on our controllers:
$ ansible-playbook -i /usr/bin/tripleo-ansible-inventory rotate.yml
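Once the playbook finishes, one way to confirm that every controller ended up with the same key indexes is an Ansible ad-hoc command against the same dynamic inventory. This is a quick sketch, not part of the role:

$ ansible controller -i /usr/bin/tripleo-ansible-inventory \
  -b -m command -a "ls /etc/keystone/fernet-keys"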
Ansible Role Analysis
What happened? Looking at Ansible’s output, you’ll note that several tasks were performed. If you’d like to see these tasks, look no further than /home/stack/roles/tripleo-fernet-keys-rotation/tasks/main.yml:
This task runs a Python script, generate_key_yaml.py, from the role’s files directory (~/roles/tripleo-fernet-keys-rotation/files), which creates a new Fernet key:
- name: Generate new key
  script: generate_key_yaml.py
  register: new_key_register
  run_once: true

This task takes the stdout of the previous task and sets it as the fact new_key:

- name: Set new key fact
  set_fact:
    new_key: "{{ new_key_register.stdout }}"
Next, we find the highest key index currently present in /etc/keystone/fernet-keys:
- name: Get current primary key index
  shell: ls /etc/keystone/fernet-keys | sort -r | head -1
  register: current_key_index_register
Let’s set the next primary key index:
- name: Set next key index fact
  set_fact:
    next_key_index: "{{ current_key_index_register.stdout|int + 1 }}"
Now we’ll move the staged key to the new primary index:
- name: Move staged key to new index
  command: mv /etc/keystone/fernet-keys/0 /etc/keystone/fernet-keys/{{ next_key_index }}
Next, let’s write our new key out as the new staged key:
- name: Set new key as staged key
  copy:
    content: "{{ new_key }}"
    dest: /etc/keystone/fernet-keys/0
    owner: keystone
    group: keystone
    mode: 0600
Finally, we’ll reload (not restart) httpd on the controller, allowing keystone to load the new keys:
- name: Reload httpd
  service:
    name: httpd
    state: reloaded
Scheduling
Now that we have a way to automate rotation of our keys, it’s time to schedule this automation. There are several ways you could do this:
Cron
You could, but why?
Systemd Realtime Timers
Let’s create the systemd service that will run our playbook:
$ sudo tee /etc/systemd/system/fernet-rotate.service > /dev/null << EOF
[Unit]
Description=Run an Ansible playbook to rotate fernet keys on the overcloud

[Service]
User=stack
Group=stack
ExecStart=/usr/bin/ansible-playbook -i /usr/bin/tripleo-ansible-inventory /home/stack/rotate.yml
EOF
Now we’ll create a timer with the same name, only with .timer as the suffix, in /etc/systemd/system on the director node:
$ sudo tee /etc/systemd/system/fernet-rotate.timer > /dev/null << EOF
[Unit]
Description=Timer to rotate our Overcloud Fernet Keys weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
EOF
Ansible Tower
I like how you’re thinking! But that’s a topic for another day.
Red Hat OpenStack Platform 12
Red Hat OpenStack Platform 12 provides support for key rotation via Mistral. Learn all about Red Hat OpenStack Platform 12 here.
What about logging?
Ansible to the rescue!
Ansible will use the log_path configuration option from /etc/ansible/ansible.cfg, ansible.cfg in the directory of the playbook, or $HOME/.ansible.cfg. You just need to set this and forget it.
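A minimal sketch of that configuration, assuming you want the rotation logs in the stack user’s home directory (if your ansible.cfg already has a [defaults] section, add the log_path line to it instead of appending a new section):

$ cat << EOF >> /home/stack/.ansible.cfg
[defaults]
log_path = /home/stack/fernet-rotate.log
EOF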
So let’s enable this service and timer, and we’re off to the races:
$ sudo systemctl enable fernet-rotate.service
$ sudo systemctl enable fernet-rotate.timer
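Enabling the timer only arms it for the next boot; to start it immediately and confirm the schedule, something like the following should work:

$ sudo systemctl start fernet-rotate.timer
$ systemctl list-timers fernet-rotate.timer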
Credit: Many thanks to Lance Bragstad and Dolph Mathews for the key rotation methodology.
Source: RedHat Stack

Far Right Activist Charles Johnson Has Sued Twitter Over His Suspension


For years, the controversial right-wing activist Charles C. Johnson has threatened to sue Twitter, which banned him in 2015.

Now, following a BuzzFeed News report that revealed the internal debate behind Twitter’s 2015 decision to bar him from its service, Johnson is putting his money where his mouth has long been.

In a lawsuit filed in California Superior Court in San Francisco on Monday, Johnson’s attorney Robert E. Barnes claims that the microblogging service banned his client for his political views, violating his right to free speech and breaking its contract with him in the process. In addition, the suit seeks millions of dollars of relief for alleged damage to Johnson’s media businesses. It was the second lawsuit filed today by a conservative activist against a tech superpower, following ex-Googler James Damore's suit against his former employer.

“This is going to be a very serious case over the freedom of the internet,” Johnson told BuzzFeed News, “And whether people have the right to say what they mean and mean what they say.”

Informed of the suit before it was filed, Twitter declined to comment on pending litigation.

The Johnson suit comes at a time when Americans across the political spectrum have become skeptical of the amount of power held by Silicon Valley giants and suspicious of their motives. It joins several other lawsuits by conservative parties against big tech platforms that claim tech companies like Twitter and Facebook discriminate against right-wing users. And while Johnson has a history of unsuccessful legal action, this suit hopes to test whether the various laws that have historically protected internet publishers are strong enough to withstand this new public scrutiny.

Twitter permanently suspended Johnson — a former Breitbart reporter who owns the crowd-sourced investigations site WeSearchr — in May of 2015 after he asked for donations to help “take out” civil rights activist DeRay Mckesson. While he claimed the tweet was taken out of context, prior to his suspension Johnson had drawn the company’s ire for his incendiary tweets — among them false rumors that President Obama was gay. In 2014 he was temporarily suspended from Twitter for posting photos and the address of an individual he claimed had been exposed to the Ebola virus. (After his suspension — as BuzzFeed News reported in December — Johnson began shorting Twitter's stock and attempting to enlist a range of conservative figures to help him sue the company.)

While the complaint takes issue with Twitter’s vague rules and inability to “convey a sufficiently definite warning” to Johnson for his behavior, the suit alleges that emails published by BuzzFeed News prove that the ban was “a political hit job on a politically disfavored individual.” In one January 2016 email to executives including current CEO Jack Dorsey, Tina Bhatnagar, Twitter’s VP of user services, suggested that Johnson’s suspension was a judgement call, rather than a strict interpretation of company rules. “We perma suspended Chuck Johnson even though it wasn't direct violent threats. It was just a call that the policy team made,” she wrote.

“That account is permanently suspended and nobody for no reason may reactivate it. Period.”

In a subsequent email, Twitter’s general counsel, Vijaya Gadde referenced a May 25, 2015, email from Costolo, which suggested the decision to make Johnson’s suspension permanent was Costolo’s. “As for Chuck Johnson – [former Twitter CEO] Dick [Costolo] made that decision,” Gadde wrote. Johnson’s complaint quotes the 2015 email from Costolo, in which the former CEO warns senior staff, “I don't want to find out we unsuspended this Chuck Johnson troll later on. That account is permanently suspended and nobody for no reason may reactivate it. Period. The press is reporting it as temporarily suspended. It is not temporarily suspended it is permanently suspended. I'm not sure why they're mistakenly reporting it as temporarily suspended but that's not the case here…don't let anybody unsuspend it.”

Costolo’s email, according to the complaint, “confirms that Twitter’s decision to permanently ban Johnson was not based on a perceived rule violation, but bias against Johnson.”

But even if Johnson’s attorneys are able to show that Twitter broke its contract with Johnson by banning him arbitrarily, the suit faces long odds. According to Eric Goldman, director of the Santa Clara University School of Law’s High Tech Law Institute, Twitter possesses a range of legal protections when it decides to ban a user.

“Twitter can choose to terminate anyone’s account at any time without repercussion,” Goldman told BuzzFeed News. “It has a categorical right to block whoever they choose.”

As a publisher, Twitter is protected by the First Amendment. And as an internet service provider, Twitter is protected by Section 230 of the Communications Decency Act — often referred to as the most influential law in the development of the modern internet — which has historically immunized providers’ decisions to terminate accounts.

The protections of Section 230 depend on the “good faith” of the provider, and Johnson’s suit argues that the emails reported by BuzzFeed demonstrate the lack thereof. And yet Johnson’s own reputation for bad faith may undercut that argument.

“It’s clear Twitter blocked him because they consider him a troll,” Goldman said.

In addition, Johnson’s suit argues that Twitter “performs an exclusively and traditionally public function,” and so it shouldn’t have the right to ban him for speech it doesn’t like. According to Goldman, such arguments have historically been unsuccessful in the courts, in part because judges are loath to set a potentially sweeping new precedent. Still, it’s an area where growing public resentment of big tech’s monopolistic power could have influence over a judge or a jury.

“We can’t ignore that there is such skepticism towards internet companies’ consolidation of power,” Goldman said. “The prevailing environment makes it dangerous for them.”

And for Johnson, who seems to want to embarrass Twitter as much as he wants to make a broader statement about the nature of internet platforms and the way they discriminate against conservatives, simply getting the suit past an initial motion to dismiss — and into discovery — might represent a victory.

“You can lose a lawsuit and still win the argument,” Johnson said.

Source: BuzzFeed

A comparison of different Containers-as-a-Service products

Just because you’re using containers instead of VMs doesn’t mean that your need for self-service infrastructure goes away. In fact, the “instant gratification” nature of containers means that it’s more important than ever for your developers to be able to get the resources they need at a moment’s notice, and that changes your requirements from Infrastructure-as-a-Service to Containers-as-a-Service (CaaS).

Naturally, there are multiple companies working on products to fill that CaaS need, and as one of them, we often get asked about the differences between them. And there are lots of differences. Some provide containers, some provide entire clusters. Some support multi-cloud installs, some don’t. There are so many variations, in fact, that we decided to put together a features matrix that explains it.

You can see the basics here in this post, or you can see the details. Either way, we’d love to hear your opinions on what’s important to you!
Source: Mirantis