[Podcast] PodCTL #20 – Gathering Kubernetes Communities

As we enter 2018, we know a few things about the Kubernetes community: It continues to grow larger each year, as evidenced by the growth of KubeCon. More companies are trying to run Kubernetes in production. More technology vendors are making Kubernetes a priority, and providing various levels of integration. More and more companies are […]
Source: OpenShift

Hooroo! Australia bids farewell to incredible OpenStack Summit

We have reached the end of another successful and exciting OpenStack Summit. Sydney did not disappoint, giving attendees a wonderful show of weather ranging from rain and wind to bright, brilliant sunshine. The running joke was that Sydney was, again, just trying to be like Melbourne. Most locals will get that joke, and hopefully now some of our international visitors do, too!
Monty Taylor (Red Hat), Mark Collier (OpenStack Foundation), and Lauren Sell (OpenStack Foundation) open the Sydney Summit. (Photo: Author)

And much like the varied weather, the Summit really reflected the incredible diversity of both technology and community that we in the OpenStack world are so proud of. With over 2,300 attendees from 54 countries, this Summit was noticeably more intimate but no less dynamic. Often, a smaller group of people allows for a more personal experience and increases the opportunities for deep, important interactions.
To my enjoyment, I found that, unlike at previous Summits, there wasn't a single dominant technological theme. In Boston it was impossible to turn a corner and not bump into a container talk. While containers were still a strong theme here in Sydney, I felt the general emphasis had moved away from specific technologies and toward use cases and solutions. It feels like the OpenStack community has now matured to the point that it's able to focus less on each specific technology piece and more on the business value those pieces create when working together.
Jonathan Bryce (OpenStack Foundation) (Photo: Author)
It is exciting to see both Red Hat associates and customers following this solution-based thinking with sessions demonstrating the business value that our amazing technology creates. Consider sessions such as SD-WAN – The open source way, where the complex components required for a solution are reviewed and then demoed live as a complete solution. Truly exceptional. Or check out an overview of how the many components of an NFV solution come together to form a successful business story in A Telco Story of OpenStack Success.
At this Summit I felt that, while the sessions still contained the expected technical content, they rarely lost sight of the end goal: that OpenStack is becoming a key, and necessary, component in enabling true enterprise business value from IT systems.
To this end I was also excited to see over 40 sessions from Red Hat associates and our customers covering a wide range of industry solutions and use cases. From telcos to insurance companies, it is really exciting to see both our associates and our customers, especially those in Australia and New Zealand, sharing with the world their experiences with our solutions.
Mark McLoughlin, Senior Director of Engineering at Red Hat, with Paddy Power Betfair's Steven Armstrong and Thomas Andrew, getting ready for a Facebook Live session (Photo: Anna Nathan)
Of course, there were too many sessions to attend in person, and with the wonderfully dynamic and festive air of the Marketplace offering great demos, swag, food, and, most importantly, conversations, I’m grateful for the OpenStack Foundation’s rapid publishing of all session videos. It’s a veritable pirate’s bounty of goodies and I recommend checking it out sooner rather than later on their website.
I was able to attend a few talks from Red Hat customers and associates that really got me thinking and excited. The themes were varied, from the growing world of Edge computing, to virtualizing network operations, to changing company culture; Red Hat and our customers are doing very exciting things.

Digital Transformation
Take, for instance, Telstra, who are using Red Hat OpenStack Platform as part of a virtual router solution. The journey started two years ago with a virtualized network component delivered as an internal trial. This took a year to complete and was a big success from both a technological and a cultural standpoint. As Senior Technology Specialist Andrew Harris from Telstra pointed out during the Q&A of his session, projects like this are not only about implementing new technology but also about "educating … staff in Linux, OpenStack and IT systems." It was a great session, co-presented with Juniper and Red Hat, and it really gets into how Telstra are able to deliver key business requirements such as reliability, redundancy, and scale while still meeting strict cost requirements.

Of course this type of digital transformation story is not limited to telcos. The use of OpenStack as a catalyst for company change as well as advanced solutions was seen strongly in two sessions from Australia's Insurance Australia Group (IAG).
Eddie Satterly, IAG (Photo: Author)
Product Engineering and DataOps Lead Eddie Satterly recounted the journey IAG took to consolidate data for a better customer experience using open source technologies. IAG uses Red Hat OpenStack Platform as the basis for an internal open source revolution that has not only led to significant cost savings but has even resulted in the IAG team open sourcing some of the tools that made it happen. Check out the full story of how they did it, and join TechCrunch reporter Frederic Lardinois as he chats with Eddie about the entire experience. There's also a Facebook Live chat Eddie did with Mark McLoughlin, Senior Director of Engineering at Red Hat, that further tells their story.
Ops!
An area of excitement for those of us with roots in the operational space is the way that OpenStack continues to become easier to install and maintain. The evolution of TripleO, the upstream project for Red Hat OpenStack Platform's deployment and lifecycle management tool known as director, has really reached a high point in the Pike cycle. With Pike, TripleO has begun utilizing Ansible as the core "engine" for upgrades, container orchestration, and lifecycle management. Check out Senior Principal Software Engineer Steve Hardy's deep dive into all the cool things TripleO is doing, and learn just how excited the new "openstack overcloud config download" command is going to make you and your Ops team.
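If you cannot wait for the video, the gist is that director's deployment steps can be exported as plain Ansible that you can read, and even run, yourself. A rough sketch is below; exact options vary by release, and the output directory is just an example:

    # Export the overcloud's generated Ansible playbooks and per-node configuration
    openstack overcloud config download --config-dir ~/overcloud-config

    # Inspect what would be applied to each node
    ls ~/overcloud-config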
Steve Hardy (Red Hat) and Jaromir Coufal (Red Hat) (Photo: Author)
And as a quick companion to Steve's talk, don't miss his joint lightning talk with Red Hat Senior Product Manager Jaromir Coufal, lovingly titled OpenStack in Containers: A Deployment Hero's Story of Love and Hate, for an excellent 10-minute intro to the journey of OpenStack, containers, and deployment.
Want more? Don’t miss these sessions …
Storage and OpenStack:

Delivering OpenStack and Ceph in containers
Panel: Experiences Scaling File Storage with CephFS and OpenStack
CephFS: Now fully awesome (What is the impact of CephFS on OpenStack cloud)

Containers and OpenStack:

Standing Up and Operating a Container Service on top of OpenStack Using OpenShift
Bringing Worlds Together: Designing and Deploying Kubernetes on an OpenStack multi-site environment

Telcos and OpenStack

Will OpenStack be Everywhere in 5G Networks?
“I pity the fool that builds his own cloud!”–Overcoming challenges of OpenStack based telco clouds
Sprint and the Open Telco Transformation

A great event

Although only three days long, this Summit really did pack a sizeable amount of content into that time. Being able to have the OpenStack world come to Sydney and enjoy a bit of Australian culture was really wonderful. Whether we were watching the world-famous Melbourne Cup horse race with a room full of OpenStack developers and operators, or cruising Sydney's famous harbour and discussing the merits of cloud storage with the community, it really was a unique and exceptional week.
The Melbourne Cup is about to start! (Photo: Author)
The chance to see colleagues from across the globe, immersed in the technical content and environment they love, supporting and learning alongside customers, vendors, and engineers is incredibly exhilarating. In fact, despite the tiredness at the end of each day, I went to bed each night feeling more and more excited about the next day, week, and year in this wonderful community we call OpenStack!

See you in Vancouver!
Photo: Darin Sorrentino
Source: RedHat Stack

Flip: Samsung unveils electronic whiteboard

Samsung's electronic whiteboard Flip is meant to make collaborative work easier: up to four people can make inputs at the same time, and screenshots from mobile devices can be transferred to the Flip. The board can be rotated 90 degrees. (CES 2018, Samsung)
Source: Golem

An Introduction to Fernet tokens in Red Hat OpenStack Platform

Thank you for joining me to talk about Fernet tokens. In this first of three posts on Fernet tokens, I'd like to go over the definition of OpenStack tokens, the different token types, and why Fernet tokens should matter to you. This series will conclude with some awesome examples of how to use Red Hat Ansible to manage your Fernet token keys in production.
First, some definitions …
What is a token? OpenStack tokens are bearer tokens, used to authenticate and validate users and processes in your OpenStack environment. Pretty much any time anything happens in OpenStack, a token is involved. The OpenStack Keystone service is the core service that issues and validates tokens. Using these tokens, users and software clients authenticate via APIs, receive a token, and then present that token when requesting operations ranging from creating compute resources to allocating storage. Services like Nova or Ceph then validate that token with Keystone and either continue with or deny the requested operation. The following diagram shows a simplified version of this dance.
Courtesy of the author
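To make that flow concrete, here is a minimal sketch of a client obtaining and using a token with the keystoneauth1 Python library; the endpoints and credentials below are placeholders, not real values:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Authenticate against Keystone; a successful password authentication
    # yields a token that the session caches and reuses.
    auth = v3.Password(
        auth_url='http://keystone.example.com:5000/v3',  # placeholder endpoint
        username='demo',
        password='secret',
        project_name='demo',
        user_domain_id='default',
        project_domain_id='default',
    )
    sess = session.Session(auth=auth)

    # The bearer token itself, as issued by Keystone.
    print(sess.get_token())

    # Subsequent API calls carry the token in the X-Auth-Token header; the
    # receiving service (Nova, in this placeholder example) validates it with
    # Keystone before performing the requested operation.
    servers = sess.get('http://nova.example.com:8774/v2.1/servers')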

Token Types
Tokens come in several types, referred to as "token providers" in Keystone parlance. These types can be set at deployment time or changed post-deployment. Ultimately, you'll have to decide what works best for your environment, given your organization's workload in the cloud.
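As an aside, the token provider is ultimately just a Keystone configuration option. A minimal sketch of what that looks like in keystone.conf is shown below (fernet is used as the example value; in a director-based deployment this setting would typically be managed by your deployment tooling rather than edited by hand):

    [token]
    provider = fernet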
The following types of tokens exist in Keystone:
UUID (Universal Unique Identifier)
The default token provider in Keystone is UUID. This is a 32-byte bearer token that must be persisted (stored) across controller nodes, along with its associated metadata, in order to be validated.
PKI & PKIZ (public key infrastructure)
This token format is deprecated as of the OpenStack Ocata release, which means it is deprecated in Red Hat OpenStack Platform 11. This format is also persisted across controller nodes. PKI tokens contain catalog information for the user that bears them, and thus can get quite large, depending on how large your cloud is. PKIZ tokens are simply compressed versions of PKI tokens.
Fernet
Fernet tokens (pronounced fehr:NET) are message-packed tokens that contain authentication and authorization data. Fernet tokens are signed and encrypted before being handed out to users. Most importantly, however, Fernet tokens are ephemeral. This means they do not need to be persisted across clustered systems in order to be successfully validated.
Fernet was originally a secure messaging format created by Heroku. The OpenStack implementation of this lightweight and more API-friendly format was developed by the OpenStack Keystone core team.
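To get a feel for the format itself, here is a minimal sketch using the Fernet implementation from the Python cryptography library. This is only an illustration of the concept; Keystone manages its own key repository and builds its own token payloads:

    from cryptography.fernet import Fernet

    # A Fernet key is the secret used both to sign and to encrypt tokens.
    key = Fernet.generate_key()
    f = Fernet(key)

    # 'Issuing' a token: the payload is encrypted and signed in one step.
    token = f.encrypt(b'authentication and authorization data')

    # 'Validating' it: decryption succeeds only if the token was produced,
    # and not tampered with, by a holder of the key.
    print(f.decrypt(token))

    # Discard the key and the token can no longer be validated; nothing had
    # to be stored server-side for that to happen.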
The Problem
As you may have guessed by now, the real problem solved by Fernet tokens is one of persistence. Imagine, if you will, the following scenario:

A user logs into Horizon (the OpenStack Dashboard)
User creates a compute instance
User requests persistent storage upon instance creation
User assigns a floating IP to the instance

While this is a simplified scenario, you can clearly see that there are multiple calls to different core components being made. Even in the most basic of examples you see at least one authentication, as well as multiple validations along the way. Not only does this require network bandwidth, but when using persistent token providers such as UUID it also requires a lot of storage in Keystone. Additionally, the token table in the database used by Keystone grows as your cloud gets more usage. When using UUID tokens, operators must implement a detailed and comprehensive strategy to prune this table at periodic intervals to avoid real trouble down the line. This becomes even more difficult in a clustered environment.
Photo by Eugenio Mazzone on Unsplash
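In practice, that pruning usually means scheduling keystone-manage token_flush to run periodically; a minimal sketch of a cron entry follows (the schedule, user, and log path here are just examples):

    # Flush expired UUID tokens from the Keystone database every hour
    0 * * * * keystone /usr/bin/keystone-manage token_flush >> /var/log/keystone/token_flush.log 2>&1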
It’s not only backend components which are affected. In fact, all services that are exposed to users require authentication and authorization. This leads to increased bandwidth and storage usage on one of the most critical core components in OpenStack. If Keystone goes down, your users will know it and you no longer have a cloud in any sense of the word.
Now imagine the impact as you scale your cloud; the problems with UUID tokens are dangerously amplified.
Benefits of Fernet tokens
Because Fernet tokens are ephemeral, you have the following immediate benefits:

Tokens do not need to be replicated to other instances of Keystone in your controller cluster
Storage is not affected, as these tokens are not stored

The end result is increased performance overall. This was the design imperative of Fernet tokens, and the OpenStack community has more than delivered.
Show me the numbers
All of these benefits sound good, but what are the real numbers behind the performance differences between UUID and Fernet? One of the core Keystone developers, Dolph Mathews, wrote a great post about Fernet benchmarks.
Note that these benchmarks are for OpenStack Kilo, so you’ll most likely see even greater performance numbers in newer releases.
The most important benchmarks in Dolph's post are the ones comparing the various token formats to each other on a globally distributed Galera cluster. These show the following results, using UUID as a baseline:
Token creation performance

Fernet
50.8 ms (85% faster than UUID)
237.1 (42% faster than UUID)

Token validation performance

Fernet
5.55 ms (8% faster than UUID)
1957.8 (14% faster than UUID)

As you can see, these numbers are quite remarkable. More informal benchmarks can be found at the CERN OpenStack blog, OpenStack in Production.
Security Implications
Photo by Praveesh Palakeel on Unsplash
One important aspect of using Fernet tokens is security. As these tokens are signed and encrypted, they are inherently more secure than plain-text UUID tokens. A particularly useful consequence is that you can invalidate a large number of tokens, either during normal operations or during a security incident, simply by changing the keys used to validate them. This requires a key rotation strategy, which I'll get into in the third part of this series.
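For context, the keys live in a repository on each Keystone node and are managed with keystone-manage. A minimal sketch of the underlying commands is below; the user and group flags assume the default keystone account, and the Ansible work in part three is about automating exactly this kind of operation:

    # Create the initial Fernet key repository (typically done at deployment time)
    keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

    # Rotate keys: the staged key becomes the new primary and a new staged key is
    # created; tokens signed with keys that age out can no longer be validated
    keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone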
While there are security advantages to Fernet tokens, it must be said that they are only as secure as the keys used to create them. Keystone creates the tokens with a set of keys stored in your Red Hat OpenStack Platform environment. Using advanced technologies like SELinux, Red Hat Enterprise Linux is a trusted partner in this equation. Remember, the OS matters.
Conclusion
While OpenStack functions just fine with its default UUID token format, I hope this article has shown you some of the benefits of Fernet tokens. I also hope you find the knowledge you've gained here useful once you decide to move forward with implementing them.
In the follow-up post in this series, we'll look at how to enable Fernet tokens in your OpenStack environment, both pre- and post-deployment. Finally, our last post will show you how to automate key rotation using Red Hat Ansible in a production environment. I hope you'll join me along the way.
Source: RedHat Stack