What’s new in Kubernetes 1.9

It seems like we were just talking about what was new in version 1.8, and here we are with a look at new features and changes in Kubernetes 1.9.
Some of the new “features” in Kubernetes 1.9 aren’t actually new; they’re existing features that are now considered stable enough for production use, such as the Workloads API (the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs), which provides the foundation for many real-world workloads. Others have entered the beta phase, meaning they’re enabled by default, such as support for Windows Server workloads.
Still others are just entering the codebase. For example, Kubernetes 1.9 includes alpha implementations of the Container Storage Interface (CSI) and IPv6 support.
Before you even start
Before you even make the decision to upgrade to Kubernetes 1.9, you must back up your etcd data. Seriously. Do it now. We’ll wait.

OK, great. So now that you’ve done that, why is it so important? Because many of the tools used to deploy and upgrade Kubernetes default to etcd 3.1, and etcd doesn’t support downgrading: should you decide to roll back your Kubernetes deployment, you won’t be able to return to your previous version without reinstalling. So while you could upgrade without performing that backup, you really shouldn’t.
Now let’s get into the details of changes to each area of Kubernetes.
Core services
Let’s start by looking at the heart of Kubernetes and changes that will affect how you use it.
Authentication and API Machinery
The process of authenticating and authorizing access to Kubernetes saw a number of improvements this cycle.
For one thing, permissions can now be added to the built-in RBAC admin/edit/view roles using cluster role aggregation. These roles apply to the entire cluster, making it easier to administer who can and can’t perform certain actions.
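For example, here’s a minimal sketch of what an aggregated role looks like; the role name, API group, and resource are hypothetical, but the aggregation label is the mechanism the feature uses:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: edit-crontabs                    # hypothetical role name
  labels:
    # this label tells the controller to fold these rules into the built-in "edit" role
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["stable.example.com"]      # hypothetical API group
  resources: ["crontabs"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]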
In addition, authorization itself has been improved. For example, if a rule that denies access fires, there’s no reason to evaluate the rest of the rules in the chain, so evaluation can now be short-circuited.
All of this depends on extensibility, and during this cycle, the community worked on increasing extensibility with the addition of a new type of admission control webhook.
Admission controllers are the components that intercept an action you try to perform in Kubernetes, handling steps such as checking access and checking for namespaces. Webhooks enable you to communicate with Kubernetes via HTTP POST requests; you register a webhook, and Kubernetes makes a callback to it when certain events happen.
In this release, the team worked on “mutating” webhooks, which enable more flexible admission control plugins, because they let Kubernetes make changes as necessary, allowing for greater extensibility going forward.
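As a rough sketch, registering a mutating webhook looks something like the following; the webhook name, URL, and rules are placeholders, and caBundle would hold the base64-encoded CA certificate that signs the webhook server’s certificate:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook        # hypothetical name
webhooks:
- name: webhook.example.com             # hypothetical webhook name
  clientConfig:
    url: "https://webhook.example.com/mutate"   # hypothetical endpoint
    caBundle: "<base64-encoded-CA-cert>"
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  failurePolicy: Ignore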
Custom resources
Custom resources, which enable you to create your own “objects” that can be manipulated by Kubernetes, have also been enhanced to allow for easier creation and more reliability. This includes a new sample controller Custom Resource Definition in the Kubernetes repo, as well as new metadata field selectors, scripts to help generate code, and validation of the defined resources to improve reliability of your overall solutions. In addition, where previous versions only enabled you to refer to groups of custom resources, you can now get a single instance.
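To give a sense of the new validation support, here’s a minimal sketch of a Custom Resource Definition with an OpenAPI v3 schema attached; the group, kind, and fields are illustrative:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com     # hypothetical CRD
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    kind: CronTab
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            replicas:
              type: integer             # rejects non-integer values
              minimum: 1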
Networking
With IPv4 addresses running out, it’s good to see the beginnings of IPv6 support in Kubernetes 1.9. This support is still in alpha and has significant limitations, such as a lack of dual-stack support and no HostPorts, but it’s a start. You can get a full list of the new IPv6-related changes here.
In addition, with the release of CoreDNS 1.0, you have the option to use it as a drop-in replacement for kube-dns. To install it, set CLUSTER_DNS_CORE_DNS to ‘true’. Be aware, however, that this support is experimental, which means that it can change or be removed at any time.
Other networking improvements include the --cleanup-ipvs flag, which determines whether kube-proxy flushes all existing ipvs rules on startup (as it did by default in previous versions), and a new podAntiAffinity kube-dns annotation to enhance resilience.
You can also customize the behavior of a pod’s DNS client by adding “options” to the host’s /etc/resolv.conf (or to the file specified by --resolv-conf); these options propagate down to each pod’s resolv.conf.
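For instance, a host resolver configuration along these lines (the addresses and values are placeholders) would now have its options line flow down into every pod:

# /etc/resolv.conf on the host, or the file passed to the kubelet via --resolv-conf
nameserver 10.0.0.10
# resolver options like these now propagate into each pod's resolv.conf
options ndots:2 timeout:2 attempts:2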
Cluster Lifecycle
The Cluster Lifecycle SIG has been focused on bringing the kubeadm deployment tool up to production quality. The project does work, but it’s still fairly young, and includes a number of newly added alpha features, such as support for CoreDNS, IPv6, and Dynamic Kubelet Configuration. Again, to install CoreDNS instead of kube-dns, set CLUSTER_DNS_CORE_DNS to ‘true’ in your configuration.
Kubeadm also received some additional new features, such as the --print-join-command flag, which makes it possible to get the information needed to add new nodes after the initial cluster deployment; support for Kubelet Dynamic Configuration; and the ability to add a Windows node to a cluster.
The group is also responsible for the Cluster API, which provides “declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes.”
If you’re building multi-cluster installations, you’ll be glad to know that kubefed, which lets you create a control plane to add, remove, and manage federated clusters, has gotten several new flags that give you more control over how it is installed and how it operates. The --nodeSelector flag lets you decide where the controller gets installed, and the addition of support for --imagePullSecrets and --imagePullPolicy means you can now pull images from a private container registry.
Node functions
If you’re a system administrator or operator, you’ll be glad to know that Kubernetes 1.9 makes writing configurations a bit easier, with the Kubelet’s feature gates now represented as a map within KubeletConfiguration rather than as a string of key-value pairs. In addition, you can now set multiple manifest URL headers, using either the --manifest-url-header flag or the ManifestURLHeader field in the KubeletConfiguration.
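As a rough sketch, a KubeletConfiguration fragment now expresses feature gates roughly like this, instead of the old “Gate1=true,Gate2=false” string form (the gate names below are only examples):

kind: KubeletConfiguration
featureGates:
  DevicePlugins: true    # example gate names; consult the release notes
  CPUManager: false      # for the gates that actually apply to your cluster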
In addition, the device plugin subsystem has been extended to handle the full device plugin lifecycle more gracefully, including an explicit cm.GetDevicePluginResourceCapacity() function that makes it possible to determine more accurately which resources are inactive, giving a truer view of available resources. It also ensures that devices are removed properly even if the kubelet restarts, and passes sourcesReady from the kubelet to the device plugin. Finally, it makes sure that scheduled pods can continue to run even after a device plugin deletion and kubelet restart.
Note that according to the release notes, “Kubelet no longer removes unregistered extended resource capacities from node status; cluster admins will have to manually remove extended resources exposed via device plugins when they remove the plugins themselves.”
Kubernetes 1.9 includes a number of enhancements to logging and monitoring, including pod-level metrics for CPU, memory, and local ephemeral storage. In addition, the status summary network value, which used to consider only eth0, now considers all network interfaces.
The new release also eases some user issues, adding read/write permissions to the default admin and edit roles, and adding read permissions on poddisruptionbudget.policy to the view role.
Finally, the team has made CRI log parsing available as a library at pkg/kubelet/apis/cri/logs, so you don’t have to struggle with this manually.
Scheduling
Kubernetes 1.9 changes how you configure kube-scheduler, adding a new --config flag that points to a configuration file. This file is where Kubernetes will expect to find your configuration values in future versions; most other kube-scheduler flags are now deprecated.
This version also provides the ability to more efficiently schedule workloads that need extended resources such as GPUs; you can taint the node with the extended resource name as the key, and pods requesting those resources will be the only ones scheduled to those nodes.
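A quick sketch of the pattern (the node and resource names here are illustrative): taint the node with the extended resource name as the key, and only pods that request that resource, and therefore receive a matching toleration, will land on it.

# dedicate gpu-node-1 to pods that request the nvidia.com/gpu extended resource
kubectl taint nodes gpu-node-1 nvidia.com/gpu=present:NoSchedule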
The Scheduling SIG also completed a number of other individual changes, such as scheduling higher priority pods before lower priority pods and the ability for a pod to listen on multiple IP addresses.
Storage
The big news for storage in Kubernetes 1.9 is the addition of an alpha implementation of the Container Storage Interface (CSI). CSI is a joint project between the Kubernetes, Docker, Mesosphere, and Cloud Foundry communities, and is meant to provide a single API that storage vendors can implement to be sure that their products work “out of the box” in any orchestrator that supports CSI. According to the Kubernetes Storage SIG, “CSI will make installing new volume plugins as easy as deploying a pod, and enable third-party storage providers to develop their plugins without the need to add code to the core Kubernetes codebase.” You can make use of this new functionality by instantiating a volume as a CSIVolumeSource.
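As a sketch of what the alpha API looks like, a PersistentVolume can point at a CSI driver roughly like this; the driver name and volume handle are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-example-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: com.example.csi-driver    # hypothetical CSI driver name
    volumeHandle: existing-volume-id  # the volume's ID in the storage backend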
The Storage SIG also added several new features, including:

Volume resizing for GCE PD, Ceph RBD, AWS EBS, and OpenStack Cinder volumes
Volumes as raw block devices (for Fibre Channel only as of Kubernetes 1.9)
Mount utilities that can run inside a container instead of on the host
Topology-Aware Volume Scheduling, in which PersistentVolumes are scheduled based on the Pod’s scheduling requirements

Cloud providers
One important change in Kubernetes 1.9 is that you must set a value for the --cloud-provider flag if you are manually deploying Kubernetes; the default is no longer “auto-detect”. Allowable options are aws, azure, cloudstack, fake, gce, mesos, openstack, ovirt, photon, rackspace, vsphere, or unset; auto-detect will be removed in Kubernetes 1.10. (If you’re installing Kubernetes with a tool such as Minikube or kubeadm, you don’t have to worry about this.)
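In practice that means passing the flag explicitly to the components that need it; a minimal sketch for a manual deployment, assuming OpenStack and a cloud config file at a path of your choosing:

kube-apiserver --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf ...
kube-controller-manager --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf ...
kubelet --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud.conf ...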
In addition, some of the changes in this version are specific to individual cloud providers.
OpenStack
If you’re using Kubernetes with OpenStack, you’ll find that configuration in v1.9 is considerably simpler. Auto-detection of OpenStack services and versions is now the rule “wherever feasible” — which in this case means Block Storage API versions and Security Groups — and you can now configure your OpenStack Load Balancing as a Service v2 provider. Both OpenStack Octavia v2 and Neutron LBaaS v2 are supported.
AWS
The AWS Special Interest Group (SIG) has been focusing on improving Kubernetes integration with EBS volumes. Users will no longer wind up with workloads scheduled to nodes with volumes stuck in the “attaching” state; instead, the nodes will be “tainted” so that administrators can take care of the problem. The team recommends watching for these taints. Also, when nodes are stopped, volumes will be automatically detached.
In addition, Kubernetes now supports AWS’ new NVMe instance types, as well as using AWS Network Load Balancers rather than Elastic Load Balancers.
Azure
If you’re using Kubernetes on Windows, and especially on Azure, you’ll find that mounting volumes is a bit less frustrating, as you can now create Windows mount paths, and with the elimination of the need for a drive letter, an unlimited number of mount points.
You can also explicitly set the Azure DNS label for a public IP address using the service.beta.kubernetes.io/azure-dns-label-name annotation, while still being able to use Azure NSG rules to ensure that external access is allowed only to the load balancer IP address. The load balancer has also been enhanced to consider more properties of NSG rules, including Protocol, SourcePortRange, and DestinationAddressPrefs, when updating. (Previously, changes in these fields didn’t trigger an update because the load balancer didn’t recognize that there had been a change.)
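A minimal sketch of the annotation in use; the service name and DNS label are hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    # requests the DNS label "myapp" for the public IP Azure allocates
    service.beta.kubernetes.io/azure-dns-label-name: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 80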
Where to get Kubernetes 1.9
You can download Kubernetes 1.9 on GitHub.
The post What’s new in Kubernetes 1.9 appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Is 2018 when the machines take over? Predictions for machine learning, the data center, and beyond

It’s hard to believe, but 2017 is almost over and 2018 is in sight. This year has seen a groundswell of technology in ways that seem to be simmering under the surface, and if you look a bit more closely it’s all there, just waiting to be noticed.
Here are the seeds being sown in 2017 that you can expect to bloom in 2018 and beyond.
Machine Learning
Our co-founder, Boris Renski, also gave another view of 2018 here.
Machine learning takes many different forms, but the important thing to understand about it is that it enables a program to react to a situation that was not explicitly anticipated by its developers.
It’s easy to think that robots and self-learning machines are the stuff of science fiction — unless you’ve been paying attention to the technical news.  Not a day goes by without at least a few stories about some advance in machine learning and other forms of artificial intelligence.  Companies and products based on it launch daily. Your smartphone increasingly uses it. So what does that mean for the future?
Although today’s machine learning algorithms have already surpassed anything that we thought possible even a few years ago, it is still a pretty nascent field.  The important thing that’s happening right now is that Machine Learning has now reached the point where it’s accessible to non-PhDs through toolkits such as Tensorflow and Scikit-Learn — and that is going to make all the difference in the next 18-24 months.
Here’s what you can expect to see in the reasonably near future.
Hardware vendors jump into machine learning
Although machine learning generally works better and faster on Graphics Processing Units (GPUs) — the same chips used for blockchain mining — one of the advances that’s made it accessible is the fact that software such as Tensorflow and Scikit-Learn can run on normal CPUs. But that doesn’t mean that hardware vendors aren’t trying to take things to the next level.
These efforts run from Nvidia’s focus on GPUs to Intel’s Nervana Neural Network Processor (NNP) to Google’s Tensor Processing Unit (TPU). Google, Intel and IBM are also working on quantum computers, which use completely different architecture from traditional digital chips, and are particularly well suited to machine learning tasks.  IBM has even announced that it will make a 20 qubit version of its quantum computer available through its cloud. It’s likely that 2018 will see these quantum computers reach the level of “quantum supremacy”, meaning that they can solve problems that can’t be solved on traditional hardware. That doesn’t mean they’ll be generally accessible the way general machine learning is now — the technical and physical requirements are still quite complex — but they’ll be on their way.
Machine learning in the data center
Data center operations are already reaching a point where manually managing hardware and software is difficult, if not impossible. The solution has been using devops, or scripting operations to create “Infrastructure as Code“, providing a way to create verifiable, testable, repeatable operations. Look for this process to add machine learning to improve operational outcomes.
IoT bringing additional intelligence into operations
Machine learning is at its best when it has enough data to make intelligent decisions, so look for the multitude of data that comes from IoT devices to be used to help improve operations.  This applies to both consumer devices, which will improve understanding of and interaction with consumers, and industrial devices, which will improve manufacturing operations.
Ethics and transparency
As we increasingly rely on machine learning for decisions being made in our lives, the fact that most people don’t know how those decisions are made — and have no way of knowing — can lead to major injustices. Think it’s not possible? Machine learning is used for mortgage lending decisions, which while important, aren’t life or death.  But they’re also used for things like criminal sentencing and parole decisions. And it’s still early.
One good example given for this “tyranny of the algorithm” involves the example of two people up for a promotion. One is a man, one is a woman. To prevent the appearance of bias, the company uses a machine learning algorithm to determine which candidate will be more successful in the new position. The algorithm chooses the man.  Why?  Because it has more examples of successful men in the role. But it doesn’t take into account that there are simply fewer women who have been promoted.
This kind of unintentional bias can be hard to spot, but companies and governments are going to have to begin looking at greater transparency as to how decisions are made.
The changing focus of datacenter infrastructures
All of this added intelligence is going to have massive effects on datacenter infrastructures.
For years now, the focus has been on virtualizing hardware, moving from physical servers to virtual ones, enabling a single physical host to serve as multiple “computers”.  The next step from here was cloud computing in which workloads didn’t know or care where in the cloud they resided; they just specified what they needed, and the cloud would provide it.  The rise of containers accelerated this trend; containers are self-contained units, making them even easier to schedule in connected infrastructure using tools such as Kubernetes.
The natural progression from here is the de-emphasis on the cloud itself.  Workloads will run wherever needed, and whereas before you didn’t worry about where in the cloud that wound up being, now you won’t even worry about what cloud you’re using, and eventually, the architecture behind that cloud will become irrelevant to you as an end user.  All of this will be facilitated by changes in philosophy.
APIs make architecture irrelevant
We can’t call microservices new for 2018, but the march to decompose monolithic applications into multiple microservices will continue and accelerate in 2018 as developers and businesses try to gain the flexibility that this architecture provides. Multiple APIs will exist for many common features, and we’ll see “API brokers” that provide a common interface for similar functions.
This reliance on APIs will mean that developers will worry less about actual architectures. After all, when you’re a developer making an API call, do you care what the server is running on?  Probably not.
The application might be running on a VM, or in containers, or even in a so-called serverless environment. As developers lean more heavily on composing applications out of APIs, they’ll reach the point where the architecture of the server is irrelevant to them.
That doesn’t mean that the providers of those APIs won’t have to worry about it, of course.
Multi-cloud infrastructures
Server application developers such as API providers will have to think about architecture, but increasingly they will host their applications in multi-cloud environments, where workloads run where it’s most efficient — and most cost-effective. Like their users, they will be building against APIs — in this case, cloud platform APIs — and functionality is all that will matter; the specific cloud will be irrelevant.
Intelligent cloud orchestration
In order to achieve this flexibility, application designers will need to be able to do more than simply spread their applications among multiple clouds. In 2018 look for the maturation of systems that enable application developers and operators to easily deploy workloads to the most advantageous cloud system.
All of this will become possible because of the ubiquity of open source systems and orchestrators such as Kubernetes. Amazon and other systems that thrive on vendor lock-in will hold on for a bit longer, but the tide will begin to turn and even they will start to compete on other merits so that developers are more willing to include them as deployment options.
Again, this is also a place where machine learning and artificial intelligence will begin to make themselves known as a way to optimize workload placement.
Continuous Delivery becomes crucial as tech can’t keep up
Remember when you bought software and used it for years without doing an update?  Your kids won’t.
Even Microsoft has admitted that it’s impossible to keep up with advances in technology by doing specific releases of software.  Instead, new releases are pushed to Windows 10 machines on a regular basis.
Continuous Delivery (CD) will become the de facto standard for keeping software up to date as it becomes impossible to keep up with the rate of development in any other way.  As such, companies will learn to build workflows that take advantage of this new software without giving up human control over what’s going on in their production environment.
At a more tactical level, technologies to watch are:

Service meshes such as Istio, which abstract away many of the complexities of working with multiple services
Serverless/event-driven programming, which reduces an API to its most basic form of call-response
Policy agents such as the Open Policy Agent (OPA), which will enable developers to easily control access to and behavior of their applications in a manageable, repeatable, and granular way
Cloud service brokers such as Open Service Broker (OSB), which provide a way for enterprises to curate and provide access to additional services their developers may need in working with the cloud.
Workflow management tools such as Spinnaker, which make it possible to create and manage repeatable workflows, particularly for the purposes of intelligent continuous delivery.
Identity services such as SPIFFE and SPIRE, which make it possible to uniquely identify workloads so that they can be provided the proper access and workflow.

Beyond the datacenter
None of this happens in a vacuum, of course; in addition to these technical changes, we’ll also see the rise of social issues they create, such as privacy concerns, strain on human infrastructure when dealing with the accelerating rate of development, and perhaps most important, the potential for cyber-war.
But when it comes to indirect effects of the changes we’re talking about, perhaps the biggest is the question of what our reliance on fault-tolerant programming will create.  Will it lead to architectures that are essentially foolproof, or such an increased level of sloppiness that eventually, the entire house of cards will come crashing down?
Either outcome is possible; make sure you know which side you’re on.
Getting ready for 2018 and beyond
The important thing is to realize that whether we like it or not, the world is changing, but we don’t have to be held hostage by it. Here at Mirantis we have big plans, and we’re looking forward to talking more about them in the new year!
The post Is 2018 when the machines take over? Predictions for machine learning, the data center, and beyond appeared first on Mirantis | Pure Play Open Cloud.
Source: Mirantis

Connect with IBM Cloud Managed Application Services at Think 2018

Are you ready for something new?
On 19 – 22 March, great minds in technologies ranging from blockchain, AI, data and cloud to IT infrastructure, Internet of Things (IoT) and security will come together in Las Vegas to share their knowledge in hundreds of sessions, demonstrations and face-to-face conversations.
The event is called Think 2018, a first-of-its-kind gathering of leaders and visionaries that seeks to create an environment of sharing and collaboration. Designed with thinkers in mind, the event will put attendees face-to-face with innovators to discuss the most pressing challenges and discover solutions that are relevant to their needs.
IBM Cloud Managed Application Services at Think 2018
Among the many important topics at Think 2018 will be cloud managed application services, solutions that handle the daily management of infrastructure and applications so users can focus on adding business value through innovation.
IBM Cloud Managed Application Services (CMAS) specialists will be on hand at Think 2018 to discuss how you can accelerate time to market on products and services, deliver exceptional customer experiences and create new business value by transforming your management of critical applications, including SAP S/4HANA.
Mark Slaga, general manager of IBM CMAS, will host a core session discussing the business advantages of combining SAP and AI on the IBM Cloud to gain a competitive edge. The session will also feature IBM clients who will discuss the benefits they have realized by pairing S/4HANA with IBM technologies in a managed cloud deployment.
Following the core session, four CMAS Think Tank sessions will help you better understand how managed cloud solutions can transform your business:
Maintain information security controls on the cloud. Meet face-to-face with CMAS specialists to discuss how IBM Cloud integrated security controls help detect, address and prevent security breaches. Participants will explore use cases that illustrate how IBM solutions can protect SAP data and applications across networks, business continuity management, disaster recovery and IT operations.
Meeting industry and regulatory compliance on the cloud. Connect with IBM specialists to learn about building strategies to help meet industry and regulatory compliance requirements such as PCI, FedRAMP and HIPAA in more efficient, streamlined ways by hosting SAP solutions on IBM Cloud.
Build a business case for a managed SAP environment. This session will walk through an interactive ROI tool, producing a customized report for your business that includes a five-year cost savings projection for running SAP workloads in cloud managed environments. The report will also calculate savings based on increased customer reach and retention, reduction of infrastructure costs, labor optimization and reduced downtime.
SAP on IBM Cloud client panel. Learn about successful SAP on IBM Cloud deployments from the people who use it today. This panel of your peers will share their latest integration initiatives and discuss opportunities, pitfalls and lessons learned.
Customer-led sessions at Think 2018
IBM clients will also host individual sessions that will cover their experiences working with CMAS.
These companies will share examples about how they are using managed cloud services to host and deploy their SAP environment, driving benefits such as improved performance, increased agility in responding to changing customer demands and reduced cost of maintaining infrastructure.
Keep checking Thoughts on Cloud for more information about Think 2018. Visit the Think 2018 website to register for this exciting event and enroll for sessions on CMAS and other important topics. For more information on Cloud Managed Application Services, visit the website.
The post Connect with IBM Cloud Managed Application Services at Think 2018 appeared first on Cloud computing news.
Source: Thoughts on Cloud

Getting started with Software Factory and Zuul3

Introduction

Software Factory 2.7 has been recently released.
Software Factory is an easy to deploy software development forge that is deployed at
review.rdoproject.org and softwarefactory-project.io.
Software Factory provides, among other features, code review and continuous integration (CI).
This new release features Zuul V3, which is now the default CI
component of Software Factory.

In this blog post I will explain how to deploy a Software Factory
instance for testing purposes in less than 30 minutes and initialize
two demo repositories to be tested via Zuul.

Note that Zuul V3 is not yet released upstream however
it is already in production, acting as the CI system of OpenStack.

Prerequisites

Software Factory requires CentOS 7 as its base Operating System so
the commands listed below should be executed on a fresh deployment of CentOS 7.

The default FQDN of a Software Factory deployment is sftests.com. In order to
be accessible in your browser, sftests.com must be added to your /etc/hosts
with the IP address of your deployment.
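For example, assuming your deployment’s IP address is 192.0.2.10 (a placeholder), the entry would look like this:

192.0.2.10 sftests.com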

Installation

First, let’s install the repository of the last version then
install sf-config, the configuration management tool.

sudo yum install -y https://softwarefactory-project.io/repos/sf-release-2.7.rpm
sudo yum install -y sf-config

Activating extra components

Software Factory has a modular architecture that can be easily defined through
a YAML configuration file, located in /etc/software-factory/arch.yaml. By default,
only a limited set of components are activated to set up a minimal CI with Zuul V3.

We will now add the hypervisor-oci component to configure a container provider,
so that OCI containers can be consumed by Zuul
when running CI jobs. In other words, you won’t need an OpenStack cloud
account for running your first Zuul V3 jobs with this Software Factory instance.

Note that the OCI driver, on which hypervisor-oci relies, while totally functional,
is still under review and not yet merged upstream.

echo "  - hypervisor-oci" | sudo tee -a /etc/software-factory/arch.yaml

Starting the services

Finally, run sfconfig:

sudo sfconfig --enable-insecure-slaves --provision-demo

When the sf-config command finishes you should be able to access
the Software Factory web UI by connecting your browser to https://sftests.com.
You should then be able to login using the login admin and password userpass
(Click on “Toggle login form” to display the built-in authentication).

Triggering a first job on Zuul

The --provision-demo option is a special command to provision two demo Git
repositories on Gerrit with two demo jobs.

Let’s propose a first change on it:

sudo -i
cd demo-project
touch f1 && git add f1 && git commit -m "Add a test change" && git review

Then you should see the jobs being executed on the ZuulV3 status page.

And get the jobs’ results on the corresponding Gerrit review page.

Finally, you should find the links to the generated artifacts and the ARA reports.

Next steps to go further

To learn more about Software Factory please refer to the user documentation.
You can reach the Software Factory team on IRC freenode channel #softwarefactory
or by email at the softwarefactory-dev@redhat.com mailing list.
Source: RDO

Upcoming changes to test day

TL;DR: Live RDO cloud will be available for testing on upcoming test day. http://rdoproject.org/testday/queens/milestone2/ for more info.

The last few test days have been somewhat lackluster, and have not had much participation. We think that there’s a number of reasons for this:

Deploying OpenStack is hard and boring
Not everyone has the necessary hardware to do it anyways
Automated testing means that there’s not much left for the humans to do

In today’s IRC meeting, we were brainstorming about ways to improve participation in test day.

We think that, in addition to testing the new packages, it’s a great way for you, the users, to see what’s coming in future releases, so that you can start thinking about how you’ll use this functionality.

One idea that came out of it is to have a test cloud, running the latest packages, available to you during test day. You can get on there, poke around, break stuff, and help test it, without having to go through the pain of deploying OpenStack.

David has written more about this on his blog.

If you’re interested in participating, please sign up.

Please also give some thought to what kinds of test scenarios we should be running, and add those to the test page. Or, respond to this thread with suggestions of what we should be testing.

Details about the upcoming test day may be found on the RDO website.

Thanks!
Source: RDO

Open Source Summit, Prague

In October, RDO had a small presence at the Open Source Summit
(formerly known as LinuxCon) in Prague, Czechia.

While this event does not traditionally draw a big OpenStack audience, we were treated to a great talk by Monty Taylor on Zuul, and
Fatih Degirmenci gave an interesting talk on cross-community CI, in which he discussed the joint
work between the OpenStack and OpenDaylight communities to help one another verify cross-project
functionality.

On one of the evenings, members of the Fedora and CentOS community met in a BoF (Birds
of a Feather) meeting, to discuss how the projects relate, and how some of the load – including
the CI work that RDO does in the CentOS infrastructure – can better be shared between the two
projects to reduce duplication of effort.

This event is always a great place to interact with other open source enthusiasts. While, in the past, it was very Linux-centric, the event this year had a rather broader scope, and so drew people from many more communities.

Upcoming Open Source Summits will be held in Japan (June 20-22, 2018), Vancouver (August 29-31, 2018) and Edinburgh (October 22-24, 2018), and we expect to have a presence of some kind at each of these events.
Source: RDO

Gate repositories on Github with Software Factory and Zuul3

Introduction

Software Factory is an easy to deploy software development forge. It provides,
among other features, code review and continuous integration (CI). The latest
Software Factory release features Zuul V3 that provides integration with Github.

In this blog post I will explain how to configure a Software Factory instance, so
that you can experiment with gating Github repositories with Zuul.

First we will setup a Github application to define the Software Factory instance
as a third party application and we will configure this instance to act
as a CI system for Github.

Secondly, we will prepare a Github test repository by:

Installing the application on it
configuring its master branch protection policy
providing Zuul job description files

Finally, we will configure the Software Factory instance to test and gate
Pull Requests for this repository, and we will validate this CI
by opening a first Pull Request on the test repository.

Note that Zuul V3 is not yet released upstream however it is already
in production, acting as the CI system of OpenStack.

Prerequisites

A Software Factory instance is required to execute the instructions given in this blog post.
If you need an instance, you can follow the quick deployment guide in this previous article.
Make sure the instance has a public IP address and TCP/443 is open so that Github can reach
Software Factory via HTTPS.

Application creation and Software Factory configuration

Let’s create a Github application named myorg-zuulapp and register it on the instance.
To do so, follow this section
from Software Factory’s documentation.

But make sure to:

Replace fqdn in the instructions by the public IP address of your Software Factory
instance. Indeed the default sftests.com hostname won’t be resolved by Github.
Check “Disable SSL verification” as the Software Factory instance is by default
configured with a self-signed certificate.
Check “Only on this account” for the question “Where can this Github app be installed”.

After adding the github app settings in /etc/software-factory/sfconfig.yaml, run:

sudo sfconfig --enable-insecure-slaves --disable-fqdn-redirection

Finally, make sure Github.com can contact the Software Factory instance by clicking
on “Redeliver” in the advanced tab of the application. Having the green tick is the
prerequisite to go further. If you cannot get it, the rest of this article cannot
be completed successfully.

Define Zuul3 specific Github pipelines

On the Software Factory instance, as root, create the file config/zuul.d/gh_pipelines.yaml.

cd /root/config
cat <<EOF > zuul.d/gh_pipelines.yaml

- pipeline:
    name: check-github.com
    description: |
      Newly uploaded patchsets enter this pipeline to receive an
      initial +/-1 Verified vote.
    manager: independent
    trigger:
      github.com:
        - event: pull_request
          action:
            - opened
            - changed
            - reopened
        - event: pull_request
          action: comment
          comment: (?i)^\s*recheck\s*$
    start:
      github.com:
        status: 'pending'
        status-url: "https://sftests.com/zuul3/{tenant.name}/status.html"
        comment: false
    success:
      github.com:
        status: 'success'
      sqlreporter:
    failure:
      github.com:
        status: 'failure'
      sqlreporter:

- pipeline:
    name: gate-github.com
    description: |
      Changes that have been approved by core developers are enqueued
      in order in this pipeline, and if they pass tests, will be
      merged.
    success-message: Build succeeded (gate pipeline).
    failure-message: Build failed (gate pipeline).
    manager: dependent
    precedence: high
    require:
      github.com:
        review:
          - permission: write
        status: "myorg-zuulapp[bot]:local/check-github.com:success"
        open: True
        current-patchset: True
    trigger:
      github.com:
        - event: pull_request_review
          action: submitted
          state: approved
        - event: pull_request
          action: status
          status: "myorg-zuulapp[bot]:local/check-github.com:success"
    start:
      github.com:
        status: 'pending'
        status-url: "https://sftests.com/zuul3/{tenant.name}/status.html"
        comment: false
    success:
      github.com:
        status: 'success'
        merge: true
      sqlreporter:
    failure:
      github.com:
        status: 'failure'
      sqlreporter:
EOF
sed -i s/myorg/myorgname/ zuul.d/gh_pipelines.yaml

Make sure to replace “myorgname” by the organization name.

git add -A .
git commit -m "Add github.com pipelines"
git push git+ssh://gerrit/config master

Setup a test repository on Github

Create a repository called ztestrepo, initialize it with an empty README.md.

Install the Github application

Then follow the process below to add the application myorg-zuulapp to ztestrepo.

Visit your application page, e.g.: https://github.com/settings/apps/myorg-zuulapp/installations
Click “Install”
Select ztestrepo to install the application on
Click “Install”

Then you should be redirected on the application setup page. This can
be safely ignored for the moment.

Define master branch protection

We will setup the branch protection policy for the master branch of ztestrepo.
We want a Pull Request to have at least one code review approval and all CI checks
passed with success before the PR becomes mergeable.

You will see, later in this article, that the final job run and the merging phase
of the Pull Request are ensured by Zuul.

Go to https://github.com/myorg/ztestrepo/settings/branches
Choose the master branch
Check “Protect this branch”
Check “Require pull request reviews before merging”
Check “Dismiss stale pull request approvals when new commits are pushed”
Check “Require status checks to pass before merging”
Click “Save changes”

Add a collaborator

A second account on Github is needed to act as collaborator of the repository
ztestrepo. Select one in https://github.com/myorg/ztestrepo/settings/collaboration.
This collaborator will act as the PR reviewer later in this article.

Define a Zuul job

Create the file .zuul.yaml at the root of ztestrepo.

git clone https://github.com/myorg/ztestrepo.git
cd ztestrepo
cat <<EOF > .zuul.yaml

- job:
    name: myjob-noop
    parent: base
    description: This is a noop job
    run: playbooks/noop.yaml
    nodeset:
      nodes:
        - name: test-node
          label: centos-oci

- project:
    name: myorg/ztestrepo
    check-github.com:
      jobs:
        - myjob-noop
    gate-github.com:
      jobs:
        - myjob-noop
EOF
sed -i s/myorg/myorgname/ .zuul.yaml

Make sure to replace “myorgname” by the organization name.

Create playbooks/noop.yaml.

mkdir playbooks
cat <<EOF > playbooks/noop.yaml
- hosts: test-node
  tasks:
    - name: Success
      command: "true"
EOF

Push the changes directly on the master branch of ztestrepo.

git add -A .
git commit -m "Add zuulv3 job definition"
git push origin master

Register the repository on Zuul

At this point, the Software Factory instance is ready to receive events
from Github and the Github repository is properly configured. Now we will
tell Software Factory to consider events for the repository.

On the Software Factory instance, as root, create the file myorg.yaml.

cd /root/config
cat <<EOF > zuulV3/myorg.yaml

- tenant:
    name: 'local'
    source:
      github.com:
        untrusted-projects:
          - myorg/ztestrepo
EOF
sed -i s/myorg/myorgname/ zuulV3/myorg.yaml

Make sure to replace “myorgname” by the organization name.

git add zuulV3/myorg.yaml && git commit -m "Add ztestrepo to zuul" && git push git+ssh://gerrit/config master

Create a Pull Request and see Zuul in action

Create a Pull Request via the Github UI
Wait for the check-github.com pipeline to finish with success

Ask the collaborator to set his approval on the Pull request

Wait for Zuul to detect the approval
Wait for the gate-github.com pipeline to finish with success

Wait for the Pull Request to be merged by Zuul

As you can see, after the check job ran and the reviewer gave their approval, Zuul
detected that the Pull Request was ready to enter the gating pipeline. During the
gate run, Zuul executed the job against the Pull Request’s code change rebased on
the current master, then had Github merge the Pull Request once the job ended in
success.

Other powerful Zuul features such as cross-repository testing or Pull Request
dependencies between repositories are supported but beyond the scope of this
article. Do not hesitate to refer to the upstream documentation
to learn more about Zuul.

Next steps to go further

To learn more about Software Factory please refer to the upstream documentation.
You can reach the Software Factory team on IRC freenode channel #softwarefactory
or by email at the softwarefactory-dev@redhat.com mailing list.
Source: RDO

Blog Round-up

It’s time for another round-up of the great content that’s circulating our community. But before we jump in, if you know of an OpenStack or RDO-focused blog that isn’t featured here, be sure to leave a comment below and we’ll add it to the list.

ICYMI, here’s what has sparked the community’s attention this month, from Ansible to TripleO, emoji-rendering, and more.

TripleO and Ansible (Part 2) by slagle

In my last post, I covered some of the details about using Ansible to deploy with TripleO. If you haven’t read that yet, I suggest starting there: http://blog-slagle.rhcloud.com/?p=355

Read more at http://blog-slagle.rhcloud.com/?p=369

TripleO and Ansible deployment (Part 1) by slagle

In the Queens release of TripleO, you’ll be able to use Ansible to apply the software deployment and configuration of an Overcloud.

Read more at http://blog-slagle.rhcloud.com/?p=355

An Introduction to Fernet tokens in Red Hat OpenStack Platform by Ken Savich, Senior OpenStack Solution Architect

Thank you for joining me to talk about Fernet tokens. In this first of three posts on Fernet tokens, I’d like to go over the definition of OpenStack tokens, the different types and why Fernet tokens should matter to you. This series will conclude with some awesome examples of how to use Red Hat Ansible to manage your Fernet token keys in production.

Read more at https://redhatstackblog.redhat.com/2017/12/07/in-introduction-to-fernet-tokens-in-red-hat-openstack-platform/

Full coverage of libvirt XML schemas achieved in libvirt-go-xml by Daniel Berrange

In recent times I have been aggressively working to expand the coverage of libvirt XML schemas in the libvirt-go-xml project. Today this work has finally come to a conclusion, when I achieved what I believe to be effectively 100% coverage of all of the libvirt XML schemas. More on this later, but first some background on Go and XML…

Read more at https://www.berrange.com/posts/2017/12/07/full-coverage-of-libvirt-xml-schemas-achieved-in-libvirt-go-xml/

Full colour emojis in virtual machine names in Fedora 27 by Daniel Berrange

Quite by chance today I discovered that Fedora 27 can display full colour glyphs for unicode characters that correspond to emojis, when the terminal displaying my mutt mail reader displayed someone’s name with a full colour glyph showing stars:

Read more at https://www.berrange.com/posts/2017/12/01/full-colour-emojis-in-virtual-machine-names-in-fedora-27/

Booting baremetal from a Cinder Volume in TripleO by higginsd

Until recently in TripleO, booting from a cinder volume was confined to virtual instances, but now, thanks to some recent work in ironic, baremetal instances can also be booted backed by a cinder volume.

Read more at http://goodsquishy.com/2017/11/booting-baremetal-from-a-cinder-volume-in-tripleo/
Source: RDO

Greetings from North Pole Operations! All systems go!

By Merry, Chief Information Elf, North Pole

Hi there! I’m Merry, Santa’s CIE (Chief Information Elf), responsible for making sure computers help us deliver joy to the world each Christmas. My elf colleagues are really busy getting ready for the big day (or should I say night?), but this year, my team has things under control, thanks to our fully cloud-native architecture running on Google Cloud Platform (GCP)! What’s that? You didn’t know that the North Pole was running in the cloud? How else did you think that we could scale to meet the demands of bringing all those gifts to all those children around the world?

You see, North Pole Operations have evolved quite a lot since my parents were young elves. The world population increased from around 1.6 billion in the early 20th century to 7.5 billion today. The elf population couldn’t keep up with that growth and the increased production of all these new toys using our old methods, so we needed to improve efficiency.

Of course, our toy list has changed a lot too. It used to be relatively simple — rocking horses, stuffed animals, dolls and toy trucks, mostly. The most complicated things we made when I was a young elf were Teddy Ruxpins (remember those?). Now toy cars and even trading card games come with their own apps and use machine learning.

This is where I come in. We build lots of computer programs to help us. My team is responsible for running hundreds of microservices. I explain microservices to Santa as a computer program that performs a single service. We have a microservice for processing incoming letters from kids, another microservice for calculating kids’ niceness scores, even a microservice for tracking reindeer games rankings.

Here’s an example of the Letter Processing Microservice, which takes handwritten letter in all languages (often including spelling and grammatical errors) and turns each one into text.

Each microservice runs on one or more computers (also called virtual machines or VMs). We tried to run it all from some computers we built here at the North Pole but we had trouble getting enough electricity for all these VMs (solar isn’t really an option here in December). So we decided to go with GCP. Santa had some reservations about “the Cloud” since he thought it meant our data would be damaged every time it rained (Santa really hates rain). But we managed to get him a tour of a data center (not even Santa can get in a Google data center without proper clearances), and he realized that cloud computing is really just a bunch of computers that Google manages for us.

Google lets us use projects, folders and orgs to group different VMs together. Multiple microservices can make up an application and everything together makes up our system. Our most important and most complicated application is our Christmas Planner application. Let’s talk about a few services in this application and how we make sure we have a successful Christmas Eve.

Our Christmas Planner application includes microservices for a variety of tasks: microservices generate lists of kids that are naughty or nice, as well as a final list of which child receives which gift based on preferences and inventory. Microservices plan the route, taking into consideration inclement weather and finally, generate a plan for how to pack the sleigh.

Small elves, big data

Our work starts months in advance, tracking naughty and nice kids by relying on parent reports, teacher reports, police reports and our mobile elves. Keeping track of almost 2 billion kids each year is no easy feat. Things really heat up around the beginning of December, when our army of Elves-on-the-Shelves are mobilized, reporting in nightly.

We send all this data to a system called BigQuery where we can easily analyze the billions of reports to determine who’s naughty and who’s nice in just seconds.

Deck the halls with SLO dashboards

Our most important service level indicator or SLI is “child delight”. We target “5 nines” or 99.999% delightment level meaning 99,999/100,000 nice children are delighted. This limit is our service level objective or SLO and one of the few things everyone here in the North Pole takes very seriously. Each individual service has SLOs we track as well.

We use Stackdriver for dashboards, which we show in our control center. We set up alerting policies to easily track when a service level indicator is below expected and notify us. Santa was a little grumpy since he wanted red and green to be represented equally and we explained that the red warning meant that there were alerts and incidents on a service, but we put candy canes on all our monitors and he was much happier.

Merry monitoring for all

We have a team of elite SREs (Site Reliability Elves, though they might be called Site Reliability Engineers by all you folks south of the North Pole) to make sure each and every microservice is working correctly, particularly around this most wonderful time of the year. One of the most important things to get right is the monitoring.

For example, we built our own “internet of things” or IoT where each toy production station has sensors and computers so we know the number of toys made, what their quota was and how many of them passed inspection. Last Tuesday, there was an alert that the number of failed toys had shot up. Our SREs sprang into action. They quickly pulled up the dashboards for the inspection stations and saw that the spike in failures was caused almost entirely by our baby doll line. They checked the logs and found that on Monday, a creative elf had come up with the idea of taping on arms and legs rather than sewing them to save time. They rolled back this change immediately. Crisis averted. Without the proper monitoring and logging, it would be very difficult to find and fix the issue, which is why our SREs consider it the base of their gift reliability pyramid.

All I want for Christmas is machine learning

Running things in Google Cloud has another benefit: we can use technology they’ve developed at Google. One of our most important services is our gift matching service, which takes 50 factors as input including the child’s wish list, niceness score, local regulations, existing toys, etc., and comes up with the final list of which gifts should be delivered to this child. Last year, we added machine learning or ML, where we gave the Cloud ML engine the last 10 years of inputs, gifts and child and parent delight levels. It automatically learned a new model to use in gift matching based on this data.

Using this new ML model, we reduced live animal gifts by 90%, ball pits by 50% and saw a 5% increase in child delight and a 250% increase in parent delight.

Tis the season for sharing

Know someone who loves technology that might enjoy this article or someone who reminds you of Santa — someone with many amazing skills but whose eyes get that “reindeer-in-the-headlights look” when you talk about cloud computing? Share this article with him or her and hopefully you’ll soon be chatting about all the cool things you can do with cloud computing over Christmas cookies and eggnog… And be sure to tell them to sign up for a free trial — Google Cloud’s gift to them!
Source: Google Cloud Platform

Google Memo Author James Damore Sues Company For Discrimination Against White Males


The author of a controversial memo that sparked debates about gender and diversity at Google sued his former employer on Monday alleging that the company discriminates against politically conservative white males.

James Damore, who was fired in August for internally circulating a manifesto that argued Google’s gender pay gap was the result of genetic inferiority, said in a lawsuit filed in Santa Clara Superior Court that the search giant “singled out, mistreated, and systemically punished and terminated” employees that deviated from the company’s view on diversity. Damore and a second plaintiff David Gudeman, another former Google engineer, are seeking class action status for conservative Caucasian men.

The men are being represented by Harmeet K. Dhillon, the Republican National Committee’s Committeewoman for California.

“Google’s management goes to extreme — and illegal — lengths to encourage hiring managers to take protected categories such as race and/or gender into consideration as determinative hiring factors, to the detriment of Caucasian and male employees and potential employees at Google,” the suit reads.

Damore’s lawsuit is the latest legal challenge for Google, which also faces a suit for unequal pay. Earlier this month, four women plaintiffs as part of a revised lawsuit, alleged that the company had asked for their prior salaries and had underpaid them compared to their male counterparts.

Damore’s suit, which comes from the opposite end of the spectrum, was expected, given his very public hiring of Dhillon, in August. That month, the Dhillon Law Group published a blog post asking for anyone who had experienced illegal or retaliatory employment practices to get in touch.

In the 161-page complaint, Damore frames himself as a model Google employee who received 8 performance bonuses and $150,000-per-year stock bonuses since he started working at the company in the summer of 2013. Despite this, he was terminated from his job after voicing his complaints about diversity practices and publishing his now-infamous 10-page memo titled “Google's Ideological Echo Chamber.”

“Damore was surprised by Google’s position on blatantly taking gender into consideration during the hiring and promotion processes, and in publicly shaming Google business units for failing to achieve numerical gender parity,” reads the suit, referring to an event in March 2017 in which Chief Financial Officer Ruth Porat and Human Resources Director Eileen Naughton “shamed” business units that had less than a 50% female workforce.

Damore also says that he felt forced to attend and participate in diversity training events, and that he was threatened and insulted by his coworkers following the publishing of his memo. He included an email from another Google employee who promised to “hound” Damore until one of them was fired.

During the call when they terminated Damore, management did not identify “any Google policy or procedure that Damore had violated,” the suit reads.

Gudeman, according to his LinkedIn, worked at Google as an engineer from Nov. 2013 to Dec. 2016. He is currently a self-employed software contractor and writer.

A Google spokesperson did not respond to a request for comment.

Damore and Dhillon are expected to have a press conference at 12 p.m. in San Francisco.

Source: BuzzFeed