What’s new in Kubernetes 1.9

It seems like we were just talking about what was new in version 1.8, and here we are already with a look at the new features and changes in Kubernetes 1.9.
Some of the new “features” in Kubernetes 1.9 aren’t actually new at all; they’re existing features that are now considered stable enough for production use, such as the Workloads API (the DaemonSet, Deployment, ReplicaSet, and StatefulSet APIs), which provides the foundation for many real-world workloads, or that have entered the beta phase, meaning they’re enabled by default, such as support for Windows Server workloads.
Others, however, are just entering the codebase. For example, Kubernetes 1.9 includes alpha implementations of the Container Storage Interface (CSI) and IPv6 support.
Before you even start
Before you even make the decision to upgrade to Kubernetes 1.9, you must back up your etcd data.  Seriously.  Do it now.  We’ll wait.

OK, great. So now that you’ve done that, why is it so important? Many of the tools used to deploy and upgrade Kubernetes default to etcd 3.1, and because etcd doesn’t support downgrades, if you later decide to roll back your Kubernetes deployment, you won’t be able to return to your previous etcd version without reinstalling. So while you could upgrade without performing that backup, you really shouldn’t.
Now let’s get into the details of changes to each area of Kubernetes.
Core services
Let’s start by looking at the heart of Kubernetes and changes that will affect how you use it.
Authentication and API Machinery
The process of authenticating and authorizing access to Kubernetes saw a number of improvements this cycle.
For one thing, permissions can now be added to the built-in RBAC admin/edit/view roles using cluster role aggregation. These roles apply to the entire cluster, making it easier to administer who can and can’t perform certain actions.
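For example, here’s a minimal sketch of an aggregated role (the crontabs resource and its stable.example.com group are hypothetical) that folds read access to a custom resource into the built-in view role:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-crontabs
  labels:
    # This label tells the aggregation controller to fold these
    # rules into the built-in "view" cluster role.
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["stable.example.com"]   # hypothetical API group
  resources: ["crontabs"]             # hypothetical custom resource
  verbs: ["get", "list", "watch"]
```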
In addition, authorization itself has been improved. For example, if a rule denying access fires, there’s no reason to evaluate the rest of the rules in the chain, so evaluation can now be short-circuited.
All of this depends on extensibility, and during this cycle, the community worked on increasing extensibility with the addition of a new type of admission control webhook.
Admission controllers are the components that intercept requests when you try to perform an action in Kubernetes, handling tasks such as checking access and enforcing namespaces. Webhooks enable this machinery to communicate with external services via HTTP POST requests: you register a webhook, and Kubernetes calls it back when certain events happen.
In this release, the team worked on “mutating” webhooks, which enable more flexible admission control plugins, because they let Kubernetes make changes as necessary, allowing for greater extensibility going forward.
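As a rough sketch of how such a webhook is registered (the names, Service, and path below are all hypothetical), a MutatingWebhookConfiguration in the v1beta1 admissionregistration API looks something like this:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook    # hypothetical name
webhooks:
- name: pod-defaulter.example.com   # hypothetical webhook identifier
  clientConfig:
    service:
      name: pod-defaulter           # hypothetical in-cluster Service
      namespace: default
      path: "/mutate"
    caBundle: ""                    # base64-encoded CA bundle goes here
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  failurePolicy: Ignore             # don't block requests if the webhook is down
```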
Custom resources
Custom resources, which enable you to create your own “objects” that can be manipulated by Kubernetes, have also been enhanced to allow for easier creation and more reliability. This includes a new sample controller Custom Resource Definition in the Kubernetes repo, as well as new metadata field selectors, scripts to help generate code, and validation of the defined resources to improve reliability of your overall solutions. In addition, where previous versions only enabled you to refer to groups of custom resources, you can now get a single instance.
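As a hedged sketch of the new validation support, a Custom Resource Definition (the CronTab example and its group are hypothetical) might look like this:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com        # hypothetical API group
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  validation:                      # validation support, beta in 1.9
    openAPIV3Schema:
      properties:
        spec:
          properties:
            replicas:
              type: integer
              minimum: 1           # reject objects requesting fewer than 1 replica
```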
Networking
With IPv4 addresses running out, it’s good to see the beginnings of IPv6 support in Kubernetes 1.9. This support is still in alpha and has significant limitations, such as a lack of dual-stack support and no HostPorts, but it’s a start. You can find a full list of the new IPv6-related changes in the Kubernetes 1.9 release notes.
In addition, with the release of CoreDNS 1.0, you have the option of using it as a drop-in replacement for kube-dns. To install it, set CLUSTER_DNS_CORE_DNS to ‘true’. Be aware, however, that this support is experimental, which means that it can change or be removed at any time.
Other networking improvements include the --cleanup-ipvs flag, which determines whether kube-proxy flushes all existing ipvs rules on startup (as it did by default in previous versions), and a new podAntiAffinity annotation on kube-dns to enhance resilience.
You can also customize the behavior of a pod’s DNS client by adding “options” to the host’s /etc/resolv.conf (or to the file specified by the kubelet’s --resolv-conf flag); those options now propagate down into each pod’s resolv.conf. For example (the addresses and options here are hypothetical):
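```
# /etc/resolv.conf on the node, or the file passed via --resolv-conf;
# the "options" line propagates into each pod's resolv.conf
nameserver 10.96.0.10     # hypothetical cluster DNS address
options ndots:2 edns0     # hypothetical resolver options
```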
Cluster Lifecycle
The Cluster Lifecycle SIG has been focused on bringing the kubeadm deployment tool up to production quality. The project does work, but it’s still fairly young, and it includes a number of newly added alpha features, such as support for CoreDNS, IPv6, and Dynamic Kubelet Configuration. Again, to install CoreDNS instead of kube-dns, set CLUSTER_DNS_CORE_DNS to ‘true’ in your configuration.
Kubeadm also received some additional new features, such as the --print-join-command flag, which makes it possible to get the information necessary to add new nodes after the initial cluster deployment, support for Kubelet Dynamic Configuration, and the ability to add a Windows node to a cluster.
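A quick example (assuming the flag lives on kubeadm token create, as it does in 1.9):

```bash
# Generate a fresh bootstrap token and print the full "kubeadm join"
# command (token plus CA cert hash) to run on the new node
kubeadm token create --print-join-command
```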
The group is also responsible for the Cluster API, which aims to provide “declarative, Kubernetes-style APIs to cluster creation, configuration, and management,” offering “optional, additive functionality on top of core Kubernetes.”
If you’re building multi-cluster installations, you’ll be glad to know that kubefed, which lets you create a control plane to add, remove, and manage federated clusters, has gained several new flags that give you more control over how it is installed and how it operates. The --nodeSelector flag lets you decide where the controller gets installed, and the addition of support for --imagePullSecrets and --imagePullPolicy means you can now pull images from a private container registry.
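A hedged sketch of what that might look like (the federation name, context, selector, and secret below are placeholders; check kubefed init --help on your version for the exact flag spellings):

```bash
# --nodeSelector pins the federation control plane to particular nodes;
# --imagePullSecrets and --imagePullPolicy allow pulls from a private registry.
kubefed init myfed \
  --host-cluster-context=host-cluster \
  --nodeSelector="role=federation" \
  --imagePullSecrets=my-registry-secret \
  --imagePullPolicy=IfNotPresent
```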
Node functions
If you’re a system administrator or operator, you’ll be glad to know that Kubernetes 1.9 makes writing configurations a bit easier, with the kubelet’s feature gates now represented as a map within KubeletConfiguration rather than as a string of key-value pairs. In addition, you can now set multiple manifest URL headers, using either the --manifest-url-header flag or the ManifestURLHeader field in the KubeletConfiguration.
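As a sketch, the map form looks something like this (the kubeletconfig API group was still alpha as of 1.9, so the exact apiVersion may differ on your build; the gate values are examples):

```yaml
apiVersion: kubeletconfig/v1alpha1   # alpha API group as of Kubernetes 1.9
kind: KubeletConfiguration
featureGates:                        # a map, not a "k=v,k=v" string
  DevicePlugins: true
  CPUManager: false
```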
In addition, the device plugin subsystem has been extended to handle the full device plugin lifecycle more gracefully, including an explicit cm.GetDevicePluginResourceCapacity() function that makes it possible to determine more accurately which resources are inactive, giving a truer view of available resources. It also ensures that devices are removed properly even if the kubelet restarts, and passes sourcesReady from the kubelet to the device plugin. Finally, it makes sure that scheduled pods can continue to run even after a device plugin deletion and kubelet restart.
Note that according to the release notes, “Kubelet no longer removes unregistered extended resource capacities from node status; cluster admins will have to manually remove extended resources exposed via device plugins when they remove the plugins themselves.”
Kubernetes 1.9 includes a number of enhancements to logging and monitoring, including pod-level metrics for CPU, memory, and local ephemeral storage. In addition, the status summary network value, which used to consider only eth0, now considers all network interfaces.
The new release also eases some user issues, adding read/write permissions on poddisruptionbudget.policy to the default admin and edit roles, and read permissions to the view role.
Finally, the team has made CRI log parsing available as a library at pkg/kubelet/apis/cri/logs, so you don’t have to parse these logs manually.
Scheduling
Kubernetes 1.9 changes how you configure kube-scheduler, adding a new --config flag that points to a configuration file. This file is where Kubernetes will expect to find your configuration values in future versions; most other kube-scheduler flags are now deprecated.
This version also provides the ability to more efficiently schedule workloads that need extended resources such as GPUs; you can taint the node with the extended resource name as the key, and pods requesting those resources will be the only ones scheduled to those nodes.
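For example, you might taint your GPU nodes with kubectl taint nodes <node> example.com/gpu=present:NoSchedule (the resource name here is hypothetical). A pod that requests the resource then tolerates the taint; with the optional ExtendedResourceToleration admission controller enabled, the matching toleration is added for you automatically:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job                  # hypothetical pod
spec:
  containers:
  - name: worker
    image: example.com/gpu-worker:latest   # hypothetical image
    resources:
      limits:
        example.com/gpu: 1       # request the extended resource
  tolerations:                   # added automatically if the
  - key: example.com/gpu         # ExtendedResourceToleration admission
    operator: Exists             # controller is enabled
    effect: NoSchedule
```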
The Scheduling SIG also completed a number of other individual changes, such as scheduling higher priority pods before lower priority pods and the ability for a pod to listen on multiple IP addresses.
Storage
The big news for storage in Kubernetes 1.9 is the addition of an alpha implementation of the Container Storage Interface (CSI). CSI is a joint project between the Kubernetes, Docker, Mesosphere, and Cloud Foundry communities, and is meant to provide a single API that storage vendors can implement to be sure that their products work “out of the box” in any orchestrator that supports CSI. According to the Kubernetes Storage SIG, “CSI will make installing new volume plugins as easy as deploying a pod, and enable third-party storage providers to develop their plugins without the need to add code to the core Kubernetes codebase.” You can make use of this new functionality by instantiating a volume as a CSIVolumeSource.
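A hedged sketch of a pre-provisioned PersistentVolume using the alpha CSI source (the driver name and volume handle are hypothetical, and the alpha field layout may change in later releases):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: csi-example-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:                                  # alpha CSI volume source in 1.9
    driver: csi.example.com             # hypothetical CSI driver name
    volumeHandle: existing-volume-123   # hypothetical backend volume ID
```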
The Storage SIG also added several new features, including:

Volume resizing for GCE PD, Ceph RBD, AWS EBS, and OpenStack Cinder volumes (see the sketch after this list)
Volumes as raw block devices (for Fibre Channel only as of Kubernetes 1.9)
Mount utilities that can run inside a container instead of on the host

Topology Aware Volume Scheduling, in which PersistentVolumes are scheduled based on the Pod’s scheduling requirements.
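On the resizing front, a minimal sketch (assuming the alpha ExpandPersistentVolumes feature gate is enabled; the class name is hypothetical) is a StorageClass that permits expansion, after which you can grow a bound PVC by editing its storage request:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-pd            # hypothetical name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
# Permits increasing a PVC's spec.resources.requests.storage
allowVolumeExpansion: true
```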

Cloud providers
One important change in Kubernetes 1.9 is that you must set a value for the --cloud-provider flag if you are manually deploying Kubernetes; the default is no longer “auto-detect”. Allowable options are aws, azure, cloudstack, fake, gce, mesos, openstack, ovirt, photon, rackspace, vsphere, or unset; auto-detect will be removed in Kubernetes 1.10. (If you’re installing Kubernetes with a tool such as Minikube or kubeadm, you don’t have to worry about this.)
In addition, some of the changes in this version are specific to individual cloud providers.
OpenStack
If you’re using Kubernetes with OpenStack, you’ll find that configuration in v1.9 is considerably simpler. Auto-detection of OpenStack services and versions is now the rule “wherever feasible” — which in this case means Block Storage API versions and Security Groups — and you can now configure your OpenStack Load Balancing as a Service v2 provider. Both OpenStack Octavia v2 and Neutron LBaaS v2 are supported.
AWS
The AWS Special Interest Group (SIG) has been focusing on improving Kubernetes integration with EBS volumes. Users will no longer wind up with workloads scheduled to nodes with volumes stuck in the “attaching” state; instead, the nodes will be “tainted” so that administrators can take care of the problem. The team recommends watching for these taints. Also, when nodes are stopped, volumes will be automatically detached.
In addition, Kubernetes now supports AWS’ new NVMe instance types, as well as using AWS Network Load Balancers rather than Elastic Load Balancers.
Azure
If you’re using Kubernetes on Windows, and especially on Azure, you’ll find that mounting volumes is a bit less frustrating, as you can now create Windows mount paths, and with the elimination of the need for a drive letter, an unlimited number of mount points.
You can also explicitly set the Azure DNS label for a public IP address using the service.beta.kubernetes.io/azure-dns-label-name annotation, while still being able to use Azure NSG rules to ensure that external access is allowed only to the load balancer IP address. The load balancer has also been enhanced to consider more properties of NSG rules, including Protocol, SourcePortRange, and DestinationAddressPrefix, when updating. (Previously, changes in these fields didn’t trigger an update, because the load balancer didn’t recognize that there had been a change.)
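A quick sketch of the annotation in use (the service name, DNS label, and selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # Requests a DNS label for the public IP,
    # e.g. myapp.<region>.cloudapp.azure.com
    service.beta.kubernetes.io/azure-dns-label-name: myapp   # hypothetical label
spec:
  type: LoadBalancer
  selector:
    app: web          # hypothetical selector
  ports:
  - port: 80
```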
Where to get Kubernetes 1.9
You can download Kubernetes 1.9 on GitHub.