Dynamic Provisioning and Storage Classes in Kubernetes

Editor’s note: this post is part of a series of in-depth articles on what’s new in Kubernetes 1.6.

Storage is a critical part of running stateful containers, and Kubernetes offers powerful primitives for managing it. Dynamic volume provisioning, a feature unique to Kubernetes, allows storage volumes to be created on-demand. Before dynamic provisioning, cluster administrators had to manually make calls to their cloud or storage provider to provision new storage volumes, and then create PersistentVolume objects to represent them in Kubernetes. With dynamic provisioning, these two steps are automated, eliminating the need for cluster administrators to pre-provision storage. Instead, the storage resources can be dynamically provisioned using the provisioner specified by the StorageClass object (see user-guide).

StorageClasses are essentially blueprints that abstract away the underlying storage provider, as well as other parameters, like disk type (e.g., solid-state vs. standard disks). StorageClasses use provisioners that are specific to the storage platform or cloud provider to give Kubernetes access to the physical media being used. Several storage provisioners are provided in-tree (see user-guide), and out-of-tree provisioners are now supported as well (see kubernetes-incubator).

In the Kubernetes 1.6 release, dynamic provisioning has been promoted to stable (having entered beta in 1.4). This is a big step forward in completing the Kubernetes storage automation vision, allowing cluster administrators to control how resources are provisioned and giving users the ability to focus more on their application. With all of these benefits, there are a few important user-facing changes (discussed below) that are important to understand before using Kubernetes 1.6.

Storage Classes and How to Use Them

StorageClasses are the foundation of dynamic provisioning, allowing cluster administrators to define abstractions for the underlying storage platform.
Users simply refer to a StorageClass by name in the PersistentVolumeClaim (PVC) using the “storageClassName” parameter. In the following example, a PVC refers to a specific storage class named “gold”:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: testns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gold
```

In order to promote the usage of dynamic provisioning, this feature permits the cluster administrator to specify a default StorageClass. When one is present, the user can create a PVC without specifying a storageClassName, further reducing the user’s responsibility to be aware of the underlying storage provider.

When using default StorageClasses, there are some operational subtleties to be aware of when creating PersistentVolumeClaims (PVCs). This is particularly important if you already have existing PersistentVolumes (PVs) that you want to re-use:

- PVs that are already “Bound” to PVCs will remain bound with the move to 1.6. They will not have a StorageClass associated with them unless the user manually adds it.
- If PVs become “Available” (i.e., if you delete a PVC and the corresponding PV is recycled), then they are subject to the following:
  - If storageClassName is not specified in the PVC, the default storage class will be used for provisioning. Existing “Available” PVs that do not have the default storage class label will not be considered for binding to the PVC.
  - If storageClassName is set to an empty string (‘’) in the PVC, no storage class will be used (i.e., dynamic provisioning is disabled for this PVC). Existing “Available” PVs (that do not have a specified storageClassName) will be considered for binding to the PVC.
  - If storageClassName is set to a specific value, then the matching storage class will be used. Existing “Available” PVs that have a matching storageClassName will be considered for binding to the PVC. If no corresponding storage class exists, the PVC will fail.

To reduce the burden of setting up default StorageClasses in a cluster, beginning with 1.6, Kubernetes installs (via the add-on manager) default storage classes for several cloud providers. To use these default StorageClasses, users do not need to refer to them by name – that is, storageClassName need not be specified in the PVC.

The following table provides more detail on the default storage classes pre-installed by cloud provider, as well as the specific parameters used by these defaults.

Cloud Provider          Default StorageClass Name   Default Provisioner
Amazon Web Services     gp2                         aws-ebs
Microsoft Azure         standard                    azure-disk
Google Cloud Platform   standard                    gce-pd
OpenStack               standard                    cinder
VMware vSphere          thin                        vsphere-volume

While these pre-installed default storage classes are chosen to be “reasonable” for most storage users, this guide provides instructions on how to specify your own default.

Dynamically Provisioned Volumes and the Reclaim Policy

All PVs have a reclaim policy associated with them that dictates what happens to a PV once it becomes released from a claim (see user-guide). Since the goal of dynamic provisioning is to completely automate the lifecycle of storage resources, the default reclaim policy for dynamically provisioned volumes is “delete”. This means that when a PersistentVolumeClaim (PVC) is released, the dynamically provisioned volume is de-provisioned (deleted) on the storage provider and the data is likely irretrievable. If this is not the desired behavior, the user must change the reclaim policy on the corresponding PersistentVolume (PV) object after the volume is provisioned.

How do I change the reclaim policy on a dynamically provisioned volume?

You can change the reclaim policy by editing the PV object and changing the “persistentVolumeReclaimPolicy” field to the desired value.
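As a concrete illustration, that edit can also be done non-interactively with kubectl patch. This is only a sketch: the PV name below is a hypothetical placeholder, and the command assumes kubectl is configured against a live cluster.

```shell
# Hypothetical PV name for illustration -- substitute the name of your
# dynamically provisioned volume (see "kubectl get pv").
PV_NAME="pvc-0f1d3a5e-example"

# JSON merge patch that switches the reclaim policy to "Retain".
PATCH='{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Apply the patch (guarded so the snippet is harmless to run standalone).
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch pv "$PV_NAME" -p "$PATCH" || true
fi
```

This is equivalent to running “kubectl edit pv” and changing the field by hand, but is easier to script.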
For more information on various reclaim policies see user-guide.

FAQs

How do I use a default StorageClass?

If your cluster has a default StorageClass that meets your needs, then all you need to do is create a PersistentVolumeClaim (PVC) and the default provisioner will take care of the rest – there is no need to specify the storageClassName:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: testns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```

Can I add my own storage classes?

Yes. To add your own storage class, first determine which provisioners will work in your cluster. Then, create a StorageClass object with parameters customized to meet your needs (see user-guide for more detail). For many users, the easiest way to create the object is to write a yaml file and apply it with “kubectl create -f”.

The following is an example of a StorageClass for Google Cloud Platform named “gold” that creates a “pd-ssd”. Since multiple classes can exist within a cluster, the administrator may leave the default enabled for most workloads (since it uses a “pd-standard”), with the “gold” class reserved for workloads that need extra performance.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```

How do I check if I have a default StorageClass installed?

You can use kubectl to check for StorageClass objects. In the example below there are two storage classes: “gold” and “standard”.
The “gold” class is user-defined, and the “standard” class is installed by Kubernetes and is the default.

```shell
$ kubectl get sc
NAME                 TYPE
gold                 kubernetes.io/gce-pd
standard (default)   kubernetes.io/gce-pd

$ kubectl describe storageclass standard
Name:           standard
IsDefaultClass: Yes
Annotations:    storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:    kubernetes.io/gce-pd
Parameters:     type=pd-standard
Events:         <none>
```

Can I delete/turn off the default StorageClasses?

You cannot delete the default storage class objects provided. Since they are installed as cluster addons, they will be recreated if they are deleted.

You can, however, disable the defaulting behavior by removing (or setting to false) the following annotation: storageclass.beta.kubernetes.io/is-default-class.

If there are no StorageClass objects marked with the default annotation, then PersistentVolumeClaim objects (without a StorageClass specified) will not trigger dynamic provisioning. They will, instead, fall back to the legacy behavior of binding to an available PersistentVolume object.

Can I assign my existing PVs to a particular StorageClass?

Yes, you can assign a StorageClass to an existing PV by editing the appropriate PV object and adding (or setting) the desired storageClassName field on it.

What happens if I delete a PersistentVolumeClaim (PVC)?

If the volume was dynamically provisioned, then the default reclaim policy is set to “delete”. This means that, by default, when the PVC is deleted, the underlying PV and storage asset will also be deleted. If you want to retain the data stored on the volume, then you must change the reclaim policy from “delete” to “retain” after the PV is provisioned.

–Saad Ali & Michelle Au, Software Engineers, and Matthew De Lio, Product Manager, Google

- Post questions (or answer questions) on Stack Overflow
- Join the community portal for advocates on K8sPort
- Get involved with the Kubernetes project on GitHub
- Follow us on Twitter @Kubernetesio for latest updates
- Connect with the community on Slack
- Download Kubernetes
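For reference, the annotation and field edits described in the FAQs above can also be scripted with kubectl patch. This is a sketch under two stated assumptions: that the default class is the pre-installed “standard” class, and that the PV name shown is a hypothetical placeholder.

```shell
# Patch that sets the default-class annotation to "false", disabling the
# defaulting behavior for the pre-installed "standard" class.
DISABLE_DEFAULT='{"metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'

# Patch that assigns an existing PV to the "gold" storage class.
SET_CLASS='{"spec":{"storageClassName":"gold"}}'

# Apply the patches (guarded so the snippet is harmless without a cluster).
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch storageclass standard -p "$DISABLE_DEFAULT" || true
  kubectl patch pv pvc-0f1d3a5e-example -p "$SET_CLASS" || true  # hypothetical PV name
fi
```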
Source: kubernetes

Data enables proactive healthcare, improving chronic disease management

Coughing, wheezing and tightness in the chest are all symptoms of asthma, a chronic disease estimated to affect 400 million people around the world by 2025.
Asthma is a unique condition. A patient may live a symptom-free life for many days and months with maintenance therapy. Unfortunately, even those who control their symptoms well can experience a trigger that unexpectedly causes an asthma attack. In minutes, a patient can face a life-threatening situation that can lead to death.
At Teva Pharmaceutical Industries, a global pharmaceutical company based in Israel, we’ve considered whether there are ways to identify early warning signs of an asthma attack. We are developing a digital respiratory disease management system which may enable a proactive, data-driven approach to asthma management.
Many people who live with asthma experience uncontrolled symptoms and frequent attacks, often due to incorrect inhaler use or poor adherence to treatment.
Teva is committed to developing digital respiratory solutions for asthma patients to help them, and their caretakers, control their condition to better manage chronic symptoms. When patients use their digital inhalers and the corresponding software application, they generate data that their doctors can interpret to understand behavior patterns and enable a proactive, systematic and comprehensive approach to chronic disease treatment and management.
Teva’s collaboration with IBM will combine cloud-connected drug delivery and app technology with more than six billion data points, including data from The Weather Company to incorporate environmental factors that could potentially affect asthma patients. Using Watson cognitive processing capabilities and newly developed algorithms, this data may be used to calculate the prospective risk of health events, such as an asthma attack. Teva delivers that information directly to caregivers and their patients via an app or other software so they can take a more proactive approach to managing that risk.
Using the IBM Watson Health Cloud helps the system comply with operational and security requirements for health data.
When Teva started looking for a digital solution partner, the company considered several cloud and computing providers. It needed to work with a partner that was able to deliver a global cloud for storing, analyzing and communicating patient data. Teva also needed the capability to perform analysis of multiple data points on millions of patients in real time.
Teva chose IBM as its partner because of the unique capabilities of IBM Watson. Both Teva and IBM have the same aspiration to transform healthcare with digital therapeutic solutions designed to fulfill unmet and emerging patient needs, as well as provide the highest level of care to customers around the world.
As one result of Teva’s global partnership with IBM as a Foundational Life Sciences Partner for IBM Watson Health Cloud, Teva is using IBM Watson Health capabilities to help to improve chronic disease management.
Teva’s vision for the future is that patients will be empowered to better understand and manage chronic diseases, including asthma. They will use data to enable a systematic, comprehensive approach to help them take control of their health conditions and proactively seek the right solution before a health crisis.
In doing so, Teva aims to cut treatment costs by providing patients, payers, healthcare providers and caregivers with relatable data and insights that can inform action.
Learn more about IBM Cloud healthcare solutions.
Source: Thoughts on Cloud

Enterprise Slack apps on Google Cloud–now easier than ever

By Tim Swast, Developer Programs Engineer, Google Cloud

Slack recently announced a new, streamlined path to building apps, opening the door for corporate engineers to build fully featured internal integrations for companies of all sizes.

You can now make an app that supports any Slack API feature such as message buttons, threads and the Events API without having to enable app distribution. This means you can keep the app private to your team as an internal integration.

With support for the Events API in internal integrations, you can now use platforms like Google App Engine or Cloud Functions to host a Slack bot or app just for your team. Even if you’re building an app for multiple teams, internal integrations let you focus on developing your app logic first and wait to implement the OAuth2 flow for distribution until you’re ready.

We’ve updated the Google Cloud Platform samples for Slack to use this new flow. With samples for multiple programming languages, including Node.js, Java, and Go, it’s easier than ever to get started building Slack apps on Google Cloud Platform (GCP).

Slack also made an appearance at Google Cloud Next ’17. Check out the video for best practices for building bots for the enterprise from Amir Shevat, head of developer relations at Slack, and Alan Ho from Google Cloud.

Questions? Comments? Come chat with us on the bots channel in the Google Cloud Platform Slack community.
Source: Google Cloud Platform