MLOps—the path to building a competitive edge

Enterprises today are transforming their businesses using Machine Learning (ML) to develop a lasting competitive advantage. From healthcare to transportation, supply chain to risk management, machine learning is becoming pervasive across industries, disrupting markets and reshaping business models.

Organizations need the technology and tools required to build and deploy successful Machine Learning models and operate in an agile way. MLOps is the key to making machine learning projects successful at scale. What is MLOps? It is the practice of collaboration between data science and IT teams designed to accelerate the entire machine learning lifecycle across model development, deployment, monitoring, and more. Microsoft Azure Machine Learning enables companies to fully embrace MLOps practices and truly realize the potential of AI in their business.

One great example of a customer transforming their business with Machine Learning and MLOps is TransLink. They support Metro Vancouver's transportation network, serving 400 million total boardings from residents and visitors as of 2018. With an extensive bus system spanning 1,800 square kilometers, TransLink customers depend heavily on accurate bus departure times to plan their journeys.

To enhance the customer experience, TransLink deployed 18,000 different sets of Machine Learning models to better predict bus departure times, incorporating factors like traffic, bad weather, and other schedule disruptions. Using MLOps with Azure Machine Learning, they were able to manage and deliver the models at scale.

“With MLOps in Azure Machine Learning, TransLink has moved all models to production and improved predictions by 74 percent, so customers can better plan their journey on TransLink's network. This has resulted in a 50 percent reduction on average in customer wait times at stops.” – Sze-Wan Ng, Director of Analytics & Development, TransLink

Johnson Controls is another customer using Machine Learning Operations at scale. For over 130 years, they have produced fire, HVAC and security equipment for buildings. Johnson Controls is now in the middle of a smart city revolution, with Machine Learning being a central aspect of their equipment maintenance approach.

Johnson Controls runs thousands of chillers with 70 different types of sensors each, streaming terabytes of data. MLOps helped put models into production in a timely fashion, with a repeatable process, to deliver real-time insights on maintenance routines. As a result, chiller shutdowns could be predicted days in advance and mitigated effectively, delivering cost savings and increasing customer satisfaction.

“Using the MLOps capabilities in Azure Machine Learning, we were able to decrease both mean time to repair and unplanned downtime by over 66 percent, resulting in substantial business gains.” – Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls

Getting started with MLOps

To take full advantage of MLOps, organizations need to apply the same rigor and processes as other software development projects.

To help organizations with their machine learning journey, GigaOm developed the MLOps vision report that includes best practices for effective implementation and a maturity model.

Maturity is measured through five levels of development across key categories such as strategy, architecture, modeling, processes, and governance. Using the maturity model, enterprises can understand where they are and determine what steps to take to ‘level up’ and achieve business objectives.


“Organizations can address the challenges of developing AI solutions by applying MLOps and implementing best practices. The report and MLOps maturity model from GigaOm can be a very valuable tool in this journey.” – Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls

To learn more, read the GigaOm report and make machine learning transformation a reality for your business.

More information

Learn more about Azure Machine Learning

Read the GigaOm report, Delivering on the Vision of MLOps

Try Azure Machine Learning for free today.

Source: Azure

Configure the OpenShift Image Registry backed by OpenShift Container Storage

1. Requirements

Note: This scenario assumes you already have Red Hat OpenShift Container Storage 4 running inside your Red Hat OpenShift Container Platform 4 environment. Alternatively, follow the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 blog post to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.
In order to verify the presence of OpenShift Container Storage, please run:

oc describe project openshift-storage | grep Status

The result of the above command should be similar to:
Status: Active
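
As an additional sanity check, you can also list the pods in the openshift-storage namespace; on a healthy cluster the operator and Ceph-related pods should all be Running (the exact pod names vary per cluster):

oc get pods -n openshift-storage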

2. Registry Introduction
OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster.
In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources.
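As a brief illustration, permissions are granted through standard role bindings; OpenShift ships registry-viewer and registry-editor roles for this purpose (the user name below is a placeholder):

# Allow a user to pull images from the integrated registry
oc policy add-role-to-user registry-viewer <user_name>
# Additionally allow a user to push images
oc policy add-role-to-user registry-editor <user_name>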
3. Image Registry Operator
The Registry is configured and managed by an infrastructure operator. The Image Registry Operator installs a single instance of the OpenShift Container Platform registry, and it manages all configuration of the registry, including setting up registry storage when you install an installer-provisioned infrastructure cluster on AWS, GCP, Azure, or OpenStack.
The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location as well. All configuration and workload resources for the registry reside in that namespace.
In case you want to see detailed information about the registry operator, run:
oc describe configs.imageregistry.operator.openshift.io

4. Registry Storage Requirements
A registry needs to have storage in order to store its contents. Image data is stored in two locations. The actual image data is stored in a configurable storage location such as cloud storage or a filesystem volume.
The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and imagestreams.
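To take a quick look at this metadata side, you can list the imagestreams that ship with the cluster and describe one of them (the imagestream name below is a placeholder; the ones available vary by cluster):

oc get imagestreams -n openshift
oc describe imagestream <imagestream_name> -n openshift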
4.1. Project
The registry is arranged in a Project (namespace) named openshift-image-registry.
To get more information about this project, run the following oc command:
oc describe project openshift-image-registry
Name: openshift-image-registry
Created: 3 hours ago
Labels: openshift.io/cluster-monitoring=true
Annotations: openshift.io/node-selector=
openshift.io/sa.scc.mcs=s0:c18,c17
openshift.io/sa.scc.supplemental-groups=1000340000/10000
openshift.io/sa.scc.uid-range=1000340000/10000
Display Name: <none>
Description: <none>
Status: Active
Node Selector: <none>
Quota: <none>
Resource limits: <none>

4.2. Pods
To view the registry related pods, run the command:
oc get pods -n openshift-image-registry

(The NAME values of your pods will differ from those shown below)
NAME                                               READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-74465655b4-gq44m   2/2     Running   0          3h16m
image-registry-7489584ddc-jhw2j                    1/1     Running   0          3h16m
node-ca-4x477                                      1/1     Running   0          116m
node-ca-d82rv                                      1/1     Running   0          3h16m
node-ca-gxd8r                                      1/1     Running   0          3h16m
node-ca-kjp28                                      1/1     Running   0          3h16m
node-ca-lvb48                                      1/1     Running   0          116m
node-ca-ndwhh                                      1/1     Running   0          3h16m
node-ca-nwstp                                      1/1     Running   0          116m
node-ca-pwrrs                                      1/1     Running   0          3h16m

5. Review the current Registry Operator configuration settings
Let’s review the current Registry settings first. To do so, please run the command:
oc edit configs.imageregistry.operator.openshift.io/cluster

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: "2019-10-28T09:07:09Z"
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 3
  name: cluster
  resourceVersion: "16463"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 52880e6b-f962-11e9-8995-12a861d1434e
spec:
  defaultRoute: true
  httpSecret: 0c394aabee8e6a9ef8aa2c927c8f8c487f8ad3249ff67794a8685af4f76c72811c97ee4ddf936602dd9fca12e198c4eff413130568a4c356d7b6f14f805bcb59
  logging: 2
  managementState: Managed
  proxy:
    http: ""
    https: ""
    noProxy: ""
  readOnly: false
  replicas: 1
  requests:
    read:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
    write:
      maxInQueue: 0
      maxRunning: 0
      maxWaitInQueue: 0s
  storage:
    s3:
      bucket: cluster-ocs-f562-9d4rh-image-registry-us-east-1-rjqkgcsxlotmwm
      encrypt: true
      keyID: ""
      region: us-east-1
      regionEndpoint: ""
status:
  conditions:
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    reason: S3 Bucket Exists
    status: "True"
    type: StorageExists
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Public access to the S3 bucket and its contents have been successfully blocked.
    reason: Public Access Block Successful
    status: "True"
    type: StoragePublicAccessBlocked
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Tags were successfully applied to the S3 bucket
    reason: Tagging Successful
    status: "True"
    type: StorageTagged
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Default AES256 encryption was successfully enabled on the S3 bucket
    reason: Encryption Successful
    status: "True"
    type: StorageEncrypted
  - lastTransitionTime: "2019-10-28T09:07:10Z"
    message: Default cleanup of incomplete multipart uploads after one (1) day was successfully enabled
    reason: Enable Cleanup Successful
    status: "True"
    type: StorageIncompleteUploadCleanupEnabled
  - lastTransitionTime: "2019-10-28T09:07:56Z"
    message: The registry is ready
    reason: Ready
    status: "True"
    type: Available
  - lastTransitionTime: "2019-10-28T09:18:32Z"
    message: The registry is ready
    reason: Ready
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-10-28T09:07:11Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2019-10-28T09:07:11Z"
    status: "False"
    type: Removed
  observedGeneration: 3
  readyReplicas: 0
  storage:
    s3:
      bucket: cluster-ocs-f562-9d4rh-image-registry-us-east-1-rjqkgcsxlotmwm
      encrypt: true
      keyID: ""
      region: us-east-1
      regionEndpoint: ""
  storageManaged: true

Note: The storage designation currently refers to s3 and a bucket name. This is an initial deployment of OCP 4 on AWS infrastructure.

storage:
  s3:
    bucket: cluster-ocs-f562-9d4rh-image-registry-us-east-1-rjqkgcsxlotmwm
    encrypt: true
    keyID: ""
    region: us-east-1
    regionEndpoint: ""

Close the vi editor without saving by pressing ESC, then typing :q! and pressing ENTER.
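If you would rather review the configuration without the risk of accidentally changing it, the same object can be printed read-only; this is an alternative to oc edit, not an extra step in the procedure:

oc get configs.imageregistry.operator.openshift.io/cluster -o yaml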
6. Moving the registry storage to OpenShift Container Storage
In this section we will change the registry storage to OpenShift Container Storage, where it will consume CephFS RWX storage, as multiple pods will need to access the storage concurrently.
6.1. Storage Class
First, we want to make sure that a CephFS storage class is present, so we can create a Persistent Volume Claim for the registry storage.
To check for presence of an existing CephFS storage class, please run the following command:
oc get sc

This should result in an outcome similar to:
NAME                                    PROVISIONER                             AGE
gp2                                     kubernetes.io/aws-ebs                   5h57m
ocs-storagecluster-ceph-rbd (default)   openshift-storage.rbd.csi.ceph.com      4h5m
ocs-storagecluster-cephfs               openshift-storage.cephfs.csi.ceph.com   4h5m
openshift-storage.noobaa.io             openshift-storage.noobaa.io/obc         3h59m

According to the above output, there is already a storage class named ocs-storagecluster-cephfs.
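
If you want to confirm the details of that storage class first (provisioner, reclaim policy, and so on), you can describe it:

oc describe sc ocs-storagecluster-cephfs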
6.2. PVC (Persistent Volume Claim)
In this step we will set up a PVC named ocs4registry against the storage class named ocs-storagecluster-cephfs, which will be used for storing registry data.
First, please make sure to be inside the openshift-image-registry project.
oc project openshift-image-registry

In order to create the pvc, please run the following command:
oc create -f <(echo '{
  "apiVersion": "v1",
  "kind": "PersistentVolumeClaim",
  "metadata": {
    "name": "ocs4registry"
  },
  "spec": {
    "storageClassName": "ocs-storagecluster-cephfs",
    "accessModes": [ "ReadWriteMany" ],
    "resources": {
      "requests": { "storage": "100Gi" }
    }
  }
}')

This should result in:
persistentvolumeclaim/ocs4registry created

To verify that the PVC was created and bound:
oc get pvc

Example output:
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
ocs4registry   Bound    pvc-b7339457-fb23-11e9-846d-0a3016334dd1   100Gi      RWX            ocs-storagecluster-cephfs   60s
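
For more detail on the claim, such as the volume it is bound to and the provisioning events, you can also describe it (the -n flag is redundant if you are already in the project):

oc describe pvc ocs4registry -n openshift-image-registry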

6.3. Configure the Image Registry to use the newly created PVC
In this section we will instruct the Registry Operator to use the CephFS-backed RWX PVC.

Note: This method of moving the registry to OpenShift Container Storage will work exactly the same for OpenShift Container Platform on VMware infrastructure.
Now configure the registry to use OpenShift Container Storage. Edit the registry operator configuration:

oc edit configs.imageregistry.operator.openshift.io

Find the storage: stanza, remove it and its nested s3 settings (everything above it remains in place), and replace it with the following:

storage:
  pvc:
    claim: ocs4registry

Save and close the vi editor by pressing ESC, then typing :wq! and pressing ENTER.
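As a non-interactive alternative to the editor, the same change can be applied with a JSON patch. The sketch below is equivalent to the manual edit above: it replaces the entire spec.storage block with the PVC reference:

oc patch configs.imageregistry.operator.openshift.io/cluster --type json \
  -p '[{"op": "replace", "path": "/spec/storage", "value": {"pvc": {"claim": "ocs4registry"}}}]'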
Then check the /registry mount point inside the image-registry pod to validate that the pod now uses the OCS PVC instead of the S3 resources on AWS. Here is how to do this:
oc get pods

Note: The NAME values of your pods will differ from those shown below

Example output:
NAME                                               READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-6d65bcbd4b-7h6b6   2/2     Running   0          8h
image-registry-6c4dbbcdbb-9bl8w                    1/1     Running   0          8m59s
node-ca-26q5d                                      1/1     Running   0          8h
node-ca-6tdrs                                      1/1     Running   0          8h
node-ca-9jdwt                                      1/1     Running   0          8h
node-ca-g6dr5                                      1/1     Running   0          8h
node-ca-jt7w8                                      1/1     Running   0          8h
node-ca-r9qtx                                      1/1     Running   0          7h41m
node-ca-srgv9                                      1/1     Running   0          7h41m
node-ca-wg2xs                                      1/1     Running   0          7h41m

We now open a remote shell on the registry pod. This is the pod whose name starts with image-registry-*

Note: The NAME of your registry pod will differ from the one shown below

oc rsh image-registry-6c4dbbcdbb-9bl8w

Once connected, a bash prompt appears; you are now running a shell on the pod itself.
From within this remote shell, we can run the df -h and mount commands, which show the filesystem information from the pod's perspective:
sh-4.2$ df -h | grep registry

This results in the following output:
172.30.107.130:6789,172.30.82.116:6789,172.30.125.23:6789:/volumes/csi/csi-vol-5b88bd0e-fc09-11e9-9939-0a580a820206 100G 0 100G 0% /registry

Alternatively, you can use the mount command:
sh-4.2$ mount | grep registry

Resulting in the following output:
172.30.107.130:6789,172.30.82.116:6789,172.30.125.23:6789:/volumes/csi/csi-vol-5b88bd0e-fc09-11e9-9939-0a580a820206 on /registry type ceph (rw,relatime,name=csi-cephfs-node,secret=<hidden>,acl,mds_namespace=ocs-storagecluster-cephfilesystem)

At this point, the image registry should be using the OCS RWX volume, backed by CephFS.
The output from either command shows that the /registry filesystem mount originates from /volumes/csi/csi-vol-5b88bd0e-fc09-11e9-9939-0a580a820206, served by the Ceph nodes and managed by the Rook operator.
The df -h output also confirms that 100Gi of space is available, while the mount output shows which options were used when mounting the /registry filesystem slice.
You can exit the pod remote shell (rsh) by pressing Ctrl+D or by executing exit.
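As a final verification step (an addition to the procedure above), you can confirm that the registry deployment finished rolling out and that the image-registry cluster operator reports Available:

oc rollout status deployment/image-registry -n openshift-image-registry
oc get clusteroperator image-registry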
Resources and Feedback
To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.
If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.
Source: OpenShift

Docker Desktop release 2.2 is here!

We are excited to announce that we released a new Docker Desktop version today! Thanks to user feedback on the new features initially released in the Edge channel, we are now ready to promote them to Stable.

Before getting into each feature in detail, let’s see what’s new in Docker Desktop 2.2:

- WSL 2 as a technical preview, allowing access to the full system resources, improved boot time, access to Linux workspaces, and improved file system performance
- A new file sharing implementation for Windows, improving the developer inner loop user experience
- A new integrated Desktop Dashboard, to see at a glance your locally running containers and Compose applications, and easily manage them

WSL 2 – New architecture 

Back in July, we released on Edge the technical preview of Docker Desktop for WSL 2, which included an experimental integration of Docker running on an existing user Linux distribution. We learned from that experience and re-architected our solution (covered in Simon’s blog).

This new architecture for WSL 2 allows users to: 

- Use Kubernetes on the WSL 2 backend
- Work with just WSL 2, turning off the traditional Hyper-V VM
- Continue to work as they did in the traditional Docker Desktop, with a friendly networking stack, support for HTTP proxy settings, and trusted CA synchronization
- Start Docker Desktop in under 5 seconds
- Use Linux workspaces

To make use of the WSL 2 features you will need to be on a Windows preview version that supports WSL 2.
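
As a quick sanity check, assuming a Windows build where wsl.exe already supports WSL 2, you can make version 2 the default and confirm which version your distributions run:

wsl --set-default-version 2
wsl -l -v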

Read More on WSL 2

File system improvements on Windows 

For existing Windows users not on Windows Insider builds, we have been working on improving the inner loop user experience we have today. Traditionally, Docker Desktop on Windows has relied upon the Samba protocol to manage the interaction between the Linux file system where Docker runs and the Windows file system. We have now replaced this with gRPC FUSE, which:

- Uses caching to (for example) reduce page load time in Symfony by up to 60%
- Supports Linux inotify events, triggering automatic recompilation/reload when the source code is changed
- Is independent of how you authenticate to Windows: smartcard and Azure AD are all fine
- Always works, irrespective of whether your VPN is connected or disconnected
- Reduces the amount of code running as Administrator
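
To see the new file sharing path in action, here is a minimal sketch: bind-mount a host directory into a container, then edit a file on the Windows side and watch the change appear inside the container (the port, image, and directory are arbitrary examples):

docker run --rm -p 8080:80 -v "$(pwd):/usr/share/nginx/html:ro" nginx

Editing an index.html in the current directory is then reflected on the next request to http://localhost:8080, without restarting the container.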

Read More on new File Sharing

New Integrated Desktop Dashboard

Last but not least, Docker Desktop now includes an interactive Dashboard UI for managing your locally running containers and Compose applications. We have been listening to developers and working hard to deliver a single user interface across Mac and Windows that makes it easier to work with Docker locally. Historically, Docker offered similar capability with Kitematic, which we plan to archive in 2020 and replace with the new Desktop Dashboard.

Read More on Desktop Dashboard

Get started today

You can try all of the new features now by getting Docker Desktop 2.2!

Download Docker Desktop 2.2 for Windows

Download Docker Desktop 2.2 for macOS

Source: https://blog.docker.com/feed/