How Azure Machine Learning service powers suggested replies in Outlook

Microsoft 365 applications are so commonplace that it’s easy to overlook some of the amazing capabilities that are enabled with breakthrough technologies, including artificial intelligence (AI). Microsoft Outlook is an email client that helps you work efficiently with email, calendar, contacts, tasks, and more in a single place.

To help users be more productive and deliberate in their actions while emailing, Outlook on the web and the Outlook apps for iOS and Android have introduced suggested replies, a new feature powered by Azure Machine Learning service. Now when you receive an email message that can be answered with a quick response, Outlook suggests three response options that you can use to reply with only a couple of clicks or taps. By reducing the time and effort involved in replying to an email, the feature helps people communicate more efficiently in both their workplace and personal life.

The team behind suggested replies comprises data scientists, designers, and machine learning engineers with diverse backgrounds who are working to improve the lives of Microsoft Outlook users by expediting and simplifying communication. They are at the forefront of applying cutting-edge natural language processing (NLP) and machine learning (ML) technologies, using them to understand how users communicate through email and to improve those interactions, creating a more productive experience for users.

A peek under the hood

To process the massive amount of raw data that these interactions provide, the team uses Azure Machine Learning pipelines to build its training models. Azure Machine Learning pipelines let the team divide training into discrete steps such as data cleanup, transformation, feature extraction, training, and evaluation; the pipeline’s output converts raw data into a model. The pipeline also lets the data scientists build training workflows in a compliant manner that enforces privacy and compliance checks.
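Conceptually, such a pipeline chains discrete, independently testable stages. The sketch below is purely illustrative: the stage functions, data, and toy "model" are hypothetical stand-ins, not the Outlook team's actual code or the Azure Machine Learning SDK.

```python
# Illustrative multi-stage training pipeline: cleanup -> feature
# extraction -> training -> evaluation, each stage a discrete step.
# All names and logic here are hypothetical stand-ins.

def clean(messages):
    # Data cleanup: drop empty messages and normalize whitespace.
    return [" ".join(m.split()) for m in messages if m and m.strip()]

def extract_features(messages):
    # Feature extraction: a trivial bag-of-words stand-in.
    return [set(m.lower().split()) for m in messages]

def train(features, replies):
    # Toy "training": remember which reply followed which feature set.
    return list(zip(features, replies))

def evaluate(model, features, replies):
    # Evaluation: fraction of examples the toy model reproduces exactly.
    predicted = {frozenset(f): r for f, r in model}
    hits = sum(1 for f, r in zip(features, replies)
               if predicted.get(frozenset(f)) == r)
    return hits / len(replies)

def run_pipeline(messages, replies):
    # Chain the stages; the output of each step feeds the next.
    cleaned = clean(messages)
    features = extract_features(cleaned)
    model = train(features, replies)
    return model, evaluate(model, features, replies)
```

Structuring training as separate stages like this is what makes it practical to insert privacy and compliance checks between steps, and to rerun or swap out a single stage without rebuilding the whole workflow.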

In order to train this model, the team needed a way to build and prepare a large data set comprising over 100 million messages. To do this, the team used a distributed processing framework to sample and retrieve data from a broad user base.
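Sampling uniformly from a data set far too large to hold in memory is typically done with a streaming technique such as reservoir sampling. The sketch below shows the classic Algorithm R; it is a generic illustration, not the team's actual distributed framework.

```python
import random

def reservoir_sample(stream, k, seed=None):
    """Return k items sampled uniformly from an iterable of unknown,
    potentially huge length, using only O(k) memory (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Keep item i with probability k / (i + 1).
            j = rng.randint(0, i)  # inclusive bounds
            if j < k:
                reservoir[j] = item
    return reservoir
```

In a real distributed setting each worker would sample its own partition this way, with the per-partition reservoirs merged afterwards.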

Azure Data Lake Storage (ADLS) holds the training data for the suggested replies models. The team cleans and curates the data into message-reply pairs (a message together with its potential responses), which are stored back in ADLS. The training pipelines consume these message-reply pairs to learn how to suggest appropriate replies to a given message; the training itself runs on GPU pools available in Azure. Once a model is created, data scientists can compare its performance with previous models and evaluate which approaches recommend more relevant suggested replies.
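The curation step can be pictured as pairing each message in a thread with the reply that followed it. The helper below is a hypothetical, simplified stand-in for that step:

```python
def to_reply_pairs(thread):
    """Given an email thread as an ordered list of message bodies,
    emit (message, reply) training pairs from consecutive messages.
    A simplified, hypothetical stand-in for the curation described
    above; real curation also filters and anonymizes the data."""
    return [(thread[i], thread[i + 1]) for i in range(len(thread) - 1)]
```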

The Outlook team helps protect your data by using the Azure platform to prepare the large-scale data sets required to build a feature like suggested replies, in accordance with Office 365 compliance standards. The data scientists use Azure compute and workflow solutions that enforce privacy policies to create experiments and train multiple models on GPUs. This improves the overall developer experience and provides agility in the inner development loop.

This is just one of many examples of how Microsoft products are powered by the breakthrough capabilities of Azure AI to create better user experiences. The team is learning from feedback every day and improving the feature for users while also expanding the types of suggested replies offered. Keep following the Azure blog to stay up-to-date with the team and be among the first to know when this feature is released.

Learn more

Learn more about the Azure Machine Learning service.

Get started with a free trial of Azure Machine Learning service.
Source: Azure

Tips, Tricks, and Best Practices for Distributed RDO Teams

While many RDO contributors are remote, many more are not and now find themselves in lockdown or working from home due to the coronavirus. A few members of the RDO community requested tips, tricks, and best practices for working on and managing a distributed team.
Connectivity
I mean, obviously, there needs to be enough bandwidth, which might normally be just fine, but if you have a partner and kids also using the internet, video calls might become impossible.
Communicate with the family to work out a schedule or join the call without video so you can still participate.
Manage Expectations
Even if you’re used to being remote AND don’t have a partner / family invading your space, there is added stress in the new reality.
Be sure to manage expectations with your boss about priorities, focus, goals, project tracking, and mental health.
This will be an ongoing conversation that evolves as projects and situations evolve.
Know Thyself
Some people NEED to get ready in the morning, dress in business clothes, and work in a specific space. Some people can wake up, grab their laptop and work from the bed.
Some people NEED to get up once an hour to walk around the block. Some people are content to take a break once every other hour or more.
Some people NEED to physically be in the office around other people. Some will be totally content to work from home.
Sure, some things aren’t optional, but work with what you can.
Figure out what works for you.
Embrace #PhysicalDistance Not #SocialDistance
Remember to stay connected socially with your colleagues. Schedule a meeting without an agenda where you chat about whatever.
Come find the RDO Technical Community Liaison, leanderthal, and your other favorite collaborators on Freenode IRC on channels #rdo and #tripleo.
For that matter, don’t forget to reach out to your friends and family.
Even introverts need to maintain a certain level of connection.
Further Reading
There’s a ton of information about working remotely and distributed productivity. This is by no means an exhaustive list, but it should get you started:

Ergonomic Essentials for Remote Working
Cornell University Ergonomics
Wikipedia.Org Time Management
You’re Taking Breaks The Wrong Way
WHO | Health Workforce Burnout
Wikipedia.Org Stress Management
Lessons from Community Leaders on Working Remote
Top 15 Tips To Effectively Manage Remote Employees

Now let’s hear from you!
What tips, tricks, and resources do you recommend to work from home, especially in this time of stress? Please add your advice in the comments below.
And, as always, thank you for being a part of the RDO community!
Source: RDO

Red Hat OpenShift 4 and Red Hat Virtualization: Together at Last

OpenShift 4 was launched not quite a year ago at Red Hat Summit 2019.  One of the more significant announcements was the installer’s ability to deploy an OpenShift cluster using full-stack automation.  This means that the administrator only needs to provide credentials to a supported Infrastructure-as-a-Service, such as AWS, and the installer provisions all of the resources needed (virtual machines, storage, networks) and integrates them together.
Over time, the full-stack automation experience has expanded to include Azure, Google Cloud Platform, and Red Hat OpenStack, allowing customers to deploy OpenShift clusters across different clouds, and even on-premises, with the same fully automated experience.
For organizations who need enterprise virtualization, but not the API-enabled, quota enforced consumption of infrastructure provided by Red Hat OpenStack, Red Hat Virtualization (RHV) provides a robust and trusted platform to consolidate workloads and provide the resiliency, availability, and manageability of a traditional hypervisor.
When using RHV, the solution so far has been OpenShift’s “bare metal” installation experience, with no testing or integration between OpenShift and the underlying infrastructure.  But the wait is over! OpenShift 4.4 nightly releases now offer the full-stack automation experience for RHV!

Getting started with OpenShift on RHV
As you would expect from the full-stack automation installation experience, getting started is straightforward with just the few prerequisites below.  You can also use the quick start guide for more thorough and detailed instructions.

You need an RHV deployment with RHV Manager.  It doesn’t matter whether you’re using a self-hosted Manager or a standalone one; just be sure you’re using RHV version 4.3.7.2 or later.
Until OpenShift 4.4 is generally available, you will need to download and use the nightly release of the OpenShift installer, available from https://cloud.redhat.com.
Network requirements:

DHCP is required for full-stack automated installs to assign IPs to nodes as they are created.
Identify three (3) IP addresses you can statically allocate to the cluster and create two (2) DNS entries, as below.  These are used for communicating with the cluster as well as internal DNS and API access.

An IP address for the internal-only OpenShift API endpoint
An IP address for the internal OpenShift DNS, with an external DNS record of api.clustername.basedomain for this address
An IP address for the ingress load balancer, with an external DNS record of *.apps.clustername.basedomain for this address.

Create an ovirt-config.yaml file for the credentials you want to use; this file has just four lines:

ovirt_url: https://rhv-m.host.name/ovirt-engine/api
ovirt_username: user@domain.tld
ovirt_password: password
ovirt_insecure: True

For now, the last value, “ovirt_insecure”, should be “True”.  As documented in this BZ, even if the RHV-M certificate is trusted by the client where openshift-install is executing, that doesn’t mean that the pods deployed to OpenShift trust the certificate.  We are working on a solution, so please keep an eye on the BZ for when it’s been addressed!  Remember, this is tech preview :D
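If you want to check in advance whether your client's trust store accepts the RHV-M certificate, a short TLS handshake test is enough. This helper is a hypothetical illustration (the hostname below is a placeholder, and the function is not part of openshift-install):

```python
import socket
import ssl

def cert_is_trusted(host, port=443, timeout=5):
    """Return True if `host` presents a TLS certificate that this
    client's system trust store accepts, False otherwise.
    Hypothetical helper for a pre-install sanity check."""
    context = ssl.create_default_context()  # uses the system CA bundle
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            # wrap_socket performs the handshake and verifies the
            # certificate chain and hostname; failures raise SSLError.
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```

For example, cert_is_trusted("rhv-m.host.name") tells you whether this particular client trusts RHV-M, though per the BZ above that still says nothing about the pods inside the cluster.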

With the prerequisites out of the way, let’s move on to deploying OpenShift to Red Hat Virtualization!
Magic (but really automation)!
Starting the install process, as with all OpenShift 4 deployments, uses the openshift-install binary.  Once we answer the questions, the process is wholly automated and we don’t have to do anything but wait for it to complete!

# log level debug isn't necessary, but gives detailed insight into what's
# happening
# the "dir" parameter tells the installer to use the provided directory
# to store any artifacts related to the installation
[notroot@jumphost ~] openshift-install create cluster --log-level=debug --dir=orv
? SSH Public Key /home/notroot/.ssh/id_rsa.pub
? Platform ovirt
? Select the oVirt cluster Cluster2
? Select the oVirt storage domain nvme
? Select the oVirt network VLAN101
? Enter the internal API Virtual IP 10.0.101.219
? Enter the internal DNS Virtual IP 10.0.101.220
? Enter the ingress IP  10.0.101.221
? Base Domain lab.lan
? Cluster Name orv
? Pull Secret [? for help] **********************

snip snip snip

INFO Waiting up to 30m0s for the cluster at https://api.orv.lab.lan:6443 to initialize…
INFO Waiting up to 10m0s for the openshift-console route to be created…
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/notroot/orv/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.orv.lab.lan
INFO Login to the console with user: kubeadmin, password: passw-wordp-asswo-rdpas

The result, after a few minutes of waiting, is a fully functioning OpenShift cluster, ready for the final configuration to be applied, like deploying logging and monitoring, and configuring a persistent storage provider.
From an RHV perspective, the installer has created a template virtual machine, which was used to deploy all of the member nodes, regardless of role, for the OpenShift cluster.  Not only does the installer use this template, but the Machine API integration also makes use of it when creating new VMs while scaling the nodes.  Scaling nodes manually is as easy as a single command (oc scale --replicas=# machineset)!

Deploying OpenShift
To get started testing and trying OpenShift full-stack automated deployments to your RHV clusters, the installer can be found at the Red Hat OpenShift Cluster Manager.  For now, the full-stack automation experience on RHV is in developer preview, so please send us any feedback and questions via Bugzilla.  The quickest way to reach us is to use “OpenShift Container Platform” as the product, with “Installer” as the component and “OpenShift on RHV” as the sub-component.
The post Red Hat OpenShift 4 and Red Hat Virtualization: Together at Last appeared first on Red Hat OpenShift Blog.
Source: OpenShift