AI in depth: Creating preprocessing-model serving affinity with custom online prediction on AI Platform Serving

AI Platform Serving now lets you deploy your trained machine learning (ML) model with custom online prediction Python code, in beta. In this blog post, we show how custom online prediction code helps maintain affinity between your preprocessing logic and your model, which is crucial to avoid training-serving skew. As an example, we build a Keras text classifier and deploy it for online serving on AI Platform, along with its text preprocessing components. The code for this example can be found in this Notebook.

Background

The hard work of building an ML model pays off only when you deploy the model and use it in production—when you integrate it into your pre-existing systems or incorporate your model into a novel application. If your model has multiple possible consumers, you might want to deploy the model as an independent, coherent microservice that is invoked via a REST API and can automatically scale to meet demand. Although AI Platform may be better known for its training abilities, it can also serve TensorFlow, Keras, scikit-learn, and XGBoost models with REST endpoints for online prediction.

While training that model, it's common to transform the input data into a format that improves model performance. But when performing predictions, the model expects the input data to already exist in that transformed form: the model might expect a normalized numerical feature, a TF-IDF encoding of terms in text, or a constructed feature based on a complex, custom transformation. However, the callers of your model will send "raw", untransformed data, and the caller doesn't (or shouldn't) need to know which transformations are required. This means the model microservice will be responsible for applying the required transformations to the data before invoking the model for prediction.

The affinity between the preprocessing routines and the model (i.e., having both of them coupled in the same service) is crucial to avoid training-serving skew, since you'll want to ensure that these routines are applied to any data sent to the model, with no assumptions about how the callers prepare the data. Moreover, the model-preprocessing affinity helps to decouple the model from the caller: if a new model version requires new transformations, these preprocessing routines can change independently of the caller, as the caller will keep on sending data in its raw format.

Besides preprocessing, your deployed model's microservice might also perform other operations, including postprocessing of the prediction produced by the model, or even more complex prediction routines that combine the predictions of multiple models.

To help maintain this affinity between training and serving, AI Platform Serving now lets you customize the prediction routine that gets called when sending prediction requests to a deployed model. This feature allows you to upload a custom model prediction class, along with your exported model, to apply custom logic before or after invoking the model for prediction.

Customizing prediction routines can be useful in the following scenarios:

Applying (state-dependent) preprocessing logic to transform the incoming data points before invoking the model for prediction.
Applying (state-dependent) post-processing logic to the model prediction before sending the response to the caller. For example, you might want to convert the class probabilities produced by the model to a class label.
Integrating rule-based and heuristics-based prediction with model-based prediction.
Applying a custom transform used in fitting a scikit-learn pipeline.
Performing complex prediction routines based on multiple models, that is, aggregating predictions from an ensemble of estimators, or calling a model based on the output of the previous model in a hierarchical fashion.

The above tasks can be accomplished with custom online prediction, using the standard frameworks supported by AI Platform Serving, as well as with any model developed in your favorite Python-based framework, including PyTorch. All you need to do is include the dependency libraries in the setup.py of your custom model package (as discussed below). Note that without this feature, you would need to implement the preprocessing, post-processing, or any custom prediction logic in a "wrapper" service, using, for example, App Engine. This App Engine service would also be responsible for calling the AI Platform Serving models, but this approach adds complexity to the prediction system, as well as latency to the prediction time.

Next we'll demonstrate how we built a microservice that can handle both preprocessing and post-processing scenarios using AI Platform custom online prediction, with text classification as the example. We chose to implement the text preprocessing logic and build the classifier using Keras, but thanks to AI Platform custom online prediction, you could implement the preprocessing using other libraries (like NLTK or scikit-learn), and build the model using any other Python-based ML framework (like TensorFlow or PyTorch). You can find the code for this example in this Notebook.

A text classification example

Text classification algorithms are at the heart of a variety of software systems that process text data at scale. The objective is to classify (categorize) text into a set of predefined classes, based on the text's content. This text can be a tweet, a web page, a blog post, user feedback, or an email: in the context of text-oriented ML models, a single text entry (like a tweet) is usually referred to as a "document."

Common use cases of text classification include:

Spam filtering: classifying an email as spam or not.
Sentiment analysis: identifying the polarity of a given text, such as tweets or product and service reviews.
Document categorization: identifying the topic of a given document (for example, politics, sports, or finance).
Ticket routing: identifying the department to which a ticket should be dispatched.

You can design your text classification model in two different ways; choosing one versus the other will influence how you'll need to prepare your data before training the model.

N-gram models: In this option, the model treats a document as a "bag of words," or more precisely, a "bag of terms," where a term can be one word (unigram), two words (bigram), or n words (n-gram). The ordering of the words in the document is not relevant. The feature vector representing a document encodes whether a term occurs in the document or not (binary encoding), how many times the term occurs in the document (count encoding), or, more commonly, Term Frequency-Inverse Document Frequency (TF-IDF encoding). Gradient-boosted trees and Support Vector Machines are typical techniques used with n-gram models.

Sequence models: With this option, the text is treated as a sequence of words or terms, that is, the model uses the word-ordering information to make the prediction. Types of sequence models include convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variations.

In our example, we utilize the sequence model approach.

Hacker News is one of many public datasets available in BigQuery. This dataset includes titles of articles from several data sources. For the following tutorial, we extracted the titles that belong to either GitHub, The New York Times, or TechCrunch, and saved them as CSV files in a publicly shared Cloud Storage bucket at the following location:

gs://cloud-training-demos/blogs/CMLE_custom_prediction

Here are some useful statistics about this dataset:

Total number of records: 96,203
Min, max, and average number of words per title: 1, 52, and 8.7
Number of records in GitHub, The New York Times, and TechCrunch: 36,525, 28,787, and 30,891
Training and evaluation percentages: 75% and 25%

The objective of this tutorial is to build a text classification model using Keras to identify the source of an article given its title, and to deploy the model to AI Platform Serving using custom online prediction, to be able to perform text preprocessing and prediction post-processing.

Preprocessing text

Sequence tokenization with Keras

In this example, we perform the following preprocessing steps:

Tokenization: Divide the documents into words. This step determines the "vocabulary" of the dataset (the set of unique tokens present in the data). In this example, you'll make use of the 20,000 most frequent words, and discard the others from the vocabulary. This value is set through the VOCAB_SIZE parameter.
Vectorization: Define a good numerical measure to characterize these documents. A given embedding's representation of the tokens (words) will be helpful when you're ready to train your sequence model. However, these embeddings are created as part of the model, rather than as a preprocessing step. Thus, what you need here is to simply convert each token to a numerical indicator. That is, each article's title is represented as a sequence of integers, where each integer is an indicator of a token in the vocabulary that occurred in the title.
Length fixing: After vectorization, you have a set of variable-length sequences. In this step, the sequences are converted into a single fixed length: 50. This can be configured using the MAX_SEQUENCE_LENGTH parameter. Sequences with more than 50 tokens will be right-trimmed, while sequences with fewer than 50 tokens will be left-padded with zeros.

Both the tokenization and vectorization steps are considered to be stateful transformations. In other words, you extract the vocabulary from the training data (after tokenization, keeping the top frequent words), and create a word-to-indicator lookup, for vectorization, based on the vocabulary. This lookup will be used to vectorize new titles for prediction. Thus, after creating the lookup, you need to save it in order to (re-)use it when serving the model.

The text preprocessing logic is implemented in a TextPreprocessor class in the preprocess.py module, which includes two methods:

fit(): applied on the training data to generate the lookup (tokenizer). The tokenizer is stored as an attribute of the object.
transform(): applies the tokenizer to any text data to generate the fixed-length sequences of word indicators.

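The notebook contains the full implementation; a minimal sketch of such a class, assuming Keras's Tokenizer and pad_sequences utilities and the VOCAB_SIZE and MAX_SEQUENCE_LENGTH parameters described above, might look like this:

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

class TextPreprocessor(object):

    def __init__(self, vocab_size, max_sequence_length):
        self._vocab_size = vocab_size
        self._max_sequence_length = max_sequence_length
        self._tokenizer = None

    def fit(self, text_list):
        # Build the vocabulary from the training titles, keeping only the
        # VOCAB_SIZE most frequent words.
        tokenizer = Tokenizer(num_words=self._vocab_size)
        tokenizer.fit_on_texts(text_list)
        self._tokenizer = tokenizer  # stored as state for serving time

    def transform(self, text_list):
        # Vectorization: map each word to its integer index in the vocabulary.
        text_sequences = self._tokenizer.texts_to_sequences(text_list)
        # Length fixing: pad/trim every sequence to MAX_SEQUENCE_LENGTH.
        return pad_sequences(text_sequences, maxlen=self._max_sequence_length)

Keeping the fitted tokenizer as an attribute is what makes the object stateful, and is the reason it has to be persisted alongside the model for serving.
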
Preparing training and evaluation data

To prepare the training and evaluation data, each raw text title is converted to a NumPy array of 50 numeric indicators. Note that you use both fit() and transform() with the training data, while you only use transform() with the evaluation data, to make use of the tokenizer generated from the training data. The outputs, train_texts_vectorized and eval_texts_vectorized, will be used to train and evaluate our text classification model, respectively.

Next, save the processor object (which includes the tokenizer generated from the training data) to be used when serving the model for prediction. The object is dumped (pickled) to a processor_state.pkl file.

Training a Keras model

The model architecture is defined in a create_model method. We create a Sequential Keras model, with an Embedding layer and a Dropout layer, followed by two Conv1D and pooling layers, then a Dense layer with softmax activation at the end. The model is compiled with the sparse_categorical_crossentropy loss and the acc (accuracy) evaluation metric.

The model is then created by calling the create_model method with the required parameters, trained on the training data, and evaluated using the evaluation data. Lastly, the trained model is saved to a keras_saved_model.h5 file.

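The notebook's code isn't reproduced here; a condensed sketch of the data preparation, model definition, training, and saving steps, assuming the TextPreprocessor class above, illustrative hyperparameter values, and that the titles and integer labels have already been loaded from the CSV files, might look like this:

import pickle

from tensorflow.keras import layers, models

VOCAB_SIZE = 20000          # top words kept in the vocabulary
MAX_SEQUENCE_LENGTH = 50    # fixed sequence length
NUM_CLASSES = 3             # github, nytimes, techcrunch

# train_texts, train_labels, eval_texts, eval_labels are assumed to be
# loaded from the CSV files described above (not shown here).
processor = TextPreprocessor(VOCAB_SIZE, MAX_SEQUENCE_LENGTH)
processor.fit(train_texts)                          # fit on training data only
train_texts_vectorized = processor.transform(train_texts)
eval_texts_vectorized = processor.transform(eval_texts)

# Persist the stateful preprocessor so it can be reused at serving time.
with open('processor_state.pkl', 'wb') as f:
    pickle.dump(processor, f)

def create_model(vocab_size, embedding_dim=64, filters=64, kernel_size=3,
                 dropout_rate=0.2, pool_size=3):
    # Embedding -> Dropout -> two Conv1D/pooling blocks -> softmax output.
    model = models.Sequential([
        layers.Embedding(vocab_size, embedding_dim,
                         input_length=MAX_SEQUENCE_LENGTH),
        layers.Dropout(dropout_rate),
        layers.Conv1D(filters, kernel_size, activation='relu'),
        layers.MaxPooling1D(pool_size),
        layers.Conv1D(filters * 2, kernel_size, activation='relu'),
        layers.GlobalAveragePooling1D(),
        layers.Dense(NUM_CLASSES, activation='softmax'),
    ])
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer='adam', metrics=['acc'])
    return model

model = create_model(VOCAB_SIZE)
model.fit(train_texts_vectorized, train_labels,
          epochs=5, batch_size=128,
          validation_data=(eval_texts_vectorized, eval_labels))
model.save('keras_saved_model.h5')
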
Implementing a custom model prediction class

In order to apply a custom prediction routine that includes preprocessing and post-processing, you need to wrap this logic in a custom model prediction class. This class, along with the trained model and the saved preprocessing object, will be used to deploy the AI Platform Serving microservice. The custom model prediction class (CustomModelPrediction) for our text classification example is implemented in the model_prediction.py module.

Note the following points in the custom model prediction class implementation:

from_path is a classmethod, responsible for loading both the model and the preprocessing object from their saved files, and instantiating a new CustomModelPrediction object with the loaded model and preprocessor object (which are both stored as attributes of the object).

predict is the method invoked when you call the predict API of the deployed AI Platform Serving model. The method does the following:

Receives the instances (list of titles) for which the prediction is needed.
Prepares the text data for prediction by applying the transform() method of the "stateful" self._processor object.
Calls self._model.predict() to produce the predicted class probabilities, given the prepared text.
Post-processes the output by calling the _postprocess method.

_postprocess is the method that receives the class probabilities produced by the model, picks the label index with the highest probability, and converts this label index to a human-readable label: github, nytimes, or techcrunch.

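Putting those pieces together, a sketch of model_prediction.py along these lines (the full version is in the notebook; the label ordering shown here is illustrative and must match the encoding used during training) might look like this:

import os
import pickle

import numpy as np

class CustomModelPrediction(object):

    def __init__(self, model, processor):
        # Both objects are loaded once in from_path and kept as attributes.
        self._model = model
        self._processor = processor
        self._class_names = ['github', 'nytimes', 'techcrunch']  # illustrative order

    def _postprocess(self, probabilities):
        # Convert class probabilities to the most probable human-readable label.
        label_indexes = np.argmax(probabilities, axis=1)
        return [self._class_names[index] for index in label_indexes]

    def predict(self, instances, **kwargs):
        # instances is the list of raw titles sent by the caller.
        preprocessed_data = self._processor.transform(instances)
        probabilities = self._model.predict(preprocessed_data)
        return self._postprocess(probabilities)

    @classmethod
    def from_path(cls, model_dir):
        import tensorflow.keras as keras
        model = keras.models.load_model(
            os.path.join(model_dir, 'keras_saved_model.h5'))
        with open(os.path.join(model_dir, 'processor_state.pkl'), 'rb') as f:
            processor = pickle.load(f)
        return cls(model, processor)

AI Platform Serving calls from_path once to load the artifacts, then invokes predict for each request, so the preprocessing always runs server-side with the exact state produced during training.
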
Deploying to AI Platform Serving

Figure 1 shows an overview of how to deploy the model, along with its required artifacts for a custom prediction routine, to AI Platform Serving.

Uploading the artifacts to Cloud Storage

The first thing you want to do is upload your artifacts to Cloud Storage.

First, you need to upload:

Your saved (trained) model file: keras_saved_model.h5 (see the Training a Keras model section).
Your pickled (serialized) preprocessing object (which contains the state needed for data transformation prior to prediction): processor_state.pkl (see the Preprocessing text section). Remember, this object includes the tokenizer generated from the training data.

Second, upload a Python package including all the classes you need for prediction (e.g., the preprocessing, model, and post-processing classes). In this example, you need to create a pip-installable tar with model_prediction.py and preprocess.py. First, create the setup.py file for the package.

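A minimal setup.py for this purpose, with an illustrative package name and version, could look like this:

from setuptools import setup

setup(
    name='text_classification',   # illustrative package name
    version='0.1',                # illustrative version
    scripts=['preprocess.py', 'model_prediction.py'],
    install_requires=[]           # list any extra dependency libraries here
)

As noted earlier, any additional dependency libraries your prediction code needs would be declared here as well.
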
Now, generate the package (for example, by running python setup.py sdist). This creates a .tar.gz package under a new dist/ directory in your working directory. The name of the package will be $name-$version.tar.gz, where $name and $version are the ones specified in setup.py.

Once you have successfully created the package, you can upload it to Cloud Storage (for example, using gsutil).

Deploying the model to AI Platform Serving

Let's define the model name, the model version, and the AI Platform Serving runtime (which corresponds to a TensorFlow version) required to deploy the model.

First, create a model in AI Platform Serving using the gcloud command-line tool.

Second, create a model version with gcloud, specifying the location of the model and preprocessing object (--origin), the location of the package(s) including the scripts needed for your prediction (--package-uris), and a pointer to your custom model prediction class (--prediction-class). This should take one to two minutes.

Calling the deployed model for online predictions

After deploying the model to AI Platform Serving, you can invoke the model for prediction using the following code:

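A sketch of such a call using the Google API Client Library for Python follows; the project, model, and version names, as well as the example titles, are placeholders rather than the values used in the notebook:

from googleapiclient import discovery

service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(
    'YOUR_PROJECT', 'YOUR_MODEL_NAME', 'YOUR_VERSION_NAME')

# Raw titles are sent as-is: preprocessing happens inside the deployed
# custom prediction class, not on the caller's side.
request_data = {
    'instances': [
        'A new open source framework was released on GitHub today',
        'Stock markets rally after the latest jobs report'
    ]
}

response = service.projects().predict(name=name, body=request_data).execute()

if 'error' in response:
    raise RuntimeError(response['error'])
print(response['predictions'])
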
Given the titles defined in the request object in the notebook, the predicted source of each title from the deployed model would be as follows: [techcrunch, techcrunch, techcrunch, nytimes, nytimes, nytimes, github, github, techcrunch]. Note that the last one was misclassified by the model.

Conclusion

In this tutorial, we built and trained a text classification model using Keras to predict the source media of a given article. The model required text preprocessing operations for preparing the training data, and for preparing the incoming requests to the model deployed for online predictions. Then, we showed you how to deploy the model to AI Platform Serving with custom online prediction code, in order to perform preprocessing on the incoming prediction requests and post-processing on the prediction outputs. Enabling a custom online prediction routine in AI Platform Serving allows for affinity between the preprocessing logic, the model, and the post-processing logic required to handle prediction requests end-to-end. This helps to avoid training-serving skew, and simplifies deploying ML models for online prediction.

Thanks for following along. If you're curious to try out some other machine learning tasks on GCP, take this specialization on Coursera. If you want to try out these examples for yourself in a local environment, run this Notebook. Send a tweet to @GCPcloud if there's anything we can change or add to make text analysis even easier on Google Cloud.

Acknowledgements

We would like to thank Lak Lakshmanan, Technical Lead, Machine Learning and Big Data in Google Cloud, for reviewing and improving the blog post.

Source: Google Cloud Platform

Dear Spark developers: Welcome to Azure Cognitive Services

This post was co-authored by Mark Hamilton, Sudarshan Raghunathan, Chris Hoder, and the MMLSpark contributors.

Integrating the power of Azure Cognitive Services into your big data workflows on Apache Spark™

Today at Spark AI Summit 2019, we're excited to introduce a new set of models in the SparkML ecosystem that make it easy to leverage Azure Cognitive Services at terabyte scales. With only a few lines of code, developers can embed Cognitive Services within their existing distributed machine learning pipelines in SparkML. Additionally, these contributions allow Spark users to chain or pipeline services together with deep networks, gradient-boosted trees, and any SparkML model, and apply these hybrid models in elastic and serverless distributed systems.

From image recognition and object detection to speech recognition, translation, and text-to-speech, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario. To date, more than a million developers have already discovered and tried Cognitive Services to accelerate breakthrough experiences in their applications.

Azure Cognitive Services on Apache Spark™

Cognitive Services on Spark enable working with Azure’s Intelligent Services at massive scales with the Apache Spark™ distributed computing ecosystem. The Cognitive Services on Spark are compatible with any Spark 2.4 cluster such as Azure Databricks, Azure Distributed Data Engineering Toolkit (AZTK) on Azure Batch, Spark in SQL Server, and Spark clusters on Azure Kubernetes Service. Furthermore, we provide idiomatic bindings in PySpark, Scala, Java, and R (Beta).

Cognitive Services on Spark allows users to embed general-purpose, continuously improving intelligent models directly into their Apache Spark™ and SQL computations. This contribution aims to liberate developers from low-level networking details, so they can focus on creating intelligent, distributed applications. Each Cognitive Service is a SparkML transformer, so users can add services to existing SparkML pipelines. We also introduce a new type of API to the SparkML framework that allows users to parameterize models by either a single scalar or a column of a distributed Spark DataFrame. This API yields a succinct yet powerful fluent query language that offers fully distributed parameterization without clutter. For more information, check out our session.

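As a rough illustration of what this looks like in PySpark, the snippet below runs the Text Analytics sentiment service over a DataFrame column; the import path and setter names follow MMLSpark's documentation but vary across MMLSpark/SynapseML versions, so treat those details as assumptions rather than a drop-in example:

from pyspark.sql import SparkSession
from mmlspark.cognitive import TextSentiment  # module path varies by MMLSpark version

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    ("I loved the keynote at Spark AI Summit!",),
    ("The demo cluster kept crashing.",),
], ["text"])

sentiment = (TextSentiment()
    .setSubscriptionKey("YOUR_COGNITIVE_SERVICES_KEY")  # placeholder key
    .setLocation("eastus")                              # region of your resource
    .setTextCol("text")
    .setOutputCol("sentiment")
    .setErrorCol("error"))
# When using a cognitive service container instead of the cloud endpoint,
# you would point the transformer at the container's URL (e.g., via setUrl).

# Each Cognitive Service is a SparkML Transformer, so scoring is just transform().
sentiment.transform(df).show(truncate=False)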

Use Azure Cognitive Services on Spark in these 3 simple steps:

Create an Azure Cognitive Services Account
Install MMLSpark on your Spark Cluster
Try our example notebook

Low-latency, high-throughput workloads with the cognitive service containers

The Cognitive Services on Spark are compatible with services from any region of the globe; however, many scenarios require low or no connectivity and ultra-low latency. To tackle these with the Cognitive Services on Spark, we have recently released several cognitive services as Docker containers. These containers enable running cognitive services locally or directly on the worker nodes of your cluster for ultra-low latency workloads. To make it easy to create Spark clusters with embedded cognitive services, we have created a Helm chart for deploying a Spark cluster onto the popular container orchestration platform Kubernetes. Simply point the Cognitive Services on Spark at your container's URL to go local!

Add any web service to Apache Spark™ with HTTP on Spark

The Cognitive Services are just one example of using networking to share software across ecosystems. The web is full of HTTP(S) web services that provide useful tools and serve as one of the standard patterns for making your code accessible in any language. Our goal is to allow Spark developers to tap into this richness from within their existing Spark pipelines.

To this end, we present HTTP on Spark, an integration between the entire HTTP communication protocol and Spark SQL. HTTP on Spark allows Spark users to leverage the parallel networking capabilities of their cluster to integrate any local, Docker, or web service. At a high level, HTTP on Spark provides a simple and principled way to integrate any framework into the Spark ecosystem.

With HTTP on Spark, users can create and manipulate their requests and responses using SQL operations, maps, reduces, filters, and any tools from the Spark ecosystem. When combined with SparkML, users can chain services together and use Spark as a distributed micro-service orchestrator. HTTP on Spark provides asynchronous parallelism, batching, throttling, and exponential back-offs for failed requests so that you can focus on the core application logic.
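
HTTP on Spark's own client API isn't shown here; as a plain-PySpark approximation of the underlying idea (distributing HTTP calls across the cluster's workers), you could wrap an ordinary HTTP client in a UDF, as sketched below against a hypothetical scoring service. HTTP on Spark layers the batching, throttling, and retry behavior described above on top of this basic pattern:

import requests
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

SERVICE_URL = "http://localhost:8080/score"  # hypothetical web service

@udf(returnType=StringType())
def call_service(text):
    # Runs on the workers, so requests are issued in parallel across the
    # cluster; retries and back-off would need to be handled manually here.
    response = requests.post(SERVICE_URL, json={"text": text}, timeout=10)
    response.raise_for_status()
    return response.text

df = spark.createDataFrame([("spark is great",), ("hello web services",)], ["text"])
df.withColumn("score", call_service(df["text"])).show(truncate=False)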

Real world examples

The Metropolitan Museum of Art

At Microsoft, we use HTTP on Spark to power a variety of projects and customers. Our latest project uses the Computer Vision APIs on Spark and Azure Search on Spark to create a searchable database of art for The Metropolitan Museum of Art (The MET). More specifically, we load The MET's Open Access catalog of images, and use the Computer Vision APIs to annotate these images with searchable descriptions in parallel. We also used CNTK on Spark and SparkML's Locality Sensitive Hashing implementation to featurize these images and create a custom reverse image search engine. For more information on this work, check out our AI Lab or our GitHub.

The Snow Leopard Trust

We partnered with the Snow Leopard Trust to help track and understand the endangered snow leopard population using the Cognitive Services on Spark. We began by creating a fully labelled training dataset for leopard classification by pulling snow leopard images from Bing on Spark. We then used CNTK and TensorFlow on Spark to train a deep classification system. Finally, we interpreted our model using LIME on Spark to refine our leopard classifier into a leopard detector without drawing a single bounding box by hand! For more information, you can check out our blog post.

Conclusion

With only a few lines of code you can start integrating the power of Azure Cognitive Services into your big data workflows on Apache Spark™. The Spark bindings offer high throughput and run anywhere you run Spark. The Cognitive Services on Spark fully integrate with containers for high-performance, on-premises, or low-connectivity scenarios. Finally, we have provided a general framework for working with any web service on Spark. You can start leveraging the Cognitive Services for your project with our open source initiative MMLSpark on Azure Databricks.

Learn more

Email: mmlspark-support@microsoft.com
Source: Azure

5 data security techniques that help boost consumer confidence

These days, it seems like hardly any time passes between headlines about the most recent data breach. Consider the revelation in late September that a security intrusion exposed the accounts of more than 50 million Facebook users.
For that matter, not much time goes by without a new survey or study that confirms the difficulty of data security. Forbes recently reported that US businesses and government agencies suffered 668 million security intrusions and data breaches in the first half of 2018 alone. It’s no wonder consumers have little faith in organizations’ abilities to protect their data. Only 20 percent of US consumers completely trust organizations to keep their data private.
No business is immune to data breaches, but that doesn’t mean you can’t do everything in your power to prevent them. By taking proven, sensible measures to ensure data security, your enterprise will not only tighten its defenses, but also promote trust among customers.
Here are five steps your organization can take that will demonstrate to consumers that you’re committed to data security.
1. Encrypt sensitive information.
Many industry regulations require certain data be encrypted, but it wouldn’t hurt if your organization considered safeguarding other types of data too. Almost anything can be encrypted. There are the obvious resources: email, SMS messages, user names, passwords and databases. Other sensitive data, such as intellectual property and the personal data of customers and employees, can also be encrypted.
Before considering encryption, review whether a particular type of data would cause financial harm and reputational damage to your organization if someone exposed and manipulated it. Encryption isn't foolproof, especially if the encryption key falls into the wrong hands, but it is a first-line security step that can show customers you take these matters seriously.
2. Optimize backup and recovery.
Most enterprises have data backup and recovery plans and likely rely on some form of disaster recovery (DR) technology, whether it’s offsite servers or a cloud service. But is it effective enough to boast about? An organization can’t make any stated commitment to protecting customers’ data if it’s at risk of losing it.
Because cyber incidents usually happen without notice and can go undetected for days, weeks or even longer, it’s critical to restore data to its clean, pre-breach condition. It’s a complicated process, but cutting-edge, purpose-built resiliency technologies can automatically recover data to its correct state and enable enterprises to find their footing quickly after a breach.
3. Promote compliance and transparency.
This year, organizations around the world started abiding by the General Data Protection Regulation (GDPR), a European Union standard for the handling of customer data. The GDPR essentially puts the power in consumers’ hands, enabling them to control how their data is stored and managed. It’s a thorough and detailed mandate for any organization, no matter where it’s based, to properly handle European citizens’ data.
Companies that comply with GDPR should use this compliance to their advantage by promoting how they collect, use and store consumer data. Asking users to review privacy settings or agree to a laundry list of new standards won’t effectively relay the steps you’re taking on their behalf. Instead, organizations should separately promote the many ways they follow GDPR and other compliance standards in easily consumable marketing materials. This will show customers that the organization is serious about its commitment to protecting personal information.
4. Consider cyber insurance.
In its annual study on the expenses of cybercrime, Ponemon estimates that the global average cost of a data breach has increased 6.4 percent over last year, climbing to an average of $3.86 million in 2018. Those high costs have prompted many businesses to view cyber risk insurance as a critical investment.
Businesses that want the support of insurance should look for a policy that covers common reimbursable expenses. These might include a forensics examination to review the data breach, as well as monetary losses from business interruption, crisis management costs, legal expenses and regulatory fines. Hopefully, your enterprise won’t face many of those costs, but cybercrime is unpredictable. The peace of mind that insurance can provide you and your customers is worth the cost.
5. Work with a data security expert.
It's not easy deciding which of the many available technologies and data security management strategies will work best for your organization. With regulations such as GDPR raising expectations, don't take any chances with customer data. Work with a data security expert who knows the lay of the land and already has insight on potential changes that would affect how you safeguard information.
Customers have a seemingly endless array of options to choose from on the digital market, so you might get only one chance with each consumer. Win their loyalty by demonstrating how you can expertly handle and preserve their data.
Learn about more ways IBM can help your organization secure its cloud platforms by registering for the guide to securing cloud platforms.
Source: Thoughts on Cloud