Data Factory supports multiple web service inputs for Azure ML Batch Execution

For orchestrating workloads on Azure ML (Machine Learning) batch execution web services, Azure Data Factory provides a built-in activity: the Azure ML Batch Execution activity. Customers can use this activity to operationalize their ML models at scale.

A little while ago, Azure ML added support for multiple Web Service Inputs in a given experiment. Consequently, customers have been looking to use this capability through Azure Data Factory. Data Factory now supports configuring the ML Batch Execution activity to pass multiple Web Service Inputs to the ML web service.

Suppose you have an Azure ML experiment that accepts more than one Web Service Input.

Note the names of the Web Service Input modules you created, as you must use these names when specifying the inputs in your Data Factory pipeline. The name can be found in the Properties pane of the module. By default, the first Web Service Input module you create is named “input1,” the next one “input2,” and so on. If you rename the modules, be sure to update the names in the webServiceInputs property in your Data Factory pipeline accordingly.

In your Azure Data Factory pipeline, use the new webServiceInputs property instead of the existing webServiceInput property to specify the inputs to your experiment:

"typeproperties":
{
"webServiceInputs":
{
"trainingData": "NameOfInputDataset1",
"scoringData": "NameOfInputDataset2"
},
"webServiceOutputs":
{
"output1": "NameOfOutputDataset"
},
"globalParameters": {}
}

For more information on the Azure ML Batch Execution activity in Azure Data Factory, refer to this documentation page.

If you have any feedback on the above capabilities, please reach out via the Azure Data Factory User Voice or the MSDN Forums. We are eager to hear from you!
Source: Azure
