How to build a conversational app using Cloud Machine Learning APIs, Part 2

By Chang Luo and Bob Liu, Software Engineers

In part 1 of this blog post, we gave you an overview of what a conversational tour guide iOS app might look like when built on Cloud Machine Learning APIs and API.AI. We also demonstrated how to create API.AI intents and contexts. In part 2, we’ll discuss an advanced API.AI topic: webhooks with Cloud Functions. We’ll also show you how to use the Cloud Machine Learning APIs (Vision, Speech and Translation) and how to support a second language.

Webhooks via Cloud Functions 
In API.AI, webhook integrations let you pass information from a matched intent to a web service and return a result to the user. Read on to learn how to request parade information from Cloud Functions.
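Under the hood, API.AI sends the matched intent, its parameters and its contexts to your webhook as a JSON POST request, and expects a JSON reply whose speech and displayText fields become the bot’s answer. Here’s a minimal sketch of such a reply (the actions-on-google library we use below builds this for you, so you won’t write it by hand; the source value is just illustrative):

{
  "speech": "Chinese New Year Parade in Chinatown from 6pm to 9pm.",
  "displayText": "Chinese New Year Parade in Chinatown from 6pm to 9pm.",
  "source": "parades-webhook"
}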

Go to console.cloud.google.com. Log in with your own account and create a new project. 

Once you’ve created a new project, navigate to that project. 
Enable the Cloud Functions API. 

Create a function. For the purposes of this guide, we’ll call the function “parades”. Select the “HTTP” trigger option, then select “inline” editor. 

Don’t forget to set the “function to execute” field to “parades”.

You’ll also need to create a “stage bucket”. Click on “browse” — you’ll see the browser, but no buckets will exist yet. 

Click on the “+” button to create the bucket.

Specify a unique name for the bucket (you can use your project name, for instance), select “regional” storage and keep the default region (us-central1).
Click back on the “select” button in the previous window.
Click the “create” button to create the function.

The function will be created and deployed.

Click the “parades” function line. In the “source” tab, you’ll see the sources. 

Now it’s time to code our function! We’ll need two files: “index.js” contains the JavaScript / Node.js logic, and “package.json” contains the Node package definition, including the dependencies our function needs.

Here’s our package.json file. It depends on the actions-on-google npm module, which eases integration with API.AI and with Actions on Google, the platform that lets you extend the Google Assistant with your own extensions (usable from Google Home):

{
  "name": "parades",
  "version": "0.0.1",
  "main": "index.js",
  "dependencies": {
    "actions-on-google": "^1.1.1"
  }
}

In the index.js file, here’s our code:

// Import the ApiAiApp class from the actions-on-google module.
const ApiAiApp = require('actions-on-google').ApiAiApp;

// Handler for the "inquiry.parades" intent.
function parade(app) {
  app.ask(`Chinese New Year Parade in Chinatown from 6pm to 9pm.`);
}

// HTTP Cloud Function entry point, exported as "parades".
exports.parades = function(request, response) {
  var app = new ApiAiApp({request: request, response: response});
  var actionMap = new Map();
  // Map the API.AI intent's action name to its handler.
  actionMap.set("inquiry.parades", parade);
  app.handleRequest(actionMap);
};

In the code snippets above: 

We require the actions-on-google NPM module. 
We use the ask() method to let the assistant send a result back to the user. 
We export a function where we’re using the actions-on-google module’s ApiAiApp class to handle the incoming request. 
We create a map that maps “intents” from API.AI to a JavaScript function. 
Then we call handleRequest() to handle the request. 
Once you’re done, don’t forget to click the “create” button, which deploys the function to the cloud. 

There’s a subtle difference between the tell() and ask() APIs: tell() ends the conversation and closes the mic, while ask() does not. This difference doesn’t matter for API.AI projects like the one we demonstrate in part 1 and part 2 of this blog post. When we integrate Actions on Google in part 3, we’ll explain this difference in more detail. 
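For example, once the app is integrated with Actions on Google, a handler that should answer and then close the mic would call tell() instead of ask(). Here’s a minimal sketch; both methods come from the same ApiAiApp class we already use:

// Variant of the parade handler that closes the conversation after answering.
function paradeAndClose(app) {
  // tell() sends the answer and ends the conversation; ask() would keep it open.
  app.tell(`Chinese New Year Parade in Chinatown from 6pm to 9pm.`);
}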

In the Cloud Functions console, the “testing” tab invokes your function, the “general” tab shows statistics, and the “trigger” tab reveals the HTTP URL created for your function. 

Your final step is to go to the API.AI console, then click the Fulfillment tab. Enable webhook and paste the URL above into the URL field. 

With API.AI, we’ve built a chatbot that can converse with a human by text. Next, let’s give the bot “ears” to listen with Cloud Speech API, “eyes” to see with Cloud Vision API, a “mouth” to talk with the iOS text-to-speech SDK and “brains” for translating languages with Cloud Translation API.

Using Cloud Speech API 

Cloud Speech API includes an iOS sample app. It’s quite straightforward to integrate the gRPC non-streaming sample app into our chatbot app. You’ll need to acquire an API key from Google Cloud Console and replace this line in SpeechRecognitionService.m with your API key.

#define API_KEY @"YOUR_API_KEY"

Landmark detection 

Follow this example to use the Cloud Vision API on iOS. You’ll need to replace the label and face detection with landmark detection, as shown below. You can use the same API key you used for the Cloud Speech API.

NSDictionary *paramsDictionary =
  @{@"requests":@[
      @{@"image":
          @{@"content":binaryImageData},
        @"features":@[
          @{@"type":@"LANDMARK_DETECTION", @"maxResults":@1}]}]};

Text to speech
iOS 7+ has a built-in text-to-speech SDK, AVSpeechSynthesizer. The code below is all you need to convert text to speech.

#import <AVFoundation/AVFoundation.h>
AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:message];
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
[synthesizer speakUtterance:utterance];

Supporting multiple languages

Supporting additional languages in Cloud Speech API is a one-line change on the iOS client side. (Currently, there’s no support for mixed languages.) For Chinese, replace this line in SpeechRecognitionService.m: 

recognitionConfig.languageCode = @"en-US";

with

recognitionConfig.languageCode = @"zh-Hans";

To support additional text-to-speech languages, set a voice on the utterance by adding the utterance.voice line shown below to the earlier snippet:

#import <AVFoundation/AVFoundation.h>
AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:message];
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"zh-Hans"];
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
[synthesizer speakUtterance:utterance];

Both the Cloud Speech API and Apple’s AVSpeechSynthesisVoice support BCP-47 language codes.

Cloud Vision API landmark detection currently only supports English, so you’ll need to use the Cloud Translation API to translate to your desired language after receiving the English-language landmark description. (You would use Cloud Translation API similarly to Cloud Vision and Speech APIs.) 
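For reference, here’s a sketch of what that translation request could look like on iOS, in the same style as the Vision request above. It assumes you POST this dictionary as JSON to the Cloud Translation API endpoint (https://translation.googleapis.com/language/translate/v2) with the same API key, and that landmarkDescription is a placeholder variable holding the English description returned by landmark detection; the translated text comes back under data.translations in the response:

NSDictionary *paramsDictionary =
  @{@"q": landmarkDescription,  // English landmark description to translate (placeholder variable)
    @"source": @"en",
    @"target": @"zh"};          // target language: Chinese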

On the API.AI side, you’ll need to create a new agent and set its language to Chinese. One agent can support only one language. If you try to use the same agent for a second language, machine learning won’t work for that language. 

You’ll also need to create all intents and entities in Chinese. 

And you’re done! You’ve just built a simple “tour guide” chatbot that supports English and Chinese.

Next time 

We hope this example has demonstrated how simple it is to build an app powered by machine learning. For more getting-started info, you might also want to try:

Cloud Speech API Quickstart 
Cloud Vision API Quickstart 
Cloud Translation API Quickstart 
API.AI Quickstart

You can download the source code from GitHub.

In part 3, we’ll cover how to build this app on Google Assistant with Actions on Google integration.

Source: Google Cloud Platform

Mirantis Launches Industry-First Course for Certified Kubernetes Administrator Exam

SUNNYVALE, Calif., Aug. 16, 2017 (GLOBE NEWSWIRE) — Mirantis today announced Kubernetes and Docker Bootcamp II, the first course available for Kubernetes users to train for the Certified Kubernetes Administrator exam (CKA). The CKA exam, announced in June by the Cloud Native Computing Foundation (CNCF), is still in Beta and expected to launch in September 2017.

The Kubernetes and Docker Bootcamp II (KD200) course guides students in detail through the topics covered by the exam, and helps ensure they are fully prepared to successfully achieve their CKA certification. This advanced Docker and Kubernetes course is a continuation of Kubernetes and Docker Bootcamp (KD100), the first vendor-agnostic Kubernetes and Docker training, announced in December 2016. KD200 is designed for deployment engineers and cloud administrators who want to acquire complete knowledge in using Kubernetes for deploying and managing containerized applications, and when combined with KD100, is the most comprehensive Kubernetes training available on the market today.

“Mirantis has provided training on open source software for years, offering career growth opportunities and giving businesses peace of mind that they are getting properly trained engineers,” said Lee Xie, senior director of Education Services, Mirantis. “This course gives Kubernetes users a chance to gain essential hands-on experience and expert guidance before taking the CKA exam.”

As one of the fastest-growing open source projects, Kubernetes use is expected to explode as companies increasingly evolve towards cloud-native software development. This course and certification ensures enterprises feel more secure when hiring a certified partner or developer. Cloud computing skills have progressed from being niche to mainstream as the world’s most in-demand skill set. The OpenStack User Survey shows Kubernetes taking the lead as the top Platform-as-a-Service (PaaS) tool, while 451 Research has called containers the “future of virtualization,” predicting strong container growth across on-premises, hosted and public clouds.

Mirantis has been a leader in open source training for 6 years, training more than 15,000 cloud professionals, many of them employed by Fortune 500 companies.

The first KD200 class from Mirantis will take place on October 3, 2017 in Sunnyvale and virtually, and is currently available at an introductory price.

About Mirantis
Mirantis delivers open cloud infrastructure to top enterprises using OpenStack, Kubernetes and related open source technologies. The company is a major contributor of code to many open infrastructure projects and follows a build-operate-transfer model to deliver its Mirantis Cloud Platform and cloud management services, empowering customers to take advantage of open source innovation with no vendor lock-in. To date, Mirantis has helped over 200 enterprises build and operate some of the largest open clouds in the world. Its customers include iconic brands such as AT&T, Comcast, Shenzhen Stock Exchange, eBay, Wells Fargo Bank and Volkswagen. Learn more at www.mirantis.com.

Contact information:
Joseph Eckert for Mirantis
jeckertflak@gmail.com
Source: Mirantis

Introducing Azure Event Grid – an event service for modern applications

Most modern applications are built using events – whether it is reacting to changes coming from IoT devices, responding to user clicks on mobile apps, or initiating business processes from customer requests. With the growth of event-based programming, there is an increased focus on serverless platforms, like Azure Functions, a serverless compute engine, and Azure Logic Apps, a serverless workflow orchestration engine. Both services enable you to focus on your application without worrying about any infrastructure, provisioning, or scaling.

Today, I am excited to announce that we are making event-based and serverless applications even easier to build on Azure. Azure Event Grid is a fully-managed event routing service and the first of its kind. Azure Event Grid greatly simplifies the development of event-based applications and simplifies the creation of serverless workflows. Using a single service, Azure Event Grid manages all routing of events from any source, to any destination, for any application.

Azure Event Grid is an innovative offering that makes an event a first-class object in Azure. With Azure Event Grid, you can subscribe to any event that is happening across your Azure resources and react using serverless platforms like Functions or Logic Apps. On the publishing side, in addition to built-in support for events from services like Blob Storage and Resource Groups, Event Grid lets you create your own custom events and publish them directly to the service. On the handling side, beyond the wide range of Azure services with built-in event handlers, like Functions, Logic Apps, and Azure Automation, Event Grid supports custom webhooks that can deliver events to any service, even third-party services outside of Azure. This flexibility creates endless application options and makes Azure Event Grid a truly unique service in the public cloud. 
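To make the custom-events path concrete, here’s a sketch of the kind of JSON payload you might POST to a custom topic’s endpoint (authenticated with one of the topic’s access keys). The field names follow the Event Grid event schema; the subject and data values are purely illustrative:

[{
  "id": "10001",
  "eventType": "recordInserted",
  "subject": "myapp/orders/new",
  "eventTime": "2017-08-16T01:57:26Z",
  "data": { "orderId": "10001", "total": 42.50 },
  "dataVersion": "1.0"
}]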

Here are some additional details of this new Azure service:

Events as first-class objects with intelligent filtering: Azure Event Grid enables direct event filtering by event type, prefix or suffix, so your application only receives the events you care about (see the filter sketch after this list). Whether you want to handle built-in Azure events, like a file being added to storage, or produce your own custom events and event handlers, Event Grid enables this through the same underlying model. No matter the service or the use case, the intelligent routing and filtering capabilities apply to every event scenario and ensure that your apps can focus on core business logic instead of worrying about routing events.
Built to scale: Azure Event Grid is designed to be highly available and to handle massive scale dynamically, ensuring consistent performance and reliability for your critical services.
Opens new serverless possibilities: By allowing serverless endpoints to react to new event sources, Azure Event Grid enables event-based scenarios to span new services with ease, increasing the possibilities for your serverless applications. Both code-focused applications in Functions and visual workflow applications in Logic Apps benefit from Azure Event Grid.
Lowers barriers to ops automation: The same unified event management interface enables simpler operational and security automation, including easier policy enforcement with built-in support for Azure Automation to react to VM creations or infrastructure changes.
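As an example of that filtering, here is a sketch of the filter portion of an event subscription (field names assume the Event Grid subscription schema; the container path is illustrative). This filter would deliver only BlobCreated events whose subject starts with a particular storage container:

"filter": {
  "includedEventTypes": [ "Microsoft.Storage.BlobCreated" ],
  "subjectBeginsWith": "/blobServices/default/containers/images/"
}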

Today, Azure Event Grid has built-in integration with a number of Azure services, including the event publishers and handlers mentioned above. We are working to deliver many more event sources and destinations later this year, including Azure Active Directory, API Management, IoT Hub, Service Bus, Azure Data Lake Store, Azure Cosmos DB, Azure Data Factory, and Storage Queues.

Azure Event Grid has a pay-per-event pricing model, so you only pay for what you use. Additionally, to help you get started quickly, the first 100,000 operations per month are free. Beyond that, pricing is $0.30 per million operations during the preview. More details can be found on the pricing page.
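For example, an application that generates 5.1 million events in a month would get its first 100,000 operations free and pay for the remaining 5 million, which works out to about $1.50 at the preview rate.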

Azure Event Grid completes the missing half of serverless applications. It simplifies event routing and event handling with unparalleled flexibility. I am excited about the endless possibilities!

Go ahead and give it a try. I can’t wait to see what you build. To learn more, try the quickstart.

See ya around,

Corey
Source: Azure