Cloud AI in the developer community

Editor’s note: This post features third-party projects built with AI Platform. At Google I/O on May 18, 2021, Google Cloud announced Vertex AI, a unified UI for the entire ML workflow that includes equivalent functionality from AI Platform plus new MLOps services. Most of the sample code and materials introduced in this post also apply to Vertex AI products.

Do you know about Google Developers Experts (GDEs)? The GDE program is a network of highly experienced technology experts, influencers and thought leaders who are passionate about sharing their knowledge and experience with fellow developers. Among the many GDEs specializing in various Google technologies, ML (Machine Learning) GDEs have been especially active across the globe, so we would like to share some of the great demos, samples and blog posts these ML GDEs have recently published for learning Cloud AI technologies. If you are interested in becoming an ML GDE, see the bottom of this article for how to apply.

Try the live demo: learn how to train and serve scikit-learn models

Victor Dibia created a great live demo, NYC Taxi Trip Advisor, with Cloud AI tools, and anyone can try it out. Choose a starting point and a destination (e.g. from JFK Airport to Central Park), and the tool shows a predicted trip time and fare using a multitask ML model built with scikit-learn.

Live demo: NYC Taxi Trip Advisor

In the notebooks published on the GitHub repo, Victor explains how he designed the demo with Vertex AI Notebooks, Prediction and App Engine, including the process of downloading the training data, preprocessing it, training the ML models (Random Forest and MLP) with scikit-learn, deploying them to Prediction and serving them with App Engine. The repo will be improved to further fine-tune the user experience and the underlying ML models (e.g. use of a Bayesian prediction model that allows for principled measures of uncertainty).

Systems architecture

Visual sanity checks on the MLP model predictions.

AutoML + Notebooks + BigQuery = fast and efficient ML

Minori Matsuda published a blog post, Empowering Google Cloud AI Platform Notebooks by powerful AutoML, in which he explains how you can integrate Vertex AI Notebooks and AutoML Tables with BigQuery using the New York City taxi trips public dataset. He says: “Combining these, we can quickly implement efficient iterations of feature engineering, modeling, evaluation, and prediction to increase the accuracy.”

In the post, Minori explains how AutoML technology works, using Model Search, which Google published recently. “The article says the concept of model search uses greedy beam search across multiple trainers (even trying RNNs such as LSTMs), tunes the depth of the layers and the connections, and eventually builds ensembles. It finally creates a model written in TensorFlow.” Minori actually tries out the framework and shows how it works in a video:

Model Search trial by Minori Matsuda

Minori also points out that one of the easiest ways to create an AutoML model from a dataset on BigQuery is to use BigQuery ML on Vertex AI Notebooks.

Creating an AutoML Tables model from BigQuery ML on Vertex AI Notebooks

This is a great example of an integrated solution you can compose with the powerful platform and services on Google Cloud.

Video tutorials on Google Cloud AI platform and services

Srivatsan Srinivasan has been posting a great series of videos on YouTube, Artificial Intelligence on Google Cloud Platform, with sample code. One of those videos features a telecom churn prediction use case in which he trains an XGBoost model and deploys it to Vertex AI Prediction. This is not only sample code but also great online learning content.
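As a rough illustration of the custom-predictor pattern the video walks through, here is a minimal sketch of the Predictor interface that AI Platform Prediction expects for custom prediction routines. The class name, model file name and feature layout are assumptions for illustration, not taken from Srivatsan’s code:

```python
import os
import pickle


class ChurnPredictor:
    """Minimal custom prediction routine for a pickled classifier,
    following the predict()/from_path() interface used by AI Platform
    Prediction custom predictors."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # `instances` is a list of feature lists sent by the client;
        # return plain Python floats so the response is JSON-serializable.
        probabilities = self._model.predict_proba(instances)
        return [float(p[1]) for p in probabilities]

    @classmethod
    def from_path(cls, model_dir):
        # AI Platform calls this with the directory the model
        # artifacts were copied to from Cloud Storage.
        with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
            model = pickle.load(f)
        return cls(model)
```

In the custom prediction routine workflow, a class like this is packaged as a source distribution and referenced when the model version is created; consult the video and the AI Platform documentation for the exact deployment steps.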
The video includes introductions to the following concepts:

- Google Cloud Vertex AI overview
- Creating a Cloud AI Notebook instance
- Developing your first ML model on Google Cloud
- Creating a custom predictor for inference
- Bundling dependencies for deployment
- Deploying a model on Vertex AI Prediction
- Cloud Storage
- Feature importance with the XGBoost model

In addition to Google Cloud AI Platform and AI Platform Prediction, the video tutorials cover:

- Deploying models on Google Cloud Run, App Engine and GKE
- BigQuery ML
- Cloud AutoML Vision
- Speech-to-Text
- MLOps on Google Cloud

Distributed Training in TensorFlow with AI Platform and Docker

Last April, Sayak Paul posted a full-fledged piece, Distributed Training in TensorFlow with AI Platform & Docker. He starts with: “Operating with a Jupyter Notebook environment can get very challenging if you are working your way through large-scale training workflows as is common in deep learning.” He uses AI Platform and Docker to solve this problem, providing a training workflow that is fully managed by a secure and reliable service with high availability.

Sayak says: “While developing this workflow, I considered the following aspects for services I used to develop the workflow:”

- The service should automatically provision and deprovision the resources we ask it to configure, so we only get charged for what has truly been consumed.
- The service should also be very flexible.
It must not introduce too much technical debt into our existing pipelines.

In the post, he explains the end-to-end process, starting with designing the data pipeline that takes images of cats and dogs and converts them to TFRecords stored on Cloud Storage.

Data pipeline with TensorFlow

His published repository also contains all the code required to implement the workflow, with rich documentation explaining how those files are organized and packaged in a Docker container to be submitted to AI Platform Training.

Dockerfile for the container packaging

Training logs on Cloud Logging

If you are a TensorFlow user, Sayak’s post may be the best way to learn what benefits you can get from AI Platform and how to get started with actual sample code.

SNS curation with AI Platform + GKE

Chansung Park’s project, Curated Personal Newsletter, is a great sample with a working demo app and source code that aims at “collecting all the posts from one’s SNS wall (including personal note/shared/retweeted), then it will send an automatically curated periodic newsletter”.

The system combines AI Platform Training and Prediction with Google Kubernetes Engine to build an end-to-end MLOps pipeline for continuous training and deployment whenever a new version of the data or the model code is integrated.

Systems architecture

Although the project is still in development, it is a useful example of an end-to-end ML pipeline built with various Google Cloud services. Chansung also published a great write-up on MLOps in Google Cloud, which helps in understanding how you can build a production ML pipeline with various Cloud AI tools.
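The continuous-training idea behind such a pipeline can be sketched in a few lines: kick off a new training run only when the data or the model code has actually changed. The following is a hypothetical illustration of that check, not Chansung’s implementation; in his project the equivalent logic is wired into the GKE-based pipeline:

```python
import hashlib


def fingerprint(chunks):
    """Hash an iterable of byte chunks (e.g. dataset shards or source files)
    into a single hex digest identifying this data + code version."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()


def should_retrain(last_fingerprint, data_chunks, code_chunks):
    """Return (trigger, current_fingerprint): trigger a new training job
    only when the combined data + code fingerprint differs from the one
    behind the currently deployed model."""
    current = fingerprint(list(data_chunks) + list(code_chunks))
    return current != last_fingerprint, current
```

In a real pipeline the fingerprint would be stored alongside the deployed model version, and a positive trigger would submit an AI Platform Training job rather than just return a boolean.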
Next steps

If you are interested in joining a community near you, please check the Google Cloud community page for information on meetups, tutorials and discussions. If you share the same passion for sharing your Cloud AI knowledge and experience with fellow developers and are interested in joining the ML GDE network, please check the GDE Program website, watch the ML GDE Program intro video, and send an email to cloudai-gde@google.com with your introduction and relevant activity information.

Related article: Google Cloud unveils Vertex AI, one platform, every ML tool you need. Google Cloud launches Vertex AI, a managed platform for experimentation, versioning and deploying ML models into production.
Source: Google Cloud Platform

A Big Thank You to Our DockerCon Live 2021 Sponsors

With DockerCon just a day away, let’s not forget to give a big THANK YOU to all our sponsors.

As our ecosystem partners, they play a central role in our strategy to deliver the best developer experience from local desktop to cloud, and/or to offer best-in-class solutions to help you build apps faster, easier and more securely. Translation: We couldn’t do what we do without them.

So be sure to visit their virtual rooms and special sessions at DockerCon this Thursday, May 27. With more than 20 Platinum, Gold or Silver sponsors this year, you’ll have plenty to choose from.

For example, check out AWS’s virtual room and the session with AWS Principal Technologist Massimo Re Ferrè at 3:15 p.m.-3:45 p.m. PDT.

And check out Microsoft’s virtual room and any of the three sessions it’s offering: How to Package DevOps Tools Using Docker Containers (3:45 p.m.-4:15 p.m.), Container-Based Development with Visual Studio Code (4:15 p.m.-4:45 p.m.), and Supercharging Machine Learning Development with Azure Machine Learning and Containers in VS Code! (4:45 p.m.-5:15 p.m.).

Or there’s Mirantis’ virtual room and their two sessions: A Day in the Life of a Developer: Moving Code From Development to Production Without Losing Control (11:15 a.m.-11:45 a.m.), and a theCUBE interview with Mirantis CEO Adrian Ionel.

Responsibility for cloud-native security is increasingly shifting toward developers and DevOps teams, and as our ability to easily integrate security into our pipelines has grown, so has the amount of security information these teams are expected to parse. Join Snyk Senior Developer Advocate Matt Jarvis (1:00 p.m.-1:30 p.m.) for a session covering this important topic.

No matter which of their sessions you choose to attend, you’ll learn something new about modern application delivery in a cloud-native world. Register for DockerCon today at https://dockr.ly/2PSJ7vn.

DockerCon Live 2021

Join us for DockerCon Live 2021 on Thursday, May 27. DockerCon Live is a free, one day virtual event that is a unique experience for developers and development teams who are building the next generation of modern applications. If you want to learn about how to go from code to cloud fast and how to solve your development challenges, DockerCon Live 2021 offers engaging live content to help you build, share and run your applications. Register today at https://dockr.ly/2PSJ7vn
The post A Big Thank You to Our DockerCon Live 2021 Sponsors appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

AWS IoT Device Management releases job templates in preview, making the deployment of fleet-wide remote actions faster, easier and more secure

AWS IoT is pleased to announce the public preview of job templates for AWS IoT Device Management Jobs. Job templates make deploying remote actions faster, easier and more secure. IoT developers and fleet administrators can predefine the remote actions to be applied to their IoT devices and specify important deployment parameters such as rollout rates, abort thresholds and timeout criteria. Fleet operators and technicians who focus on fleet monitoring and troubleshooting can identify specific device groups as targets and use the predefined job templates to safely deploy remote actions to those targets.
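The deployment parameters mentioned above map onto the rollout, abort and timeout configuration blocks of a job template. The sketch below builds such a request for boto3; the field names mirror the existing CreateJob API, but since the feature is in preview, treat them as assumptions and verify against the current AWS IoT API reference (the template ID and job document are hypothetical):

```python
# Hypothetical job template request: a staged firmware rollout capped at
# 50 devices per minute, cancelled if more than 10% of executions fail,
# with a per-device timeout of 60 minutes.
template_request = {
    "jobTemplateId": "firmware-update-v2",
    "description": "Staged firmware rollout for the sensor fleet",
    "document": '{"operation": "update-firmware", "version": "2.0"}',
    "jobExecutionsRolloutConfig": {"maximumPerMinute": 50},
    "abortConfig": {
        "criteriaList": [
            {
                "failureType": "FAILED",
                "action": "CANCEL",
                "thresholdPercentage": 10.0,
                "minNumberOfExecutedThings": 5,
            }
        ]
    },
    "timeoutConfig": {"inProgressTimeoutInMinutes": 60},
}

# With credentials configured, the template would be registered like this:
# import boto3
# iot = boto3.client("iot")
# iot.create_job_template(**template_request)
```

Fleet operators would then reference the template ID when creating jobs against specific device groups, inheriting the predefined rollout and abort behavior.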
Source: aws.amazon.com

AWS Transfer Family now supports Microsoft Active Directory

AWS Transfer Family customers can now use AWS Managed Microsoft Active Directory (AD), as well as on-premises and self-managed AD in AWS, to authenticate their file transfer end users. This enables seamless migration of file transfer workflows that rely on AD, without changing end users’ credentials or requiring a custom authorizer.
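In practice this means selecting a directory-service identity provider when creating the Transfer Family server. A minimal sketch with boto3 might look as follows; the directory ID is a hypothetical placeholder, and the field names follow the CreateServer API as documented, so check the current AWS Transfer Family reference before relying on them:

```python
# Hypothetical request: an SFTP server that authenticates end users
# against an AWS Managed Microsoft AD directory instead of
# service-managed users or a custom authorizer.
server_request = {
    "Protocols": ["SFTP"],
    "IdentityProviderType": "AWS_DIRECTORY_SERVICE",
    "IdentityProviderDetails": {"DirectoryId": "d-1234567890"},  # placeholder ID
}

# With credentials configured, the server would be created like this:
# import boto3
# transfer = boto3.client("transfer")
# transfer.create_server(**server_request)
```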
Source: aws.amazon.com

Fleet Hub for AWS IoT Device Management, a new and easy way to monitor and interact with IoT device fleets, is now generally available

Today, AWS announces the general availability of Fleet Hub for AWS IoT Device Management. The feature lets customers easily create a fully managed web application to view and interact with their device fleets, monitor fleet and device health, respond to alarms, take remote actions and reduce time to resolution.
Source: aws.amazon.com