Spaceflight: Jeff Bezos' Moon Mission

Blue Moon is the name of the new lunar lander from Jeff Bezos, the world's richest man. It was unveiled before an exclusive audience, with no opportunity for questions. When it is supposed to fly remains unclear. By Frank Wunderlich-Pfeiffer (Moon, fuel cell)
Source: Golem

Take your machine learning models to production with new MLOps capabilities

This blog post was authored by Jordan Edwards, Senior Program Manager, Microsoft Azure.

At Microsoft Build 2019 we announced MLOps capabilities in Azure Machine Learning service. MLOps, also known as DevOps for machine learning, is the practice of collaboration and communication between data scientists and DevOps professionals to help manage the production machine learning (ML) lifecycle.

Azure Machine Learning service’s MLOps capabilities provide customers with asset management and orchestration services, enabling effective ML lifecycle management. With this announcement, Azure is reaffirming its commitment to help customers safely bring their machine learning models to production and solve their business’s key problems faster and more accurately than ever before.


Here is a quick look at some of the new features:

Azure Machine Learning Command Line Interface (CLI) 

Azure Machine Learning's management plane has historically been accessible through the Python SDK. With the new Azure Machine Learning CLI, you can easily perform a variety of automated tasks against the ML workspace, including:

Compute target management

Experiment submission

Model registration and deployment

Management capabilities

Azure Machine Learning service introduced new capabilities to help manage the code, data, and environments used in your ML lifecycle.

Code management

Git repositories are commonly used in industry for source control management and as key assets in the software development lifecycle. We are including our first version of Git repository tracking – any time you submit code artifacts to Azure Machine Learning service, you can specify a Git repository reference. This is done automatically when you are running from a CI/CD solution such as Azure Pipelines.
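Conceptually, the tracking described above amounts to recording the current commit alongside each submitted run. Here is a minimal local sketch of that idea; the helper names and metadata keys are ours for illustration, not the Azure Machine Learning SDK:

```python
import subprocess

def current_git_ref():
    """Return the current commit SHA, or 'unknown' outside a Git repo."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], stderr=subprocess.DEVNULL
        ).decode().strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"

def submit_run(metadata: dict) -> dict:
    # Attach the repository reference to the run metadata, much as the
    # service does automatically under CI/CD systems like Azure Pipelines.
    metadata["git.ref"] = current_git_ref()
    return metadata

run = submit_run({"experiment": "churn-model"})
print(run["git.ref"])
```

The point is that every run carries an immutable pointer back to the code that produced it, which is what makes the later audit-trail question answerable.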

Data set management

With Azure Machine Learning data sets you can version, profile, and snapshot your data to enable you to reproduce your training process by having access to the same data. You can also compare data set profiles and determine how much your data has changed or if you need to retrain your model.
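The underlying idea can be sketched locally: fingerprint a snapshot of the data and compare simple profile statistics between versions to decide whether retraining is warranted. This is a stdlib illustration of the concept, not the Azure Machine Learning data set API:

```python
import hashlib
import statistics

def profile(rows):
    """Fingerprint and summarize one numeric column of a data set version."""
    digest = hashlib.sha256(repr(rows).encode()).hexdigest()
    return {
        "hash": digest,
        "mean": statistics.mean(rows),
        "stdev": statistics.pstdev(rows),
    }

def has_drifted(old, new, tolerance=0.1):
    """Flag a material shift in the mean relative to the old profile."""
    if old["hash"] == new["hash"]:
        return False  # identical snapshot, nothing to retrain on
    return abs(new["mean"] - old["mean"]) > tolerance * max(abs(old["mean"]), 1e-9)

v1 = profile([10.0, 11.0, 9.5, 10.5])
v2 = profile([14.0, 15.0, 13.5, 14.5])
print(has_drifted(v1, v2))  # the mean moved ~40%, so retraining is indicated
```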

Environment management

Azure Machine Learning Environments are shared across Azure Machine Learning scenarios, from data preparation to model training to inferencing. Shared environments simplify the handoff from training to inferencing and make it possible to reproduce a training environment locally.

Environments provide automatic Docker image management (and caching!), plus tracking to streamline reproducibility.
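The reproducibility guarantee boils down to pinning an environment specification once and verifying it wherever the workload runs next. A minimal sketch of that check, illustrative only (Azure Machine Learning Environments manage this, plus the Docker images, for you):

```python
import json
import sys

def capture_environment(packages):
    """Record the interpreter version and exact package pins as a portable spec."""
    return {
        "python": "%d.%d" % sys.version_info[:2],
        "packages": dict(sorted(packages.items())),
    }

def same_environment(spec_a, spec_b):
    """Training and inferencing should resolve to an identical spec."""
    return json.dumps(spec_a, sort_keys=True) == json.dumps(spec_b, sort_keys=True)

training = capture_environment({"scikit-learn": "0.20.3", "pandas": "0.24.2"})
serving = capture_environment({"pandas": "0.24.2", "scikit-learn": "0.20.3"})
print(same_environment(training, serving))  # True: same pins, order-independent
```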

Simplified model debugging and deployment

Some data scientists have difficulty getting an ML model prepared to run in a production system. To alleviate this, we have introduced new capabilities to help you package and debug your ML models locally, prior to pushing them to the cloud. This should greatly reduce the inner loop time required to iterate and arrive at a satisfactory inferencing service, prior to the packaged model reaching the datacenter.
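The "test locally first" workflow can be approximated with nothing more than a scoring function and a smoke test against sample payloads, before any packaging happens. A sketch under our own assumptions; the function and payloads are illustrative, not an Azure API:

```python
import json

def score(request_body: str) -> str:
    """A stand-in inference entry point: parse JSON in, return JSON out."""
    payload = json.loads(request_body)
    # A trivially simple "model": flag any order above a threshold.
    prediction = "high_value" if payload["amount"] > 100 else "normal"
    return json.dumps({"prediction": prediction})

def smoke_test(samples):
    """Run every sample through the scorer and fail fast on malformed output."""
    for body in samples:
        result = json.loads(score(body))
        assert "prediction" in result, "scoring output missing 'prediction'"
    return True

samples = ['{"amount": 250}', '{"amount": 40}']
print(smoke_test(samples))  # True once every sample scores cleanly
```

Catching a malformed request or response at this stage is exactly the inner-loop iteration the feature is meant to shorten.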

Model validation and profiling 

Another challenge that data scientists commonly face is guaranteeing that models will perform as expected once they are deployed to the cloud or the edge. With the new model validation and profiling capabilities, you can provide sample input queries to your model. We will automatically deploy and test the packaged model on a variety of inference CPU/memory configurations to determine the optimal performance profile. We also check that the inference service is responding correctly to these types of queries.
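Profiling here essentially means replaying sample queries against the packaged model and measuring latency for each candidate configuration. A local sketch of that measurement loop (the toy predictor and reporting fields are illustrative):

```python
import statistics
import time

def predict(x):
    """A toy model standing in for the deployed inference service."""
    return sum(i * i for i in range(x))

def profile_latency(sample_inputs, runs=5):
    """Replay sample queries and report latency percentiles in milliseconds."""
    timings = []
    for _ in range(runs):
        for x in sample_inputs:
            start = time.perf_counter()
            predict(x)
            timings.append((time.perf_counter() - start) * 1000.0)
    return {
        "p50_ms": statistics.median(timings),
        "max_ms": max(timings),
        "calls": len(timings),
    }

report = profile_latency([1000, 5000, 20000])
print(report["calls"])  # 15 queries: 3 sample inputs x 5 runs
```

Running the same loop on each CPU/memory configuration and comparing the percentiles is what lets the service recommend an optimal profile.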

Model interpretability

Data scientists want to know why models predict in a specific manner. With the new model interpretability capabilities, we can explain why a model is behaving a certain way during both training and inferencing.
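One widely used, model-agnostic way to answer "why did the model predict this" is permutation importance: permute one feature and see how much the error grows. A self-contained sketch on a toy linear model (this is the general technique, not the Azure interpretability SDK):

```python
def model(row):
    """Toy model: the first feature matters a lot, the second barely."""
    return 3.0 * row[0] + 0.1 * row[1]

def permutation_importance(rows, targets, feature_idx):
    """Error increase after permuting one feature = that feature's importance.
    (A real implementation shuffles randomly; we reverse for determinism.)"""
    base_error = sum(abs(model(r) - t) for r, t in zip(rows, targets))
    column = [r[feature_idx] for r in rows][::-1]
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, column):
        r[feature_idx] = v
    permuted_error = sum(abs(model(r) - t) for r, t in zip(permuted, targets))
    return permuted_error - base_error

rows = [(1.0, 50.0), (2.0, 10.0), (3.0, 30.0), (4.0, 20.0)]
targets = [model(r) for r in rows]  # perfect fit, so base error is zero
imp_0 = permutation_importance(rows, targets, 0)
imp_1 = permutation_importance(rows, targets, 1)
print(imp_0 > imp_1)  # True: the model leans far harder on feature 0
```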

ML audit trail

Azure Machine Learning is used for managing all of the artifacts in your model training and deployment process. With the new audit trail capabilities, we are enabling automatic tracking of the experiments and datasets that correspond to your registered ML model. This helps to answer the question, “What code/data was used to create this model?”
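An audit trail entry is, at its core, a record linking the registered model to the exact code and data that produced it. A minimal sketch of such a record (the field names here are ours, not the service's schema):

```python
import json

def audit_record(model_name, model_version, git_commit, dataset_id, dataset_version):
    """Link a registered model to the code and data that produced it."""
    return {
        "model": {"name": model_name, "version": model_version},
        "code": {"git_commit": git_commit},
        "data": {"dataset": dataset_id, "version": dataset_version},
    }

record = audit_record("churn-model", 3, "9fceb02", "customers", 12)
# Serialized, this answers "what code/data was used to create this model?"
print(json.dumps(record, sort_keys=True))
```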

Azure DevOps extension for machine learning

Azure DevOps provides the tools data scientists commonly use to manage code, work items, and CI/CD pipelines. With the Azure DevOps extension for machine learning, we are introducing new capabilities that make it easy to manage your ML CI/CD pipelines with the same tools you use for your software development processes. The extension includes the ability to trigger an Azure Pipelines release on model registration, easily connect an Azure Machine Learning workspace to an Azure DevOps project, and use a series of tasks designed to make interaction with Azure Machine Learning as easy as possible from your existing automation tooling.

Get started today

These new MLOps features in the Azure Machine Learning service aim to enable users to bring their ML scenarios to production by supporting reproducibility, auditability, and automation of the end-to-end ML lifecycle. We’ll be publishing more blogs that go in-depth with these features in the following weeks, so follow along for the latest updates and releases.

Learn more about Azure Machine Learning service
Get started today with a free trial

Source: Azure

What’s in a Container Platform?

Fresh on the heels of DockerCon and the announcement of Docker Enterprise 3.0, an end-to-end, dev-to-cloud container platform, I wanted to share some thoughts on what we mean when we say “complete container platform”.

Choice and Flexibility
A complete solution has to meet the needs of different kinds of applications and users – not just cloud native projects but legacy and brownfield applications on both Linux and Windows, too. At a high level, one of the goals of modernization – the leading reason organizations are adopting container platforms – is to rid ourselves of technical debt. Organizations want the freedom to create their apps based on the “right” stack and running in the “right” place, even though what’s “right” may vary from app to app. So the container platform running those applications should be flexible and open to support those needs, rather than rigidly tying application teams to a single OS or virtualization and cloud model.
High-Velocity Innovation
Developers are a key constituency for any container platform that aims to deliver high-velocity innovation. That means the container platform should extend to their environment, so that developers are building and testing against the same APIs that will be used in production environments.

Your platform of choice should have tools that integrate into your developers’ preferred workflow, rather than forcing a new or different tool or completely new workflow on them that only works for one deployment pattern. Developers are hired for their creative ability to solve problems with code so adopting a platform that requires your teams to abandon their intuition and prior knowledge in favor of tools that only work with one prescriptive methodology not only slows down innovation, it also increases the risk of developers going outside the IT-approved processes to get the job done.

Operations teams also want to run a platform that enables applications to be deployed faster. That means making complex tasks simpler from day one, with the assurance that the platform will work as expected, while still allowing them to grow their skills over time. The number of true Kubernetes experts is relatively small, so if your platform of choice requires admins and operators to know Kubernetes on day one, in addition to learning the ins and outs of the container platform itself, you’re easily looking at 12 months or more of training, services, and proof-of-concept trial and error before your container platform is ready for its first “real” workload.
In addition, Kubernetes is a trusted orchestrator and the Docker Engine, built on the CNCF-graduated containerd project, is a trusted and widely used container runtime. Your container platform should be built on these fundamental components because this will give you the most flexibility in the future. Docker Enterprise and all the major public clouds use Kubernetes and the Docker Engine (in some cases containerd) because they are open and mature. If your container platform vendor says they’ve built their own projects which are “mostly compatible” with one or both of these, then you might want to take note.
Operations teams are also interested in stability. Container platforms will get frequent updates but that does not mean you should be required to rip and replace your container platform every two years, and along with it all the skills, scripts, and other tooling your operations teams built up around the platform over time. When we added Kubernetes in Docker Enterprise 2.0 it was a major upgrade, but we made that upgrade as simple as possible, including continuing to provide and develop Docker Swarm. If you are evaluating container platforms, look at their history. It’s a relatively new market. If you see three major platform architecture redesigns which all forced a major operations shift, you might be in for a bumpy ride in the future.
Intrinsic Security
Last, but absolutely not least, security has to be built-in at all layers of the platform. With the push for more frequent and faster software releases, security has to be part of both the developer’s experience and the operator’s experience. But security cannot be so restrictive or obtrusive that nobody wants to use the platform. You should have guardrails that help developers get started quickly from known good foundations, shifting left in your security process instead of finding out later that something is broken. And your platform should give you visibility into every application you ship: Windows or Linux, Edge or data centers. Security must be a fundamental building block of your container platform and that includes security for your running applications, too.

In Summary
We were proud that Docker was named a Leader in The Forrester New Wave for Enterprise Container Platform Software Suites in Q4 2018. We believe that our 3.0 platform adds even greater capabilities in a non-disruptive fashion and is the only end-to-end platform for building, sharing, and running container-based applications, from the developer’s desktop to the cloud, managing the entire application lifecycle at every stage without dependencies on a particular OS version, virtualization platform, or public cloud stack.



For more information:

Learn more about Docker Enterprise 3.0
Watch the DockerCon 2019 Day 1 Keynote
Download The Forrester New Wave for Enterprise Container Platform Software Suites

The post What’s in a Container Platform? appeared first on Docker Blog.
Source: https://blog.docker.com/feed/

New Quick Start deploys New Relic Infrastructure on AWS

This Quick Start deploys New Relic Infrastructure on the Amazon Web Services (AWS) Cloud in 20-30 minutes. It is for those who want to use AWS products and services to launch Amazon Elastic Container Service for Kubernetes (Amazon EKS) and monitor the infrastructure by using New Relic Infrastructure.
Source: aws.amazon.com

AWS AppSync Now Enables More Visibility into Performance and Health of GraphQL Operations

AWS AppSync is a managed GraphQL service that simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources.
With today’s launch, AWS AppSync enables you to better understand the performance of your GraphQL requests and usage characteristics of your GraphQL schema fields. You can easily identify resolvers with large latencies that may be the root cause of a performance issue. You can also identify the most and least frequently used fields in your schema and assess the impact of deprecating GraphQL fields.
AWS AppSync now emits log events in a fully structured JSON format. This enables seamless integration with log analytics services such as Amazon CloudWatch Logs Insights and Amazon Elasticsearch Service, and other log analytics solutions. We have also added new fields to log events to increase your visibility into the performance and health of your GraphQL operations.
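With structured JSON events, finding the slow resolvers described above becomes a straightforward filter-and-sort over the log stream. A sketch of that analysis; the event shape below is illustrative, so consult the AppSync log documentation for the actual field names:

```python
import json

# Hypothetical resolver log events in a structured JSON format.
log_lines = [
    '{"fieldName": "getPost", "parentType": "Query", "duration": 812}',
    '{"fieldName": "comments", "parentType": "Post", "duration": 2304}',
    '{"fieldName": "author", "parentType": "Post", "duration": 95}',
]

def slow_resolvers(lines, threshold_ms=500):
    """Return resolvers whose latency exceeds the threshold, slowest first."""
    events = [json.loads(line) for line in lines]
    slow = [e for e in events if e["duration"] > threshold_ms]
    return sorted(slow, key=lambda e: e["duration"], reverse=True)

for event in slow_resolvers(log_lines):
    print(event["parentType"], event["fieldName"], event["duration"])
```

The same structure also supports the usage question: counting events per field quickly surfaces the least-used schema fields, which are the candidates for deprecation.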
To learn more, see the release blog and the AWS AppSync web page.
Source: aws.amazon.com