Azure Data Box family now enables import to Managed Disks

The Azure Data Box offline family lets you transfer hundreds of terabytes of data to Microsoft Azure in a quick, inexpensive, and reliable manner. We are excited to share that support for managed disks is now available across the Azure Data Box family of devices, which includes Data Box, Data Box Disk, and Data Box Heavy.

With managed disks support on Data Box, you can now move your on-premises virtual hard disks (VHDs) to Azure as managed disks in one simple step. This saves a significant amount of time in lift-and-shift migration scenarios.

How do managed disks work with the Data Box solution?

The Data Box family supports the following managed disk types: Premium SSD, Standard SSD, and Standard HDD. When you place your order for any of the Data Box data transfer solutions in the Azure portal, you can now select your storage destination as managed disks and specify the resource groups for ingestion. You will be asked to select a staging storage account, which is used to stage VHDs as page blobs and to then convert page blobs to managed disks. 

When your Data Box device arrives, it will have shares or folders corresponding to the selected resource groups. These shares or folders are further broken down by managed disk type – Premium SSD, Standard SSD, and Standard HDD. Copying your data to the target managed disk type is as easy as copying the VHDs to the corresponding folder with a utility like Robocopy, or simply by drag and drop.
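
If you prefer to script that copy step, here is a minimal Python sketch. The share path, folder names, and VHD location are illustrative assumptions only; Robocopy remains the simpler option for large transfers.

```python
import shutil
from pathlib import Path

# Hypothetical paths: the Data Box share for your resource group, mounted
# locally, and an exported VHD to import. The tier folders ("Premium SSD",
# "Standard SSD", "Standard HDD") are created on the device for you.
share_root = Path(r"\\databox-device\managed-disk-share")
vhd = Path(r"D:\exports\app-server-os.vhd")

# Copy the VHD into the folder that matches the managed disk type you want.
target_folder = share_root / "Premium SSD"
shutil.copy2(str(vhd), str(target_folder / vhd.name))
print(f"Copied {vhd.name} to {target_folder}")
```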

For more information on moving data to managed disks, refer to the following:

Data Box documentation for managed disks, “Tutorial: Use Data Box to import data as managed disks in Azure.”
Data Box Disk documentation for managed disks, “Tutorial: Copy data to Azure Data Box Disk and verify.”

You can also place an order for a Data Box today and import your VHDs as managed disks. Please continue to provide your valuable thoughts and comments by posting on Azure Feedback.
Source: Azure

Run your code and leave build to us

You’ve followed an excellent walkthrough and built a solid prototype web app. You run npm start locally and browse to http://localhost and all looks great. Now you’re ready to put your app in the cloud, utilize a managed database and managed authentication, and share a link with all your coworkers and friends. But wait a minute, it looks like you’ll first have to set up cloud pipelines and container images, then brush up on Bash or PowerShell and write a Dockerfile. Getting your app to the cloud is more work than you anticipated. Is there a faster way?

We’re happy to share that yes, there is a faster way. When you need to focus on app code you can delegate build and deployment to Azure with App Service web apps. Push your Git repo directly to Azure or point to one hosted in GitHub, Azure DevOps, or Bitbucket and we’ll take care of building and running your code the way you expect. You may be using this already for your .NET apps; we now support Node.js and Python as well.

Do you write apps in Node.js, JavaScript, or TypeScript? We’ll install your dependencies and use the build steps specified in your package.json scripts. Prefer yarn over npm? Include a yarn.lock file or use the engines field in package.json and we’re happy to oblige.

Perhaps you’re a Pythonista and prefer Django? Well then, we’ll install your dependencies as specified in requirements.txt, prepare your static assets by running collectstatic, and give you a post-build hook to apply database migrations. We’ll even run the application module from Django’s conventional wsgi.py file with gunicorn. We also support other WSGI frameworks like Flask, Bottle, or Pyramid; configuration details are here.
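
As a concrete illustration, here is a minimal Flask app of the kind the build system can pick up, assuming a requirements.txt that lists Flask (and optionally gunicorn). The file name and port are our choices for the sketch, not platform requirements.

```python
# app.py - a minimal Flask app for App Service on Linux.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # App Service installs requirements.txt during the build and serves the
    # WSGI app with a server such as gunicorn.
    return "Hello from App Service on Linux!"

if __name__ == "__main__":
    # Local development only; in Azure the platform hosts the WSGI app for you.
    app.run(host="0.0.0.0", port=8000)
```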

Check out our docs for all the details on the new system and configuration options. Ready to start using App Service for your project? Follow the documentation, “Create a Node.js app in Azure App Service on Linux” to set up an app in the Azure portal or with the az CLI.

We’re happy to now support Node.js and Python but realize our work is far from done. Please participate in our questionnaire so we can ensure your needs and scenarios are covered.

Finally, visit our issue tracker to ask questions, offer suggestions, or submit a pull request. Happy coding!
Source: Azure

Accelerating enterprise digital transformation through DevOps

IT organizations are under more pressure than ever to do more with less: they are expected to drive competitive advantage and innovation with higher quality while managing smaller teams. This shift in the enterprise cost-to-value equation has created a transformative inflection point across every business domain, underpinned by new enabling technologies and development paradigms. Organizations must now adapt by pursuing rapid, strategic transformation while simultaneously working diligently to keep the lights on, all with the important goal of reducing costs. When done right, three clear benefits appear:

Going cloud native means more than simply offloading datacenter costs and complexity. A loosely coupled software architecture allows smaller development, QA, release, and production support teams to ship features and bug fixes whenever and wherever they are needed.
DevOps is more than a facelift on release management. New pipeline tools coupled with new design and development patterns spur a cultural shift occurring inside IT organizations. DevOps is a revolution in how software is created and supported.
Modernizing existing legacy applications and infrastructure doesn’t require a massive, time-consuming, and expensive rewrite. Through the judicious application of microservices and new development and delivery methodologies, the elephant can be eaten “one bite at a time,” with the added benefits of reducing costs, feeding innovation, and ensuring greater stability and quality across production scenarios.

While the value of these three benefits is explicit and obvious, the investment to make these changes can be prohibitive in both cost and time. Often there is no clear starting point or easily discernible roadmap to success.

Accelerate the transformation and ensure the outcome

To address these challenges, Sirrus7, GitHub, and HashiCorp have joined together to create the DevOps Acceleration Engine. This is an enterprise-grade, out-of-the-box, integrated DevOps infrastructure executed and demonstrated through a tailored four-month engagement.

Seamlessly integrated for the immediate creation of value, the DevOps Acceleration Engine brings together best-of-breed, industry-leading tools at a discount, including GitHub Enterprise, Terraform Enterprise, and CircleCI or the customer’s CI/CD tool of choice, all delivered in a highly targeted, success-driven engagement by a team of proven industry experts who work directly onsite with customers.

The solution drives transformation at a significantly lower cost, in both time and capital, with faster execution by pre-integrating these best-of-breed technologies. Customers receive:

An integrated platform
Discounted licensing costs
An experienced team of experts specializing in best practices and methodologies
An engagement period focused on working hand in hand with customers and moving features and fixes from backlog to production faster and with higher quality than ever before

Next steps

To learn more, go to the DevOps Acceleration Engine in the Azure Marketplace and select Contact me.
Source: Azure

Azure.Source – Volume 73

Now in preview

Azure Premium Blob Storage public preview

Announcing the public preview of Azure Premium Blob Storage. Premium Blob Storage is a new performance tier in Azure Blob Storage, complementing the existing Hot, Cool, and Archive tiers. Premium Blob Storage is ideal for workloads that have high transaction rates or require very fast access times, such as IoT, telemetry, AI, and scenarios with humans in the loop such as interactive video editing, web content, online transactions, and more.

Service Fabric Processor in public preview

Service Fabric Processor is a new library for consuming events from an Event Hub that is directly integrated with Service Fabric. It uses Service Fabric's facilities for managing partitions and reliable storage, and for more sophisticated load balancing. Service Fabric Processor is currently in preview and available on NuGet. The source code and a sample application are available on GitHub. See the post for links.

Azure Blob Storage on IoT Edge now includes Auto-Tiering and Auto-Expiration functionalities

Auto-tiering and auto-expiration functionality for the “Azure Blob Storage on IoT Edge” module is now available in public preview. Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module that provides local block blob storage, allowing you to store and access data efficiently, process it if required, and then automatically upload it to Azure. These new features are available for Linux AMD64 and Linux ARM32.

Also available in preview

Azure Monitor for VMs (preview) available in Central Canada and UK South
Threat intelligence-based filtering for Azure Firewall is now available in preview
New capabilities in Azure Monitor alerts

Now generally available

Secure server access with VNet service endpoints for Azure Database for MariaDB

Secure server access with VNet service endpoints is now generally available for Azure Database for MariaDB. VNet service endpoints enable you to isolate connectivity to your logical server from a given subnet within your virtual network. There is no additional billing for virtual network access through VNet service endpoints. The current pricing model for Azure Database for MariaDB applies as is.

Scaling out read workloads in Azure Database for MySQL

Read replicas are now generally available to all Azure Database for MySQL users. For read-heavy workloads that you are looking to scale out, you can now use read replicas, which make it easy to scale out horizontally beyond a single database server by supporting continuous asynchronous replication of data from one Azure Database for MySQL server to up to five Azure Database for MySQL servers in the same region.
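
As a rough sketch of how an application might take advantage of this, the snippet below sends writes to the primary server and reads to a replica. The host names and the orders table are hypothetical, PyMySQL is just one example driver, and TLS options are omitted for brevity.

```python
import pymysql  # assumes: pip install pymysql

# Hypothetical endpoints; each read replica gets its own host name.
PRIMARY_HOST = "myserver.mysql.database.azure.com"
REPLICA_HOST = "myserver-replica1.mysql.database.azure.com"

def connect(host):
    # Connection settings are illustrative; Azure Database for MySQL
    # normally also requires SSL options and a user in user@server form.
    return pymysql.connect(host=host, user="admin@myserver",
                           password="<password>", database="appdb")

# Writes always go to the primary server.
primary = connect(PRIMARY_HOST)
with primary.cursor() as cur:
    cur.execute("INSERT INTO orders (sku, qty) VALUES (%s, %s)", ("A-100", 2))
primary.commit()
primary.close()

# Read-heavy queries can be spread across up to five replicas in the region.
replica = connect(REPLICA_HOST)
with replica.cursor() as cur:
    cur.execute("SELECT sku, SUM(qty) AS total FROM orders GROUP BY sku")
    print(cur.fetchall())
replica.close()
```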

Now available: Azure DevOps Server 2019

Announcing the official release of Azure DevOps Server 2019, previously known as Team Foundation Server. Azure DevOps Server 2019 brings the power of Azure DevOps into your dedicated environment, and you can install it in any datacenter or sovereign cloud. Azure DevOps includes developer collaboration tools which can be used together or independently, including Azure Boards (Work), Azure Repos (Code), Azure Pipelines (Build and Release), Azure Test Plans (Test), and Azure Artifacts (Packages). These tools support all popular programming languages, any platform (including macOS, Linux, and Windows) or cloud, as well as on-premises environments. Azure DevOps Server 2019 is now generally available.

Microsoft opens first datacenters in Africa with general availability of Microsoft Azure

Announcing the general availability of Microsoft Azure from our new cloud regions in Cape Town and Johannesburg, South Africa. The launch of these regions marks a major milestone for Microsoft as we open our first enterprise-grade datacenters in Africa, becoming the first global provider to deliver cloud services from datacenters on the continent. The new regions provide the latest example of our ongoing investment to help enable digital transformation and advance technologies such as AI, cloud, and edge computing across Africa.

Real-time serverless applications with the SignalR Service bindings in Azure Functions

Announcing the general availability of SignalR Service bindings in Azure Functions. SignalR Service is a fully managed Azure service that simplifies the process of adding real-time web functionality to applications over HTTP. This real-time functionality allows the service to push messages and content updates to connected clients using technologies such as WebSocket. As a result, clients are updated without the need to poll the server or submit new HTTP requests for updates. SignalR Service bindings in Azure Functions are available in all global regions where Azure SignalR Service is available.

Azure Databricks – New capabilities at lower cost

Announcing the general availability of Azure Databricks now with support for Data Engineering Light and Azure Machine Learning. Azure Databricks provides a fast, easy, and collaborative Apache Spark™-based analytics platform to accelerate and simplify the process of building big data and AI solutions backed by industry leading SLAs. Customers can now get started with Azure Databricks and a new low-priced workload called Data Engineering Light that enables customers to run batch applications on managed Apache Spark with the added benefit of having an optimized, autoscaling, collaborative workspace, automated machine learning, and end-to-end Machine Learning Lifecycle management.

Microsoft continues to build the case for data estate modernization on Azure

Announcing the general availability of Azure Data Lake Gen 2 and Azure Data Explorer. With the latest release of Azure SQL Data Warehouse, Microsoft doubles down on Azure SQL DW as one of the core data services for digital transformation on Azure. In addition to the fundamental benefits of agility, on-demand scaling, and unlimited compute availability, the most recent price-to-performance metrics from the GigaOM report are one of several compelling arguments made to customers for adopting Azure SQL DW. Together with Power BI for rich visualization, this enhanced set of capabilities cements Microsoft’s leadership position around Cloud Scale Analytics.

Also generally available

General availability: Azure Kubernetes Service in France Central
General Availability: Azure Lab Services

News and updates

Announcing new capabilities in Azure Firewall

Announcing the launch of two key new capabilities in Azure Firewall: threat intelligence-based filtering and service tags filtering. Azure Firewall can now be configured to alert on and deny traffic to and from known malicious IP addresses and domains in near real-time, as well as to use service tags in the network rules destination field. Azure Firewall is a cloud-native firewall-as-a-service offering which enables customers to centrally govern all their traffic flows using a DevOps approach.

Announcing new Azure Security Center capabilities at RSA 2019

Announcing new Azure Security Center capabilities in Azure and Microsoft 365 that strengthen unified security management and advanced threat protection solutions for hybrid cloud workloads. Azure Security Center now leverages machine learning to reduce the attack surface of internet-facing virtual machines. Its adaptive application controls have been extended to Linux and on-premises servers, and network map support now extends to peered virtual network (VNet) configurations. If you have Azure Security Center in your Azure subscription, you can take advantage of these new capabilities for all your Internet-exposed Azure resources immediately.

Presenting the new IIC Security Maturity Model for IoT

To help organizations deploying IoT solutions address security concerns, Microsoft co-authored and edited the Industrial Internet Consortium (IIC) IoT Security Maturity Model (SMM) Practitioner’s Guide. The SMM leads organizations as they assess the security maturity state of their current organization or system, and as they set the target level of security maturity required for their IoT deployment. Once organizations set their target maturity, the SMM gives them an actionable roadmap that guides them from lower levels of security maturity to the state required for their deployment.

Conversational AI updates for March 2019

Announcing the release of Bot Framework SDK version 4.3 with updates for the Conversational AI releases that let you connect with your users wherever they are. This release includes new channel support for popular messaging apps, a simplified approach to activity message handling, web API integration for .NET developers, Web Chat support that lets developers add a messaging interface for their bot on websites or mobile apps, and more.

Guardian modules: Bringing Azure Sphere security to brownfield IoT

As the value of connectedness increases, enterprises need a mechanism to securely connect devices that are already in service. But how do businesses leverage IoT for the billions of devices already in the field without creating a large security risk? Azure Sphere enables secure, connected, microcontroller- (MCU-) based devices by establishing a foundation on which an enterprise can trust a device to run securely in any environment. With an Azure Sphere-enabled device, enterprise customers can more confidently connect existing devices to the cloud and unlock scenarios related to preventive maintenance, optimizing utilization, and even role-based access control.

Rerun activities inside your Azure Data Factory pipelines

Data Integration is complex with many moving parts. It helps organizations combine data and complex business processes in hybrid data environments. Failures are very common in data integration workflows and require rerunning failed activities inside your data integration workflows. Azure Data Factory now allows you to rerun activities inside your pipelines. Get started building pipelines easily and quickly using Azure Data Factory.

Power BI updates

Clone Visual API for Power BI Embedded
Themes API for Power BI Embedded
Control all Power BI Embedded visual menu actions programmatically
APIs for Power BI app content
Schedule DirectQuery cache refresh for Power BI Embedded
Schedule refreshes with the Power BI Embedded REST API

Additional news and updates

.NET Core February 2019 Update availability on App Service
Protect on-premises VMs by directly replicating to managed disks in Azure
Retiring Azure Batch AI
SignalR Service bindings
M-series virtual machines (VMs) are now available in the China North 2 region
Continuous integration and deployment to Azure API Management
Microsoft Azure Logic Apps Connector for 3270 Screens available now in Public Preview
Azure Monitor Log Analytics is now available in Azure China
Extended Security Updates for SQL Server 2008 are available now
HDInsight networking resources provisioned in the HDInsight resource group
AKS 2019-03-07 release

Technical content

Make it Yours: Customizing The Azure Portal Dashboard

Jasmine Greenaway shares how to customize the Azure Portal by creating tiles and multiple dashboard views, from adding clocks and GIFs to greet you at log-in, to creating, programmatically updating, and publishing dashboards for multiple purposes (demos, projects, sandbox/evaluation).

"Hello World" from your CLI with Azure ARM Templates!

Working with Azure Resource Manager Templates provides you with a way to codify your infrastructure using JSON. In this tutorial, Jay Gordon shows how to get started with using different parameters alongside your template, and deploy to Azure — all from the Azure CLI (command-line) tool.

Migrating over to Azure Pipelines from Travis CI

In this live-stream recording, Ryan Levick walks through the process of moving a Rust project’s CI/CD from Travis to Azure Pipelines.

Get started with Apache Spark and TensorFlow on Azure Databricks

In this step-by-step tutorial, Adi Polak shows you how to use the new Spark TensorFrames library – running on Azure Databricks – to start working with TensorFlow on top of Apache Spark (and why you'd want to).

Build Serverless Real-time Java Apps with Azure SignalR Service

In this post, Anthony Chu walks through how to create a collaborative drawing canvas using a Java Azure Function and SignalR Service.

Cloud in 5 Minutes: Deploy an Azure Function V2 (to unzip automatically your files) with Visual Studio Code

Frank Boucher shows us how to use Azure Functions V2, VS Code, and the VS Code Azure Functions extension to automatically unzip files in Azure Blob Storage. Get a quick introduction to Azure Functions and a few of Frank's favorite VS Code Azure Functions extension tips and tricks.

An architecture for real-time scoring in R

In this blog post, David references an architecture for real-time scoring with R, published in Microsoft Docs, and describes a Kubernetes-based system that distributes the load to R sessions running in containers.

Azure Stack IaaS – part 3

The third in a series of posts on Azure Stack, this installment focuses on Fundamentals of IaaS. Azure Stack is an instance of the Azure cloud that you can run in your own datacenter. Microsoft has taken the experience and technology from running one of the largest clouds in the world to design a solution you can host in your facility. This forms the foundation of Azure Stack’s infrastructure-as-service (IaaS).

Classroom Labs with Azure Lab Services

In the digital economy, customers are struggling to build and keep the talent pool needed to develop digital assets, meet their business demands, and stay competitive in the marketplace by improving employees’ skill sets. With Azure Lab Services, customers can quickly set up classroom labs for their employees to gain practical experience not only with the latest technologies, but also with their internal and external business applications.

Build a CI/CD pipeline for API Management

With the strategic value of APIs, a continuous integration (CI) and continuous deployment (CD) pipeline has become an important aspect of API development, allowing organizations to automate deployment of API changes without error-prone manual steps, detect issues earlier, and deliver value to end users faster. Walk through a conceptual framework for implementing a CI/CD pipeline for deploying changes to APIs published with Azure API Management.

Three reasons to choose Microsoft for your hybrid data platform

Companies are faced with a trade-off between keeping an on-premises security solution and the convenience of moving data to the cloud. SQL Server and Azure SQL Database now provide the most consistent hybrid data platform, with frictionless migration across on-premises, cloud, and private cloud, all at a lower cost. Review three reasons why Microsoft should be your hybrid data platform of choice.

e-signature retirement in Azure DevOps service and Azure DevOps Server

Get insights about e-signature requirements with Team Foundation Server, Azure DevOps Services, and Azure DevOps Server in this post that describes how to satisfy the Code of Federal Regulations, Title 21, PART 11 ELECTRONIC RECORDS; ELECTRONIC SIGNATURES requirements.

Monitor local storage usage on General Purpose Azure SQL Managed Instance

Azure SQL Managed Instance has predefined storage space that depends on the values of reserved storage and vCores that you choose when you provision the instance. See how to check remote storage usage, create alerts using SQL Agent, and monitor storage space on the Managed Instance.

Integrating with SAP from PowerApps & Flow using Azure Logic Apps

Did you know you can integrate with SAP from PowerApps and Flow using Azure Logic Apps? Read this post to see how to connect PowerApps & Flow with SAP in an end-to-end working example.

Additional technical content

Lesson Learned #75: The importance of having the connection pooling parameter enabled in your connection string using PHP
Lesson Learned #76: The strange case between an indexed view and date type conversion
Lesson Learned #77: Importing data from bacpac using bcp command utility
Use Azure File Sync to bridge your storage SMBs and NFS needs with Azure Files Cloud Storage for Windows Virtual Desktop, Citrix Virtual Desktops and other DaaS workloads on Azure
Managing and Working with #Azure Network Security Groups (NSG)
Azure BOTs – getting extra access tokens
Accessibility Testing with Azure DevOps Pipelines

Azure shows

Episode 269 – Women in Azure | The Azure Podcast

The Azure Podcast commemorates International Women's Day 2019, as the team talks to Chloe Condon, a Senior Cloud Developer Advocate at Microsoft, about her Azure learning journey and her experience as a woman in cloud computing.

Episode 268 – ExpressRoute Roadmap | The Azure Podcast

On this episode of the Azure Podcast, Paresh Mundade, a Senior PM in the Azure ExpressRoute team, presents an update on the service and a glimpse into the roadmap of planned features.

Five Things About Azure Functions | Five Things

In this episode of Five Things About Azure Functions, John Papa and Jeff Hollan bring you five reasons you should check out Azure Functions today. You can also listen to Jeff dive deeper into serverless on his recent episode of Real Talk JavaScript.

An overview of Azure App Service Deployment Center | Azure Friday

Learn how App Service Deployment Center helps you follow agile development best practices to automate deployments of your code in seconds.

Gen Studio | AI Show

Gen Studio is a prototype concept which was created over a two-day hackathon with collaborators across The Metropolitan Museum of Art (The Met), Microsoft, and Massachusetts Institute of Technology (MIT). Gen Studio uses Microsoft AI to allow you to visually and creatively navigate the shared features and dimensions underlying The Met’s Open Access collection. Within the Gen Studio is a tapestry of experiences based on generative adversarial networks (GANs) which allow you to explore, search, and even be immersed within the latent space underlying The Met’s encyclopedic collection.

Clean Water AI | AI Show

Clean Water AI is a device that uses a deep learning neural network to detect dangerous bacteria and harmful particles in water. Users can see drinking water at a microscopic level, just like they would view footage from a security camera, with real-time detection and contamination mapping.

Privacy models for private consortiums | Block Talk

This session provides an overview of some of the more popular privacy features employed by private consortiums to enable sharing data only with specific participants in a network. These features are implemented in a variety of ways, and their architectures are discussed along with a brief demo using the Quorum blockchain.

Hardware Acceleration for AI at the Edge | Internet of Things Show

One thing you really have to consider when bringing Artificial Intelligence to the edge is the hardware you will need to run these powerful algorithms. Ted Way from the Azure Machine Learning team joins Olivier on the IoT Show to discuss hardware acceleration at the Edge for AI.

IoT Deep Dive Live: Location Intelligence for Transportation with Azure Maps | Internet of Things Show

Come learn how to use Azure Maps to provide location intelligence in different areas of transportation such as fleet management, asset tracking, and logistics.

Real-time web applications with ASP.NET Core SignalR | On .NET

Brady Gaster joins Cecil Phillip to show how easy it is to add real-time functionality to your web applications using ASP.NET Core SignalR. They discuss topics such as targeting clients, SignalR transports, and options for running your SignalR application in the cloud. You can even leverage the Hub protocol spec, which is available on GitHub, if you're interested in creating your own SignalR client.

A quick tour of Azure DevOps projects using Node.js and AKS: Part 1 | Azure Tips and Tricks

Learn what Azure DevOps projects are and how to use them with Node.js and Azure Kubernetes Service. In part 1, you’ll learn how Azure DevOps projects makes it easy for you to create and build deployments.

How to manage virtual machine connectivity with the Azure Portal | Azure Portal Series

Learn how to easily manage virtual machine connectivity through the Azure Portal. You’ll learn how to manage virtual machine network security groups for virtual network subnets and virtual machines.

Greg Leonardo on Deploying the Azure Way – Episode 27 | The Azure DevOps Podcast

In this episode, Jeffrey Palermo and Greg Leonardo continue their conversation on deploying Azure — this time going deeper as they discuss some of the topics from Greg's book, Hands-On Cloud Solutions with Azure: Architecting, developing, and deploying the Azure way; infrastructure as code; provisioning environments; how to watch your environments; and much more on what developers targeting Azure need to know.

Events

Azure Communications is hosting an “Ask Me Anything” session!

The Azure Communications team is hosting a special "Ask Me Anything" (AMA) session on Reddit and Twitter. Look for the Reddit session Monday, March 11th, from 10:00 AM to noon PST. Participate by posting to the /r/Azure subreddit when the AMA is live. Look for the Twitter session on Wednesday, March 13th, from 10:00 AM to noon PST. Be sure to follow @AzureSupport before March 13th and tweet us during the event using the hashtag #AzureCommsAMA.

Customers, partners, and industries

Intel and Microsoft bring optimizations to deep learning on Azure

Announcing a Microsoft and Intel partnership to bring optimized deep learning frameworks to Azure. Over the last few years, deep learning has become the state of the art for several machine learning and cognitive applications. Innovations in deep neural networks in these domains have enabled these algorithms to reach human level performance in vision, speech recognition, and machine translation. The Intel Optimized Data Science VM is an ideal environment to develop and train deep learning models on Intel Xeon processor-based VM instances on Azure. These optimizations are available in a new offering on the Azure marketplace called the Intel Optimized Data Science VM for Linux (Ubuntu).

Azure Marketplace and Cloud Solution Provider updates – March 2019

Our partners are delivering more innovation in AI by expanding their business through co-selling opportunities and leveraging distribution options through our commercial marketplaces such as Azure Marketplace and AppSource. We are now rolling out an initial set of platform changes to open new opportunities for our partners to go to market with Microsoft. Get a sneak peek on our public marketplace roadmap.

Azure This Week – 8 March 2019 | A Cloud Guru – Azure This Week

Lars gives us the latest Azure news from his farm in rural Australia! He discusses a new intelligent security tool called Microsoft Azure Sentinel, Azure Monitor AIOps alerts with Dynamic Thresholds, and Java support for Azure Functions, which is now generally available.

Source: Azure

IoT in Action: A more sustainable future for farming

The future of food security and feeding an expanding global population depends upon our ability to increase food production globally—an estimated 70 percent by the year 2050, according to the Food and Agriculture Organization of the United Nations. But challenges ranging from climate change, soil quality, and pest control to shrinking land availability, not to mention water resource constraints, must be addressed.

So how can we increase yields in a sustainable, intelligent way?

We believe that Internet of Things (IoT) technology and data-driven agriculture are one answer. In fact, IoT is already showing promising results.

Find out how IoT solves some of agriculture’s most vexing challenges by helping farmers connect fields and herds, reduce risks, streamline operations, and increase yield. To learn more, register for the IoT in Action event in Sydney on March 19, 2019.

How IoT is redefining agriculture

IoT offers countless benefits to agriculture across a wide range of scenarios. Microsoft Project FarmBeats is a cost-effective artificial intelligence (AI) and IoT platform based on Windows IoT devices and Azure cloud technologies. By combining low-cost sensors, drones, and vision and machine learning algorithms to map farms, Microsoft Project FarmBeats enables data-driven precision agriculture and the ability to increase density, quality, sustainability, and yield.

IoT-enabled sensors in the field can monitor everything from soil pH and quality to water saturation to ensure site-specific applications of irrigation, pesticides, and fertilizers. IoT provides opportunities for phenotyping and targeting seed varieties where they’ll best thrive. Drones and robots help monitor crops, identify optimal harvest times, and mitigate threats from pests and disease in real-time.

For operations that raise livestock or produce other animal products, connected field sensors and animal tags can be used to track and manage herds, monitor animal health and fertility, alert farmers to predators, and manage feed.

Of course, connecting devices and uploading data to the cloud can be especially challenging in rural areas. Microsoft has found a way to overlay WiFi signals over TV white spaces—that is, unused TV channels—to transport data from sensors, drones, cameras, and tractors back to the farmer’s office. From there, Azure IoT Edge running on PCs handles most of the computing, including Project FarmBeats AI and computer vision algorithms, and transmits data to the cloud, regardless of broadband speed.

Real-life applications of IoT in agriculture

One of Australia’s fastest-growing dairy companies, Australian Consolidated Milk (ACM) serves more than 180 farms and handles around 350 million liters of milk annually. Ensuring the quality and safety of milk is a top priority, and maintaining the right temperature from collection to transport is key. One spoiled tanker-load of milk can cost up to $10,000 and have negative environmental impacts.

To help mitigate this, ACM is working with Advance Computing to trial a cloud-based IoT solution that provides greater visibility into milk temperature so that action can be taken as soon as an anomaly is detected. The solution sends quality and temperature notifications to farmers in real time so they can make necessary changes without delay.

Water is also a major concern: agriculture consumes approximately 70 percent of global water resources, according to the Food and Agriculture Organization. New Zealand-based Blackhills Farm is doing its part to lower that percentage.

Using the SCADAFarm system by WaterForce, which combines IoT solutions from Schneider Electric and Microsoft, Blackhills Farm is able to remotely monitor and control their irrigation system. Sprinklers can be customized for individual crops, soils, and moisture levels and be adjusted quickly for rain, heat, and other conditions. The solution has helped Blackhills Farm reduce water and power usage while realizing higher crop yields.

Meanwhile, during harvest season, Echuca-based Kagome receives some 180 tons of tomatoes at its plant each hour. It enlisted the help of Advance Computing to devise an IoT-based solution that uses data from on-farm sensors, in-truck devices, and technology installed in Kagome’s loading bay to give the company a clear window on its operations. Tracing shipments is now automated, and information can be accessed anytime and anywhere. According to Kagome CEO Jason Fritsch, the solution has paid for itself five times over in the first season.

See how IoT is reshaping agriculture at IoT in Action in Sydney

IoT in Action is coming to Sydney on March 19, 2019. Register for this one-day, in-person event to discover how partners and customers are unlocking the potential of intelligent edge and intelligent cloud solutions to transform success in agriculture and other industries. Gain actionable insights around the latest topics in IoT business transformation, innovations in IoT security, the intelligent edge, and more. Plus, meet face-to-face with IoT experts, partners, and technical and business decision makers.
Source: Azure

Azure Stack IaaS – part 3

This blog post was co-authored by David Armour, Principal PM Manager, Azure Stack, and Tiberiu Radu, Senior Program Manager, Azure Stack.

Foundation of Azure Stack IaaS

Remember back in the virtualization days when you had to pick a host for your virtual machine? Some of my business units could tell by the naming convention the make and manufacturer of the hardware. Using this knowledge, they’d fill up the better gear first, leaving the teams that didn’t know better with the oldest hosts.

Clouds take a different approach. Instead of hosts, VMs are placed into a pool of capacity. The physical infrastructure is abstract. The compute, storage, and networking resources consumed by the VM are defined through software.

Azure Stack is an instance of the Azure cloud that you can run in your own datacenter. Microsoft has taken the experience and technology from running one of the largest clouds in the world to design a solution you can host in your facility. This forms the foundation of Azure Stack’s infrastructure-as-service (IaaS).

Let’s explore some of the characteristics of the Azure Stack infrastructure that allows you to run cloud-native VMs directly in your facility.

Cloud inspired hardware

Microsoft employees can’t just purchase their favorite server and rack it into an Azure datacenter. The only servers that enter an Azure datacenter have been specifically built for Azure. Not only are the servers built for Azure, so are the networking devices, the racks, and the cabling. This extreme standardization allows the Azure team to operate an Azure datacenter with just a handful of employees. Because all the servers are standardized and can be uniformly operated and automated, adding additional capacity to a datacenter doesn’t require hiring more employees to operate them.

Another advantage of standardizing hardware configurations is that it leads to expected, repeatable results – not only for Microsoft and Azure, but for its customers. The hardware integration has been validated and is a known recipe. Servers, storage, networking, cabling layout, and more are all well known; based on these recipes, the ordering, delivery, and integration of new hardware components, as well as servicing and eventual retirement, are repeatable and scalable. The full end-to-end validation of these configurations is done once, with quick checks in place when the capacity is delivered and installed.

These principles are applied to Azure Stack solutions as well. The configurations, their capabilities, and their validation are all well known, and the result is a repeatable and supportable product. Microsoft, its partners, and most importantly the end customer benefit. While an Azure Stack customer is limited to the defined partner solutions, these have been built with reasonable flexibility so the customer can choose the specific capabilities or capacities required. Please note there is one exception – the Azure Stack Development Kit (ASDK) allows you to install Azure Stack on any hardware that meets the hardware requirements. The ASDK is for evaluation purposes and is not supported as a production environment.

Learn more:

Azure Stack Capacity Planner
Best practices for planning Azure Stack deployment and post-deployment integrations with Azure

Azure Stack hardware partnerships

Microsoft has partnered and co-engineered solutions with a variety of hardware partners or OEMs. The benefit is that Azure Stack can meet you where your existing relationships exist. These relationships may be based on existing hardware purchasing agreements, geographic location, or support capabilities. Keeping in mind the principles of operating a solution in a well-defined manner, Microsoft has set minimum requirements for Azure Stack hardware solutions. Each of our partners can then choose from their portfolio the components, servers, and network switches that best meet your needs. This creates a well-defined variety that continues to be supportable and delivers the overall solution value.

Our current solutions partners are as follows:

Resiliency to failure

One of the principles we have taken from Microsoft’s experience in the enterprise and from Azure is overall solution resilience. The world of software and hardware is not perfect; things fail – cables go bad, software has bugs, power outages occur, and so on. While we work to build better software and with our solution partners to continually improve, we must expect that things fail. Azure Stack solutions are not perfect, but they have been constructed with the intent to overcome common points of failure. For example, each copy of tenant/user data is stored on three separate storage devices in three separate servers. The physical network paths are redundant and provide better performance and resiliency to potential failure. The internal software of Azure Stack consists of services that coordinate across multiple instances. This type of end-to-end architectural design and implementation leads to a better end experience. Combining this approach to infrastructure resilience with the well-known and validated solutions approach described above provides a better experience for the customer.

Learn more:

Understanding architectural patterns and practices for business continuity and disaster recovery on Microsoft Azure Stack

Hardened by default

When you run your IaaS VMs in Azure Stack, you should know they are running on a secure foundation. It turns out that one of the reasons people select Azure Stack is because they have data and/or processes that are either regulated or defined in a contractual agreement. Azure Stack not only gives its owners control of their data and processes, it comes with an infrastructure which is secured by default. In fact, the underlying infrastructure is locked down in a way that neither the owner nor Microsoft can access it. If it ever needs to be accessed because of a support issue, both the owner and Microsoft combine their keys to obtain access to the system, and only for a limited time.

Azure leads the industry in security compliance, and security compliance is important for Azure Stack as well. In Azure, Microsoft fully manages the technology, people, and processes as well as its compliance responsibilities. Things are different with Azure Stack. While the technology is provided by Microsoft, the people and processes are managed by the operator. To help operators jump-start the certification process, Azure Stack has gone through a set of formal assessments by an independent third-party auditing firm to document how the Azure Stack infrastructure meets the applicable controls from several major compliance standards. The documentation is an assessment of the technology, not a certification of Azure Stack, because the standards include several personnel-related and process-related controls, but it helps you get started. The technology assessments include the following standards:

PCI-DSS – Addresses the payment card industry
CSA Cloud Control Matrix –  A comprehensive mapping across multiple standards, including FedRAMP Moderate, ISO27001, HIPAA, HITRUST, ITAR, NIST SP800-53, and others
FedRAMP High – For government customers

To download the Azure Stack compliance documentation please see, "Azure Security and Compliance Additional Frameworks."

Learn more:

Azure Trust Center
Azure Stack infrastructure security posture
Security and compliance in Azure Stack
Using the privileged endpoint in Azure Stack

Get started by reviewing your options

As noted earlier, Azure Stack is sold as an integrated hardware system, with software pre-installed on the validated hardware. It typically comes in a standard server rack. You choose where your system will be located. You can host it in your data center or perhaps you want to run it in a service provider’s facility.

With the Azure Stack running in your location of choice, you also have a choice of who operates the Azure Stack infrastructure. An Azure Stack operator is responsible for giving access to the Azure Stack, keeping the software and firmware up to date, providing the content in the marketplace, monitoring the system health, and diagnosing issues. Azure Stack provides automation, documentation, and training for all of these processes so that someone from your organization can operate Azure Stack. We also provide trained partner experts who can operate your Azure Stack either in their facility or yours.

Here is an overview of your options when you acquire your Azure Stack:

A system you manage

Typically on-premises
You control management and ops
Buy Azure Stack from Microsoft
Buy hardware from the vendor
Call Microsoft for support

A managed service

Typically at service provider premises
Service is managed for you
Buy service from service provider
Service includes hardware and software
Call the service provider for support

Learn more:

Azure Stack operator documentation
Azure Stack technology and service partners

Tuning your IaaS VMs for a cloud infrastructure

Once you have your Azure Stack up and running and you begin to plan your first IaaS VM deployments, you need to think about these VMs as cloud deployments, not virtualization deployments. IaaS VMs run best when they take advantage of the cloud infrastructure that they are running on. Many times, the way you tune a VM in your cloud infrastructure will be very different than the way you tuned VMs in your traditional virtualization environment. That said, you can always start with what you already have and improve those solutions through modern operations.

A great example of this is the use of multiple disks to get the IOPS and throughput required by the application. As is the case in Azure, virtual machines placed in Azure Stack have limits applied to their disk activity. This limits the impact of one VM’s activity on another VM – also known as the noisy neighbor problem. While these limits are great for IaaS environments, it may take extra work to deploy workloads so that they get the resources they need – in this example, IOPS.

For optimization of SQL Server deployments, our documentation provides guidance on how to configure storage to obtain the needed performance. In this case, the approach is to attach multiple disks and stripe them to obtain the capacity and performance. When you use managed disks for your VMs, the system can optimize where the physical data gets stored within your Azure Stack. Moving from virtualization environments to IaaS is reasonably straightforward and has its benefits, but requires a little bit of work on your first deployment. You can always use tools like Azure Monitor and the Virtual Machine solutions to better understand your workloads and gain insights on the performance of your VMs. When your VMs are not meeting the performance requirements, you can also use the Azure Performance Diagnostics VM extension for Windows to troubleshoot and identify potential bottlenecks.
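
To make the striping idea concrete, here is a small planning sketch. The per-disk IOPS and throughput figures are illustrative assumptions, not published Azure Stack limits, so substitute the numbers from your own environment and remember that VM-size limits also apply.

```python
import math

# Illustrative per-disk limits (assumptions for the sketch, not real quotas).
PER_DISK_IOPS = 500
PER_DISK_MBPS = 60

def disks_needed(target_iops, target_mbps):
    # Striping n disks roughly multiplies both IOPS and throughput by n,
    # up to the limits of the chosen VM size.
    return max(math.ceil(target_iops / PER_DISK_IOPS),
               math.ceil(target_mbps / PER_DISK_MBPS))

print(disks_needed(target_iops=5000, target_mbps=200))  # -> 10 disks
```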

The great thing about IaaS, and specifically Azure Stack, is the ability to easily reuse the deployment templates or artifacts to reduce the work for migration of similar workloads. We will cover this more in a future blog post.

Learn more:

Create virtual machine disk storage in Azure Stack
Optimize SQL Performance on Azure Stack
Azure Managed Disks Overview
Frequently asked questions about Azure IaaS VM disks
Considerations for Managed Disks on Azure Stack

Infrastructure purpose built for running cloud-native VMs

Few organizations can claim that they have experience building one of the largest cloud infrastructures in the world. When you buy an Azure Stack, you get the benefit of Microsoft’s Azure experience. Microsoft has partnered with the best OEMs to deliver a standardized configuration so that you don’t have to worry about these details. The infrastructure of Azure Stack is purpose-built to get the best for your IaaS VMs – keeping them safe, secure, and performant.

Learn more:

How to buy Azure Stack
Azure Stack datacenter integration

In this blog series

We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform
Start with what you already have
Fundamentals of IaaS
Do it yourself
Pay for what you use
It takes a team
If you do it often, automate it
Protect your stuff
Build on the success of others
Journey to PaaS

Source: Azure

Intel and Microsoft bring optimizations to deep learning on Azure

This post is co-authored with Ravi Panchumarthy and Mattson Thieme from Intel.

We are happy to announce that Microsoft and Intel are partnering to bring optimized deep learning frameworks to Azure. These optimizations are available in a new offering on the Azure marketplace called the Intel Optimized Data Science VM for Linux (Ubuntu).

Over the last few years, deep learning has become the state of the art for several machine learning and cognitive applications. Deep learning is a machine learning technique that leverages neural networks with multiple layers of non-linear transformations, so that the system can learn from data and build accurate models for a wide range of machine learning problems. Computer vision, language understanding, and speech recognition are all examples of deep learning at play today. Innovations in deep neural networks in these domains have enabled these algorithms to reach human level performance in vision, speech recognition and machine translation. Advances in this field continually excite data scientists, organizations and media outlets alike. To many organizations and data scientists, doing deep learning well at scale poses challenges due to technical limitations.

Often, default builds of popular deep learning frameworks like TensorFlow are not fully optimized for training and inference on CPU. In response, Intel has open-sourced framework optimizations for Intel® Xeon processors. Now, through partnering with Microsoft, Intel is helping you accelerate your own deep learning workloads on Microsoft Azure with this new marketplace offering.

"Microsoft is always looking at ways in which our customers can get the best performance for a wide range of machine learning scenarios on Azure. We are happy to partner with Intel to combine the toolsets from both the companies and offer them in a convenient pre-integrated package on the Azure marketplace for our users” 

– Venky Veeraraghavan, Partner Group Program manager, ML platform team, Microsoft.

Accelerating Deep Learning Workloads on Azure

Built on top of the popular Data Science Virtual Machine (DSVM), this offer adds new Python environments that contain Intel’s optimized versions of TensorFlow and MXNet. These optimizations leverage the Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) to accelerate training and inference on Intel® Xeon® Processors. When running on an Azure F72s_v2 VM instance, these optimizations yielded an average 7.7X speedup in training throughput across all standard CNN topologies. You can find more details on the optimization practice here.

For a data scientist or AI developer, this change is quite transparent. You still code with the standard TensorFlow or MXNet frameworks. You can also use the new set of Python (conda) environments (intel_tensorflow_p36, intel_mxnet_p36) on the DSVM to run your code and take full advantage of all the optimizations on an Intel® Xeon® processor-based F-series or H-series VM instance on Azure. Since this product is built using the DSVM as the base image, all the rich tools for data science and machine learning are still available to you. Once you develop your code and train your models, you can deploy them for inferencing on either the cloud or the edge.
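
As a quick, informal way to see the effect, you can time the same operation in both the default environment and the intel_tensorflow_p36 environment. The sketch below assumes the TensorFlow 1.x API that shipped on the DSVM at the time and is not a substitute for the tf_cnn_benchmarks runs referenced in the performance section below.

```python
# Run this in each conda environment and compare the timings.
# The matrix size and loop count are arbitrary choices for illustration.
import time
import numpy as np
import tensorflow as tf

a = tf.constant(np.random.rand(4096, 4096), dtype=tf.float32)
b = tf.constant(np.random.rand(4096, 4096), dtype=tf.float32)
product = tf.matmul(a, b)

with tf.Session() as sess:
    sess.run(product)            # warm-up run
    start = time.time()
    for _ in range(10):
        sess.run(product)
    print("10 matmuls took %.2f s" % (time.time() - start))
```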

“Intel and Microsoft are committed to democratizing artificial intelligence by making it easy for developers and data scientists to take advantage of Intel hardware and software optimizations on Azure for machine learning applications. The Intel Optimized Data Science Virtual Machine (DSVM) provides up to a 7.7X speedup on existing frameworks without code modifications, benefiting Microsoft and Intel customers worldwide”

– Binay Ackalloor, Director Business Development, AI Products Group, Intel.

Performance

In Intel’s benchmark tests run on an Azure F72s_v2 instance, here are the results comparing the optimized version of TensorFlow with the standard TensorFlow builds.

Figure 1: Intel® Optimization for TensorFlow provides an average of 7.7X increase (average indicated by the red line) in training throughput on major CNN topologies. Run your own benchmarks using tf_cnn_benchmarks. Performance results are based on Intel testing as of 01/15/2019. Find the complete testing configuration here.

Getting Started

To get started with the Intel Optimized DSVM, click on the offer in the Azure Marketplace, then click “GET IT NOW”. Once you answer a few simple questions on the Azure Portal, your VM is created with all the DSVM tool sets and the Intel optimized deep learning frameworks pre-configured and ready to use.

The Intel Optimized Data Science VM is an ideal environment to develop and train deep learning models on Intel Xeon processor-based VM instances on Azure. Microsoft and Intel will continue their long partnership to explore additional AI solutions and framework optimizations to other services on Azure like the Azure Machine Learning service and Azure IoT Edge.

Next steps

Create your Intel Optimized Data Science VM instance from the Azure Marketplace.
Learn more about the Intel Optimized Data Science VM.
Build AI solutions and deploy machine learning models in production at scale using Azure Machine Learning service.
New to Azure? Get your free trial.

Source: Azure

Microsoft continues to build the case for data estate modernization on Azure

Special thanks to Rik Tamm-Daniels and the Informatica team for their contribution to this blog post.

With the latest release of Azure SQL Data Warehouse, Microsoft doubles down on Azure SQL DW as one of the core data services for digital transformation on Azure. In addition to the fundamental benefits of agility, on-demand scaling, and unlimited compute availability, the most recent price-to-performance metrics from the GigaOM report are one of several compelling arguments made for customers to adopt Azure SQL DW. Interestingly, Microsoft is also announcing the general availability of Azure Data Lake Gen 2 and Azure Data Explorer. Along with Power BI for rich visualization, this enhanced set of capabilities cements Microsoft’s leadership position around Cloud Scale Analytics.

Every day, I speak with joint Informatica and Microsoft customers who are looking to transform their approach to their data estate with a cohesive data lake and cloud data warehousing solution architecture. These customers range from global logistics companies to auto manufacturers to the world’s largest insurers, and all of them see the tremendous potential of the Microsoft modern data estate approach. In fact, just via Informatica's iPaaS (integration platform-as-a-service) offering, Informatica Intelligent Cloud Services, we’ve seen significant quarter-to-quarter growth in customer data volumes being moved to Azure SQL DW.

Of course, as compelling as the Azure SQL DW technology is, for many customers modernizing a legacy enterprise data warehouse is a daunting proposition to even consider. The thought of touching the intricate web of dependencies around the warehouse can keep even the most battle-tested CIO up at night. A key consideration when attempting your own cloud data warehousing or data modernization initiative is to ensure you have intelligence about the existing schemas, lineage, and dependencies, so that you can incrementally unravel the data web surrounding the warehouse and, with laser-like precision, begin to move workloads and use cases to Azure SQL DW.

Enter Informatica’s Enterprise Data Catalog, with full end-to-end source-to-destination lineage and searchable, machine-learning- and AI-driven intelligent metadata about what data lives where in the warehouse, to clear the fog of complexity and illuminate a clear path to cloud data warehousing. In fact, the concept of discovery- and catalog-driven modernization is such a compelling leap forward that Microsoft and Informatica developed a single sign-on Data Accelerator on Informatica’s Intelligent Cloud Services on Azure that can be accessed directly from the Azure SQL DW management console with your Azure credentials.

Data Accelerator for Azure

Want to see how Informatica and Microsoft can jumpstart your cloud data warehousing modernization initiative? Join us on Informatica's world tour of hands-on workshops at a Microsoft Technology Center near you. Workshops are taking place in North America right now and will be coming to EMEA and APJ very soon!

Register here: Cloud Data Warehouse Modernization for Azure Workshop. 
Quelle: Azure

Build a CI/CD pipeline for API Management

APIs have become ubiquitous. They are the de facto standard for connecting apps, data, and services and, in the larger picture, they are driving digital transformation in organizations.

With the strategic value of APIs, a continuous integration (CI) and continuous deployment (CD) pipeline has become an important aspect of API development. It allows organizations to automate deployment of API changes without error-prone manual steps, detect issues earlier, and ultimately deliver value to end users faster.

This blog walks you through a conceptual framework for implementing a CI/CD pipeline for deploying changes to APIs published with Azure API Management.

The problem

Organizations today normally have multiple deployment environments (e.g., Development, Testing, Production) and use separate API Management instances for each environment. Some of these instances are shared by multiple development teams, who are responsible for different APIs with different release cadences.

As a result, customers often come to us with the following challenges:

How to automate deployment of APIs into API Management?
How to migrate configurations from one environment to another?
How to avoid interference between different development teams who share the same API Management instance?

We believe the approach described below will address all these challenges.

CI/CD with API Management

The proposed approach is illustrated in the above picture. In this example, there are two deployment environments: Development and Production. Each has its own API Management instance. The Production instance is managed by a designated team, called API publishers. API developers only have access to the Development instance. 

The key in this proposed approach is to keep all configurations in Azure Resource Manager templates. These templates should be kept in a source control system. We will use Git as an example. As illustrated in the picture, there is a Publisher repository that contains all configurations of the Production API Management instance in a collection of templates:

Service template: Contains all service-level configurations (e.g., pricing tier and custom domains).
Shared templates: Contain shared resources used throughout an API Management instance (e.g., groups, products, and identity providers).
API templates: Contain the configurations of APIs and their sub-resources (e.g., operations and policies).
Master template: Ties everything together by linking to all the other templates (see the sketch after this list).
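
To make the last point concrete, here is a rough sketch of the linking pattern a master template follows. The base URI and template file names are hypothetical placeholders, and the structure is expressed as a Python dictionary purely for illustration; your actual templates will be JSON files managed in the Publisher repository.

```python
# Illustrative sketch only: the shape of a master template that links the other
# templates together via Microsoft.Resources/deployments resources.
base_uri = "https://<your-storage-account>.blob.core.windows.net/templates"

def linked_deployment(name, file_name):
    """One nested deployment that pulls in a linked template."""
    return {
        "type": "Microsoft.Resources/deployments",
        "apiVersion": "2019-10-01",
        "name": name,
        "properties": {
            "mode": "Incremental",
            "templateLink": {"uri": f"{base_uri}/{file_name}"},
        },
    }

master_template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [
        # Hypothetical file names; one linked deployment per template type.
        linked_deployment("sharedResources", "shared.template.json"),
        linked_deployment("petstoreApi", "petstore.api.template.json"),
    ],
}
```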

API developers will fork and clone the Publisher repository. In most cases, they will focus on API templates for their APIs and should not change the shared or service templates.

When working with Resource Manager templates, we realize there are two challenges for API developers:

First, API developers often work with OpenAPI specifications and may not be familiar with Resource Manager schemas. To simplify the creation of templates, we created a utility tool that automates the creation of API templates from OpenAPI specifications.
Second, for customers who have already been using API Management, another challenge is how to extract existing configurations into Resource Manager templates. We created another tool to generate templates based on existing configurations.

Once developers have finished developing and testing an API, and have generated the API template, they will submit a pull request to the Publisher repository. API publishers can validate the pull request and make sure the changes are safe and compliant. Most of the validations can be automated as part of the CI/CD pipeline. When the changes are approved and merged successfully, API publishers will deploy them to the Production instance. The deployment can also be easily automated with Azure Pipelines.
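
If you prefer to script that deployment step yourself (for example, from an Azure Pipelines task), the following is a minimal sketch using the Azure SDK for Python. The subscription, resource group, deployment name, and file names are hypothetical placeholders rather than anything mandated by the approach.

```python
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Hypothetical placeholders: substitute your own subscription, resource group,
# and the master template/parameter files generated for your APIs.
subscription_id = "<subscription-id>"
resource_group = "apim-prod-rg"

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

with open("master.template.json") as f:
    template = json.load(f)
with open("master.parameters.json") as f:
    # ARM parameter files wrap values as {"parameterName": {"value": ...}}.
    parameters = json.load(f)["parameters"]

# Incremental mode only adds or updates the resources described in the template.
poller = client.deployments.begin_create_or_update(
    resource_group,
    "apim-api-deployment",
    {
        "properties": {
            "mode": "Incremental",
            "template": template,
            "parameters": parameters,
        }
    },
)
print(poller.result().properties.provisioning_state)
```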

With this approach, the deployment of API changes into API Management instances can be automated, and it is easy to promote changes from one environment to another. Because different API development teams work on different sets of API templates, the approach also reduces the chance of interference between teams.

Next steps

You can find the guidance, examples, and tools in this GitHub repository. Please give it a try and let us know your feedback and questions.

We realize our customers bring a wide range of engineering cultures and existing automation solutions. The approach and tools provided here are not meant to be a one-size-fits-all solution. That's why we published and open-sourced everything on GitHub, so that you can extend and customize the solution.
Quelle: Azure

Azure Databricks – New capabilities at lower cost

Azure Databricks provides a fast, easy, and collaborative Apache Spark™-based analytics platform to accelerate and simplify the process of building big data and AI solutions backed by industry leading SLAs.

With Azure Databricks, customers can set up an optimized Apache Spark environment in minutes. Data scientists and data engineers can collaborate using an interactive workspace with languages and tools of their choice. Native integration with Azure Active Directory (Azure AD) and other Azure services enables customers to build end-to-end modern data warehouse, machine learning and real-time analytics solutions.

We have seen tremendous adoption of Azure Databricks and today we are excited to announce new capabilities that we are bringing to market.

General availability of Data Engineering Light

Customers can now get started with Azure Databricks using a new low-priced workload called Data Engineering Light, which enables them to run batch applications on managed Apache Spark. It is meant for simple, non-critical workloads that don’t need the performance, autoscaling, and other benefits provided by the Data Engineering and Data Analytics workloads. Get started with this new workload.
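
Batch applications like this are typically submitted as jobs rather than run interactively. As a minimal sketch, assuming you have a workspace URL and a personal access token, you could create and trigger a notebook job through the Databricks Jobs REST API; the runtime version, node type, and notebook path below are hypothetical placeholders.

```python
import requests

# Hypothetical placeholders: your workspace URL, a personal access token,
# and the notebook you want to run as a batch job.
host = "https://<your-workspace>.azuredatabricks.net"
headers = {"Authorization": "Bearer <personal-access-token>"}

# Create a job that runs a notebook on a new job cluster.
job = requests.post(
    f"{host}/api/2.0/jobs/create",
    headers=headers,
    json={
        "name": "nightly-etl",
        "new_cluster": {
            "spark_version": "5.3.x-scala2.11",  # pick a supported runtime
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 2,
        },
        "notebook_task": {"notebook_path": "/Jobs/nightly_etl"},
    },
).json()

# Trigger a run of the job immediately.
run = requests.post(
    f"{host}/api/2.0/jobs/run-now",
    headers=headers,
    json={"job_id": job["job_id"]},
).json()
print("Started run", run["run_id"])
```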

Additionally, we have reduced the price of the Data Engineering workload across both the Standard and Premium SKUs. Both SKUs are now available at up to 25 percent lower cost. To check out the new pricing for Azure Databricks SKUs, visit the pricing page.

Preview of managed MLflow

MLflow is an open source framework for managing the machine learning lifecycle. With managed MLflow, customers can access it natively from their Azure Databricks environment and leverage Azure Active Directory for authentication. With managed MLflow on Azure Databricks customers can:

Track experiments by automatically recording parameters, results, code, and data to an out-of-the-box hosted MLflow tracking server. Runs can now be organized into experiments from within the Azure Databricks workspace, and results can be queried from within Azure Databricks notebooks to identify the best-performing models (see the sketch after this list).
Package machine learning code and dependencies locally in a reproducible project format and execute remotely on a Databricks cluster.
Quickly deploy models to production.
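
As a minimal sketch of the tracking piece from a notebook (the experiment path, parameters, and metric values are purely illustrative):

```python
import mlflow

# Inside an Azure Databricks notebook the tracking server is pre-configured,
# so runs land in the hosted tracking service without extra setup.
mlflow.set_experiment("/Users/<you>/churn-experiments")  # hypothetical path

with mlflow.start_run():
    # Illustrative parameters and metrics; log whatever your training produces.
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("learning_rate", 0.1)
    mlflow.log_metric("auc", 0.87)
    mlflow.log_artifact("model.pkl")  # attach a serialized model file, if you have one
```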

Learn more about managed MLflow.

Machine learning on Azure with Azure Machine Learning and Azure Databricks

Since the general availability of Azure Machine Learning service (AML) in December 2018, and its integration with Azure Databricks, we have received overwhelmingly positive feedback from customers who are using this combination to accelerate machine learning on big data. Azure Machine Learning complements the Azure Databricks experience by:

Unlocking advanced automated machine learning capabilities that enable data scientists of all skill levels to identify suitable algorithms and hyperparameters faster.
Enabling DevOps for machine learning, with simplified management, monitoring, and updating of machine learning models.
Deploying models to the cloud and the edge.
Providing a central registry for experiments, machine learning pipelines, and models created across the organization (see the sketch after this list).
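
As a minimal sketch of the experiment tracking and model registry piece using the Azure Machine Learning SDK for Python (the workspace configuration, experiment name, and file paths are hypothetical placeholders):

```python
from azureml.core import Experiment, Workspace

# Assumes a config.json downloaded from your Azure ML workspace.
ws = Workspace.from_config()

# Log a run into a named experiment so it shows up in the central registry.
experiment = Experiment(workspace=ws, name="databricks-churn")
run = experiment.start_logging()
run.log("accuracy", 0.91)                           # illustrative metric
run.upload_file("outputs/model.pkl", "model.pkl")   # hypothetical local model file
run.complete()

# Register the trained model so it can be versioned and deployed later.
model = run.register_model(model_name="churn-model", model_path="outputs/model.pkl")
print(model.name, model.version)
```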

The combination of Azure Databricks and Azure Machine Learning makes Azure the best cloud for machine learning. Customers benefit from an optimized, autoscaling Apache Spark-based environment, an interactive collaborative workspace, automated machine learning, and end-to-end machine learning lifecycle management.

Get started today!

Try Azure Databricks and let us know your feedback!

Try Azure Databricks through a 14-day premium trial.
Try Azure Machine Learning.
Watch the webinar on Machine Learning on Azure.

Quelle: Azure