Step up your machine learning process with Azure Machine Learning service

Everyone’s talking about machine learning (ML). Business decision makers are finding ways to deploy machine learning in their organizations. Data scientists are keeping up with all the advancements, tools, and frameworks available. Media outlets are reporting on awe-inspiring breakthroughs in the artificial intelligence revolution.

We believe the way forward lies in democratizing artificial intelligence and machine learning. This means making machine learning services available to individual data scientists and developers, small and medium-sized businesses, and global organizations, all with the ability to scale their models up and out.

This means offering automated and prebuilt algorithms, as well as the ability to create highly customized models. It also means ensuring they are compatible with open source frameworks.

The challenges of machine learning

As you likely already know, machine learning is a data science technique that allows computers to use existing data to forecast future behaviors, outcomes, and trends. But the promises of machine learning come with challenges. Here are just a few:

There is a lot of manual math, data analysis, programming, training, and experimentation.
There are multiple ways to solve every problem.
Challenges arise in monitoring and evaluating the precision, accuracy, and efficacy of a given model.
Data scientists struggle to find the right development tools, debugging tools, and educational resources.

Azure Machine Learning service

The Azure Machine Learning service provides a cloud-based service you can use to develop, train, test, deploy, manage, and track machine learning models. With Automated Machine Learning and other advancements available, training and deploying machine learning models is easier and more approachable than ever.

Below are three of the key pillars of Azure Machine Learning service that give us an edge. I’ll be going into greater detail about each of these pillars in subsequent blogs, so stay tuned!

These three pillars apply largely to automated machine learning, which is also provided under Azure Machine Learning service. Automated machine learning helps users of all skill levels accelerate their pipelines, leverage open source frameworks, and scale easily, making machine learning more accessible across an organization.

1. End-to-end ML lifecycle management

There’s a lot that goes into the machine learning lifecycle. Data preparation, experimentation, model training, model management, deployment, and monitoring traditionally require time and manual effort. Azure Machine Learning service seamlessly integrates with Azure services to provide end-to-end capabilities for the entire machine learning lifecycle, making it simpler and faster than ever. With Azure Machine Learning service, you can:

Create multiple or shared workspaces to collaborate easily across teams.
Centralize management of all model artifacts.
Schedule runs in parallel.
Manage scripts and data separately.
Ensure ease of support and maintenance with CI/CD while driving quality over time and preventing model drift.
Easily track your experiments and version your models.
Manage and monitor your models directly in the Azure portal.

2. Power productivity and ease-of-use with an open platform

Data scientists and developers are empowered to easily build and train highly accurate machine learning and even deep-learning models through the frameworks and tools that they’re familiar with. You can now bring machine learning models to market faster with flexible open tools. With Azure Machine Learning, you can:

Use your favorite open source frameworks.
Use a familiar and rich set of tools, such as Jupyter Notebooks, with the Python extension for Visual Studio Code.
Reduce friction and refocus on building models.
Easily leverage multi-cloud interoperability with built-in ONNX support.

3. Scale up and out to the cloud or edge easily

Previously, machine learning required powerful compute capabilities to train models quickly. Now, with hardware acceleration (GPUs, containers, and more), scaling up or out is much easier. With Azure Machine Learning, you can:

Use any data and deploy models anywhere.
Scale out training from your local laptop or workstation to the cloud with compute on-demand.
Get GPU and deep learning framework support.
Distribute training for faster results by running models over a cluster of GPU-equipped computers in tandem.
Feel confident in enterprise-grade security, audit, and compliance.
Have reliable model deployment across cloud and edge.
Get cost effective inferencing with batch prediction and scoring.
Consume real-time scoring for targeted outcomes.

As you can see, Azure Machine Learning service addresses a number of top concerns for individuals and organizations seeking to deploy machine learning models, and it advances machine learning for everyone’s benefit. Look out for more upcoming blogs in this series, where we will cover each of these three pillars in more detail.

Learn more

Learn more about the Azure Machine Learning service.

Get started with a free trial of Azure Machine Learning service.
Source: Azure

Azure.Source – Volume 76

Hybrid strategy | Preview | Generally available | News & updates | Technical content | Azure shows | Events | Customers, partners, and industries

Build a Successful Hybrid Strategy

Do you have workloads in the cloud and on-premises? Then you know how important it is to have a comprehensive hybrid design and implementation plan. To help you approach hybrid cloud even more effectively, Microsoft announced two new hybrid cloud services: Azure Stack HCI solutions and Azure Data Box Edge. Whether you run a single cloud or multiple clouds, or are looking to bring intelligent edge computing to your business, you need a consistent and secure environment, no matter where your data resides.

Enabling customers’ hybrid strategy with new Microsoft innovation

The ability for customers to embrace both public cloud and local datacenter, plus edge capability, is enabling customers to improve their IT agility and maximize efficiency. The benefit of a hybrid approach is also what continues to bring customers to Azure, the one cloud that has been uniquely built for hybrid. We haven’t slowed our investment in enabling a hybrid strategy, particularly as this evolves into the new application pattern of using intelligent cloud and intelligent edge. We are continuing to expand Azure Stack offerings to meet a broader set of customer needs, so they can run virtualized applications in their own datacenter. Join the on-demand hybrid cloud virtual event.

Announcing Azure Stack HCI: A new member of the Azure Stack family

Azure Stack HCI solutions are now available for customers who want to run virtualized applications on modern hyperconverged infrastructure (HCI) to lower costs and improve performance. Azure Stack HCI solutions feature the same software-defined compute, storage, and networking software as Azure Stack, and can integrate with Azure for hybrid capabilities such as cloud-based backup, site recovery, monitoring, and more. Azure Stack HCI solutions are designed to run virtualized applications on-premises in a familiar way, with simplified access to Azure for hybrid cloud scenarios. A great hybrid cloud strategy is one that meets you where you are, delivering cloud benefits to all workloads wherever they reside.

Accelerated AI with Azure Machine Learning service on Azure Data Box Edge

Announcing the preview of Azure Machine Learning hardware accelerated models powered by Project Brainwave on Data Box Edge. This preview enhances Azure Machine Learning service by enabling you to train a TensorFlow model for image classification scenarios, containerize the model in a Docker container, and then deploy the container to a Data Box Edge device with Azure IoT Hub. Applying machine learning models to the data on Data Box Edge provides lower latency and savings on bandwidth costs, while enabling real-time insights and speed to action for critical business decisions.

Azure Data Box family meets customers at the edge

Announcing the general availability of Azure Data Box Edge and the Azure Data Box Gateway. Data Box Edge is an on-premises anchor point for Azure and can be racked alongside your existing enterprise hardware or live in non-traditional environments from factory floors to retail aisles. Data Box Edge comes with a built-in storage gateway. If you don’t need the Data Box Edge hardware or edge compute, then the Data Box Gateway is also available as a standalone virtual appliance that can be deployed anywhere within your infrastructure. You can get these products today in the Azure portal.

Now in preview

New updates to Azure AI expand AI capabilities for developers

Continuing our quest to make Azure the best place to build AI, we have introduced a preview of the new Anomaly Detector Service which uses AI to identify problems so companies can minimize loss and customer impact. We have also announced the general availability of Custom Vision to more accurately identify objects in images. From using speech recognition, translation, and text-to-speech to image and object detection, Azure Cognitive Services makes it easy for developers to add intelligent capabilities to their applications in any scenario.

People Recognition Enhancements – Video Indexer

Announcing Video Indexer enhancements that make custom Person model training and management faster and easier. Enhancements include a centralized custom Person Model Management page for creating multiple models in your account, giving you the ability to train your account to identify people based on images of their faces even before you upload any video. Video Indexer now also supports up to 50 Person models per account, and each model supports up to 1 million different people. The new Video Indexer features are now in public preview.

Azure Search – New Storage Optimized service tiers available in preview

Announcing the preview of two new service tiers for Storage Optimized workloads in Azure Search. Azure Search is an AI-powered cloud search service for modern mobile and web app development. Azure Search is the only cloud search service with built-in artificial intelligence (AI) capabilities that enrich all types of information to easily identify and explore relevant content at scale. With Azure Search, you spend more time innovating on your websites and applications, and less time maintaining a complex search solution.

Announcing the public preview of Data Discovery & Classification for Azure SQL Data Warehouse

Announcing the public preview of Data Discovery & Classification for Azure SQL Data Warehouse, an additional capability for managing security for sensitive data. Data Discovery & Classification alleviates the pain of discovering, classifying, and protecting sensitive data before the task becomes unmanageable as your data assets grow. Azure SQL Data Warehouse is a fast, flexible, and secure cloud data warehouse tuned for running complex queries quickly across petabytes of data.

Also available in preview

Public preview: Windows Server container support in Azure App Service
Public preview: Data Discovery & Classification for Azure SQL Data Warehouse
Update 19.03 for Azure Sphere public preview now available in Retail feed

Now generally available

Larger, more powerful Managed Disks for Azure Virtual Machines

Announcing the general availability of larger and more powerful Azure Managed Disk sizes of up to 32 TiB on Premium SSD, Standard SSD, and Standard HDD disk offerings. In addition, we support disk sizes up to 64 TiB on Ultra Disks in preview. We are also increasing the performance scale targets for Premium SSD to 20,000 IOPS and 900 MB/sec. With the general availability (GA) of larger disk sizes, Azure now offers a broad range of disk sizes for your production workload needs, with unmatched scale and performance. Our next step is to enable the preview of Azure Backup for larger disk sizes providing you full coverage for enterprise backup scenarios by the end of May 2019. Similarly, Azure Site Recovery support for on-premises to Azure, and Azure to Azure Disaster Recovery will be extended to all disk sizes soon.

Azure Premium Block Blob Storage is now generally available

Announcing the general availability of Azure Premium Blob Storage. Premium Blob Storage is a new performance tier in Azure Blob Storage for block blobs and append blobs, complementing the existing Hot, Cool, and Archive access tiers. Premium Blob Storage is ideal for workloads that require very fast response times and/or high transaction rates, such as IoT, telemetry, AI, and scenarios with humans in the loop such as interactive video editing, web content, online transactions, and more. Premium Blob Storage is available with Locally-Redundant Storage (LRS) and comes with High-Throughput Block Blobs (HTBB), which provides very high and instantaneous write throughput when ingesting block blobs larger than 256 KB. Premium Blob Storage is initially available in the US East, US East 2, US Central, US West, US West 2, North Europe, West Europe, Japan East, Australia East, Korea Central, and Southeast Asia regions, with more regions to come.

Azure Blob Storage lifecycle management generally available

Announcing the general availability of Blob Storage Lifecycle Management to automate blob tiering and retention with custom defined rules. Azure Blob Storage Lifecycle Management offers a rich, rule-based policy which you can use to transition your data to the best access tier and to expire data at the end of its lifecycle. This feature is available in all Azure public regions.
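To make the rule-based policy concrete, a lifecycle management policy is expressed as a JSON rule set. A sketch along the following lines (the rule name, prefix, and day thresholds are invented for illustration) would tier block blobs under a logs/ prefix to Cool after 30 days, to Archive after 90, and delete them after a year:

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```

Each rule pairs a filter (blob types and name prefixes) with tiering and expiry actions keyed off the blob's last modification time.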

Azure Storage support for Azure Active Directory based access control generally available

Announcing the general availability of Azure Active Directory (AD) based access control for Azure Storage Blobs and Queues. Enterprises can now grant specific data access permissions to users and service identities from their Azure AD tenant using Azure’s Role-based access control (RBAC).  Administrators can then track individual user and service access to data using Storage Analytics logs. Storage accounts can be configured to be more secure by removing the need for most users to have access to powerful storage account access keys.

Blob storage interface on Data Box is now generally available

Announcing the general availability of a blob storage interface on Data Box. The blob storage interface allows you to copy data into the Data Box via REST and makes the Data Box appear like an Azure storage account. Applications that write to Azure blob storage can be configured to work with the Azure Data Box. With this capability, partners like Veeam, Rubrik, and DefendX are now able to use the Data Box to assist customers moving data to Azure.

Also generally available

Greater storage capacity and performance with new Azure Disks SKU
Azure Security Center change from monthly to hourly unit of measure
Event Hubs resource GUID changes
App Service updating PHP to latest versions
Azure Site Recovery: Firewall support for replication of on-premises machines
Azure API Management roundup of features and fixes

News and updates

Clean up files by built-in delete activity in Azure Data Factory

Azure Data Factory (ADF) is a fully-managed data integration service in Azure that allows you to iteratively build, orchestrate, and monitor your Extract Transform Load (ETL) workflows. Files on on-premises or cloud storage servers must be periodically cleaned up once they become out of date. The ADF built-in delete activity, which can be part of your ETL workflow, deletes undesired files without writing code. You can use ADF to delete folders or files from Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, File System, FTP Server, SFTP Server, and Amazon S3.
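For a feel of the logic involved, the age-based cleanup that the delete activity performs can be sketched in plain Python against a local filesystem (this is purely illustrative; delete_stale_files and its age threshold are our own names, not part of the ADF API):

```python
import time
from pathlib import Path

def delete_stale_files(root: str, max_age_days: float) -> list:
    """Delete files under `root` whose modification time is older
    than `max_age_days`, and return the paths that were removed."""
    cutoff = time.time() - max_age_days * 86400  # 86400 seconds per day
    deleted = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()  # remove the out-of-date file
            deleted.append(str(path))
    return deleted
```

In ADF the equivalent rule is declared in the pipeline rather than coded, but the effect is the same: files past a freshness cutoff are removed on each scheduled run.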

What’s new in Azure IoT Central – March 2019

This post recaps the new features now available in Azure IoT Central, including embedded Microsoft Flow, updates to the Azure IoT Central connector, Azure Monitor action groups, multiple dashboards, and localization support, as well as the recently expanded Jobs functionality. With these new features, you can more conveniently build workflows as actions and reuse groups of actions, organize your visualizations across multiple dashboards, and work with IoT Central in your favorite language.

Incrementally copy new files by LastModifiedDate with Azure Data Factory

Azure Data Factory (ADF) is the fully-managed data integration service for analytics workloads in Azure. Using ADF, users can load the lake from more than 80 data sources on-premises and in the cloud, use a rich set of transform activities to prep, cleanse, and process the data using Azure analytics engines, and land the curated data into a data warehouse for innovative analytics and insights. Now, ADF provides a new capability for you to incrementally copy only new or changed files by LastModifiedDate from a file-based store. The feature is available when loading data from Azure Blob Storage, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Amazon S3, File System, SFTP, and HDFS.
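Conceptually, the LastModifiedDate filter works like a watermark: each run picks up only files modified after the previous run. A minimal local sketch in Python (the function name and watermark convention are ours, purely illustrative of the idea, not of ADF's implementation):

```python
import time
from pathlib import Path

def files_modified_since(root: str, watermark: float) -> list:
    """Return files under `root` whose modification time is newer than
    `watermark` (a Unix timestamp), i.e. the incremental set to copy."""
    return sorted(
        p for p in Path(root).rglob("*")
        if p.is_file() and p.stat().st_mtime > watermark
    )
```

After copying the returned files, a pipeline would persist the current run's start time as the next watermark, so unchanged files are never re-copied.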

High-Throughput with Azure Blob Storage

Announcing that High-Throughput Block Blob (HTBB) is globally enabled in Azure Blob Storage. HTBB provides significantly improved and instantaneous write-throughput when ingesting larger block blobs, up to the storage account limits for a single blob. We have also removed the guesswork in naming your objects, enabling you to focus on building the most scalable applications. High-Throughput Block Blob is now available in all Azure regions and is automatically active on your existing storage accounts at no extra cost.

Additional news and updates

Happy birthday to managed Open Source RDBMS services in Azure!
Azure Cache for Redis resource GUID changes
IoT Hub supports new Azure Monitor metric alerts
Service Bus Messaging Unit name changes
Video Indexer is now ISO, SOC, HiTRUST, FedRAMP, HIPAA, PCI certified
ExpressRoute Resource GUID name change from "Port" to "Direct"
New tool available to migrate from classic monitoring alerts

Technical content

Building serverless microservices in Azure – sample architecture

Distributed applications take full advantage of living in the cloud to run globally, avoid bottlenecks, and always be available for users worldwide. Most cloud native applications use a microservices architecture to maximize the wide range of managed services for managing infrastructure, scaling, and improving critical processes like deployment or monitoring. This post focuses on how building serverless microservices is a great fit for event-driven scenarios, and how you can use the Azure Serverless platform.

Analysis of network connection data with Azure Monitor for virtual machines

Azure Monitor for virtual machines (VMs) collects network connection data that you can use to analyze the dependencies and network traffic of your VMs. Analyze the number of live and failed connections, bytes sent and received, and the connection dependencies of your VMs down to the process level. Get started with log queries in Azure Monitor for VMs.

Resource governance in Azure SQL Database

When you choose a specific Azure SQL Database service tier, you are selecting a pre-defined set of allocated resources across several dimensions such as CPU, storage type, storage limit, memory, and more. Ideally, you select a service tier that meets the workload demands of your application. With each service tier selection, you are also inherently selecting a set of resource usage boundaries & limits. Learn how to use governance to help set a balanced set of allocated resources.

How to run Ghost blogging software on Azure in a Linux Docker Container

In this post, Jessica details the steps needed for running a Ghost blog in a Docker container on Azure.

Get an official service issue root cause analysis with Azure Service Health

Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance. Learn to use Azure Service Health’s health history to review past health issues and get official root cause analyses (RCAs) to share with your internal and external stakeholders.

AKS Networking Policies

This blog post looks at securing traffic between pods in Azure Kubernetes Service. It outlines the basics of a demo that demonstrates the process using the Cloud Shell.
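By way of illustration, the kind of Kubernetes NetworkPolicy such a demo applies might look like the following (the namespace, labels, and port are hypothetical), allowing only pods labeled app: frontend to reach app: backend pods on TCP port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend       # policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend  # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 80
```

Once a pod is selected by any NetworkPolicy, all ingress traffic not explicitly allowed is denied, which is what makes this pattern useful for isolating workloads in AKS.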

How to access Azure Linux virtual machines with Azure Active Directory

In this blog post, Neil Paterson walks through the basic configuration steps for accessing Azure Linux virtual machines using Azure AD credentials.

MSDEV podcast: The MXChip with Suz Hinton

On the popular MSDev podcast, Suz Hinton discusses the MXChip microcontroller board: what it is, why you would use it, and other technical learnings around hardware and Azure IoT in general.


Serverless — from the beginning, using Azure Functions (Azure portal), Part I

Part 1 in this series covers the essentials of Serverless computing in the cloud. It defines the term and explains how to get started with Azure Functions in the Azure Portal. This is the first part of five. In this part Chris also looks at Function apps, triggers and bindings, and the practical approaches needed to use Serverless within your apps.

Deploying Deep Learning models using Kubeflow on Azure

In this blog post, we look into two machine learning toolkits, Azure Machine Learning service (AML) and Kubeflow, and compare the two approaches for a computer vision scenario in which one would like to deploy a trained deep learning model for image classification. We hope this will help data scientists make a more informed decision for their next deployment problem.

Azure Stack IaaS – part six

A fundamental quality of a cloud is that it provides an elastic pool of resources to use when needed. Since you only pay for what you use, you don’t need to over-provision. Instead, you can optimize capacity based on demand. See some of the ways you can do this for your IaaS VMs running in Azure and Azure Stack. Azure and Azure Stack make it easy for you to resize, scale out, and add and remove VMs from the portal.

Additional technical content

Azure Developer – Get the list of conference rooms using Microsoft Graph API programmatically
Preparing for AZ-300 and AZ-301 with Pluralsight courses

Azure shows

Episode 272 – The New Azure Monitor | The Azure Podcast

Shankar Sivadasan, a Senior Azure Product Marketing Manager, gives us all the details on how the trusty Azure Monitor service has evolved into the main monitoring solution in Azure.


Read the transcript

Deploy to Azure using GitHub Actions | Azure Friday

Gopi joins Donovan to discuss how to deploy to Azure using GitHub Actions, which helps you configure CI/CD from the GitHub UI.

Using GitHub Actions to Deploy to Azure | The DevOps Lab

Damian sits down with Product Manager Gopinath Chigakkagari to talk about deploying to Azure using GitHub Actions. In this episode, Gopi walks through a deployment process inside GitHub Actions to deploy a containerized application to Azure on a new push to a repository. Along the way, he'll also show some of the features and advantages of GitHub Actions itself.

Azure IoT Certification Service | Internet of Things Show

Azure IoT Certification Service can streamline your IoT device certification process and reduce validation effort for device manufacturers.

Five Ways You Can Build Mobile Apps with JavaScript | Five Things

Why are there so many options for developing mobile apps? What should you choose? How can you slipstream your way into mobile and take advantage of the cloud? Todd Anglin has all the answers and wears some snazzy clothing, in this episode of Five Things.

Investigating Production Issues with Azure Monitor and Snapshot Debugger | On .NET

In this episode, Isaac Levin joins us to share how the developer exception resolution experience can be better with Azure Monitor and Snapshot Debugger. The discussion talks about what Azure Monitor is and an introduction to Snapshot Debugger, and quickly goes into demos showcasing what developers can do with Snapshot Debugger.

Using Ethereum Logic Apps to push ledger data into to a MySQL or PostgreSQL database | Block Talk

In this episode we show how to use the Ethereum Logic App connector to integrate a ledger with common backend systems like popular open-source databases, MySQL and PostgreSQL.

How to add Azure Alerts as push notifications on your phone | Azure Portal Series

The Azure mobile app allows you to receive Azure Alerts as push notifications on your mobile device. In this video of the Azure Portal “How To” Series, learn how you can set up Azure Alerts such as metric alerts, log analytics alerts, Application Insights alerts, and Activity Log alerts from Azure Monitor in the Azure portal.

How to use Azure Automation with PowerShell | Azure Tips and Tricks

In this edition of Azure Tips and Tricks, learn how to use Azure Automation with a Windows machine with PowerShell. Azure Automation makes it easy to do common tasks like scaling Azure SQL Database up and down and starting and stopping a virtual machine.

Matt Mitrik on GitHub with Azure Boards | Azure DevOps Podcast

Jeffrey Palermo and Matt Mitrik discuss GitHub with Azure Boards. They talk about the level of integration that’s going to be in Azure Boards (how they’re thinking about things right now and where they want to go), their efforts towards new project workflow and integration for Azure Boards, and the timeline Matt’s team is looking at for these changes. Matt also gives his pitch for GitHub as the future premiere offering and why you should consider migrating.


Episode 4 – Azure Enthusiast: Kevin Boland | AzureABILITY

AzureABILITY host Louis Berman talks Azure with Bentley Systems' Kevin Boland—an Enterprise Cloud Architect who manages one of the largest and most complex set of Azure deployments on the planet.


Read the transcript

Additional Azure shows & videos

Azure Container Registry (ACR) repository and tag locking | Azure Friday
What is Azure Mixed Reality Services? | One Dev Question | One Dev Minute 
What are you most proud of on HoloLens 2? | One Dev Question | One Dev Minute
What are logic apps? | One Dev Question
What can I use Azure Function Triggers for? | One Dev Question
Microsoft Azure Security Center for IoT | Azure Security
Secure your IoT solution with Microsoft Azure | Azure Security
How to set up your first Azure Service Health alert | Maintenance and Resilience in Azure
What's single sign on for SaaS applications? | Azure Active Directory
How to deploy single sign on for SaaS applications | Azure Active Directory
How to roll out single sign on for SaaS applications | Azure Active Directory

Events

Hannover Messe 2019: Azure IoT Platform updates power new, highly-secured Industrial IoT Scenarios

Hannover Messe 2019 is taking place this week (April 1-5) in Hannover, Germany, and Azure is there. Manufacturing continues to be one of the leading industries adopting IoT for a growing set of scenarios to improve safety, efficiency, and reliability for people and devices. We’ve made several significant additions to our IoT platform to address these needs, including the launch of Azure Digital Twins and Azure Sphere, and the general availability of Azure IoT Central and Azure IoT Edge. We’re also introducing a set of new product capabilities and programs that make it easier for our customers to build enterprise-grade industrial IoT solutions with open standards, while ensuring security and innovation protection across cloud boundaries.

Customers, partners, and industries

Azure Sphere ecosystem accelerates innovation

How can device builders bring a high level of security to the billions of network-connected devices expected to be deployed in the next decade? It starts with building security into your IoT solution from the silicon up. In this post, you learn about the holistic device security of Azure Sphere and how the expansion of the Azure Sphere ecosystem is helping to accelerate the process of taking secure solutions to market.

Why IoT is not a technology solution—it's a business play

To help you plan your IoT journey, we’re rolling out a four-part blog series. In the upcoming posts, we’ll cover how to create an IoT business case, overcome capability gaps, and simplify execution; all advice to help you maximize your gains with IoT. In this first post, explore the mindset it takes to build IoT into your business model.

Umanis lifts the hood on their AI implementation methodology

Umanis, a systems integrator and preferred AI training partner based in France, has been innovating in Big Data and Analytics in numerous verticals for more than 25 years and has developed an effective methodology for guiding customers into the Intelligent Cloud. Umanis has found it to be a robust way of rolling out end-to-end data and AI projects while minimizing friction and risk. By using this approach to present a Data & AI project to both customers and internal teams, everyone can get a clear picture of what activities, technologies, and challenges are involved.

Azure Marketplace new offers – Volume 34

The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the second half of February we published 50 new offers.

Azure Windows Virtual Desktop in public preview and a big win for Cosmos DB | A Cloud Guru – Azure This Week

This time on Azure This Week, Lars covers Windows Virtual Desktop in public preview, Azure Cosmos DB gets another big win, and Microsoft and NVIDIA extend video analytics to the intelligent edge.


Azure Search – New Storage Optimized service tiers available in preview

Azure Search is an AI-powered cloud search service for modern mobile and web app development. Azure Search is the only cloud search service with built-in artificial intelligence (AI) capabilities that enrich all types of information to easily identify and explore relevant content at scale. It uses the same integrated Microsoft natural language stack as Bing and Office, plus prebuilt AI APIs across vision, language, and speech. With Azure Search, you spend more time innovating on your websites and applications, and less time maintaining a complex search solution.

Today we are announcing the preview of two new service tiers for Storage Optimized workloads in Azure Search. These L-Series tiers offer significantly more storage at a reduced cost per terabyte compared to the Standard tiers, making them ideal for solutions with a large amount of index data and lower query volume throughout the day, such as internal applications searching over large file repositories, archival scenarios with business data going back many years, or e-discovery applications.

Searching over all your content

From finding a product on a retail site to looking up an account within a business application, search services power a wide range of solutions with differing needs. While some scenarios like product catalogs need to search over a relatively small amount of information (100 MB to 1 GB) quickly, for others it’s a priority to search over large amounts of information in order to properly research, perform business processes, and make decisions. With information growing at the rate of 2.5 quintillion bytes of new data per day, this is becoming a much more common, and costly, scenario, especially for businesses.

What’s new with the L-series tier

The new L-Series service tiers support the same programmatic API, command-line interfaces, and portal experience as the Basic and Standard tiers of Azure Search. Internally, Azure Search provisions compute and storage resources for you based on how you’ve scaled your service. Compared to the S-Series, each L-Series search unit has significantly more storage I/O bandwidth and memory, allowing each unit’s corresponding compute resources to address more data. The L-Series is designed to support much larger indexes overall (up to 24 TB total on a fully scaled-out L2) for applications.

 

| | Standard S1 | Standard S2 | Standard S3 | Storage Optimized L1 | Storage Optimized L2 |
|---|---|---|---|---|---|
| Storage | 25 GB/partition (max 300 GB documents per service) | 100 GB/partition (max 1.2 TB documents per service) | 200 GB/partition (max 2.4 TB documents per service) | 1 TB/partition (max 12 TB documents per service) | 2 TB/partition (max 24 TB documents per service) |
| Max indexes per service | 50 | 200 | 200, or 1,000/partition in high density mode | 10 | 10 |
| Scale out limits | Up to 36 units per service (max 12 partitions; max 12 replicas) | Up to 36 units per service (max 12 partitions; max 12 replicas) | Up to 36 units per service (max 12 partitions; max 12 replicas); up to 12 replicas in high density mode | Up to 36 units per service (max 12 partitions; max 12 replicas) | Up to 36 units per service (max 12 partitions; max 12 replicas) |

Please refer to the Azure Search pricing page for the latest pricing details.
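The scale-out math behind these limits can be sanity-checked: a service's maximum index storage is its per-partition limit multiplied by the number of partitions. A minimal sketch (illustrative only, not an Azure SDK call; tier keys and sizes are taken from the table above):

```python
# Per-partition storage limits for the Azure Search tiers above, in GB.
PARTITION_GB = {"S1": 25, "S2": 100, "S3": 200, "L1": 1024, "L2": 2048}
MAX_PARTITIONS = 12  # every tier above allows up to 12 partitions

def max_service_storage_gb(tier: str, partitions: int = MAX_PARTITIONS) -> int:
    """Maximum index storage for a service scaled to `partitions` partitions."""
    if not 1 <= partitions <= MAX_PARTITIONS:
        raise ValueError("partitions must be between 1 and 12")
    return PARTITION_GB[tier] * partitions

# A fully scaled-out L2 reaches 12 * 2 TB = 24 TB, matching the table above.
print(max_service_storage_gb("L2") // 1024)  # -> 24 (TB)
```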

Customer success and common scenarios

We have been working closely with Capax Global LLC, A Hitachi Group Company to create a service tier that works for one of their customers. Capax Global combines well-established patterns and practices with emerging technologies while leveraging a wide range of industry and commercial software development experience. In our discussions with them, we found that a storage optimized tier would be a good fit for their application since it offers the same search functionality at a significantly lower price than the standard tier. 

“The new Azure Search Storage Optimized SKU provides a cost-effective solution for customers with a tremendous amount of content. With it, we’re now able to enrich the custom solutions we build for our customers with a cloud hosted document-based search that meets the search demands of millions of documents while continuing to lead with Azure. This new SKU has further strengthened the array of services we have to utilize to help our customers solve their business problems through technology.”

– Mitch Prince, VP Cloud Productivity + Enablement at Capax Global LLC, A Hitachi Group Company

The Storage Optimized service tiers are also a great fit for applications that incorporate the new cognitive search capabilities in Azure Search, where you can leverage AI-powered components to analyze and annotate large volumes of content, such as PDFs, office documents, and rows of structured data. These data stores can result in many terabytes of indexable data, which becomes very costly to store in a query latency-optimized service tier like the S3. Cognitive search combined with the L-Series tiers of Azure Search provide a full-text query solution capable of storing terabytes of data and returning results in seconds.

Regional availability

For the initial public preview, the Storage Optimized service tiers will be available in the following regions:

West US 2
South Central US
North Central US
West Europe
UK South
Australia East

We’ll add more regions over the coming weeks. If your preferred region is not supported, please reach out to us directly at azuresearch_contact@microsoft.com to let us know.

Getting started

For more information on these new Azure Search tiers and pricing, please visit our documentation, pricing page, or go to the Azure portal to create your own Search service.
Quelle: Azure

Resource governance in Azure SQL Database

This blog post continues the Azure SQL Database architecture series where we share background on how we run the service, as described by the architects who originally created the service. The first two posts covered data integrity in Azure SQL Database and how cloud speed helps SQL Server database administrators. In this blog post, we will talk about how we use governance to help achieve a balanced system.

Allocated and governed resources

When you choose a specific Azure SQL Database service tier, you are selecting a pre-defined set of allocated resources across several dimensions such as CPU, storage type, storage limit, memory, and more. Ideally you will select a service tier that meets the workload demands of your application; however, if you over- or under-size your selection, you can easily scale up or down accordingly.

With each service tier selection, you are also inherently selecting a set of resource usage boundaries or limits. For example, a business critical, Gen 4 database with eight cores has the following resource allocations and associated limits:

| Compute size | BC_Gen4_8 |
|---|---|
| Memory (GB) | 56 |
| In-memory OLTP storage (GB) | 8 |
| Storage type | Local SSD |
| Max data size (GB) | 650 |
| Max log size (GB) | 195 |
| TempDB size (GB) | 256 |
| IO latency (approximate) | 1-2 millisecond (write), 1-2 millisecond (read) |
| Target IOPS (64 KB) | 40,000 |
| Log rate limits (MBps) | 48 |
| Max concurrent workers (requests) | 1,600 |
| Max concurrent logins (requests) | 1,600 |
| Max allowed sessions | 30,000 |
| Number of replicas | 4 |
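A simple way to use published limits like these is to check an anticipated workload against them before selecting a compute size. A hypothetical helper (not part of any Azure SDK; dictionary keys are made up, values come from the limits above):

```python
# Selected BC_Gen4_8 limits from the table above.
BC_GEN4_8_LIMITS = {
    "max_data_gb": 650,
    "log_rate_mbps": 48,
    "max_concurrent_workers": 1600,
    "max_sessions": 30000,
}

def check_workload(workload: dict, limits: dict = BC_GEN4_8_LIMITS) -> list:
    """Return the names of limits the workload would exceed (empty if none)."""
    return [k for k, cap in limits.items() if workload.get(k, 0) > cap]

# A workload needing 2,000 concurrent workers exceeds this compute size:
print(check_workload({"max_concurrent_workers": 2000}))  # -> ['max_concurrent_workers']
```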

As you increase the resources in your tier, you may also see changes in limits up to a certain threshold. Furthermore, these limits may be relaxed automatically over time, but they are never tightened to the customer's detriment.

We document resource allocation by service tier and also the associated resource governance limits in the following resources:

vCore Model: Azure SQL Database vCore-based purchasing model limits for a single database
DTU Model: Resource limits for single databases using the DTU-based purchasing model

While resource allocation by service tier is intuitive to customers (the more you pay, the more resources you get), resource governance and its boundaries have historically been a less clear subject. While we are increasing transparency around these governing mechanisms, it is important to understand the broader purposes behind resource governance in a database as a service (DBaaS). For this, we’ll talk next about what it takes to achieve a balanced system.

Providing a balanced database as a service (DBaaS)

For the context of this blog post, we define a system as balanced if all resources are sufficiently utilized without encountering bottlenecks. This balance involves an interplay of resources such as CPU, IO, memory, and network, paired with an application’s workload characteristics, maximum tolerated latency, and desired throughput.

With Azure SQL Database, our view of a balanced system must also take a broad and comprehensive perspective in order to meet articulated DBaaS requirements and customer expectations.

Azure SQL Database surfaces a familiar and popular database ecosystem with the intent of giving customers the following additional benefits:

Elasticity of scale – Customers can provision a database based on the throughput requirements of their application. As throughput requirements change, the customer can easily scale up or down.
Automated backups with self-service restore to any point in time – Database backups are automatically handled by the service, with log backups generally occurring every five to ten minutes.
High availability – Azure SQL Database supports a differentiated availability SLA with a maximum of 99.995 percent, backed by availability zone resilience to infrastructure failures.
Predictable performance – Customers on the same provisioned resource level always get the same performance with the same workload.
Predictable scalability – Customers using the hyperscale service tier can rely on predictable latency of the online scaling operations backed by a verifiable scaling SLA. This gives the customer a reliable tool to react to changing compute capacity demands in a timely manner.
Automatic upgrades – Azure SQL Database is designed to facilitate transparent hardware and software upgrades, as well as periodic, lightweight software updates.
Global scale – Customers can deploy databases around the world and easily provision geographically distributed database replicas enabling regional data access and disaster recovery solutions. These solutions are backed by strong geo-replication and failover SLAs.

For the Azure SQL Database engineering team, providing a balanced DBaaS system for customers goes well beyond simply providing the purchased CPU, IO, memory, and storage. We must also honor all aforementioned factors and aim to balance these key DBaaS factors along with overall performance requirements.

The following figure shows some of the key resources that are governed within the service.

Figure 1: Governed resources in Azure SQL Database

We need to provide this balanced system in such a way that allows us to continually improve the service over time. This requirement for continual improvement implies a necessary level of component abstraction and over-arching governance. Governance in Azure SQL Database ensures that we properly balance requirements around scale, high availability, recoverability, disaster recovery, and predictable performance.

To illustrate, let’s use transaction log rate governance as an example of why we actively manage in order to provide a balanced DBaaS. Transaction log governance is a process in Azure SQL Database used to limit high ingestion rates for workloads such as bulk insert, select into, and index builds.

Why govern this type of activity? Consider the following dimensions and the impact of transaction log generation rate.

| Dimension | Log generation rate impact |
|---|---|
| Database recoverability | We make guarantees around the maximum window of possible data loss based on transaction log backup frequency. |
| High availability | Local replicas must remain within a recoverability and availability (up-time) range that aligns with our SLAs. |
| Disaster recovery | Globally distributed replicas must remain within a recoverability range that minimizes data loss. |
| Predictable performance | Log generation rates must not over-saturate the system or create unpredictable performance. |

Log rates are set such that they can be achieved and sustained in a variety of scenarios, while the overall system can maintain its functionality with minimized impact to the user load. Log rate governance ensures that transaction log backups stay within published recoverability SLAs and prevents an excessive backlog on secondary replicas. We have similar impact and interdependencies across other governed areas, including CPU, memory, and data IOPS.
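The internal throttling mechanism is not public; a token bucket is one common way to sketch how a sustained log rate cap such as 48 MBps could be enforced while still allowing short bursts. This is purely illustrative, not how Azure SQL Database is actually implemented:

```python
class TokenBucket:
    """Rate limiter: tokens (MB of log budget) refill at a fixed rate."""

    def __init__(self, rate_mb_per_s: float, burst_mb: float):
        self.rate = rate_mb_per_s      # sustained rate cap
        self.capacity = burst_mb       # maximum burst allowance
        self.tokens = burst_mb
        self.last = 0.0

    def try_write(self, now: float, mb: float) -> bool:
        """Admit a log write of `mb` megabytes at time `now`, or defer it."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if mb <= self.tokens:
            self.tokens -= mb
            return True
        return False  # caller must wait, throttling the ingestion rate

bucket = TokenBucket(rate_mb_per_s=48, burst_mb=48)
print(bucket.try_write(0.0, 40))   # -> True  (within the burst allowance)
print(bucket.try_write(0.0, 40))   # -> False (cap reached; write deferred)
print(bucket.try_write(1.0, 40))   # -> True  (tokens refilled at 48 MB/s)
```

Deferred writes are what a bulk insert or index build experiences as log rate throttling.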

How we govern resources in Azure SQL Database

While we use a multi-faceted approach to governance, today we rely primarily on three main technologies: Job Objects, File Server Resource Manager (FSRM), and SQL Server Resource Governor.

Job Objects

Azure SQL Database leverages multiple mechanisms for governing overall performance for a database. One of the features we leverage is Windows Job Objects, which allows a group of processes to be managed and governed as a unit. We use this functionality to govern file virtual memory commit, working set caps, CPU affinity, and rate caps. We onboard new governance capabilities as the Windows team releases them.

File Server Resource Manager (FSRM)

Available in Windows Server, we use FSRM to govern file directory quotas.

SQL Server Resource Governor

A SQL Server instance has multiple consumers of resources, including user requests and system tasks. SQL Server Resource Governor was introduced to ensure fair sharing of resources and prevent out-of-control requests from starving other requests. This feature was introduced in SQL Server years ago and over time was extended to help govern several resources including CPU, physical IO, memory, and more for a SQL Server instance. We use this functionality in Azure SQL Database as well to help govern IOPs both local and remote, CPU caps, memory, worker counts, session counts, memory grant limits, and the maximum number of concurrent requests.
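As a rough illustration of the fair-sharing idea (the real SQL Server Resource Governor is far more sophisticated, with resource pools, workload groups, and classifier functions), CPU can be divided among workload groups in proportion to assigned weights:

```python
def share_cpu(groups: dict, total_pct: int = 100) -> dict:
    """Split `total_pct` of CPU across groups in proportion to their weights."""
    total_weight = sum(groups.values())
    return {g: total_pct * w / total_weight for g, w in groups.items()}

# User requests weighted 3x against background system tasks
# (group names and weights are made up for this sketch):
print(share_cpu({"user_requests": 3, "system_tasks": 1}))
# -> {'user_requests': 75.0, 'system_tasks': 25.0}
```

Weighted sharing like this is what prevents an out-of-control request from starving other requests of CPU.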

Beyond the three main technologies, we also created additional mechanisms for governing transaction log rate.

Configurations for safe and predictable operations

Consider all the settings one must configure for a well-tuned on-premises SQL Server instance, including database file settings, max memory, max degree of parallelism, and more. In Azure SQL Database we pre-configure several settings based on similar best practices. And as mentioned earlier, we pre-configure SQL Server Resource Governor, FSRM, and Job Objects to deliver fairness and prevent starvation. The reasoning behind this is to aim for safe and predictable operation. We can also provide varying settings for customers based on their workload and specific needs, assuming they conform to the safety limits defined for the service.

Improvements over time

Sometimes we deploy software changes that improve the performance and scalability of specific operations. Customers benefit automatically, and we may exceed the defined limits or increase them for all customers in the future. Furthermore, as we enhance the hardware of machines, storage, and network, these benefits may also be transparently available to an application. This is because we have defined this DBaaS abstraction layer instead of just providing a specific physical machine.

Evolving governance

The Azure SQL Database engineering team regularly enhances governance capabilities used in the service. We continually review our models based on feedback and production telemetry and we modify our limits to maximize available resources, increase safety, and reduce the impact of system tasks.

If you have feedback to share, we would like to hear from you. To contact the engineering team with feedback or comments on this subject, please email SQLDBArchitects@microsoft.com.
Quelle: Azure

Umanis lifts the hood on their AI implementation methodology

Microsoft creates deep, technical content to help developers enhance their proficiency when building solutions using the Azure AI Platform. Our preferred training partners redeliver our LearnAI Bootcamps for customers around the globe on topics including Azure Databricks, Azure Machine Learning service, Azure Search, and Cognitive Services. Umanis, a systems integrator and preferred AI training partner based in France, has been innovating in Big Data and Analytics in numerous verticals for more than 25 years and has developed an effective methodology for guiding customers into the Intelligent Cloud. Here, Philippe Harel, the AI Practice Director at Umanis, describes this methodology and shares lessons learned to empower customers to do more with data and AI.

2019 is the year when artificial intelligence (AI) and machine learning (ML) are shifting from being mere buzzwords to real-world adoption and rollouts across the enterprise. This year reminds us of the cloud adoption curve a few years ago, when it was no longer an option to stay on-premises alone, but a question of how to make the shift. As you draw up plans on how to best use AI, here are some learnings and methodologies that Umanis is following.

Given the ever-increasing speed of change in technology, along with the variety of sectors and industries Umanis works in, they focused on building a methodology that could be standardized across AI implementations from project to project. This methodology follows an iterative cycle: assimilate, learn, and act, with the goal of adding value with each iteration.

The Azure platform acts as an enabler of this methodology as seen in the image below.

In most data and artificial intelligence (AI) projects implemented at Umanis, several trends are gaining momentum and are likely to amplify in 2019:

More unstructured, big, and real-time data.
An increased need for fast and reliable AI solutions to scale up.
Increasing expectations from customers.

In this blog post, we will explain how you can address these kinds of projects, and how Umanis maps their approach to the Azure offering to deliver solutions that are easy to use, operationalize, and maintain.

The 3 phases of the AI implementation methodology

1. Assimilate

In this initial phase, you can be hit by anything, from the good to the big, bad, and ugly: databases, text, logs, telemetry, images, videos, social networks, and more are flowing in. The challenge is to make sense of everything, so you can serve the next phase (Learn) successfully. By assimilating, we mean:

Ingest: The performance of an algorithm depends on the quality of the data. We consider “ingesting” to be checking the quality of the data, the quality of the transmission, and building the pipelines to feed the subsequent parts.
Store: Since the data will be used by highly demanding algorithms (I/O, processing power) that will mix data from various sources, you need to store the data in the most efficient way for future access by algorithms or data visualizations.
Structure: Finally, you’ll need to prepare the data for consumption by algorithms and execute as many transformation, preprocessing, and cleaning tasks as you can to speed up the data scientists’ activities and algorithms.
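The "Ingest" step above can be sketched as a batch quality gate: validate each record before storing it, so downstream algorithms see clean data. This is a hypothetical sketch (field names and checks are made up, not part of any Umanis or Azure tooling):

```python
def validate_record(record: dict, required: tuple = ("id", "timestamp")) -> bool:
    """A record passes ingestion checks if required fields are present and non-empty."""
    return all(record.get(f) not in (None, "") for f in required)

def ingest(records: list) -> tuple:
    """Split a batch into clean records (to store) and rejects (to quarantine)."""
    clean = [r for r in records if validate_record(r)]
    rejects = [r for r in records if not validate_record(r)]
    return clean, rejects

batch = [{"id": 1, "timestamp": "2019-03-01"}, {"id": 2, "timestamp": ""}]
clean, rejects = ingest(batch)
print(len(clean), len(rejects))  # -> 1 1
```

In a real pipeline the rejects would feed a quarantine store for review rather than being silently dropped.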

2. Learn

This is the heart of any AI project: Creating, deploying, and managing models.

Create: Data scientists use available data to design algorithms, train their models, and compare the results. There are two key points to this:

Don’t make them wait for results! Data scientists are rare resources and their time is precious.
Allow any language or combination of languages. From that perspective, Azure Databricks is a great solution, as it addresses this natively by allowing different languages to be used in a single block of code.

Use: Once algorithms are deployed as APIs and consumed, the need for parallelization goes up. SLAs and testing the performance of the sending, processing, and receiving pipeline are crucial.
Refine: Refining the quality of algorithms ensures reliable results over time. The easy part of this activity is automatic re-training on a regular basis. The less obvious one is what we call the “human in the loop” activity. In short, a Power BI report showing the results of predictions that a human can re-classify quickly as needed, and the machine uses this human expertise to get better at its task.
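The "human in the loop" idea above can be reduced to a simple merge: reviewer corrections override model predictions before the next scheduled retraining run. A minimal sketch (labels and document names are illustrative):

```python
def apply_corrections(predictions: dict, corrections: dict) -> dict:
    """Human-reviewed labels override the model's predictions."""
    return {**predictions, **corrections}

predictions = {"doc1": "invoice", "doc2": "contract", "doc3": "invoice"}
corrections = {"doc2": "invoice"}  # a reviewer re-classified doc2
training_labels = apply_corrections(predictions, corrections)
print(training_labels["doc2"])  # -> invoice
```

The merged labels become the ground truth for the next automatic retraining, so the machine gradually absorbs the human expertise.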

3. Act

All of the above phases are useless unless you actually make good use of the algorithm’s added value.

Inform: Any mistake in code, misunderstanding in requirements, or bug can be devastating, as first user impressions are crucial. Therefore, instead of a “big bang” of visualizations, start very small, iterate very quickly, and onboard a few key users to secure adoption before widening the audience.
Connect: Systems that use the information from algorithms need to be plugged in. This is called RPA, IPA, or automation in general, and the architectures can vary greatly on each project. Don’t overlook the need for human monitoring of this activity. Consider the impact of the most wrong answer from an algorithm, and you will get a good feel for the need for human supervision.
Dialog: When dealing with human interaction, so much comes into play that to be successful, the scope of the interaction needs to be narrowed down to the actions that really add value and are not trivial. (This is not easily possible via classic interfaces.)

Conclusion

This methodology will certainly change and adapt over time. Nevertheless, Umanis has found it to be a robust way of rolling out end-to-end data and AI projects while minimizing friction and risk. By using this approach to present a Data & AI project to both customers and internal teams, everyone can get a good sense of what activities, technologies, and challenges are involved. It’s one way to address the “urgent need to build shared context, trust, and credibility with your team,” as Satya Nadella states in his book, Hit Refresh. This methodology is a great way to build trust in your relationships.

If you want more information about the methodology used by Umanis, you can find them at upcoming conferences in the next two months (in French) discussing this topic in Luxembourg, Paris, and Nantes.

Learn More

Learn more about the Azure Machine Learning service

Get started with a free trial of Azure Machine Learning service
Quelle: Azure

Azure Marketplace new offers – Volume 34

We continue to expand the Azure Marketplace ecosystem. From February 16 to February 28, 2019, 50 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

Analytics Zoo: A unified Analytics + AI platform: Analytics Zoo provides a unified analytics and AI platform that unites Spark, TensorFlow, Keras, and BigDL programs into an integrated pipeline. The pipeline can then transparently scale out to a large Hadoop/Spark cluster.

Blender 3D On Windows Server 2016: Studios around the world use Blender as their go-to 3-D software for remodeling, rendering, animation, video editing, compositing, texturing, and more. Apps4Rent helps you deploy Blender on Microsoft Azure.

CIS CentOS 7.5 Benchmark L1: This image of CentOS 7.5 is preconfigured by CIS to the recommendations in the associated CIS Benchmark. CIS Benchmarks are vendor-agnostic, consensus-based security configuration guides.

IBM DB2 Advanced Enterprise Server Edition 11.1: Install IBM DB2 Advanced Enterprise Server Edition in just a few minutes. IBM DB2 is ideal for development, test, and production infrastructure, and MidVision’s RapidDeploy is shipped for streamlined administration.

IBM DB2 Advanced Workgroup Server Edition 11.1: Install IBM DB2 Advanced Workgroup Server Edition in just a few minutes. IBM DB2 is ideal for development, test, and production infrastructure, and MidVision’s RapidDeploy is shipped for streamlined administration.

Kotlin Programming Language Windows Server 2012R2: Kotlin is flexible and interoperable with other platforms and native languages, offering code sharing between JVM and JavaScript platforms. It's also tool-friendly, as any Java IDE can be chosen.

Kotlin Programming Language Windows Server 2016: Kotlin is flexible and interoperable with other platforms and native languages, offering code sharing between JVM and JavaScript platforms. It's also tool-friendly, as any Java IDE can be chosen.

MayaNAS Cloud Enterprise: MayaNAS Cloud is a full-featured, enterprise-grade, software-defined storage solution that provides high-performance unified file and block services using cloud-native disks and object storage.

MayaScale Cloud Data Platform: MayaScale Cloud Data Platform offers high-performance shared storage using NVMe (non-volatile memory express) fabric over TCP and iSCSI protocols.

Qorus Integration Engine 4.0 on Oracle Linux 7: This agile and scalable platform for back-office IT business process automation serves as a low-cost and low-code enterprise integration solution.

Robotic Process Automation (RPA): Download and use the trial edition of Kryon Studio to experience how easy it can be to automate processes. This free trial is a useful tool for anyone looking to evaluate Kryon’s robotic process automation solutions.

XCFrontier – Virtualisation Services: XCFrontier is an innovative cloud virtualization solution for faster internet browsing that works with the Microsoft Office suite and other software applications.

Web applications

Azure Monitor Agent for Citrix Environments: Use the power of Azure Monitor and Log Analytics with this agent for your Citrix workers, servers and desktops. You don’t need an SQL server or additional infrastructure for monitoring data.

Azure Monitor for RDS and Windows Virtual Desktop: Monitor user experiences within Remote Desktop Services and Windows Virtual Desktop.

Check Point CheckMe: CheckMe runs simulations that test if your security technologies are equipped to mitigate advanced threats, and it provides a comprehensive report on your security state.

D3 Security: Rapidly validate threats with out-of-the-box security integrations and adaptable playbooks that guide your security operations platform to automated incident response.

Discovery Hub with Azure Data Lake: Deploy the Discovery Hub application server and Azure Data Lake. Discovery Hub is a high-performance data management platform that accelerates your time to data insights.

Forscene Edge – BYOL: The Forscene Edge is a professional two-way video transcoding engine for generating lightweight Blackbird video-editing proxies. The Blackbird proxy provides frame-accurate navigation and plays media and edits completely render-free.

Integris Data Privacy Automation: Use Integris to discover and classify sensitive data across any system, apply data-handling policies, assess risk, and take action.

Intel Optimized Data Science VM for Linux (Ubuntu): This preconfigured data science virtual machine comes with Python environments optimized for deep learning on Intel Xeon processors.

Jira Service Desk Data Center: By linking Jira Service Desk with Jira Software, IT and developer teams can collaborate on one platform to fix incidents faster and push changes with confidence.

SCOM Alert Management: SCOM Alert Management extends the capabilities of Microsoft Alert Management with automation of alert rules for the System Center Operations Manager group connected to the Log Analytics workspace.

Security for Microsoft 365: SoftwareONE's Security for Microsoft 365 is a managed security service helping customers improve the return on their Microsoft security investments. SoftwareONE security consultants will plan, set up, enhance, and maintain threat detection.

SIMBA Chain: SIMBA Chain's Blockchain-as-a-Service platform allows users to quickly deploy decentralized applications (dApps). These dApps allow secure, direct connections between users and providers, eliminating third parties.

Container solutions

Decent Blockchain Node: DCT is the platform cryptographic asset on the DCore blockchain that serves as the fundamental currency for publishing and purchasing. It also funds the miners and seeders who maintain the platform. This image contains the DCore node and CLI wallet.

Consulting services

Active Directory Assessment: 4-Week Assessm. (GB): This assessment by Dots. will review your Active Directory environment, architecture, DNS configuration, backup policy, and administrative procedures to provide audit findings and best-practice recommendations.

AD Connect: 1 Day Implementation: CDW will assist your organization in creating storage accounts in Microsoft Azure for use with an on-premises, cloud-enabled storage appliance, resulting in a hybrid cloud storage solution.

Airnet Azure Foundations: 2-day Implementation: Migrate to the cloud quickly and easily with an automated setup of your Azure environment using a scalable, standardized, and pre-architected framework from Airnet Group Inc.

Airnet Systems Assessment Tool: 1-day Assessment: Review tiered budgeting options for your move to Azure based on Airnet Group Inc.'s detailed reports of server core level inventory, cost, and performance data from your entire IT infrastructure.

App Modernization: 2 Hour Briefing: Oakwood Systems Group will review your business drivers, establish goals for modernization, discuss approaches, provide recommendations for Azure services, and help you develop a better understanding of the options available.

Application Modernization: 2 Week Assessment: RDA will work with your technical team to collect data about identified applications and then design, plan, and document key considerations for an application modernization effort using Azure.

Azure AD Single Sign-On (SSO): 2-Day Implementation: Mismo Systems LLP will configure Azure Active Directory Single Sign-On, enabling you to centrally manage users' access across Software-as-a-Service applications.

Azure Assessment: 1-Week Assessment: Tallan will work with your team to review your on-premises and cloud environments, cover best practices for deployment and app modernization, and provide documentation and recommendations.

Azure DevOps: 1 Hour Briefing: This comprehensive briefing by Oakwood Systems Group will help you develop a better understanding of how to implement Azure DevOps within your business, no matter how big your IT department or what tools you’re using.

Azure Disaster Recovery: 1-Day Workshop: You will walk away with a comprehensive understanding of Azure Backup and Azure Site Recovery. In many cases, a partial or complete implementation can be achieved in this workshop from InsITe Business Solutions.

Azure Migration 6-Wk Assessment & Implementation: TapLogic’s Azure Platform Migration Service gives service providers in the agricultural industry the tools and resources to develop a plan for adopting the best Microsoft Azure solution for their business needs.

Azure Site Recovery: 3-Day Implementation: CDW will install and configure Azure Site Recovery, establishing a Disaster Recovery-as-a-Service solution that allows you to replicate up to five of your virtual machines to Microsoft Azure.

Azure Storage for Backup: 1-Day Implementation: The Microsoft Azure Storage for Backup engagement by CDW will provide best practices and knowledge transfer in demonstrating and maximizing the benefits of utilizing Azure Storage.

CCG Customer Intelligence for Retail: In this engagement, CCG Analytics will implement Customer Intelligence, an analytics platform developed for mid-market retailers who want to elevate the customer experience and dominate the retail omnichannel.

Cloud Aware – Events: 5 Week Implementation: This implementation by Meylah Corporation involves Cloud Aware – Event in a Box, a collection of event planning resources to simplify the customer acquisition process.

Cloud Migration Assessment – 6 Days Assessment: Incremental Group’s Cloud Migration Assessment is carried out by one of our senior cloud engineers and will involve compiling a complete review and cloud migration proposal for your organization.

Connecting with S2S VPN: 1-Day Implementation: CDW will assist you in configuring Azure to allow connectivity between your Azure tenant resources and on-premises resources via a site-to-site VPN.

Data Compliance Monitoring – 1 Hour Briefing: Discover how you can automate your data compliance and governance strategy by leveraging Azure, Azure Cosmos DB, and Brilliant IG. Brilliant IG, by CTO Boost, is an automated compliance monitoring platform on Azure.

Data Science Discovery Pack: 2-wk Assessment: Elastacloud combines the delivery of a data architecture blueprint using the latest Azure platform tools and services with an innovative data science work package.

ERP to Azure Migration: 2 Week Implementation: DXC will provide a streamlined migration for organizations desiring to move their Dynamics GP, Dynamics SL, or Dynamics NAV solution to Azure Infrastructure-as-a-Service.

Optimized Architecture: 1-Day Workshop (Virtual): Compare Infrastructure-as-a-Service and Platform-as-a-Service hosting options to save money through the use of Azure App Service. This workshop by Dynamics Edge is intended for cloud architects and IT professionals.

QuickBooks DT on Azure single install: 4-hr imp: Get your existing QuickBooks desktop software running on your Azure cloud server, complete with integrated applications, in this implementation by Mendelson Consulting.

TCO & Cloud Readiness Assessment – 6 Wk Assessment: Ensono's assessment will involve data gathering, creation of an HCP tenant, ingestion of the initial server list, data tagging, application readiness scoring, and a presentation of the findings.

TFS to Azure DevOps Migration: 2-Wk Implementation: Tallan will work with your team to create an Azure DevOps migration plan to be developed during the assessment portion of this implementation. From there, we will start the migration process to Azure DevOps.

TFS to Azure DevOps: 4-week Implementation: Oakwood Systems Group's three-phase migration plan will move your on-premises Team Foundation Server (TFS) to Azure DevOps Services.

Source: Azure

Announcing the public preview of Data Discovery & Classification for Azure SQL Data Warehouse

Today we’re announcing the public preview of Data Discovery & Classification for Azure SQL Data Warehouse, an additional capability for managing the security of sensitive data. Azure SQL Data Warehouse is a fast, flexible, and secure cloud data warehouse tuned to run complex queries quickly across petabytes of data.

While it’s critical to protect the privacy of your customers and other sensitive data, discovering, classifying, and protecting that data becomes unmanageable as your business and data assets grow rapidly. The Data Discovery & Classification feature that we’re introducing natively in Azure SQL Data Warehouse helps alleviate this pain point. The overall benefits of this capability are:

Meeting data privacy standards and regulatory compliance requirements such as General Data Protection Regulation (GDPR).
Restricting access to and hardening the security of data warehouses containing highly sensitive data.
Monitoring and alerting on anomalous access to sensitive data.
Visualization of sensitive data in a central dashboard on the Azure portal.

What is Data Discovery & Classification?

Data Discovery & Classification introduces a set of advanced capabilities aimed at protecting data and not just the data warehouse itself.

Auto-discovery and recommendations – An underlying classification engine automatically scans your data warehouse and identifies columns containing potentially sensitive data. It also provides an easy way to review and apply the appropriate classification recommendations through the Azure portal.
Classification/Labeling – Sensitivity classification labels tagged on the columns can be persisted in the data warehouse itself.
Reporting – Data classification can be centrally viewed on a dashboard in the Azure portal. In addition, you can download a report in Microsoft Excel format for compliance and auditing purposes.
Monitoring/Auditing – Auditing has been enhanced to log the sensitivity classifications (labels) of the actual data returned by queries, enabling you to gain insight into who is accessing sensitive data.

How does Data Discovery & Classification work?

The Data Discovery & Classification capability has a built-in automated classification engine that identifies columns containing potentially sensitive data and provides a list of recommendations for you to choose from. This data can be persisted as sensitivity metadata directly on the columns in the data warehouse. You can also manually classify and label your columns, and define custom labels and information types in addition to those generated by the system.

You can also use T-SQL to add, remove, and retrieve column classifications across all tables in your data warehouse:

Add or update the classification of one or more columns: ADD SENSITIVITY CLASSIFICATION
Remove the classification from one or more columns: DROP SENSITIVITY CLASSIFICATION
View all classifications on the database: sys.sensitivity_classifications
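The statements above can be sketched in T-SQL as follows; the table dbo.Customers, the column Email, and the label values are hypothetical placeholders, not part of the announcement:

```sql
-- Add or update a sensitivity classification on a column
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- Remove the classification from a column
DROP SENSITIVITY CLASSIFICATION FROM dbo.Customers.Email;

-- List every classification defined on the database
SELECT * FROM sys.sensitivity_classifications;
```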

Additionally, the Azure SQL Data Warehouse engine uses the column classifications to determine the sensitivity of query results. Combined with Azure SQL Data Warehouse Auditing, this enables you to audit the sensitivity of the actual data being returned by queries.
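As a hedged sketch of what this looks like in practice: when auditing is configured to write to blob storage, the audit records include a data_sensitivity_information field that can be queried with sys.fn_get_audit_file (the storage URL below is a placeholder):

```sql
-- Inspect which classified data was returned, and by whom
SELECT event_time, server_principal_name, statement, data_sensitivity_information
FROM sys.fn_get_audit_file(
    'https://mystorageaccount.blob.core.windows.net/sqldbauditlogs/',
    DEFAULT, DEFAULT);
```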

This capability is now available in all Azure regions as part of Advanced Data Security, which also includes Vulnerability Assessment and Threat Detection. For more information on Data Discovery & Classification in Azure SQL Data Warehouse, refer to our online documentation “Azure SQL Database Data Discovery & Classification.”

Azure SQL Data Warehouse continues to lead in the areas of security, compliance, privacy, and auditing. Check out our latest videos on Azure SQL Data Warehouse security related topics:

Monitoring Access for threats and Securing Data
Virtual Networks and Security Roadmap

Next steps

For more information about Azure SQL Data Warehouse security capabilities, refer to the “Guide to enhancing privacy and addressing GDPR requirements with the Microsoft SQL platform” from the Microsoft Trust Center, or our online documentation.
To get started today, create an Azure SQL Data Warehouse.
To stay up-to-date on the latest Azure SQL Data Warehouse news and features, follow us on Twitter @AzureSQLDW.
For feature requests, please vote on our UserVoice.

Source: Azure

Get an official service issue root cause analysis with Azure Service Health

After you experience a Microsoft Azure service issue, you likely need to explain what happened to your customers, management, and other stakeholders. That’s why Azure Service Health provides official incident reports and root cause analyses (RCAs) from Microsoft.

Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance. In this blog, we’ll cover how you can use Azure Service Health’s health history to review past health issues and get official root cause analyses (RCAs) to share with your internal and external stakeholders.

Review past health issues and get official root cause analyses (RCAs)

You can see 90 days of history about past incidents, maintenance, and health advisories in Azure Service Health’s “Health history” section. This is a tailored view of the Azure Activity Log provided by Azure Monitor.

If you experienced downtime, your internal or external stakeholders might expect an official report or RCA. As soon as they become available, RCAs can be found under any incident. Meanwhile, you can download and share Microsoft’s issue summary as a PDF.

Learn more about getting downloadable explanations in the Service Health documentation.

Get started with Azure Service Health

Azure Service Health provides a large amount of information about incidents, planned maintenance, and other health advisories that could affect you. While you can always visit the dashboard in the portal, the best way to stay informed and take action is to set up Azure Service Health alerts. With alerts, as soon as we publish any health-related information, you’ll get notified on whichever channels you prefer, including email, SMS, push notification, webhook into ServiceNow, and more. We’ll also notify you when we publish RCAs.
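Service Health alerts can also be scripted with the Azure CLI. A minimal sketch, assuming an existing resource group my-rg; the group, alert, and recipient names are placeholders:

```shell
# Create an action group that emails the operations team
az monitor action-group create \
  --name ops-email --resource-group my-rg \
  --action email ops ops@contoso.com

# Create an activity-log alert scoped to Service Health events
az monitor activity-log alert create \
  --name service-health-alert --resource-group my-rg \
  --condition category=ServiceHealth \
  --action-group ops-email
```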

Next steps

Review your Azure Service Health dashboard and set up alerts in the Azure portal. If you need help getting started, visit the Azure Service Health documentation. We always welcome feedback. Submit your ideas on the Azure Service Health feedback forum or email us with any questions and comments at servicehealth@microsoft.com.
Source: Azure

Hannover Messe 2019: Azure IoT Platform updates power new, highly secured Industrial IoT Scenarios

We’re proud to be showcasing at Hannover Messe once again next week. Manufacturing continues to be one of the leading industries adopting IoT for a growing set of scenarios to improve safety, efficiency, and reliability for people and devices. Every year, I get to meet with partners and customers and learn about how their needs and use cases are growing and changing, as they continue to digitize their operations and deliver on the promise of Industry 4.0. They want security more integrated into every layer, protecting data from different industrial processes and operations from the edge to the cloud. They want to enable proof-of-concepts quickly to improve the pace of innovation and learning, and then scale quickly and effectively. And they want to manage digital assets at scale, not dozens of devices and sensors. Over the last year, we’ve made several significant additions to our IoT platform to address these needs, including the launch of Azure Digital Twins and Azure Sphere and the general availability of Azure IoT Central and Azure IoT Edge. Next week at Hannover Messe, we’re introducing a set of new product capabilities and programs that make it easier for our customers to build enterprise-grade industrial IoT solutions with open standards, while ensuring security and innovation protection across cloud boundaries.
Securing IoT solutions
Securing IoT solutions requires new capabilities to protect the thousands of devices deployed on the edge. To truly secure an IoT solution, you must secure devices, their connectivity to the cloud, the services running in the cloud, and the applications built on top of them. 
At Hannover Messe, we’re thrilled to announce Azure Security Center for IoT, the world’s first comprehensive security offering for IoT.
With Azure Security Center for IoT, customers can benefit from a holistic view of their IoT security and take measures aligned with industry best practices, such as monitoring devices for open ports. The ever-evolving threat landscape requires customers to go far beyond this, by also inspecting and monitoring the security properties of devices and workloads for potential attacks. Azure has unique threat intelligence sourced from the more than 6 trillion signals that Microsoft collects every day and makes that available to customers in Azure Security Center.
Beyond the security posture management and threat protection capabilities provided in Azure Security Center, many SecOps teams rely on SIEM tools for advanced hunting and threat mitigation across their entire enterprise. At RSA earlier this month, we announced Azure Sentinel, the first cloud-native SIEM from a major public cloud provider. Today we take this a step further by enhancing Azure Sentinel so that customers can combine their IoT security data with security data from across the enterprise, then apply analysis techniques or machine learning to identify and mitigate threats.
This announcement empowers manufacturers to reduce the attack surface of Azure IoT solutions running across all their operations, remediate issues before they become serious, and apply analytics and machine learning to prevent attacks. Azure is the first major public cloud provider to deliver the breadth of these security innovations for end-to-end IoT solutions and this announcement marks an important leap forward as we offer new security layers for your IoT workloads. 
We also want to continue driving innovation in IoT, which requires us to take measures to protect our customers’ and partners’ innovations. That’s why today we’re extending the Azure IP Advantage benefits to Azure customers with IoT devices connected to Azure, and devices that are powered by Azure Sphere and Windows IoT. Thyssenkrupp, Bühler, and MediaTek are three companies that see the benefit of added protections from IP risk as they transition into Industry 4.0 and generate value from their IoT workloads. The program offers customers uncapped indemnification coverage for Azure Sphere and Windows IoT and access to 10,000 Microsoft patents that are available to Azure customers and can be critical in deterring competitors from suing for patent infringement. More detail about the new program is available on the Microsoft on the Issues blog.
Accelerate Industrial IoT Solutions with an Open Cloud Platform, Open Interoperability Standards and Open Source
We’ve continued to innovate by developing additional open-source components based on open interoperability standards (OPC UA) for our open cloud platform. These new components provide security management as well as performance optimization, and they simplify the experience for our customers. Today we’re announcing OPC Twin, which creates a digital twin for OPC UA-enabled machines, makes their information model available in the cloud, and enables machine interaction from the cloud. We’ve also extended our OPC UA security and certificate management by launching OPC Vault. OPC Vault automates security management by creating, managing, and revoking certificates for OPC UA-enabled machines on a global scale. Both components provide REST interfaces, simplifying their integration into existing or new cloud applications, and are available on GitHub today. In addition, we’re excited to announce enhancements to the Connected Factory solution accelerator, which now also integrates an OPC Twin dashboard. Connected Factory is designed to accelerate proof-of-concepts in Industrial IoT and additionally offers OEE data across customers’ factories via a centralized dashboard.
For Industrial IoT scenarios, time series data is a critical component to unlocking exciting opportunities to drive growth by providing operational insights in fractions of a second on a global scale. Later in the summer we will be building on our recent momentum with Azure Time Series Insights (TSI) by enabling our customers to integrate both warm and cold path analytics into a single offering under the pay-as-you-go version that was announced in December of last year. This provides customers a more predictable, cost-effective, and flexible analytics platform for their Industrial IoT scenarios. We are also working towards delivering a wide variety of analytics scenarios by offering support for storage-tier configuration based on retention, and we have released enhancements to the user experience.
Build enterprise-grade Industrial IoT solutions across cloud boundaries
Last year we announced Azure IoT Hub on Azure Stack in limited preview to meet industrial manufacturers’ latency and connectivity requirements, as well as their specific regulatory and compliance policies. Customers that are working with us are benefiting from running their IoT solutions on a hybrid model. Rockwell Automation has partnered with us to build IoT solutions that stretch from the intelligent cloud to the intelligent edge. It’s not uncommon to have facilities that are in remote areas or immersed in conditions that cause inconsistent network connectivity. Rockwell Automation is participating in the Azure IoT Hub on Azure Stack limited preview to extend a consistent solution to the edge of production. Running IoT on Azure Stack in a hybrid model has empowered ZEISS to continue providing clients with new insights about their products, production, and processes. ZEISS spectroscopy helps clients optimize their processes based on valuable insights about their products and production, when they need it and where they need it, thanks to smart solutions and connected technology. Their solutions for the food industry provide real-time measurement of important quality indicators, such as fat, moisture, and salt content, directly on the production line. This data is then sent to the cloud, allowing production managers to optimize quality almost immediately, while enabling a more efficient use of raw materials and energy.
It’s an exciting time to be a manufacturer, when you have the power of data and connected devices at your fingertips to drive real-time insights and actions. We hope to see you at Hannover Messe, where you can learn more about these announcements and see partners and customers showcasing these solutions. We will be at the Digital Factory Fair in Hall 7, so stop by and meet us.
Source: Azure