Let Deep Learning VMs and Jupyter notebooks burn the midnight oil for you: robust and automated training with Papermill

In the past several years, Jupyter notebooks have become a convenient way of experimenting with machine learning datasets and models, as well as sharing training processes with colleagues and collaborators. Oftentimes your notebook will take a long time to complete its execution, and an extended training session may cause you to incur charges even though you are no longer using Compute Engine resources.

This post explains how to execute a Jupyter notebook in a simple and cost-efficient way. We'll show how to deploy a Deep Learning VM image with TensorFlow, launch a Jupyter notebook, and execute it using the Nteract Papermill open source project. Once the notebook has finished executing, the Compute Engine instance that hosts your Deep Learning VM image automatically terminates.

The components of our system:

First, Jupyter notebooks

The Jupyter Notebook is an open-source, web-based, interactive environment for creating and sharing IPython notebook (.ipynb) documents that contain live code, equations, visualizations, and narrative text. The platform supports data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.

Next, Deep Learning Virtual Machine (VM) images

The Deep Learning VM images are a set of Debian 9-based Compute Engine virtual machine disk images optimized for data science and machine learning tasks. All images include common ML frameworks and tools installed from first boot, and can be used out of the box on instances with GPUs to accelerate your data processing tasks. You can launch Compute Engine instances pre-installed with popular ML frameworks like TensorFlow, PyTorch, or scikit-learn, and even add Cloud TPU and GPU support with a single click.

And now, Papermill

Papermill is a library for parametrizing, executing, and analyzing Jupyter notebooks. It lets you spawn multiple notebooks with different parameter sets and execute them concurrently. Papermill can also help collect and summarize metrics from a collection of notebooks.

Papermill also lets you read and write data from many different locations. Thus, you can store your output notebook on a different storage system that provides higher durability and easy access, in order to establish a reliable pipeline. Papermill recently added support for Google Cloud Storage buckets, and in this post we will show you how to put this new functionality to use.

Installation

Submit a Jupyter notebook for execution

The following command starts execution of a Jupyter notebook stored in a Cloud Storage bucket (a hedged sketch of the command appears at the end of this section). The command does the following:

Creates a Compute Engine instance using a TensorFlow Deep Learning VM and 2 NVIDIA Tesla T4 GPUs
Installs the latest NVIDIA GPU drivers
Executes the notebook using Papermill
Uploads the notebook result (with all the cells pre-computed) to a Cloud Storage bucket, in this case gs://my-bucket/
Terminates the Compute Engine instance

And there you have it! You'll no longer pay for resources you don't use, since after execution completes, your notebook, with populated cells, is uploaded to the specified Cloud Storage bucket.
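
The original post embeds the full gcloud command here; since it did not survive extraction, the following is only a minimal sketch of what such a command might look like. The image family, zone, machine type, metadata keys, and the notebook_executor.sh startup script are illustrative assumptions, not the exact values from the original post.

# Hedged sketch: create a TensorFlow Deep Learning VM with 2 NVIDIA Tesla T4 GPUs whose
# (assumed) startup script runs Papermill against the input notebook, writes the output
# notebook to Cloud Storage, and then deletes the instance.
export IMAGE_FAMILY="tf-latest-cu100"                     # assumed TensorFlow DLVM image family
export ZONE="us-central1-a"                               # pick a zone with T4 quota
export INSTANCE_NAME="notebook-executor"
export INPUT_NOTEBOOK_PATH="gs://my-bucket/input.ipynb"
export OUTPUT_NOTEBOOK_PATH="gs://my-bucket/output.ipynb"

gcloud compute instances create "${INSTANCE_NAME}" \
  --zone="${ZONE}" \
  --image-family="${IMAGE_FAMILY}" \
  --image-project=deeplearning-platform-release \
  --machine-type=n1-standard-8 \
  --accelerator="type=nvidia-tesla-t4,count=2" \
  --maintenance-policy=TERMINATE \
  --boot-disk-size=100GB \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --metadata="install-nvidia-driver=True,input_notebook=${INPUT_NOTEBOOK_PATH},output_notebook=${OUTPUT_NOTEBOOK_PATH},startup-script-url=gs://my-bucket/notebook_executor.sh"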

You can read more about it in the Cloud Storage documentation.

Note: In case you are not using a Deep Learning VM and you want to install the Papermill library with Cloud Storage support, you only need to run the Papermill install command (a sketch appears at the end of this section).

Note: Papermill version 0.18.2 supports Cloud Storage.

And here is an even simpler set of bash commands:

Execute a notebook using GPU resources

Execute a notebook using CPU resources

The Deep Learning VM instance requires several permissions: the ability to read and write to Cloud Storage, and the ability to delete instances on Compute Engine. That is why our original command defines the scope "https://www.googleapis.com/auth/cloud-platform".

Your submission process will look like this:

Note: Verify that you have enough CPU or GPU resources available by checking your quota in the zone where your instance will be deployed.

Executing a Jupyter notebook

Let's look into the instance-creation command shown earlier. This command is the standard way to create a Deep Learning VM, but keep in mind that you'll need to pick the VM image that includes the core dependencies your notebook needs. Do not try to use a TensorFlow image if your notebook needs PyTorch, or vice versa.

Note: If you do not see a dependency that is required for your notebook and you think it should be in the image, please let us know on the forum (or with a comment to this article).

The secret sauce here consists of two things:

Papermill library
Startup shell script

Papermill is a tool for parameterizing, executing, and analyzing Jupyter notebooks.

Papermill lets you:

Parameterize notebooks via command line arguments or a parameter file in YAML format
Execute and collect metrics across notebooks
Summarize collections of notebooks

In our case, we are just using its ability to execute notebooks and pass parameters if needed.

Behind the scenes

Let's start with the startup shell script parameters:

INPUT_NOTEBOOK_PATH: The input notebook, located in a Cloud Storage bucket. Example: gs://my-bucket/input.ipynb
OUTPUT_NOTEBOOK_PATH: The output notebook, located in a Cloud Storage bucket. Example: gs://my-bucket/output.ipynb
PARAMETERS_FILE: An optional YAML file from which notebook parameter values are read. Example: gs://my-bucket/params.yaml
PARAMETERS: Parameters passed via -p key value for notebook execution. Example: -p batch_size 128 -p epochs 40

The two ways to execute the notebook with parameters are: (1) through the Python API and (2) through the command line interface. This sample script supports two different ways to pass parameters to the Jupyter notebook (a YAML parameters file or individual -p key value pairs), although Papermill supports other formats as well, so please consult Papermill's documentation.

The above script performs the following steps:

Creates a Compute Engine instance using the TensorFlow Deep Learning VM and 2 NVIDIA Tesla T4 GPUs
Installs NVIDIA GPU drivers
Executes the notebook using Papermill
Uploads the notebook result (with all the cells pre-computed) to the Cloud Storage bucket, in this case gs://my-bucket/ (Papermill emits a save after each cell executes, which can generate "429 Too Many Requests" errors; these are handled by the library itself)
Terminates the Compute Engine instance

Conclusion

By using Deep Learning VM images, you can automate your notebook training, so that you no longer need to pay extra or manually manage your cloud infrastructure. Take advantage of all the pre-installed ML software and Nteract's Papermill project to help you solve your ML problems more quickly!
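
The install command and the parameterized execution call referenced above were embedded code blocks in the original post and did not survive extraction. The lines below are a minimal, hedged sketch of what they might look like; the gs:// paths and parameter names are placeholders, and the gcs extra is assumed to provide the Cloud Storage support mentioned above.

# Hedged sketch: install Papermill with Cloud Storage (GCS) support.
pip install papermill[gcs]

# Hedged sketch: execute a notebook stored in Cloud Storage, writing the output
# notebook back to Cloud Storage and passing parameters as -p key value pairs.
papermill gs://my-bucket/input.ipynb gs://my-bucket/output.ipynb \
  -p batch_size 128 -p epochs 40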

Papermill will help you automate the execution of your Jupyter notebooks, and in combination with Cloud Storage and Deep Learning VM images, you can now set up this process in a very simple and cost-efficient way.
Source: Google Cloud Platform

Secure server access with VNet service endpoints for Azure Database for MariaDB

This blog post was co-authored by Sumeet Mittal, Senior Program Manager, Azure Networking.

Ensure security and limit access to your MariaDB server with virtual network (VNet) service endpoints, now generally available for Azure Database for MariaDB. VNet service endpoints enable you to isolate connectivity to your logical server from a given subnet within your virtual network. Traffic to Azure Database for MariaDB from your VNet always stays within the Azure network, and this direct route is preferred over any specific routes that send internet-bound traffic through virtual appliances or on-premises networks.

There is no additional billing for virtual network access through VNet service endpoints. The current pricing model for Azure Database for MariaDB applies as is.

Using firewall rules and VNet service endpoints together

Turning on VNet service endpoints does not override firewall rules that you have provisioned on your Azure Database for MariaDB; both remain applicable.

VNet service endpoints don’t extend to on-premises. To allow access from on-premises, you can use firewall rules to limit connectivity only to your public (NAT) IPs.
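
As a rough illustration of such a rule, the Azure CLI sketch below allows a single on-premises NAT address; the server name, resource group, rule name, and IP address are placeholders, not values from this post.

# Hedged sketch: allow connections only from an on-premises public (NAT) IP address.
az mariadb server firewall-rule create --resource-group myresourcegroup --server-name mydemoserver \
  --name AllowOnPremNAT --start-ip-address 203.0.113.10 --end-ip-address 203.0.113.10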

To learn more about VNet protection, view our documentation, “Use Virtual Network service endpoints and rules for Azure Database for MariaDB.”

Turning on service endpoints for servers with pre-existing firewall rules

When you connect to your server with service endpoints turned on, the source IP of database connections switches to the private IP space of your VNet. Configuration uses the “Microsoft.Sql” shared service tag, which covers all Azure Database services, including Azure Database for MariaDB, MySQL, and PostgreSQL, Azure SQL Database and Managed Instance, and Azure SQL Data Warehouse. If your server or database firewall rules currently allow specific Azure public IPs, connectivity breaks until you allow the given VNet/subnet by specifying it in the VNet firewall rules. To ensure connectivity, you can preemptively specify VNet firewall rules before turning on service endpoints by using the IgnoreMissingServiceEndpoint flag.
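
A rough Azure CLI sketch of that sequence follows. The resource names are placeholders, and the exact parameter names (in particular the ignore-missing-endpoint option) are assumptions to verify with az mariadb server vnet-rule create --help rather than commands taken from this post.

# Hedged sketch: add the VNet rule first (ignoring the missing endpoint), then enable
# the Microsoft.Sql service endpoint on the subnet so connectivity never breaks.
az mariadb server vnet-rule create --resource-group myresourcegroup --server-name mydemoserver \
  --name myvnetrule --vnet-name myvnet --subnet mysubnet --ignore-missing-endpoint true
az network vnet subnet update --resource-group myresourcegroup --vnet-name myvnet \
  --name mysubnet --service-endpoints Microsoft.Sql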

Support for ASE

As part of general availability, we support service endpoints for App Service Environment (ASE) subnets deployed into your virtual networks.

Next steps

Get started with the service by creating your first Azure Database for MariaDB server using the Azure portal or Azure CLI.
Learn how to configure VNet service endpoints for MariaDB using the Azure portal or Azure CLI.
Reach us by emailing our team AskAzureDBforMariaDB@service.microsoft.com.
File feature requests on UserVoice.
Follow us on Twitter @AzureDBMariaDB to keep up with the latest features.

Source: Azure

Scaling out read workloads in Azure Database for MySQL

For read-heavy workloads that you are looking to scale out, you can use read replicas, which are now generally available to all Azure Database for MySQL users. Read replicas make it easy to horizontally scale out beyond a single database server. This is useful in workloads such as BI reporting and web applications, which tend to have more read operations than writes.

The feature supports continuous asynchronous replication of data from one Azure Database for MySQL server (the “master” server) to up to five Azure Database for MySQL servers (the “read replica” servers) in the same region. Read-heavy workloads can be distributed across the replica servers according to your preference. Replica servers are read-only except for writes replicated from data changes on the master.

What’s supported with read replicas?

You can create or delete replica servers based on your workload’s needs. A master server can support up to five replica servers within the same Azure region. Stopping replication to any replica server makes it a standalone read-write server.

You can easily manage your replica servers using the Azure portal and Azure CLI.

From the Azure portal:

Use Azure Monitor to track replication with the “replication lag in seconds” metric:

From the Azure CLI:

az mysql server replica create -n mydemoreplica1 -g myresourcegroup -s mydemomaster
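
A few related commands are sketched below, under the assumption that the replica commands follow the same pattern as the create example above and that the replication lag metric is exposed as seconds_behind_master; verify both with az mysql server replica --help and your server's metric definitions before relying on them.

# Hedged sketch: list replicas of a master, stop replication to turn a replica into a
# standalone read-write server, and query the replication lag metric.
az mysql server replica list -g myresourcegroup -s mydemomaster
az mysql server replica stop -g myresourcegroup -n mydemoreplica1
az monitor metrics list --resource <replica server resource ID> --metric seconds_behind_master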

Below are some application patterns used by our customers and partners that leverage read replicas for scaling workloads.

BI reporting

Data from disparate data sources is processed every few minutes and loaded into the master server. The master server is dedicated to loads and processing and is not directly exposed to BI users for reporting or analytics, which ensures predictable performance. The reporting workload is scaled out across multiple read replicas to manage high user concurrency with low latency.

Microservices

In this architecture pattern, the application is broken into multiple microservices, with data modification APIs connecting to the master server while reporting APIs connect to read replicas. The data modification APIs are prefixed with “Set-”, while reporting APIs are prefixed with “Get-”. A load balancer is used to route the traffic based on the API prefix.

Next steps

Get started with the service by creating your first Azure Database for MySQL server using the Azure portal or Azure CLI.
Learn more about read replicas and how to create them in the Azure portal or Azure CLI.
Reach us by emailing our team AskAzureDBforMySQL@service.microsoft.com.
File feature requests on UserVoice.
Follow us on Twitter @AzureDBMySQL to keep up with the latest features.

Source: Azure

Azure Marketplace and Cloud Solution Provider updates – March 2019

In February, Microsoft shared an ambitious vision to continue innovating as a technology provider and to improve the experience for solution developers and service providers when engaging with Microsoft. Our partners are delivering more innovation in AI, expanding their business through more co-selling opportunities, and leveraging distribution options through our commercial marketplaces such as Azure Marketplace and AppSource.

Today, we’re very excited to begin rolling out an initial set of platform changes that open new opportunities for our partners to go to market with Microsoft. This work sets the stage for more enhancements coming this winter and spring that continue to drive partner business acceleration. Get a sneak peek at our public marketplace roadmap.

Microsoft makes Azure Marketplace offers available to CSP channel partners

Microsoft’s Cloud Solution Provider (CSP) partner program is the largest channel program in the industry with more than 60,000 channel partners serving millions of Microsoft customers worldwide. Starting today, ISVs can choose to make their transactable Azure Marketplace offer available for distribution through the CSP channel. Partners in the CSP program will be able to sell, deploy, and bundle Azure services with Azure-optimized ISV software from the marketplace to better serve customers and grow their managed services business.

Within Partner Center’s new marketplace page, CSP partners can discover, evaluate, and learn about all the Azure Marketplace solutions available through the channel. Software-as-a-Service (SaaS) subscriptions can be established through the standard purchase workflow in Partner Center, and Azure resources such as VM or container images may be deployed and procured through the Azure management portal. All of the transactions will now be available on a consolidated invoice making legal, billing, and account support much simpler.

Publisher partners can learn how to create or update offerings for CSP channel availability.

CSP partners can also explore the marketplace page in Partner Center.

Screenshot of new marketplace discovery experience in Partner Center under the “Sell” navigation pane

Expanding the market opportunity for partners with new geographies and business models

As customer adoption of marketplaces continues to grow around the world, we have expanded our global marketplace reach. We are pleased to share that Azure Marketplace has expanded coverage to 53 new countries, allowing partners to sell into a total of 141 countries, with 17 supported currencies.

Last year, we launched SaaS in Azure Marketplace with the ability to pay per month. As enterprise adoption of SaaS continues to grow, customers are looking for a variety of billing options. Today, we are pleased to highlight a new annual billing option for SaaS offers. We are also releasing a new set of tools to reduce procurement complexity for customers and ISV partners. For example, the new standard contract allows ISVs to leverage a unified and common set of terms and conditions for end customers.

Have questions or feedback? See the new or updated resources below. We also invite you to join us in the Microsoft Partner Community for discussion!

Publisher resources

Become a publisher
Publisher Guide for Partners
New: Publish Offerings for Resellers
New: Marketplace Roadmap
Azure Marketplace FAQs
Marketplace Support for Publishers

Cloud Solution Provider resources

Partner Center Documentation
New: Marketplace Offerings in Partner Center
Partner Center API Documentation
CSP Operations Guide

Source: Azure

Azure Communications is hosting an “Ask Me Anything” session!

Have you ever wondered where those service notifications in the Azure Portal and the Azure Status page come from? Curious why some messages appear to have more information than others? Interested in learning more about what goes into an outage statement? This is your chance to find out!

The Azure Communications team will be hosting a special "Ask Me Anything" (AMA) session on Reddit and Twitter. The Reddit session will be held on Monday, March 11th, from 10:00 AM to noon PST. Customers can participate by posting to the /r/Azure subreddit when the AMA is live. The Twitter session will follow soon after on Wednesday, March 13th, from 10:00 AM to noon PST. Be sure to follow @AzureSupport before March 13th and tweet us during the event using the hashtag #AzureCommsAMA.

Who are we?

We are among the first responders during times of crisis – the ones who draft, approve, and publish most customer-facing communications when outages happen. You've probably seen our messages on the Azure Service Health blade in the Portal, the Azure Status webpage, or even on the @AzureSupport Twitter handle. We'd like to think we bridge the gap between customers and the action happening behind the scenes.

What kind of communications do we provide?

Our communications are very crisis-oriented. When there is any kind of service interruption with the Azure platform, we are responsible for making sure our customers are provided with timely and accurate information. This includes information about outages, maintenance updates, and other good-to-know information for customers. We do not, however, manage any advertising or promotional communications.

Where do we communicate?

We have three primary channels for communicating with customers: Azure Service Health in the Portal, the Azure Status webpage, and @AzureSupport on Twitter.

Azure Service Health – Azure Service Health provides personalized guidance directly to customers when issues with the Azure platform affect their resources. Customers can review a personalized dashboard in the Portal, set up targeted notifications (email, SMS, webhook, etc.), receive support, and share details easily.

Azure Status webpage – This public-facing webpage (which does not require signing in) is only used to provide updates for major incidents or when there are known issues preventing access to Azure Service Health in the Portal. Customers should only refer to this page if they are not able to access Azure Service Health.

@AzureSupport on Twitter – In many ways, the @AzureSupport Twitter handle is a dynamic component of our communications process. Twitter allows engineering and support teams to gauge the pulse of the Azure platform through customer engagements on Twitter and act as a complementary resource to the Azure Status webpage. If necessary, @AzureSupport can even be used as a communications medium when Azure Service Health or the Azure Status webpage are not available.

Why are we hosting an AMA?

We’re going to be honest – we want there to be as little a disconnect as possible between customers and the action happening behind the scenes. An AMA provides us with an opportunity to connect with customers on a more intimate, informal level, and allows us to receive feedback directly from customers about some of the real-time decisions that are made during times of crisis. Hosting a multi-channel AMA through both Reddit and Twitter allows us to connect with a broad social community while providing customers with an experience based on transparency that is second to none.

Who will be there? How can I participate?

If you’re on Reddit, subscribe to the /r/Azure subreddit by Monday, March 11th, at 10:00 AM PST. Pochian Lee, Drey Zhuk, and others from the Azure Communications team will be answering questions on Reddit until noon. Just log in and post your questions to get started. If you’re on Twitter, be sure to follow @AzureSupport before Wednesday, March 13th. Starting at 10:00 AM PST, the team will be answering questions on Twitter until noon. Just tweet your questions to @AzureSupport during the event and be sure to include the #AzureCommsAMA hashtag so we don't miss you! While we will only be answering questions live during the event, customers are encouraged to post or tweet their questions any time starting at 10:00 AM on March 8th. This gives customers in different time zones an opportunity to participate.

Questions you may already have:

Why am I impacted and still see green on the Azure Status webpage?

The majority of our outages impact a limited subset of customers and we reach out to the impacted customers directly via the Azure Service Health blade. The Azure Status webpage provides information regarding any major impacting event that customers should be aware of on a broader scale.

Do I raise a support case during an outage?

If, after checking the Azure Service Health blade to see whether you have been impacted by an outage, you find that your problem doesn’t match the impact we’re observing, we recommend that you create an Azure support case.

If you do not see a notification within the Azure Service Health blade and believe that there is an outage, please create an Azure support case.

If I don't see a notification in the Azure Service Health blade during an outage, should I raise a support ticket?

If you believe you are impacted by an outage, then yes!

How do I get notifications via email, SMS, etc. during an outage?

Customers can receive additional notifications via Azure Service Health and Azure Resource Health.

Azure Service Health – To get notifications during an outage, you need to set up alerts within Azure Service Health to match your preferences (e.g., you want to be notified of any outage affecting your virtual machines in the eastern US). Find additional information, and learn how to request notifications, in this documentation.

Azure Resource Health – To get notifications based on the health of your individual resources (e.g., you want to be notified about issues affecting two specific virtual machines), configure Resource Health alerts by following the steps in this article.
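
If you prefer to script the Service Health alert, a rough Azure CLI sketch is shown below; the action group, alert name, and condition syntax are illustrative assumptions and should be checked against az monitor activity-log alert create --help before use.

# Hedged sketch: create an action group (email, SMS, etc. are configured on it) and an
# activity log alert that fires on Service Health events in the subscription.
az monitor action-group create -g myresourcegroup -n myactiongroup --short-name myag
az monitor activity-log alert create -g myresourcegroup -n ServiceHealthAlert \
  --condition category=ServiceHealth --action-group myactiongroup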

If I have Service Health alerts set up, why have I not received a notification of an outage?

We try our best to inform impacted customers of an outage using our telemetry and are actively working on improving our telemetry to make sure we alert all customers that are impacted. We’ve made great progress for certain scenarios where our automated alerting is triggered from high-fidelity monitoring within our system. We’re looking to further develop this telemetry to ensure that the right customers are informed in a timely manner.

Where can I find Azure Service Health communications?

Communications can be seen within the Azure Service Health blade.

Why do I get communications so late in the Portal?

As soon as we’re able to validate customer impact and the services involved, we inform customers immediately. We’re actively working on improving our automation and telemetry to make sure customers are aware in real-time.

Why aren't these communications more visible when I log into the Portal?

We have heard this feedback before and are currently collaborating with partner teams to improve the visibility of the communications in the Azure Portal.

Be sure to subscribe to /r/Azure and follow @AzureSupport on Twitter before March 11th, 2019, in order to participate. We look forward to answering your questions live during the event!
Source: Azure

Service Fabric Processor in public preview

Microsoft clients for Azure Event Hubs have always had two levels of abstraction. There is the low-level client, which includes event sender and receiver classes that allow maximum control by the application, but also force the application to understand the configuration of the Event Hub and maintain an event receiver connected to each partition. Built on top of that low-level client is a higher-level library, Event Processor Host, which hides most of those details for the receiving side. Event Processor Host automatically distributes ownership of Event Hub partitions across multiple host instances and delivers events to a processing method provided by the application.

Service Fabric is another Microsoft-provided offering: a generalized framework for dividing an application into shards and distributing those shards across multiple compute nodes. Many customers are using Service Fabric for their applications, and some of those applications need to receive events from an Event Hub. It is possible to use Event Processor Host within a Service Fabric application, but it is inelegant and redundant: the combination means that there are two separate layers attempting to distribute load across nodes, and neither one is aware of the other. It also introduces a dependency on Azure Storage, which is what Event Processor Host instances use to coordinate partition ownership, along with the associated costs.

Service Fabric Processor is a new library for consuming events from an Event Hub that is directly integrated with Service Fabric: it uses Service Fabric's facilities for managing partitions, for reliable storage, and for more sophisticated load balancing. At the same time, it provides a simple programming interface that will be familiar to anyone who has worked with Event Processor Host. The only specific requirement that Service Fabric Processor imposes is that the Service Fabric application in which it runs must have the same number of partitions as the Event Hub from which it consumes. This allows a simple one-to-one mapping of Event Hub partitions to application partitions, and lets Service Fabric distribute the load most effectively.

Service Fabric Processor is currently in preview and available on NuGet at the “Microsoft.Azure.EventHubs.ServiceFabricProcessor” web page. The source code is on GitHub in our .NET Event Hubs client repository. You can also find a sample application available on GitHub.

From the developer's point of view, there are two major pieces to creating an application using Service Fabric Processor. The first piece is creating a class that implements the IEventProcessor interface. IEventProcessor specifies methods that are called when processing is starting up for a partition (OpenAsync), when processing is shutting down (CloseAsync), for handling notifications when an error has occurred (ProcessErrorAsync), and for processing events as they come in (ProcessEventsAsync). The last one is where the application's business logic goes and is the key part of most applications.

The second piece is integrating with Service Fabric by adding code to the application's RunAsync method, which is called by Service Fabric to run the application's functionality. The basic steps are:

Create an instance of EventProcessorOptions and set any options desired.

Create an instance of the IEventProcessor implementation. This is the instance that will be used to process events for this partition.

Create an instance of ServiceFabricProcessor, passing the options and processor objects to the constructor.

Call RunAsync on the ServiceFabricProcessor instance, which starts the processing of events.

Next steps

For more details, follow our programming guide, which is available on GitHub. Did you enjoy this blog post? Don't forget to leave your thoughts and feedback in the comment section below. You can also learn more about Event Hubs by visiting our product page.
Source: Azure

Presenting the new IIC Security Maturity Model for IoT

Organizations deploying IoT solutions often ask similar questions as they address security—What is the risk my organization takes on as we adopt IoT? How much security do we need for our scenario? Where should we invest for the biggest impact? To answer those questions, Microsoft co-authored and edited the Industrial Internet Consortium (IIC) IoT Security Maturity Model (SMM) Practitioner’s Guide. The SMM leads organizations as they assess the security maturity state of their current organization or system, and as they set the target level of security maturity required for their IoT deployment. Once organizations set their target maturity, the SMM gives them an actionable roadmap that guides them from lower levels of security maturity to the state required for their deployment.

Because not all IoT scenarios require the same level of security maturity, the goal of the SMM is to allow organizations to meet their scenario needs without over-investing in security mechanisms. For example, a manufacturing or an oil and gas solution involving safety needs an especially high maturity state.

The SMM complements Microsoft’s body of existing research and standards for IoT security, such as the “Seven Properties of Highly Secure Devices.” While the research in the Seven Properties paper provides a comprehensive deep dive into device security, the SMM takes a broader view of IoT security. This comprehensive model is used in the IoT security space to assess the maturity of organizations’ systems including governance/process, technology, and system security management. Other models typically address IT but not IoT, or IoT but not security, or security but not IoT. The SMM covers all these aspects and leverages other models where appropriate.

Applying the SMM to your organization

While the SMM’s intended audience is owners of IoT systems, decision makers, and security leaders, we expect assessment companies, and assessment groups within organizations, to be the main practitioners of the model. The SMM allows decision makers and security leaders to understand and consistently apply assessments performed by different groups. It also provides flexibility for industry extensions (currently being explored with several industry groups and associations) and allows for different types of visualization of the model results.

The SMM is organized as a hierarchy and includes domains, subdomains, and practices. This hierarchical approach enables the maturity and gap analysis to be viewed at different levels of detail, making it easier for organizations to prioritize gaps.

The SMM also makes an important distinction between security levels and security maturity states, helping organizations to understand the differences between what their goals need to be and where they are in their security journey. In the SMM, a security level is a measure of how much security you have. The SMM does not dictate what the appropriate security level should be for your organization. Rather, it provides guidance and structure so organizations can identify considerations for different security maturity states appropriate for their industry and systems.

Security maturity, on the other hand, is a measure of how well your security technology, processes, and operations meet your organization’s specific needs. The SMM helps you determine how much security you need, based on cost, benefit, and risk. The model allows you to consider factors such as the specific threats to your organization's industry vertical, regulatory and compliance requirements, the unique risks present in the environments your IoT operates in, and your organization's threat profile.

As you begin working with the SMM, it guides you through each step of the assessment using the model. Your organization begins by establishing a target state or identifying a relevant industry profile you want to target. Your organization then conducts an assessment to capture a current maturity state. By comparing the target and current states, organizations identify gaps. Based on the gap analysis, business and technical stakeholders can establish a roadmap, take action, and measure the progress. Organizations improve their security state by making continued security assessments and improvements over time. No matter how far along you are with IoT security, the model will help you close gaps that bring you to your desired security maturity.

Assessing security details with the security maturity model

Once you begin working with the SMM, it guides you through a rigorous approach to defining how well your security state meets your needs. To help you identify actionable areas to improve and to avoid blind spots in your plan, the SMM introduces domains, subdomains, and practices. You can gauge how well your organization is doing in each domain, subdomain, and practice along two dimensions: comprehensiveness and scope. Comprehensiveness is a measure of depth, with higher levels indicating a higher degree of maturity of a process or technology. Scope allows for identifying general, industry, and system-specific requirements, ensuring the SMM can be tailored to your industry and use case with more precision than previous models could achieve.

SMM Hierarchy

The domains in the SMM include governance, enablement, and hardening. These domains determine the priorities of security maturity enhancements at the strategic level.

Governance influences business process and includes program management, risk management, and supply chain and third-party management.
Enablement covers architecture considerations and security technology mechanisms and includes identity management, access control, and physical protection.
Hardening defines countermeasures to deal with incidents and includes monitoring, event detection, and remediation.

The subdomains reflect the means of obtaining the priorities at the planning level. The practices define typical activities associated with subdomains identified at the tactical level.

The SMM includes practice tables grouped by domains and subdomains. Each SMM practice includes a table describing what must be done to reach a given comprehensiveness level at the general scope. For each comprehensiveness level, the table describes the objective and general considerations, a description of the level, practices that should be in place to achieve that level, and indicators of accomplishment to help assessors determine if the organization has met the requirements of the level.

Of course, general guidelines are often difficult to apply to specific scenarios. For that reason, an example follows each table using various industry use cases to demonstrate how an organization might use the table to pick a target state or to evaluate a current state. The guide also contains three case studies that show IoT stakeholders how to apply the process. The case studies include a smarter data-driven bottling line, an automotive gateway supporting Over the Air (OTA) updates, and consumer residential settings using security cameras. As our work on the SMM continues, we will work with industry organizations and associations to define industry profiles for the SMM.

Getting started with the SMM

If you want more information on exactly how the SMM works or how you can begin, the best spot to start is with the model itself: evaluate or improve your IoT security with the SMM today. To learn more about the SMM from its authors, watch our SMM introduction webinar.

For details on building your secure IoT solution on the trusted Azure IoT cloud services, see our Azure IoT Security Architecture for more information, or start your free trial to get immediate hands-on experience.
Source: Azure

Announcing new capabilities in Azure Firewall

Today we are excited to launch two key new capabilities in Azure Firewall:

Threat intelligence based filtering
Service tags filtering

Azure Firewall is a cloud-native firewall-as-a-service offering that enables customers to centrally govern all their traffic flows using a DevOps approach. The service supports both application-level (such as *.github.com) and network-level filtering rules. It is highly available and automatically scales as your traffic grows.

Threat intelligence based filtering (preview)

Microsoft has a rich signal of both internal threat intelligence data and third-party sourced data. Our vast team of data scientists and cybersecurity experts constantly mines this data to create a high-confidence list of known malicious IP addresses and domains. Azure Firewall can now be configured to alert on and deny traffic to and from known malicious IP addresses and domains in near real time. The IP addresses and domains are sourced from the Microsoft Threat Intelligence feed. The Microsoft Intelligent Security Graph powers Microsoft Threat Intelligence and provides security in multiple Microsoft products and services, including Azure Security Center and Azure Sentinel.

Threat intelligence-based filtering is default-enabled in alert mode for all Azure Firewall deployments, providing logging of all matching indicators. Customers can adjust behavior to alert and deny.
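
As a rough sketch, switching an existing firewall from the default alert-only mode to alert-and-deny might look like the following with the Azure CLI. The azure-firewall extension and the --threat-intel-mode parameter are assumptions about the CLI surface, not commands taken from this post; verify with az network firewall update --help.

# Hedged sketch: enable deny mode for threat intelligence-based filtering.
az extension add --name azure-firewall
az network firewall update -g myresourcegroup -n myfirewall --threat-intel-mode Deny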

Figure 1 – Azure Firewall concept architecture

Managing your firewall

Logging, analysis of threat data, and actionable insights are all crucial and central themes in planning, building, and operating applications and infrastructure.

Azure Firewall provides full integration with Azure Monitor. Logs can be sent to Log Analytics, Storage, and Event Hubs. Azure Log Analytics allows for the creation of rich dashboards and visualizations. Along with custom data queries, this powerful integration provides a common place for all your logging needs, with vast options to customize the way you consume your data. Customers can send data from Azure Monitor to SIEM systems such as Splunk, ArcSight, and similar third-party offerings.
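
A minimal sketch of wiring firewall logs to a Log Analytics workspace follows; the resource IDs are placeholders, and the log category names are assumptions to check against the Azure Firewall diagnostics documentation.

# Hedged sketch: send Azure Firewall application and network rule logs to Log Analytics.
az monitor diagnostic-settings create --name fw-diagnostics \
  --resource <firewall resource ID> --workspace <Log Analytics workspace resource ID> \
  --logs '[{"category":"AzureFirewallApplicationRule","enabled":true},{"category":"AzureFirewallNetworkRule","enabled":true}]'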

Figure 2 – Azure Firewall detecting a compromised VM using threat intelligence and blocking these outbound connections

Figure 3 – Azure Firewall detecting port scan attempts using threat intelligence and blocking these inbound connections

Service tags filtering

Along with threat intelligence-based filtering, we are adding support for service tags, which have also been a highly requested feature by our users. A service tag represents a group of IP address prefixes for specific Microsoft services, such as SQL Azure, Azure Key Vault, and Azure Service Bus, to simplify network rule creation. Microsoft today supports service tags for a rich set of Azure services, which includes managing the address prefixes encompassed by each service tag and automatically updating the tag as addresses change. Azure Firewall service tags can be used in the network rules destination field. We will continue to add support for additional service tags over time.
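
As a sketch, a network rule that allows outbound SQL traffic by service tag might look like the following. The names, priority, source range, and port are placeholders, and the parameter spellings should be verified with az network firewall network-rule create --help.

# Hedged sketch: allow TCP 1433 from an application subnet to the Sql service tag.
az network firewall network-rule create -g myresourcegroup --firewall-name myfirewall \
  --collection-name allow-azure-sql --name sql-service-tag --priority 100 --action Allow \
  --protocols TCP --source-addresses 10.0.0.0/24 --destination-addresses Sql --destination-ports 1433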

Central management

Azure Firewall public REST APIs can be used by third-party security policy management tools to provide a centralized management experience for Azure Firewalls, Network Security Groups, and network virtual appliances (NVAs). In September 2018, we announced the private preview with Barracuda, AlgoSec’s new CloudFlow service, and Tufin. We are happy to announce that AlgoSec CloudFlow is now available as a public beta. Learn more and join at the AlgoSec website.

We want to thank all our customers for their amazing feedback since Azure Firewall became generally available in September 2018. We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Please do keep your feedback coming and we look forward to continuing to advance the service to meet your needs.

Learn more

Azure Firewall Documentation
Azure Firewall Threat Intelligence
Azure Firewall Service Tags
Pricing

Source: Azure