Send your Azure alerts to ITSM tools using Action Groups

At Ignite 2017, we announced the new IT Service Management (ITSM) Action in Azure Action Groups. As you might know, an Action Group is a reusable notification grouping for Azure alerts. Users can create an action group with actions such as sending an email or SMS, or calling a webhook, and reuse it across multiple alerts. The new ITSM Action allows users to create a work item in the connected ITSM tool when an alert fires.

ITSM Connector Solution in Log Analytics

This action builds on top of the IT Service Management Connector Solution in Azure Log Analytics. The ITSM Connector solution provides a bi-directional connection with the ITSM tool of your choice. Currently the solution is in public preview and supports connections with ITSM tools such as System Center Service Manager, ServiceNow, Provance, and Cherwell. Today, through the ITSM Action, we are bringing the same integration capabilities to Azure alerts.

The IT Service Management Connector allows you to:

Create work items (incidents, alerts, and events) in the connected ITSM tool when a Log Analytics alert fires, or manually from a Log Analytics log record.
Combine the power of help desk data, such as incidents and change requests, and log data, such as activity and diagnostic logs, performance, and configuration changes, to mitigate incidents quickly.
Derive insights from incidents using the Azure Log Analytics platform.

Using the new ITSM Action

Before you can start using the ITSM Action, you will need to install and configure the IT Service Management Connector Solution in Log Analytics. Once you have the solution configured, you can follow the steps below to use the ITSM Action.

1. In the Azure portal, click on Monitor.

2. In the left pane, click on Action groups.

3. Provide a Name and Short Name for your action group. Select the Resource Group and Subscription where you want your action group to be created.

4. In the Actions list, select ITSM from the drop-down for Action Type. Provide a Name for the action and click on Edit details.

5. Select the Subscription where your Log Analytics workspace is located. Select the Connection (i.e., your ITSM Connector name) followed by your Workspace name. For example, "MyITSMConnector(MyWorkspace)."

6. Select Work Item type from the drop-down.

7. Choose to use an existing template or complete the fields required by your ITSM product.

8. Click OK.

When creating/editing an Azure alert rule, use an Action Group which has an ITSM Action. When the alert triggers, a work item is created in the ITSM tool.

Note: Currently only Activity Log Alerts support the ITSM Action. For other Azure alerts, this action is triggered but no work item will be created.

We hope you will find this feature useful in integrating your alerting and Service Desk solutions. Learn more and get information on IT Service Management Connector Solution and Action Groups.

We would love to hear your feedback. Send us any questions or feedback to azurealertsfeedback@microsoft.com. 
Source: Azure

September 2017 Leaderboard of Database Systems contributors on MSDN

Congratulations to our September top-10 contributors! Alberto Morillo maintains his first position in the cloud ranking while Olaf Helper climbs to the top in the All Databases ranking.

This Leaderboard initiative was started in October 2016 to recognize the top Database Systems contributors on MSDN forums. The following continues to be the points hierarchy (in decreasing order of points):

Source: Azure

ADAL.NET 3.17.0 released

ADAL.NET (Microsoft.IdentityModel.Clients.ActiveDirectory) is an authentication library which enables developers to acquire tokens from Azure AD and ADFS to access Microsoft APIs or applications registered with Azure Active Directory. ADAL.NET is available on several .NET platforms including Desktop, Universal Windows Platform, Xamarin Android, Xamarin iOS, Portable Class Libraries, and .NET Core. We just released ADAL.NET 3.17.0, which enables new capabilities and brings improvements in terms of usability, privacy, and performance.

Enabling new capabilities

ADAL.NET 3.17.0 enables you to:

Write more efficient applications, tolerant to Azure AD throttling.
Force end users of your apps to choose an identity, even when they are already signed in.
Handle conditional access claim challenges more effectively.

Enabling more efficient applications (Retry-After for instance)

You might have seen some of our samples acquire a token and catch an AdalException with the ErrorCode "temporarily_unavailable". When the Security Token Service (STS) is too busy because of "too many requests", it returns an HTTP 429 error with a hint about when you can try again (the Retry-After response header), either as a delay in seconds or as a date.

Previously, ADAL.NET did not surface this information. Therefore, to handle the error, we advised retrying an arbitrary number of times after waiting for a hard-coded, arbitrary delay. For a console application, the code could look like the following:

do
{
    retry = false;
    try
    {
        result = await authContext.AcquireTokenAsync(resource, certCred);
    }
    catch (AdalException ex)
    {
        if (ex.ErrorCode == "temporarily_unavailable")
        {
            retry = true;
            retryCount++;
            Thread.Sleep(3000);
        }
    }
} while ((retry == true) && (retryCount < 2));

From ADAL.NET 3.17.0, we are now surfacing the System.Net.Http.Headers.HttpResponseHeaders as a new property named Headers in the AdalServiceException. Therefore, you can leverage additional information to improve the reliability of your applications. In the case we just described, you can use the RetryAfter property (of type RetryConditionHeaderValue) and compute when to retry.

Note that depending on whether you are using ADAL.NET for a confidential client application or a public client application, you will have to catch the AdalServiceException directly, or as an InnerException of an AdalException.

The following code snippet should give you an idea of how to proceed depending on the case:

do
{
    retry = false;
    TimeSpan? delay = null;
    try
    {
        result = await authContext.AcquireTokenAsync(resource, certCred);
    }

    // Case of a confidential client flow
    // (for instance auth code redemption in a Web App)
    catch (AdalServiceException serviceException)
    {
        if (serviceException.ErrorCode == "temporarily_unavailable")
        {
            RetryConditionHeaderValue retryAfter = serviceException.Headers.RetryAfter;
            if (retryAfter.Delta.HasValue)
            {
                delay = retryAfter.Delta;
            }
            else if (retryAfter.Date.HasValue)
            {
                // Wait until the date suggested by the service
                delay = retryAfter.Date.Value - DateTimeOffset.UtcNow;
            }
        }
    }

    // Case of a client side exception
    catch (AdalException ex)
    {
        if (ex.ErrorCode == "temporarily_unavailable")
        {
            var serviceEx = ex.InnerException as AdalServiceException;
            // Same kind of processing as above, using serviceEx.Headers.RetryAfter
        }
    }

    if (delay.HasValue)
    {
        Thread.Sleep(delay.Value); // sleep, or await Task.Delay(delay.Value)
        retry = true;
    }

} while (retry);

Forcing the user to select an account

More people are using multiple personal, professional, and organizational identities. You might have a use case in your application where you want your users to choose which identity to use. To enable such use cases, we added a new value, SelectAccount, to the PromptBehavior enumeration for the platforms supporting interaction (Desktop, WinRT, Xamarin iOS, and Xamarin Android). Using it forces your app's users to choose an account, even when they are already signed in, by bypassing the cache lookup and presenting the UI directly.

You might have used PromptBehavior.Always in the past, which also bypasses the token cache and presents a user interface. PromptBehavior.SelectAccount is different because it tells Azure AD to display the available users as tiles and does not force users to sign in again (assuming the cookies are available; remember that the interaction between the user and Azure AD happens in a browser). The presence of tiles does not guarantee a single sign-on experience, because the behavior is determined by the cookie lifetime, which is managed completely outside the library's purview.
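As a minimal sketch on the desktop platform, forcing the account picker looks like the following; the resource URL, client ID, and redirect URI below are placeholders for your own app registration values:

```csharp
using System;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// ... inside an async method ...
var authContext = new AuthenticationContext("https://login.microsoftonline.com/common");

// SelectAccount bypasses the token cache and shows the account tiles,
// even when a user is already signed in.
AuthenticationResult result = await authContext.AcquireTokenAsync(
    "https://graph.microsoft.com",           // resource to access (placeholder)
    "your-client-id",                        // app's client ID (placeholder)
    new Uri("https://your-app/redirect"),    // redirect URI (placeholder)
    new PlatformParameters(PromptBehavior.SelectAccount));
```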

Enabling your applications to handle conditional access (and other claim challenges)

We try to keep most of our samples simple; however, you probably know that if you want to produce enterprise-ready applications, you will have to put a bit more effort into error handling. To that effect, since ADAL.NET 3.16.0, you have been able to process claim challenges sent by Azure AD when your application needs to involve the user, either to consent to the application accessing additional resources or to perform multi-factor authentication. In ADAL.NET 3.17.0, we improved this feature by passing the HttpRequestWrapperException back to the API caller as an inner exception of AdalClaimChallengeException, so that you can get the missing claims. You can then pass these additional claims to the AcquireToken overloads, which have a new claims parameter.

The code snippet below is extracted from the active-directory-dotnet-webapi-onbehalfof-ca sample. It illustrates the claim challenge received from Azure AD by the TodoList service (confidential client) and how this challenge is propagated to the clients so that they can, in their turn, have the needed user interaction (for instance two-factor authentication).

try
{
    result = await authContext.AcquireTokenAsync(caResourceId, clientCred, userAssertion);
}
catch (AdalClaimChallengeException ex)
{
    HttpResponseMessage myMessage = new HttpResponseMessage
    {
        StatusCode = HttpStatusCode.Forbidden,
        ReasonPhrase = INTERACTION_REQUIRED,
        Content = new StringContent(ex.Claims)
    };
    throw new HttpResponseException(myMessage);
}
catch (AdalServiceException ex)
{

On the client side (TodoListClient), the code getting this challenge when calling the TodoList service and re-requesting the token with more claims is the following:

// We successfully got a token.
httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", result.AccessToken);

// while calling the API.
HttpResponseMessage response =
    await httpClient.GetAsync(todoListBaseAddress + "/api/AccessCaApi");

if (response.StatusCode == System.Net.HttpStatusCode.Forbidden
    && response.ReasonPhrase == INTERACTION_REQUIRED)
{
    // We need to re-request the token to account for a Conditional Access policy
    String claimsParam = await response.Content.ReadAsStringAsync();

    try
    {
        result = await authContext.AcquireTokenAsync(todoListResourceId, clientId,
            redirectUri, new PlatformParameters(PromptBehavior.Always),
            new UserIdentifier(displayName, UserIdentifierType.OptionalDisplayableId),
            extraQueryParameters: null,
            claims: claimsParam);

More details are described in the Developer Guidance for Azure Active Directory Conditional Access and the samples linked from this article.

Usability improvements

ADAL.NET NuGet package now contains one DLL for each platform

ADAL.NET used to be packaged as one common assembly which dynamically loaded another platform specific assembly using dependency injection. This was causing issues (like #511). When you were referencing the NuGet package from a portable library, you also had to reference the platform specific assembly from your main assembly, which was not very intuitive.

Starting with ADAL.NET 3.17.0, the NuGet package now contains a single DLL for each platform.

In case you are interested in the implementation details, have a look at ADAL.NET’s source code on GitHub; you’ll see that we’ve moved to a multi-target project for ADAL.NET.

Removing confusion by hiding the APIs which did not make sense in some platforms

WinRT Apps can now only use one ClientCredential constructor

Even if WinRT applications are generally public client applications, they can also use the client credential flow to enable kiosk-mode scenarios where no user is signed in. So far, the ClientCredential class used in confidential client scenarios had two overloads:

One with an application secret: public ClientCredential(string clientId, string clientSecret).
One that redeems an authorization code or passes in a user assertion.

The latter did not make sense for WinRT applications; it is now only available in desktop applications.

Device Profile API is now only available in Desktop, .NET Core and UWP apps

The Device Profile API, AcquireTokenByDeviceCodeAsync(DeviceCodeResult deviceCodeResult), acquires a security token from the STS using a device code previously requested through one of the overloads of AcquireDeviceCodeAsync (see Invoking an API protected by Azure AD from a text-only device). AcquireTokenByDeviceCodeAsync is no longer available on Xamarin iOS and Xamarin Android, which are not text-only devices. It should only be used in desktop, .NET Core, and UWP (for IoT) apps. This makes things more consistent, as AcquireDeviceCodeAsync was already unavailable on Android and iOS.
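For reference, the two-step device code flow on a supported platform looks roughly like the following sketch; the resource URL and client ID are placeholders:

```csharp
using System;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// ... inside an async method ...
var authContext = new AuthenticationContext("https://login.microsoftonline.com/common");

// Step 1: request a device code and display the sign-in message to the user.
DeviceCodeResult codeResult = await authContext.AcquireDeviceCodeAsync(
    "https://graph.microsoft.com",  // resource to access (placeholder)
    "your-client-id");              // app's client ID (placeholder)
Console.WriteLine(codeResult.Message);

// Step 2: poll until the user has completed sign-in on another device.
AuthenticationResult result =
    await authContext.AcquireTokenByDeviceCodeAsync(codeResult);
```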

Improved documentation

We fixed a number of issues in the reference documentation, which were confusing for UserPasswordCredential and AcquireToken (see #654). We also updated the library’s readme.md with instructions on how to enable logging by implementing the IAdalLogCallback interface, and how to interact with an external broker (in the case of Xamarin iOS / Xamarin Android).

Privacy and performance improvements

As you might know, you can activate logging in ADAL.NET by assigning LoggerCallbackHandler.Callback to an instance of a class implementing the IAdalLogCallback interface. When you did that and chose to see Verbose information, each request ADAL.NET sent to Azure AD produced two messages:

“Navigating to ‘complete URL’”
“Navigated to ‘complete URL’”

Here, ‘complete URL’ was the full URL sent to Azure AD, which, with some prompt behaviors, could include personal information such as the user's User Principal Name (UPN).

We improved privacy by no longer logging the complete URL.
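As a sketch, wiring a custom logger into ADAL.NET looks like the following; IAdalLogCallback and LoggerCallbackHandler are part of the library, while ConsoleAdalLogger is an illustrative name:

```csharp
using System;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

// A minimal logger that writes ADAL diagnostics to the console.
class ConsoleAdalLogger : IAdalLogCallback
{
    public void Log(LogLevel level, string message)
    {
        Console.WriteLine($"[ADAL {level}] {message}");
    }
}

// Register it once at application startup:
// LoggerCallbackHandler.Callback = new ConsoleAdalLogger();
```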

We also improved performance by fixing a memory leak specific to the Xamarin iOS platform.

In closing

As usual we’d love to hear your feedback. Please:

Ask questions on Stack Overflow using the ADAL tag.
Use GitHub Issues on the ADAL.NET open source repository to report bugs or request features.
Use the User Voice page to provide recommendations and/or feedback.

Source: Azure

Microsoft’s Azure SQL Database ranked #1 Database as a Service for developer satisfaction by SlashData

Today we’re excited and honored to be recognized with three Developer Satisfaction awards from SlashData, a leading analyst firm in the developer community. On October 10, 2017, at their annual Future Developer Summit, Azure SQL Database was announced as the winner in the category of Database as a Service (DBaaS) developer satisfaction, with two first runner-up awards for developer training and engagement.

Azure SQL Database is an intelligent, fully-managed relational cloud database service built for developers. It makes building and maintaining applications easier and more productive, supporting more languages and platforms than ever before. With SQL Database, developers can also accelerate their app development by taking advantage of intelligent features that are built right into SQL Database that learn app patterns and adapt to maximize performance, reliability, and data protection.

Microsoft is investing in the developer community, providing the tools and platforms to help developers do what they do best: build great apps. Our commitment extends beyond the product to engaging them through virtual events connected to our Microsoft Build and Connect() shows and through training partners such as Pluralsight. Deep documentation, hands-on-labs and code repos provide the content and tools they need to quickly move projects along.

The Developer Satisfaction Awards recognize the software products and brands that developers are most satisfied with. Results are based on the independent and unbiased opinions of over 40,000 developers annually, from around the globe, combined with SlashData's research methodology.

According to Andreas Constantinou, CEO and Founder of SlashData, “We want to congratulate Microsoft on their Developer Satisfaction awards today and thank them for their ongoing commitment to the developer community. Our bi-annual Developer Economics survey measures satisfaction across the tools that developers use, and Microsoft’s Azure SQL Database was ranked #1 in the Database as a Service category.”

These awards from SlashData are additional validation of the work Microsoft and the SQL team are doing to deliver great developer experiences and database services designed for developers. We would like to thank our community for your feedback and support on this journey and the SlashData team for this recognition.
Source: Azure

Announcing support for X.509 CA on Azure IoT Hub

We’re pleased to announce support for X.509 Certificate Authorities (X.509 CA) on Azure IoT Hub!

The use of an X.509 CA simplifies the creation of initial unique Internet of Things (IoT) certificate identities for devices in the device manufacturing flow. Instead of pre-creating the identities for every device and having to protect the associated secrets during manufacturing, the use of an X.509 CA simplifies the flow into two one-time processes for the certificate owner:

1. Authorize the factory once, enabling it to create initial identities for IoT devices, or enable downstream factories and/or service providers in the manufacturing flow. This enablement process, called signing, only needs to happen once. Learn more about signing and certificate chains.
2. Upload the X.509 CA certificate to the Azure IoT Hub, where it will be used to authenticate IoT devices as they connect. The upload is a one-time process.

Figure 1: Using X.509 Certificate Authorities on Azure IoT Hub

The X.509 CA feature can be used alone or in conjunction with the Azure IoT Hub Device Provisioning Service (DPS). When used with DPS it enables provisioning for true zero-touch secure identity management for IoT.

X.509 CA reduces the burden of keeping private keys secret in a supply chain, especially when multiple custodians are involved. Private keys are an integral part of the certificate identities for IoT devices. Without an X.509 CA, unique private keys would have to be pre-generated and kept secret until securely injected into the IoT device, for every device. For each device, a unique attribute of the key, called a thumbprint, is created and registered with IoT Hub. The thumbprint in IoT Hub is then used to authenticate the device when it connects. Using an X.509 CA certificate, in contrast, means you only have to register a CA certificate once; you can use it to authenticate as many devices as needed. The burden is further reduced when private keys are generated within secure silicon hardware, eliminating the injection process altogether. Learn more about how Microsoft supports a wide variety of secure hardware.
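On the device side, connecting to IoT Hub with an X.509 certificate using the Azure IoT C# device SDK looks roughly like the following sketch; the hub hostname, device ID, certificate path, and password are placeholders:

```csharp
using System.Security.Cryptography.X509Certificates;
using System.Text;
using Microsoft.Azure.Devices.Client;

// ... inside an async method ...
// Load the device's certificate and private key (placeholders).
var deviceCert = new X509Certificate2("device.pfx", "pfx-password");

// Authenticate with the certificate instead of a pre-registered thumbprint or key.
var auth = new DeviceAuthenticationWithX509Certificate("myDeviceId", deviceCert);
DeviceClient client = DeviceClient.Create(
    "myhub.azure-devices.net", auth, TransportType.Mqtt);

await client.SendEventAsync(new Message(Encoding.UTF8.GetBytes("hello")));
```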

Creating support for X.509 CA on Azure IoT Hub is part of Microsoft’s relentless efforts towards simplifying deployment of secure Internet of Things. Simplifying the creation of initial device identities is another step towards enabling IoT at scale, and allows customers to use DPS to provision devices.

Learn more about Microsoft IoT

Microsoft is simplifying IoT so every business can digitally transform through IoT solutions that are more accessible and easier to implement. Microsoft has the most comprehensive IoT portfolio with a wide range of IoT offerings to meet organizations where they are on their IoT journey, including everything businesses need to get started — ranging from operating systems for their devices, cloud services to control them, advanced analytics to gain insights, and business applications to enable intelligent action. To see how Microsoft IoT can transform your business, visit www.InternetofYourThings.com.

Get started using the X.509 CA feature!
Source: Azure

Last week in Azure for the week of 02 October 2017

To paraphrase Ferris Bueller, “Azure moves pretty fast. If you don't stop and look around once in a while, you could miss it.” To help you do that without spending too much time looking around, this weekly series will highlight what’s new from across Azure over the previous week; however, it won’t provide a comprehensive list of everything that’s new.

That said, here are five highlights from last week:

1. Java on Azure

JavaOne was last week in San Francisco, so there was a lot happening around the Java offering on Azure. Prior to the event, we announced support for securely deploying and redeploying Java apps on Kubernetes in Azure Container Service using Maven. At the event, we announced Java support for Azure Functions, which is now in public preview. Check out the Azure Functions Java developer guide to learn more. We released Azure Management Libraries for Java v1.3, which adds support for Availability Zones, Network Peering, Virtual Network Gateways, and Azure Container Instances. Start exploring what you can do with Java on Azure.

2. Reflections on Microsoft Ignite 2017

Corporate Vice President, Julia White, gathered her thoughts to provide highlights from the recent Microsoft Ignite event in Orlando, FL, which includes links to some useful articles that came out from the event. Be sure to give her post a read: Azure is the Enterprise Cloud – highlights from Microsoft Ignite 2017.

3. SQL Server 2017 Linux & Windows VMs now available in Azure Marketplace

SQL Server 2017 images on Linux and Windows are now available in the Azure Marketplace. Deploying SQL Server in Azure VMs combines the industry-leading performance and security, built-in artificial intelligence, and business intelligence of SQL Server, now available on both Linux and Windows, with the flexibility, security, and hybrid connectivity of Azure. For more information, see Announcing new Azure VM images: SQL Server 2017 on Linux and Windows.

4. Azure Building Blocks

Last week we released Azure Building Blocks, which are a set of tools and Azure Resource Manager templates that are designed to simplify deployment of Azure resources. The Microsoft patterns & practices team is working on a set of Azure Building Block tutorials on GitHub to help you master their use.

5. New episodes of Azure Friday

Experimental cmdlets in Azure PowerShell – Aaron Roney joins Scott Hanselman to check out the new Experimental cmdlets in Azure PowerShell.

Azure App Service with Hybrid Connections to On-premises Resources – Christina Compy joins Scott Hanselman to talk about the recently re-launched App Service Hybrid Connections.

Source: Azure

Scale up your deep learning with Batch AI preview

Imagine reducing your training time for an epoch from 30 minutes to 30 seconds, and testing many different hyper-parameter weights in parallel. Available now, in public preview, Batch AI is a new service that helps you train and test deep learning and other AI or machine learning models with the same scale and flexibility used by Microsoft’s data scientists. Managed clusters of GPUs enable you to design larger networks, run experiments in parallel and at scale to reduce iteration time and make development easier and more productive. Spin up a cluster when you need GPUs, then turn them off when you’re done and stop the bill.

Developing powerful AI involves combining large data sets for training with clusters of GPUs for experimenting with network design and optimization of hyper-parameters. Having access to this capability as a service helps data scientists and AI researchers get results faster and focus on building better models instead of managing infrastructure. This is where Batch AI comes in as part of the Microsoft AI platform.

"Deep learning researchers require increasing computing time to train complex neural networks with big data. Large computing clusters on Microsoft Azure is one of the solutions to resolve our researchers' pain, and Azure Batch AI will be the key solution to connect on-premises and cloud environments. Preferred Networks is excited to integrate Chainer & ChainerMN with this service." –Hiroshi Maruyama, Chief Strategy Officer, Preferred Networks, Inc

Joseph Sirosh, Corporate Vice President of the Cloud AI Platform, spoke at the recent Microsoft Ignite conference about delivering Cloud AI for every developer with a comprehensive family of infrastructure for AI in Azure, services for AI, and tools to make AI development easier. Batch AI is part of this infrastructure, enabling easily distributed computing on Azure for parallel training, testing, and scoring. Scale-out to as many GPUs as you want.

There’s a great demo in Joseph’s Ignite talk (25 minutes in) that shows an end-to-end experience of data wrangling, training at scale, and using a trained AI model in Excel. The model was developed initially using a Data Science Virtual Machine in Azure, then scaled out to speed up experimentation, hyper-parameter tuning, and training. Using Batch AI, our data scientists were able to scale from 1 to 148 GPUs for the model, reducing training time per epoch from 30 minutes to 30 seconds. This made a huge difference in productivity when you need to run thousands of epochs. Our data scientists were able to experiment with the network design and hyper-parameter values and see results quickly. A version of the code behind this demo will be available as a tutorial to use with Batch AI and Azure Machine Learning services and Workbench.

What is Batch AI

Batch AI provides an API and services specialized for AI workflows. The key concepts are clusters and jobs.

A cluster describes the compute resources you want to use. Batch AI enables:

Provisioning clusters of GPUs or CPUs on demand

Installing software in a container or with a script

Automatic or manual scaling to manage costs

Access to low priority virtual machines for learning and experimentation

Mounting shared storage volumes for training and output data


A job is the code you want to run — a command line with parameters. Batch AI supports:

Using any deep learning framework or machine learning tools

Direct configuration of options for popular frameworks

Priority-based job queue for sharing a GPU quota or reserved instances

Restarting jobs if a virtual machine becomes unavailable

SDK, command line, portal and tools integration

Building systems of intelligence

Dr. Yogendra Narayan Pandey, Data Scientist at Halliburton Landmark, used Azure Batch AI and Azure Data Lake to develop predictive deep learning algorithms for static reservoir modeling to reduce the time and risk in oil field exploration compared to traditional simulation. He shared his work at the Landmark Innovation Forum & Expo 2017.

“With the huge amounts of storage and compute power of the Azure cloud, we are entering the age of predictive model-based discovery. Batch AI makes it straightforward for data scientists to use the tools they already know. Without Azure Batch AI and GPUs, it would have taken hours if not days for each model training job to complete.”

Batch AI includes recipes for popular AI frameworks that help you get started quickly without needing to learn the details of working with Azure virtual machines, storage, and networking. The recipes include cluster and job templates to use with the Azure CLI interface, as well as Jupyter Notebooks that demonstrate using the Python API.

End-to-end productivity

The Batch AI team is working to integrate with Microsoft AI tools including the Azure Machine Learning services and Workbench for data wrangling, experiment management, deployment of trained models, and Visual Studio Code Tools for AI.


Partners around the world are also using Batch AI to help their customers scale-up their training to Azure and its powerful fleet of NVIDIA GPUs.

“We have long needed a service like Azure Batch AI. It is an appealing solution for deep learning engineers to speed up deep neural network training & hyper parameter search. I’m looking forward to creating end-to-end solutions by integrating our deep learning service CSLAYER and Azure Batch AI.”  –Ryo Shimizu, President & CEO of UEI Corporation

Getting started

We invite you to try Batch AI for training your models in parallel and at scale in Azure. We have sample recipes for popular AI frameworks to help you get started. We recommend starting with low priority virtual machines to minimize costs.

With Batch AI, you only pay for the compute and storage used for your training. There’s no additional charge for the cluster management and job scheduling. Using low priority virtual machines with Batch AI is the most cost-effective way to learn and develop until you are ready to leverage GPUs.

The team would like to hear any feedback or suggestions you have. We’re listening on Azure Feedback, Stack Overflow, MSDN, and by email.

Source: Azure

Monitor your Azure IoT solutions with Azure Monitor and Azure Resource Health

Azure IoT Hub is now fully integrated with Azure Monitor and Azure Resource Health to provide you with rich, frequent data about the operations of your Azure IoT Hub, and diagnose problems quickly. We also deliver actionable and relevant guidance to reduce the time you spend in diagnosing and troubleshooting issues with your Azure IoT Hub. Performance issues in your IoT solutions can impact your business. It is important to monitor the health of your resources to ensure that your IoT solutions stay up and running in a healthy state. We can help you achieve that goal with Azure Monitor and Azure Resource Health.

In case of an Azure event that can impact your resources, for example a planned maintenance or a platform issue that impacts your IoT Hub, Azure Resource Health helps you diagnose issues and get support through a personalized dashboard with current and past health status. It also provides details on the event, describes the recovery process, and enables you to contact support even if you don't have an active Microsoft support agreement. Azure Resource Health is now available in your Azure IoT Hub.

Azure Monitor provides a single source of monitoring and a common logging platform for all your Azure services with the ability to send logs to OMS Log Analytics, Event Hubs, or Azure Storage for custom processing. With Azure Monitor, you can get a holistic view of your Azure IoT hubs and devices connected to it through metrics and Diagnostic Settings, and get real-time visibility into issues occurring in any resource in Azure, all in one place.

On October 10, 2018, we will deprecate the operations monitoring functionality in Azure IoT Hub, since Azure Diagnostic Settings makes the Azure IoT Hub's operations monitoring feature obsolete.

Note: This deprecation only impacts Azure IoT Hubs created in the public Azure cloud, not in Azure in China, Azure Germany, or Azure Government cloud.

Customers using Operations Monitoring on the Azure IoT Hub can now use Azure Diagnostic Settings to monitor the status of operations on their Azure IoT Hub. Once you collect the data through metrics and diagnostic settings, Azure Monitor gives you the capability to use it in various ways. You can stream the collected data to other locations in real time, store it, query it, or visualize it by using Azure Portal, Power BI, or 3rd party applications. You can even use monitoring data to trigger alerts to execute other processes.

We recommend that you switch to using Diagnostics Settings on the Azure IoT Hub, to ensure there is no interruption in log collection for your Azure IoT Hub when the operations monitoring functionality is removed. Follow these migration steps before October 10, 2018 to ensure no interruption. You will also receive email reminders from us regarding these changes.

Important Dates: 

October 10, 2017: The public announcement of the deprecation of Operations Monitoring on Azure IoT Hub.
October 10, 2018: Operations Monitoring will be removed from all Azure IoT Hubs.

Don’t delay in using these new tools to monitor your Azure resources. Find problems before your customers do! Learn more about monitoring your IoT hub.
Source: Azure

Quarterly Microsoft Azure SOC reports: Compliance at warp speed

Responding to customers’ need for speed, Microsoft Azure has published six new Service Organization Control (SOC) reports, just three months after the previously issued reports. Azure is the first and only enterprise cloud provider to support quarterly SOC reports.

A quarterly publishing cadence allows customers to more frequently receive current reports which address their compliance obligations for new services as they become available. In addition, a customer’s need to rely upon CSP-issued bridge letters is reduced dramatically.

Azure provides the deepest and most comprehensive compliance coverage in the industry, and the latest SOC reports have the largest scope of any cloud provider in terms of services covered and regions and locations included. Our SOC reports assess three unique cloud environments: Azure, Azure Government, and Azure Germany.

Microsoft has issued a SOC 1 Type 2 report according to the latest AICPA SSAE 18 standard, as well as a SOC 2 Type 2 report relevant to the security, availability, confidentiality and processing integrity trust principles. In addition, the SOC 2 Type 2 report includes an additional attestation based on the Cloud Security Alliance (CSA) Cloud Control Matrix (CCM). The Azure Germany SOC 2 Type 2 report also includes the Cloud Computing Compliance Controls Catalog (C5) attestation designed for cloud providers to demonstrate sound security practices.

Highlights of the SOC reports:

6 total SOC reports published on August 7 that include:

Azure and Azure Government SOC 1/2/3
Azure Germany SOC 1/2/3

63 customer-facing offerings included
New services added: Azure Container Registry, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Analysis Services, Azure Security Center, and Microsoft Stream.

Learn more about Azure compliance offerings, and download the latest SOC reports at the Microsoft Azure Trust Center.

See https://azure.microsoft.com/en-us/regions/ for more on Azure regions, including those coming soon.
Source: Azure