Accelerate your business revolution with IoT in Action

There’s a revolution underway that is positioning companies to take operational efficiency to new levels and inform the next generation of products and services. This revolution, of course, is the Internet of Things (IoT).

Here at Microsoft, we’re committed to helping our customers harness the power of IoT through our Azure IoT solutions. We’re also committed to helping customers take the first steps through our IoT in Action event series. The next event takes place February 13, 2018, in San Francisco, and I’d encourage you to attend.

But first, I’d like to introduce you to some recent updates to Azure IoT Suite that are making IoT solutions easier and more robust than ever.

Azure IoT powers the business revolution

With our long history of driving business success and digital transformation for our customers, it’s no surprise that we’re also focused on powering the business revolution through our robust Azure IoT suite of products.

So how does Azure IoT benefit businesses?

First off, it’s a quick and scalable solution. Our preconfigured solutions can accelerate your development process, so you can get up and running quickly. You can connect existing devices and add new ones using our device SDKs for platforms including Linux, Windows, and real-time operating systems. Scaling is easy, whether you want to add a few devices or a million.

Azure IoT Suite can easily integrate with your existing systems and applications like Salesforce, SAP, and Oracle. You can also enhance security by setting up individual identities and credentials for each of your connected devices. Plus, Azure comes complete with built-in artificial intelligence and machine learning.

Watch the following interview with Sam George, Director of Azure IoT at Microsoft, to learn how Azure IoT is accelerating digital transformation for businesses.

So, what’s new with Azure IoT?

Microsoft continues to evolve its suite to offer you the world’s best IoT technology. Here are three notable releases that are smoothing the road to IoT.

Microsoft IoT Central

This highly scalable SaaS solution was recently released for public preview. It delivers a low-code way for companies to build production-grade IoT applications in hours, without needing to manage back-end infrastructure or hire specialized talent. Features include device authentication, secure connectivity, extensive device SDKs with multi-language support, and native support for IoT protocols. Learn more about Microsoft IoT Central.

Azure IoT Hub

Use the Azure IoT Hub to connect, monitor, and manage billions of IoT assets. This hub enables you to securely communicate with all your things, set up individual identities and credentials for each of your connected devices, and quickly register devices at scale with our provisioning service. Learn more about Azure IoT Hub Device Provisioning Service.

Azure Stream Analytics on IoT Edge

This on-demand, real-time analytics service is now available for your edge devices. Shifting cloud analytics and custom business logic closer to your devices, where the data is produced, is a great solution for customers who need low latency, resiliency, and efficient use of bandwidth. It also enables organizations to focus on business insights instead of data management. Learn more about Azure Stream Analytics on IoT Edge.

Register for IoT in Action

To learn more about how Azure IoT can help you accelerate your business revolution, attend IoT in Action in San Francisco on February 13.

Get expert insights from IoT industry pioneers like James Whittaker and Sam George. Learn how to unlock the intelligent edge with Azure IoT. Take an in-depth exploration of two Microsoft approaches to building IoT solutions, Azure PaaS and SaaS. Find out how to design and build a cloud-powered AI platform with Microsoft Azure + AI. Plus, connect with partners who can help you take your IoT solution from concept to reality.

Register for this free one-day event today; space is limited.
Source: Azure

Compatibility Level 140 is now the default for Azure SQL Database

Database Compatibility Level 140 is now the default for new databases created in Azure SQL Database across almost all regions. There are already 539,903 databases in Azure SQL Database running in Compatibility Level 140.

Frequently asked questions related to this announcement:

Why move to database Compatibility Level 140?

The biggest change is the enabling of the adaptive query processing feature family, but the release also includes query-processing fixes and batch-mode improvements. For details on what Compatibility Level 140 specifically enables, see the blog post Public Preview of Compatibility Level 140 for Azure SQL Database.

What do you mean by "database Compatibility Level 140 is now the default"?

If you create a new database and don’t explicitly designate COMPATIBILITY_LEVEL, database Compatibility Level 140 will be used.

Does Microsoft automatically update the database compatibility level for existing databases?

No, we do not update database compatibility level for existing databases. This is up to customers to do at their own discretion. With that said, we highly recommend customers plan on moving to the latest compatibility level in order to leverage the latest improvements.

My application isn’t certified for database Compatibility Level 140 yet. For this scenario, what should I do when I create new databases?

For this scenario, we recommend that database configuration scripts explicitly designate the application-supported COMPATIBILITY_LEVEL rather than rely on the default.
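
For illustration, here is a minimal sketch of such a script, assuming the application is certified for level 130; the database name and target level are placeholders:

-- Without an explicit setting, a new database is created at the
-- current default compatibility level (140).
CREATE DATABASE MyAppDb;
GO

-- While connected to MyAppDb, pin the level the application is certified for.
ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 130;
GO

-- Verify the resulting level.
SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyAppDb';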

I created a logical server before 140 was the default database compatibility level. What impact does this have?

The master database of your logical server will reflect the database compatibility level that was the default at the time the logical server was created. New databases created on a logical server whose master database has an older compatibility level will still use database Compatibility Level 140 if not explicitly designated. The master database’s compatibility level cannot be changed without recreating the logical server, but having master at an older database compatibility level will not impact user database behavior.

I would like to change to the latest database compatibility level, any best practices for doing so?

For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store. Note that this article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140 for SQL Server and Azure SQL DB.
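
Condensed to its essentials, that workflow looks roughly like the following sketch (the database name is a placeholder; see the article for full details, such as comparing plans and forcing prior plans when regressions appear):

-- 1. Turn on Query Store while still at the old compatibility level.
ALTER DATABASE MyAppDb SET QUERY_STORE = ON;

-- 2. Run a representative workload so Query Store captures a baseline.

-- 3. Move to the new compatibility level.
ALTER DATABASE MyAppDb SET COMPATIBILITY_LEVEL = 140;

-- 4. Re-run the workload; use the Query Store regressed-queries views
--    to find regressions and force the prior plan where needed.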
Source: Azure

Azure Analysis Services now available in East US, West US 2, and more

Since becoming generally available in April 2017, Azure Analysis Services has quickly become the clear choice for enterprise organizations delivering corporate business intelligence (BI) in the cloud. The success of any modern data-driven organization requires that information be available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions.

Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required, including finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics, before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users, so that all they need to do is connect to the model from any BI tool and immediately explore the data to gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

We are excited to share that Azure Analysis Services is now available in four additional regions: East US, West US 2, USGov-Arizona, and USGov-Texas. This means that Azure Analysis Services is available in the following regions: Australia Southeast, Brazil South, Canada Central, East US, East US 2, Japan East, North Central US, North Europe, South Central US, Southeast Asia, UK South, West Central US, West Europe, West India, West US, and West US 2.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Source: Azure

Announcing the extension of Azure IP Advantage to Azure Stack

Azure IP Advantage now covers workloads deployed to Azure Stack. As customers rely on Azure Stack to enable hybrid cloud scenarios and extend the reach of Azure to their own data centers or hosted environments, they increasingly need to navigate unfamiliar IP risks inherent in the digital world. The Azure IP Advantage benefits, such as the uncapped IP indemnification of Azure services (including the open source software powering these services) and the defensive portfolio of 10,000 patents, are now available to customers innovating in the hybrid cloud with Azure Stack.

Customers use Azure Stack to access cloud services on-premises or in disconnected environments. For example, oil and gas giant Schlumberger uses Azure Stack to enhance its drilling operations. Customers such as Saxo Bank also use Azure Stack in sovereign or regulated contexts where there is no Azure region, while reusing the same application code globally. With Azure Stack, customers can rely on a consistent set of services and APIs to run their applications in a hybrid cloud environment. Azure IP Advantage IP protection benefits now cover customers consistently in the hybrid cloud.

With Azure IP Advantage, Azure Stack services receive uncapped indemnification from Microsoft, including for the open source software powering these services. Eligible customers can also access a defensive portfolio of 10,000 Microsoft patents to defend their SaaS application in Azure Stack. This portfolio has been ranked among the top 3 cloud patent portfolios worldwide. They can also rely on a royalty free springing license to protect them in the unlikely event Microsoft transfers a patent to a non-practicing entity.

As the cloud is often used for mission-critical applications, considerations for choosing a cloud vendor are becoming wide-ranging and complex. When they select Azure and Azure Stack, customers are automatically covered by Azure IP Advantage, the best-in-industry IP protection program, for their hybrid cloud workloads.
Source: Azure

Azure Marketplace new offers in December 2017

We continue to expand the Azure Marketplace ecosystem. In December 2017, seven new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Heimdall Data SQL Optimization Platform: Heimdall Data is an all-in-one SQL platform for the application developer and DBA.

Luminate Security Connector: Luminate Security revolutionizes the way enterprises provide secure access to corporate applications and services hosted in Microsoft Azure.

Renku Language Detection Engine: Renku Language Detection Engine performs language detection on natural language text. When given input text, Renku applies statistical methods to determine probabilities of the text belonging to over 100 languages.

BeeGFS Free – Community Support: A high-performance, distributed, parallel file system from Thinkparq.

Elastic Stack on Kubernetes: Bring-your-own-license enabled. This application, from Visual Studio China, is free to use, with no software fees, and is fully functional without limitations or the need to purchase a license.

Quartus® Pro and Intel® FPGA SDK For OpenCL™: The revolutionary Intel Quartus Prime design software includes everything you need to design for Intel FPGAs, SoCs, and CPLDs from design entry and synthesis to optimization, verification, and simulation.

Wordpress LEMP Max Performance: The maximum-performance, one-click-install solution for WordPress 4 from Jetware. A free and open source content management system (CMS) running on a completely integrated, pre-configured, and optimized LEMP stack with the freshest PHP 7.

Source: Azure

Manage and Auto-scale your IoT solution with a predictable IoT Cloud

As companies continue to fully roll out their IoT projects, management of the various components of the solution becomes a critical part of their operations. The flexibility of Azure IoT Hub to enable customers to start small, paying only for the amount of IoT Hub capacity needed at any point along the device deployment curve, helps drive predictability in the cost of an IoT solution.

However, the potentially irregular rate of device and message growth in an IoT solution adds a unique operational challenge. When the number of messages ingested from devices in a given day exceeds the limit of the chosen IoT Hub capacity, the IoT Hub will begin to reject messages until either the IoT Hub is scaled up or the day rolls over (UTC time). Wouldn’t it be nice to have IoT Hub automatically scale up to a higher capacity when a certain threshold of messages is met, before this limit is reached?

While at this point, IoT Hub does not have this capability built into the service, we have published a sample solution for monitoring and automatically scaling your IoT Hub based on reaching a specific threshold of messages. The sample, published on the Azure-Samples site, leverages the Azure Durable Functions framework and the IoT Hub Management Client to continually monitor the consumption of your IoT Hub message quota and, when needed, programmatically scale up your IoT Hub capacity.

Azure Durable Functions

To orchestrate our IoT Hub scaling solution, we leverage the Singleton Orchestrator pattern of the Azure Durable Functions framework. The key benefit of this pattern is the ability to ensure that exactly one instance of the scaling solution for a given IoT Hub is running at a time, which frees us from worrying about race conditions between multiple concurrently running instances of our scaling function. The pattern consists of three functions that operate our solution:

IotHubScaleInit – this function executes on a regular timer (by default, once per hour). It checks whether an instance of the orchestrator function is running and, if not, starts one.
IotHubScaleOrchestrator – this function implements the orchestrator for the solution. Its role in the pattern is to manage the execution of the worker function.
IotHubScaleWorker – this function performs the actual work: checking whether the IoT Hub needs to be scaled and, if so, scaling it.

We start with a timer-initiated IotHubScaleInit function that runs occasionally (in the sample, once an hour) and checks whether an instance of the orchestrator is already running and, if not, starts one. The relevant code from the IotHubScaleInit function is shown below, with some code removed for brevity.

// A well-known, constant instance ID makes the orchestration a singleton.
const string IotHubScaleOrchestratorInstanceId = "iothubscaleorchestrator_1";

// Check whether the singleton orchestration is already running...
var existingInstance = await starter.GetStatusAsync(IotHubScaleOrchestratorInstanceId);

// ...and start a new instance only if it is not.
if (existingInstance == null)
{
    await starter.StartNewAsync(IotHubScaleOrchestratorName, IotHubScaleOrchestratorInstanceId, input: null);
}

The key to this function is the constant instance ID. By default, when you launch an orchestrator, the system will generate a unique instance ID. In our case, by specifying an ID, we can check and see if that instance is already running with the GetStatusAsync function.

The IotHubScaleOrchestrator function, as the name implies, orchestrates the execution of the solution. It recovers from failures in execution and allows the code to be dehydrated while waiting on the next execution. Most importantly, it lets us make sure we kick off another instance of the scaling function only after the existing one finishes, which is the critical part of ensuring we never have more than one instance executing at a given time. The key parts of this function are:

// Run one pass of the scale-check worker and wait for it to complete.
await context.CallActivityAsync(IotHubScaleWorkerName);

// Sleep until the next scheduled check using a durable timer.
DateTime wakeupTime = context.CurrentUtcDateTime.Add(TimeSpan.FromMinutes(JobFrequencyMinutes));
await context.CreateTimer(wakeupTime, CancellationToken.None);

// End this instance and schedule a fresh one, preserving the singleton.
context.ContinueAsNew(null);

After calling and waiting on the worker function, we create a timer via the Durable Functions framework. The ContinueAsNew method of the context object then tells the framework to end this instance and schedule another one to fire up when the timer expires. The framework takes care of the rest.

The remainder of the solution is the IotHubScaleWorker function, which performs the actual work of checking the status of the IoT Hub usage and, if necessary, scaling it.

IoT Hub Management Client

The IoT Hub Management Client enables you to interact with the control plane of the IoT Hub service, including creating, deleting, and managing the configuration of your IoT Hubs. Within the worker function, the client does all of the heavy lifting of interacting with the IoT Hub service.

For example, the first of the following two lines gets the current configuration details of the IoT Hub, the most important of which for our purposes are the current SKU (S1, S2, or S3) and the current number of units. The second line gets the current operational metrics of the hub, chiefly the TotalMessages metric, which gives the number of messages the IoT Hub has ingested that day.

IotHubDescription desc = client.IotHubResource.Get(ResourceGroupName, IotHubName);
IPage<IotHubQuotaMetricInfo> mi = client.IotHubResource.GetQuotaMetrics(ResourceGroupName, IotHubName);

Once we have that information, we determine, via a couple of helper functions included in the sample, whether we need to scale the IoT Hub by comparing the current message count with a defined threshold for that SKU/unit combination. If we need to scale, we simply update the SKU and units within the IotHubDescription object we obtained above and leverage the CreateOrUpdate management function to update the configuration of our IoT Hub. This scales up the IoT Hub with no interruption to existing devices or clients.

desc.Sku.Name = newSkuName;
desc.Sku.Capacity = newSkuUnits;
client.IotHubResource.CreateOrUpdate(ResourceGroupName, IotHubName, desc);
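
For a sense of what that comparison involves, here is a minimal sketch of a threshold check; the helper name, the 90% threshold, and the hard-coded per-unit quotas are illustrative rather than the sample’s actual code:

using System.Collections.Generic;

// Daily message quota per unit for the Standard tiers (figures are illustrative;
// check the current IoT Hub pricing documentation for authoritative numbers).
static readonly Dictionary<string, long> MessagesPerUnitPerDay = new Dictionary<string, long>
{
    { "S1", 400000 }, { "S2", 6000000 }, { "S3", 300000000 }
};

// Scale up once today's ingestion crosses 90% of the total daily quota.
static bool ShouldScaleUp(string skuName, long units, long totalMessagesToday)
{
    long dailyQuota = MessagesPerUnitPerDay[skuName] * units;
    return totalMessagesToday >= dailyQuota * 0.9;
}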

Scaling Down

With the trajectory of most IoT projects being growth, and for simplicity, we focused this sample on scaling-up IoT Hubs. However, there are certainly valid scenarios where an IoT Hub may need to be automatically scaled down to lower costs when previous message volumes drop. In the sample documentation, we offer some suggestions for modifying the solution for scaling down IoT Hubs when necessary.

Give the sample a try and sleep better tonight knowing you have one fewer operational task on your plate!

A few notes about the sample

The sample only works for the Standard tiers of IoT Hub. The Free tier of IoT Hub can’t be scaled, so it’s not applicable. Also note that you cannot convert directly from the Free tier of IoT Hub to a Standard tier.
The sample provides one straightforward implementation of a scaling algorithm, but with the supplied source code, you can customize it to meet your unique scaling needs.
For the sake of your IoT budget, give due consideration to automatically scaling IoT Hub as you reach the higher service levels, such as S3, as each unit increase adds both significant capacity and significant cost.

Source: Azure

Azure Security Center and Microsoft Web Application Firewall Integration

Web applications are increasingly becoming targets of attacks such as cross-site scripting, SQL injection, and application DDoS. While OWASP provides guidance on writing applications that are more resistant to such attacks, following it requires rigorous maintenance and patching at multiple layers of the application topology. Microsoft Web Application Firewall (WAF) and Azure Security Center (ASC) can help secure web applications against such vulnerabilities.

Microsoft WAF is a feature of Azure Application Gateway (a layer 7 load balancer) that protects web applications against common web exploits using OWASP core rule sets. Azure Security Center scans Azure resources for vulnerabilities and recommends mitigation steps for those issues. One such vulnerability is the presence of web applications that are not protected by WAF. Currently, Azure Security Center recommends a WAF deployment for public-facing IPs that have an associated network security group with open inbound web ports (80 and 443). Azure Security Center offers provisioning of an Application Gateway WAF for an existing Azure resource, as well as adding a new resource to an existing web application firewall. By integrating with WAF, Azure Security Center can analyze its logs and surface important security alerts.

In some cases, the security admin may not have resource permissions to provision WAF from ASC, or the application owner has already configured WAF as part of the app deployment. To accommodate these scenarios, we are pleased to announce that Azure Security Center will now automatically discover non-ASC-provisioned Microsoft WAF instances. Previously provisioned WAF instances will be displayed in the ASC security solutions pane under discovered solutions, where the security admin can integrate them with Azure Security Center. Connecting existing Microsoft WAF deployments allows customers to take advantage of ASC detections regardless of how WAF was provisioned. Additional configuration settings, such as custom firewall rule sets, are available in the WAF console, which is linked directly from Security Center. This article on configuring Microsoft WAF provides more guidance on the provisioning process.

We would love to hear your feedback! If you have suggestions or questions, please leave a comment at the bottom of the post or reach out to ascpartnerssupport@microsoft.com.

Interested in learning more about Azure Security Center?

Intro to Azure Security Center

Azure Security Center FAQ
Source: Azure

Asynchronous refresh with the REST API for Azure Analysis Services

Azure Analysis Services unlocks datasets with potentially billions of rows for non-technical business users to perform interactive analysis. Such large datasets can benefit from features such as asynchronous refresh.

We are pleased to introduce the REST API for Azure Analysis Services. Using any programming language that supports REST calls, you can now perform asynchronous data-refresh operations. This includes synchronization of read-only replicas for query scale out. Please see the blog post Introducing query replica scale-out for Azure Analysis Services for more information on query scale out.

Data-refresh operations can take some time depending on various factors, including data volume and level of optimization using partitions. These operations have traditionally been invoked with existing methods such as using TOM (Tabular Object Model), PowerShell cmdlets for Analysis Services, or TMSL (Tabular Model Scripting Language). The traditional methods may require long-running HTTP connections. A lot of work has been done to ensure the stability of these methods, but given the nature of HTTP, it may be more reliable to avoid long-running HTTP connections from client applications.

The REST API for Azure Analysis Services enables data-refresh operations to be carried out asynchronously. It therefore does not require long-running HTTP connections from client applications. Additionally, there are other built-in features for reliability such as auto retries and batched commits.

Please visit our documentation page for details on how to use the REST API for Azure Analysis Services. It covers how to perform asynchronous refreshes, check their status, and cancel them if necessary. Similar information is provided for query-replica synchronization. Additionally, the C# code sample RestApiSample is provided on GitHub.
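
To give a flavor of the API, here is a minimal sketch that kicks off an asynchronous full refresh with HttpClient; the region, server, and model names are placeholders, and a valid Azure AD access token for the server is assumed:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

static async Task RequestRefreshAsync(string accessToken)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // POST to the refreshes collection starts the operation and returns quickly,
        // so no long-running HTTP connection needs to be held open.
        var body = new StringContent(
            "{ \"Type\": \"Full\", \"CommitMode\": \"transactional\", \"MaxParallelism\": 2 }",
            Encoding.UTF8, "application/json");
        var response = await client.PostAsync(
            "https://westus2.asazure.windows.net/servers/myserver/models/MyModel/refreshes",
            body);

        // The Location header carries the operation's URI: GET it to poll status,
        // or DELETE it to cancel the refresh.
        Console.WriteLine(response.Headers.Location);
    }
}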
Source: Azure

Whitepaper: Selecting the right secure hardware for your IoT deployment

How do you go about answering perplexing questions such as: Which secure hardware should I use? How do I gauge the level of security it provides? How much security do I really need, and hence what premium should I place on secure hardware? We’ve published a new whitepaper to shed light on this subject.

In our relentless commitment to securing IoT deployments worldwide, we continue to raise awareness of the true nature of security: it is a journey, never an endpoint. Challenges emerge, vulnerabilities evolve, and solutions age, triggering the need for renewal if you are to maintain a desired level of security.

Securing your deployment comprises three main phases: planning, architecture, and execution. For IoT, these are further broken down into sub-phases that include design assessment, risk assessment, model assessment, development, and deployment, as shown in Figure 1. The decision process at each phase is equally important, and it must take all other phases into consideration for optimal efficacy. This is especially true when choosing the right secure hardware, also known as secure silicon or a hardware security module (HSM), to secure an IoT deployment.

Figure 1: The IoT Security Lifecycle

Choosing the right secure hardware for an IoT deployment requires that you understand what you are protecting against (risk assessment), which drives part of the requirements for the choice. The other part entails logistical considerations like provisioning, deployment, and retirement, as well as tactical considerations like maintainability. These requirements in turn drive architecture and development strategies, which then allow you to make the optimal choice of secure hardware. While this prescription is no absolute guarantee of security, following these guidelines lets you comfortably claim due diligence in a holistic consideration of the right secure hardware, and hence gives you the greatest chance of achieving your security goals.

The choice itself requires knowledge of the available secure hardware options as well as corresponding attributes such as protocol and standards compliance. We’ve developed a whitepaper, The Right Secure Hardware for Your IoT Deployment, to walk through the secure hardware decision process. This whitepaper covers the architecture decision phase of the IoT security lifecycle. It is the second whitepaper in the IoT security lifecycle decision-making series, following the previously published Evaluating Your IoT Security, which covers the planning phase.
 
Download IoT Security Lifecycle whitepaper series:

Evaluating Your IoT Security.
The Right Secure Hardware for Your IoT Deployment.

What strategies do you use in selecting the right hardware to secure your IoT devices and deployment? We invite you to share your thoughts in comments below.
Source: Azure

Using Qubole Data Service on Azure to analyze retail customer feedback

It has been a busy season for many retailers. During this time, retailers are using Azure to analyze various types of data to help accelerate purchasing decisions. The Azure cloud not only gives retailers the compute capacity to handle peak times, but also the data analytic tools to better understand their customers.

Many retailers have a treasure trove of information in the thousands, or millions, of product reviews provided by their customers. Often it takes time for particular reviews to show their value, because customers “vote” reviews helpful or unhelpful over time. Using machine learning, retailers can automate the identification of useful reviews in near real-time and leverage that insight quickly to build additional business value.

But how might a retailer without deep big data and machine learning expertise even begin to conduct this type of advanced analytics on such a large quantity of unstructured data? We will be holding a workshop in January to show you how easy that can be through the use of Azure and Qubole’s big data service.

Using these technologies, anyone can quickly spin up a data platform and train a machine learning model utilizing Natural Language Processing (NLP) to identify the most useful reviews. Moving forward, a retailer can then identify the value of reviews as they are generated by the user base and gain insights that can impact many aspects of their business.

Join Microsoft, Qubole, and Precocity for a half-day, hands-on lab experience where we will show how to:

Leverage Azure cloud-based services and Qubole Data Service to increase the velocity of managing advanced analytics for retail
Ingest a large retail review data set from Azure and leverage Qubole notebooks to explore the data in a retail context
Demonstrate the autoscaling capability of a Qubole Spark cluster during a Natural Language Processing (NLP) pipeline
Train a machine learning model at scale using open source technologies like Apache Spark and score new customer reviews in real time
Demonstrate the use of Azure Event Hubs and Cosmos DB coupled with Spark Streaming to predict the helpfulness of customer reviews in real time

This workshop can be the basis of creating business value from reviews for other purposes, including:

Detecting fake-review fraud
Identifying positive product characteristics
Identifying influencers
Uncovering new feature attributes for a product to inform merchandising

Register today for our event in Dallas, Texas on January 30, 2018.

Space is limited, so register early!
Source: Azure