Azure IoT Hub message routing dramatically simplifies IoT solution development

IoT solutions can be complex, and we’re always working on ways to simplify them.

As we work with customers on real-world, enterprise-grade IoT solutions built on Azure IoT, one pattern we’ve noticed is how businesses route inbound messages to different data processing systems.

Imagine millions of devices sending billions of messages to Azure IoT Hub. Some of those messages need to be processed immediately, like an alarm indicating a serious problem. Some messages are analyzed for anomalies. Some messages are sent to long-term storage. In these cases, customers have to build routing logic to decide where to send each message:

While the routing logic is conceptually straightforward, it becomes genuinely complex once you consider all of the details you have to handle when you build a dispatching system: handling transient faults, dealing with lost messages, ensuring high reliability, and scaling out the routing logic.

To make all this easier, we have made a new IoT Hub feature generally available: message routing. It allows customers to set up automatic routing to different systems via Azure messaging services, with the routing logic running in IoT Hub itself, while we take care of all of the difficult implementation architecture for you:

You can configure your IoT hub to route messages to your backend processing services via Service Bus queues, topics, and Event Hubs as custom endpoints for routing rules. Queuing and streaming services like Service Bus queues and Event Hubs are used in many, if not all, messaging applications. You can easily set up message routing in the Azure portal. Both endpoints and routes can be accessed from the left-hand info pane of your IoT Hub:

You can add routing endpoints from the Endpoints blade:

You can configure routes on your IoT Hub by specifying the data stream (device telemetry), the condition to match, and the endpoint to which matching messages are written.

Message routing conditions use the same query language as device twin queries and device jobs. IoT Hub evaluates the condition on the properties of the messages being sent to IoT Hub and uses the result to determine where to route messages. If messages don’t match any of your routes, the messages are written to the built-in messages/events endpoint just like they are today.
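For example, a route intended to receive only high-severity alarm messages could use a condition like the one below. This is just an illustration; it assumes the device adds level and alertType as application properties on the message:

level = "critical" AND alertType = "temperature"

Messages whose properties satisfy the condition are delivered to that route's endpoint, while everything else continues to flow to the built-in endpoint as described above.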

We have also enhanced our metrics and operations monitoring logs to make it easy for you to tell when an endpoint is misbehaving or a route is incorrectly configured. You can learn about the full set of metrics that IoT Hub provides in our documentation, including how each metric is used.

Azure IoT is committed to offering our customers high availability message ingestion that is secure and easy to use. Message routing takes telemetry processing to the next step by offering customers a code-free way to dispatch messages based on message properties. Learn more about today's enhancements to Azure IoT Hub messaging by reading the developer guide. We firmly believe in customer feedback, so please continue to submit your suggestions through the Azure IoT User Voice forum or join the Azure IoT Advisors Yammer group.
Source: Azure

Scaling Up "Project Springfield" using Azure

This guest post is from the Project Springfield team in Microsoft’s Artificial Intelligence and Research group. Project Springfield delivers pioneering artificial intelligence for finding security issues as a cloud service. Learn how the team used Azure to meet and exceed scaling challenges on a tight timeline.

The Project Springfield engineering team, led by William Blum, had built the first release of “Project Springfield,” which helped customers find “million-dollar” security bugs by combining pioneering “whitebox fuzzing” technology from Microsoft Research with the elasticity of the cloud. Customers could upload their software to Project Springfield, which created a fuzzing lab in Azure. Each fuzzing lab tested with a portfolio of methods, looked for crashes from the test cases, and then picked the highest value issues to report. The power of Azure enabled this compute intensive process to scale up and scale down as customers’ demands changed, while simultaneously collecting data from every run to improve the service. To do this, Project Springfield had to dynamically create large numbers of virtual machine and network resources and manage them on behalf of the customer.

We had built the initial product on Azure, using the classic Azure management interface to dynamically provision virtual machines and networking resources. Now it was time to prepare for a new wave of customers – which meant scaling up the service by orders of magnitude. Scaling with the classic Azure model would be a challenge. For example, each fuzzing lab used up a different cloud service on Project Springfield, yet there was a limit of just 200 cloud services per subscription. That meant that if customers, in aggregate, ever needed to test more than 200 pieces of software at a time, Project Springfield would need to partition fuzzing labs across subscriptions, even if each subscription otherwise had enough virtual machines available to serve customers. There had to be a better way.

We found that better way by re-architecting the service with the newer Azure management interface, Azure Resource Manager, as well as Service Fabric, Microsoft's micro-service-based application platform. With Azure Resource Manager, virtual machines, virtual networks, and load balancers are all treated as different resources. These resources can be combined in an Azure Resource Manager template, a JSON object defining which resources we need and how they fit together. All the resources Project Springfield needs for a security testing lab are specified by a single template. When a customer needs a new lab, Azure reads the template and dynamically creates all the resources it describes. With Service Fabric, we could easily port our backend worker roles to micro-services and dynamically scale backend resources up and down based on customer needs. The payoff was that instead of being locked into a single inflexible bundle, we could dynamically reshape the way resources were deployed.
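To give a feel for the template concept, here is a minimal sketch of an Azure Resource Manager template that describes a single virtual network. It is not the actual Project Springfield template; the parameter, resource names, and API version are illustrative only:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "labName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2015-06-15",
      "name": "[concat(parameters('labName'), '-vnet')]",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
        "subnets": [
          { "name": "default", "properties": { "addressPrefix": "10.0.0.0/24" } }
        ]
      }
    }
  ]
}

A real lab template combines many such resources (virtual machines, load balancers, network security groups), and Azure works out the dependencies between them at deployment time.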

Re-architecting the service around the new deployment concepts introduced by ARM required some work. The work paid off as we found that Azure’s infrastructure-as-a-service capabilities gave us better control and finer granularity over the configuration of our network and compute resources. Once adjusted to a new way of thinking, we could see how to make Project Springfield even more efficient and deliver value. For example, we realized that by using Azure Network Security Groups, we could enable each customer to set different IP address restrictions on who could access their Project Springfield resources – a key feature for enterprise users.

Even better, betting on Azure made us “future proof.” As Azure launched new features, such as support for Red Hat Linux, Windows Server Containers and more, we could see how they would let Project Springfield meet customer needs. With Azure Resource Manager, these are now just different kinds of resources in a resource manager template. That gave the team a single consistent way for managing fuzzing labs and laid the foundation for eventually offering different types of fuzzing labs for different customer needs.

By using the new capabilities of Azure, such as Azure Resource Manager, we achieved our scale goals in four months. That meant we could bring on customers and partners for trials of Project Springfield as fast as we could call them, without worrying that we would run out of capacity. What’s more, building on Azure set the team up for success as new capabilities came online. At Microsoft Ignite 2016, OSIsoft & Deschutes Brewery, EY, and Leviathan Security Group stood on stage and told the world about the value they saw in Project Springfield. Within a week over a thousand people signed up for trials! That’s a win by any standard.

Related content

Security testing in the cloud with F# and Project Springfield

Azure Resource Manager overview

Project Springfield: a cloud service built entirely in F#

More about Project Springfield

The Project Springfield team
Source: Azure

StorSimple as a Backup Target

We are pleased to announce that backup-to-disk workloads will be supported on the StorSimple 8000 Series devices running Update 3 or later. We are also making deployment guides available to help configure backup applications that use StorSimple as a disk-based backup target.

StorSimple combines the performance and compatibility of local storage with the scale, resiliency, and availability of cloud object storage, providing the best of both worlds.

What's new in StorSimple Update 3?

With the release of Update 3 for the StorSimple 8000 series devices, we have introduced a new backup mode that tunes the device and enables it to be used as a backup-to-disk target. We have also included improvements to cloud throughput and enhancements around dealing with high-frequency data churn. These additions enable the device to be configured as both a primary and a secondary backup target.

Why use StorSimple as a backup target?

Typically, >95% of restores are served from the last 7 days of backups. StorSimple’s cloud-centric architecture ensures that the most recently written backup will remain on the device’s local storage capacity, while the older, archival-class backups tier out to cloud storage.

This approach facilitates:

 

Faster backups using local storage
Faster restore from local storage
Azure as offsite storage
On-demand elastic expansion, cloud scale, and cloud economics for backup capacity
Built-in Disaster Recovery in Azure IaaS
Removal of offsite media management
No media migration between backup media types or formats
Enterprise-grade device with dual storage controllers, both 1 GbE and 10 GbE connectivity, and seamless integration with Microsoft Azure
Reuse data in Azure using StorSimple device in Azure

Once the backup data is in the cloud, a StorSimple device running in Azure and the StorSimple Data Manager service can be used to transform the data into workable datasets for a myriad of use cases.

Which backup software applications are supported today?

StorSimple is certified with Veritas Backup Exec, Veritas NetBackup, and Veeam.

Please refer to the configuration guides below for vendor-specific configurations.

Configure StorSimple with Veritas Backup Exec
Configure StorSimple with Veritas NetBackup
Configure StorSimple with Veeam

There are ongoing efforts to certify additional software vendors – watch this space for the latest news!

Typical deployment topology

The StorSimple 8000 series volumes are deployed as disk-based backup targets for backup applications. An example of such a deployment is as follows:

Source: Azure

Creating your first data model in Azure Analysis Services

Azure Analysis Services is a new preview service in Microsoft Azure where you can host semantic data models. Users in your organization can then connect to your data models using tools like Excel, Power BI and many others to create reports and perform ad-hoc data analysis.

To understand the value of Azure Analysis Services, imagine a scenario where you have data stored in a large database. You want to make that data available to your business users or customers so they can do their own analysis and build their own reports. One option would be to give those users access to the database directly. Of course, this option has several drawbacks. The design of that database, including the names of tables and columns, may not be easy for a user to understand. They would need to know which tables to query, how those tables should be joined, and the other business logic that needs to be applied to get correct results. They would also need to know a query language like SQL just to get started. Most often, this leads to multiple users reporting the same metrics but with different results.

With Azure Analysis Services, you can encapsulate all of this information in a semantic model that those users can query far more easily, through a simple drag-and-drop experience, and you can ensure that all users see a single version of the truth. The metadata included in the semantic model covers relationships between tables, friendly table and column names, descriptions, display folders, calculations, and row-level security.

Once your data is properly modeled for your users to consume, Azure Analysis Services offers additional features to enhance their querying experience. The biggest of these is the option to put the data into an in-memory columnar cache, which can accelerate queries to sub-second performance. This not only improves the query experience, but also reduces the query load on your underlying database, because queries are served from the cache.

Ready to give it a try? Follow the steps in the rest of this blog post and you’ll see how easy it is.

Before getting started, you’ll need:

Azure Subscription – Sign up for a free trial.

SQL Server Data Tools – Download the latest version for free.

Power BI Desktop – Download the latest version for free.

Create an Analysis Services server in Azure

1. Go to http://portal.azure.com.

2. In the Menu blade, click New.

3. Expand Intelligence + Analytics, and then click Analysis Services.

4. In the Analysis Services blade, enter the following and then click Create:

Server name: Type a unique name.
Subscription: Select your subscription.
Resource group: Select Create new, and then type a name for your new resource group.
Location: This is the Azure datacenter location that hosts the server. Choose a location nearest you.
Pricing tier: For our simple model, select D1. This is the smallest tier and great for getting started. The larger tiers are differentiated by how much cache and query processing units they have. Cache indicates how much data can be loaded into the cache after it has been compressed. Query processing units, or QPUs, are a sign of how many queries can be supported concurrently. Higher QPUs may mean better performance and allow for a higher concurrency of users.

Now that you’ve created a server, you can build your first model. In the next steps, you’ll use SQL Server Data Tools (SSDT) to create a data model and deploy it to your new server in Azure.

Create a sample data source

Before you can create a data model with SSDT, you’ll need a data source to connect to. Azure Analysis Services supports connecting to many different types of data sources both on-premises and in the cloud. For this post, we’ll use the Adventure Works sample database.

1. In Azure portal, in the Menu blade, click New.

2. Expand Databases, and then click SQL Database.

3. In the SQL Database blade, enter the following and then click Create:

Database name: Type a unique name.
Subscription: Select your subscription.
Resource group: Select the same resource group you created for your Analysis Services server.
Select source: Select Sample (Adventure Works LT).
Server: Create a new server, choosing a location nearest you.
Pricing tier: For your sample database, select B.
Collation: Leave the default, SQL_Latin1_General_CP1_CI_AS.
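If you prefer scripting to the portal, roughly equivalent resources can be created with the Azure CLI. This is only a sketch: the resource names and credentials are placeholders, and the exact parameter names (in particular --sample-name) may vary with your CLI version:

# Create a logical SQL server to host the sample database
az sql server create --resource-group myResourceGroup --name my-sql-server --location westus --admin-user myadmin --admin-password "ChangeMe123!"

# Create the Adventure Works LT sample database on the Basic tier
az sql db create --resource-group myResourceGroup --server my-sql-server --name AdventureWorksLT --sample-name AdventureWorksLT --service-objective Basic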

Now that you've created a sample data source, you'll have some data to connect to when you build your data model. In the next steps, you'll use SQL Server Data Tools (SSDT) to connect to your new data source, create a data model, and deploy it to your new server in Azure.

Create a data model

To create Analysis Services data models, you’ll use Visual Studio and an extension called SQL Server Data Tools (SSDT).

1. In SSDT, create a new Analysis Services Tabular Project.

If asked to select a workspace type, select Integrated.

2. Click the Import From Data Source icon on the toolbar at the top of the screen.

3. Select Microsoft SQL Azure as your data source type and click Next.

4. Fill in the connection information for the sample SQL Azure database created earlier and click Next.

Server Name: Name of SQL Azure server to connect to.
User Name: Name of the user which will be used to login to the server.
Password: Password for the account.
Database Name: Name of the SQL database to connect to.

Note: If using SQL Azure, ensure that you have allowed your IP address access through the firewall. Also, ensure that "Allow access to Azure Services" is set to "on" for the firewall.
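If you are scripting the setup, the firewall configuration mentioned in the note can be handled from the Azure CLI as well. This is a sketch with placeholder names; the special rule spanning 0.0.0.0 to 0.0.0.0 is how "Allow access to Azure Services" is represented:

# Allow your own client IP address (replace with your actual address)
az sql server firewall-rule create --resource-group myResourceGroup --server my-sql-server --name AllowMyClientIP --start-ip-address 203.0.113.5 --end-ip-address 203.0.113.5

# Allow Azure services (such as Azure Analysis Services) to reach the server
az sql server firewall-rule create --resource-group myResourceGroup --server my-sql-server --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0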

5. Select Service Account for the impersonation mode and click Next.

6. Select the tables you wish to import into cache and click Finish:

At this step, you can optionally provide a friendly name for each table. For large tables, which may not fit into the cache, you can also specify a filter expression to reduce the number of rows. When complete, click Next.
Data will now be read from the database and pulled into a local cache within Visual Studio.
Once loading is complete, you will have your first model created and will be able to see each table and the data within them. You can also switch to a diagram view by clicking the little diagram option at the bottom right of the screen:

The diagram view makes it really easy to see all of the tables and the relationships between them.

Improving the model

Now that your basic model is built, you could start querying it right away, or you could enhance it further by using more of the available modeling features. Some of these features include:

Create or edit relationships. You can add, remove or change relationships between tables by going to the diagram view and dragging a line between two columns in different tables. Once tables are joined together, they can automatically be queried together when a user selects columns from both tables.
Edit properties for a table or column. You can update multiple properties for tables and columns by clicking on them and updating the values in the properties pane.

Add more business logic to the model by creating calculations and measures.
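For example, assuming you imported the SalesLT.SalesOrderDetail table from the Adventure Works sample and kept its default name, a simple measure that sums up order line totals could be defined in the measure grid like this:

Total Sales := SUM(SalesOrderDetail[LineTotal])

Because measures are evaluated in the context of whatever fields a user drags onto a report, this single definition answers questions like sales by product or sales by month without any further work.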

Deploy

Once your model is complete, you can deploy it to the Azure AS server you created in the first step. This can be done with the following steps:

1. Copy your Azure Analysis Services server name from the Azure portal. It can be found at the top of the Overview section of your server.

2. In the Solution Explorer in Visual Studio, right-click the project and click Properties.

3. Change the deployment server to the name of your Azure AS server and click OK.

4. Right-click the project name again, but this time click Deploy.

Connect

Now that your model has been created, you can connect to it through tools like Power BI Desktop or Excel.

Power BI Desktop

If you don’t already have the Power BI Desktop, you can download it for free.

1. Open the Power BI Desktop

2. Click Get Data.

3. Select Databases/SQL Server Analysis Services and then click Connect.

4. Enter your Azure AS server name and click OK.

5. On the Navigator screen, select your model and click OK.

You will now see your model displayed in the field list on the side. You can drag and drop the different fields on to your page to build out interactive visuals.

Excel

Learn more about connecting through Excel.

Learn more about Azure Analysis Services.
Source: Azure

Azure Media Indexer 2: Japanese support, punctuation improvements, no more time limit

On the heels of Microsoft's groundbreaking new developments in speech recognition, we are continuing along our path: improving the quality of the transcripts generated by Azure Media Indexer and expanding our locale support, with the eventual goal of being able to recognize all human speech on the Azure cloud.

Today we are ready to release the following improvements to Azure Media Indexer 2 Preview:

Japanese language models for public (preview) consumption in Azure Media Indexer 2
Removal of the 10-minute processing limit
Additional quality improvements with respect to punctuation and grammar

The Japanese language model works in an identical manner to all other language models; simply provide the proper language code in the configuration file.

The following configuration will allow you to process a file with Japanese speech content (with defaults for all other options):

{
  "Version": "1.0",
  "Features": [{
    "Options": {
      "Language": "JaJp"
    }
  }]
}

 

Still not sure what Azure Media Indexer 2 is?  Read the introductory blog post to learn how to extract the speech content from your media files.

To learn more about Azure Media Analytics, check out the introductory blog post.

Have feedback?  Share it on our feedback forum.
Source: Azure

Query Store ON is the new default for Azure SQL Database

We are happy to announce that Query Store is now turned ON in all Azure SQL databases (including elastic pools), which will bring benefits both to end users and to the entire Azure SQL Database platform.

Why is this important?

Query Store acts as a “flight data recorder” for the database, continuously collecting critical information about the queries. It dramatically reduces resolution time in case of performance incidents, as pre-collected, relevant data is available when you need it, without delays.

You can use Query Store in scenarios when tracking performance and ensuring database performance predictability is critical. The following are some examples where Query Store is going to significantly improve your productivity:

Identifying and fixing application performance regressions (more details in this blog article)
Tuning the most expensive queries considering different consumption metrics (elapsed time, CPU time, used memory, read and write operations, log I/O, etc.); an example query is shown after this list
Keeping performance stability with compatibility level 130 in Azure SQL Database (more details in this blog article)
Assessing impact of any application or configuration change (A/B testing)
Identifying and improving ad-hoc workloads (more details here)
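As an illustration of the query-tuning scenario above, the following T-SQL (a sketch you can adapt) uses the Query Store catalog views to list the queries that consumed the most CPU:

SELECT TOP 10
    q.query_id,
    t.query_sql_text,
    SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
FROM sys.query_store_query_text AS t
JOIN sys.query_store_query AS q ON t.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id, t.query_sql_text
ORDER BY total_cpu_time DESC;

Swapping avg_cpu_time for avg_duration or avg_logical_io_reads produces the same report for the other consumption metrics listed above.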

Query Store also provides the foundation for performance monitoring and tuning features, such as the SQL Database Advisor. Query Store powers SQL Database Performance Insights, which allows you to monitor and troubleshoot database performance directly from the Azure portal. With Query Store turned ON, we ensure that relevant information about your most critical queries is available the first time you open the queries chart in SQL Database Performance Insights:

We strongly recommend keeping Query Store ON. Thanks to an optimal default configuration and automatic retention policy, Query Store operates continuously using an insignificant part of the database space with a negligible performance overhead, typically in the range of 1-2%.

The default configuration is automatically applied by Azure SQL Database. If you want to switch to a customized Query Store configuration, use ALTER DATABASE with Query Store options. Also check out Best Practices with the Query Store to learn how to choose optimal parameter values.
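For instance, a customized configuration that caps the space used by Query Store and keeps 30 days of history could look like the following; the values are illustrative, and the defaults applied by Azure SQL Database are already a good fit for most databases:

ALTER DATABASE CURRENT
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     MAX_STORAGE_SIZE_MB = 1024,
     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30));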

Next steps

For more detailed information, check out the online documentation:

Query Store: A flight data recorder for your database
Query Store Usage Scenarios
Monitoring Performance by Using the Query Store

Source: Azure

Project Bletchley – New Blockchain partners come to Azure Marketplace

As promised, we continue to release early and release often in the blockchain space together with our partners.  I am pleased to be back to announce more great additions to our blockchain offering on Azure.

In just the last couple of weeks, we have added two new partners and solutions to the growing blockchain ecosystem in the Azure Marketplace.

Parity – Ethcore recently published their high-performance, low-footprint, reliable Ethereum blockchain client, Parity, on Azure. This offering simplifies setting up a new Parity node in the cloud, with little configuration required from the user. In a matter of minutes, a user can have a single-node, private Ethereum network up and running for development and test.

Blockstack Core v14 – Blockstack is building a new decentralized web of serverless applications where users can control their own data. Applications run locally and use user-specific data stores as their backend to maintain decentralization and control. Users can seamlessly deploy Blockstack Core nodes on Microsoft Azure. Blockstack Core nodes provide the core functionality of the Blockstack stack, processing data from a standard blockchain layer to construct a global view of security and ownership mappings.

In addition to growing our blockchain partner ecosystem, over the last few weeks we have focused on functionality to improve deployment resiliency for our own consortium network blockchain solution, releasing several updates that should improve overall deployment success rates. More new features are already in the works, so stay tuned for updates!
Source: Azure

Sneak peek: A new Azure Cloud Console

For a while now, I have been passionate about containers and how they are revolutionizing and truly delivering the promise of cloud native computing. However, as excited as I am about revolutionizing container compute with Azure, I'm equally passionate about user interface. After all, all of that compute is useless if it can't be accessed from a useful interface. So, today, I'm excited to show you how we're bringing these passions together in the new cloud console for the Azure portal.

Traditional cloud user interfaces have been divided into either a web-based graphical interface or a command line terminal interface. Each of these interfaces provides its own utility, and different users prefer different interfaces for different tasks. However, most Azure users use both interfaces to manage their applications on Azure. Much like developing code before integrated development environments like Visual Studio or Visual Studio Code, switching between these interfaces requires switching between applications, a context switch that slows users down and makes it harder to accomplish their goals. In some cases (for example, tablets and other mobile devices), a terminal interface may not even be available and a user may have to switch devices.

To address these needs, we built an integrated workflow enabling users to build their applications on Azure using graphical and command line tools, even on devices where command line tools aren't installed. Today, we're giving you a sneak peek of this new cloud shell experience that we are adding to the Azure portal. As you can see from the video below, the shell is integrated into the portal so users can quickly drop into a command line experience while simultaneously viewing their cloud resources in the graphical web interface.

Using Azure Cloud Console to deploy a VM

 

Using Azure Cloud Console with GIT

 

The key features of this experience are:

Automatic authentication to the command line tools from your existing web login
All Azure command line tools, as well as relevant command line utilities pre-installed
Personalized, persistent workspace that preserves your code, configuration and activity across cloud shell sessions.

With a single click, you are dropped into a terminal with command line tools pre-configured with your existing Azure credentials. This terminal is a fully featured experience that includes not only the Azure command line tools, but also the standard editors and utilities you would expect. Further, the cloud shell preserves context for you. When you save files to disk, they are persisted in Azure's cloud so you can resume where you left off in your next cloud shell session, even if you are on a different device or network.

So how does the cloud console relate to containers? Well, the shell itself is packaged as a container to provide a clean, consistent interface every time you launch a new session. Of course, this is based on the container we've already built for the Azure CLI 2.0. You can try it today on your own machine with:

$ docker run -it microsoft/azure-cli

Going forward, we're looking for a few hardy souls who are willing to test and provide feedback on this new console experience as we bring it to general availability over the next few months. If you are interested, please sign up and we'll be in touch!

I'm super excited about how containers are revolutionizing compute on Azure, and especially excited about how we ourselves can use container technology to offer new, integrated interfaces for developing your applications on the Azure cloud.
Source: Azure

New price-performance choices for Azure SQL Database elastic pools

Azure SQL Database elastic pools provide a simple, cost-effective solution for managing the performance of multiple databases with unpredictable usage patterns. New price-performance choices for elastic pools provide even more cost effectiveness and greater scale than before.

More cost effectiveness
Now available are smaller elastic pool sizes and pools with higher database limits. These new choices lower the starting price for pools, lower the effective cost per database, and reduce price jumps between pool sizes.

Greater scale
Also, now available are larger sizes for Basic, Standard, and Premium pools, and higher eDTU limits per database for Premium pools. These new choices provide more storage and eDTU headroom for greater scale and the most demanding workloads.

Highlights

More pool eDTU sizes

New sizes range from 50 eDTUs for Basic and Standard pools up to 4000 eDTUs for Premium pools with additional sizing choices in between.

More storage for Standard pools

Up to 2.9 TB for 3000 eDTU Standard pools.

Higher database limits per pool

Up to 500 databases for Basic and Standard pools of at least 200 eDTUs.
Up to 100 databases for Premium pools of at least 250 eDTUs.

Higher eDTU limits per database for Premium pools

Max eDTUs per database increase to 1750 eDTUs (P11 level) and 4000 eDTUs (P15 level) for the largest Premium pools.
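If you manage pools with PowerShell, creating one of the new smaller Standard pools could look like the following sketch. It assumes the AzureRM.Sql module, and the resource group, server, and pool names are placeholders:

New-AzureRmSqlElasticPool -ResourceGroupName "MyResourceGroup" -ServerName "my-sql-server" -ElasticPoolName "small-standard-pool" -Edition "Standard" -Dtu 50 -DatabaseDtuMin 0 -DatabaseDtuMax 50

The same cmdlet with -Edition "Premium" and -Dtu 4000 provisions the largest of the new Premium pool sizes.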

Learn more

To learn more about SQL Database elastic pools and these new choices, please visit the SQL Database elastic pool webpage.  And for pricing information, please visit the SQL Database pricing webpage.
Source: Azure