Microsoft Ignite: Learn about Azure Stack infrastructure operations and management

We are thrilled to host you at Ignite next week. For us on the engineering team, it's an opportunity to spend time with you, to share, and to learn from you. This year we have a lot to share, particularly about Azure Stack. We have tried our best to ensure the sessions are structured in a logical progression. Wale authored a great blog post that walked through the outline. The chart below summarizes how our sessions will be organized.

I am honored to be providing a first look at what we are doing to enable you to manage and operate Azure Stack in your datacenter, in the session titled BRK2188: Learn about Azure Stack infrastructure operations and management. With Azure Stack, our goal is to bring Azure into your datacenters, under your operational control. Azure itself is operated by Microsoft, and we have a dedicated team of people who use a variety of operational tools and processes for its upkeep. With Azure Stack, we have to provide a facsimile of such tools so you can own and operate it. That is the gist of what we will cover in this session, starting from how we are working with our partners (Dell, HPE, and Lenovo) to deliver integrated systems, and continuing with an overview of the operational tooling that we will provide as part of Azure Stack.

We shared our thoughts last month in a short blog post and video. I am sure that it has whetted your appetite to dig deeper, and this is your chance. Start your journey in understanding how we have conceived of Azure Stack being deployed, integrated, monitored, patched/updated, and, in general, how you would manage its lifecycle. After attending this session, you should attend the architecture session, then head to the session that will start your journey toward becoming an Azure Stack infrastructure rock star. A 9:00 am session on the day after the party is always a fun one, but I know your passion and I can be pretty loud, so we will make it work. We look forward to seeing you all there.

Two interesting tidbits: first, watch out for this T-shirt at Ignite. Anyone wearing it is part of the Azure Stack team, so stop by and say hello. And second, those blinky lights are saying “I am almost running TP2.”

 

Quelle: Azure

Microsoft Research: How we operate Deep Neural Networks with Log Analytics

Microsoft Research is at the forefront of tackling cutting-edge problems with technologies such as Machine Learning and Deep Neural Networks (DNNs). These technologies employ next-generation server infrastructure that spans immense Windows and Linux cluster environments. Additionally, for DNNs, these application stacks involve not only traditional system resources (CPUs, memory), but also graphics processing units (GPUs).

With a nontraditional infrastructure environment, the Microsoft Research Operations team needed a highly flexible and scalable service, compatible with both Windows and Linux, to troubleshoot and determine root causes across the full stack.

Enter Azure Log Analytics

Azure Log Analytics, a component of Microsoft Operations Management Suite, natively supports log search through billions of records, real-time metric collection, and rich custom visualizations across numerous sources. These out-of-the-box features, paired with the flexibility of available data sources, made Log Analytics a great option for producing visibility and insights by correlating across DNN clusters and components.

The following diagram illustrates how Log Analytics offers the flexibility for different hardware and software components to send real time data within a single Deep Neural Network cluster node.

1. Linux Server System Resource Monitoring

Deep Neural Networks traditionally run on Linux, and Log Analytics supports major Linux distributions as first-class citizens. The OMS Agent for Linux was also recently made generally available, built on the open source log collector FluentD. By leveraging the Linux agent, we were able to easily collect system metrics at 10-second intervals, along with all of our Linux logs, without any customization effort.

2. NVIDIA GPU Information

The Log Analytics platform is also extremely flexible, allowing users to send data via a recently released HTTP POST API. We were able to write a custom Python application to retrieve data from the NVIDIA GPUs and unlock the ability to alert on metrics such as GPU temperature. Additionally, these metrics can be visualized with Custom Views to create rich performance graphs for the team to further monitor.
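To make the collector idea concrete, here is a minimal Python sketch of how such an application might sign a request for the Log Analytics HTTP Data Collector API. The workspace ID, key, and record field names are hypothetical placeholders, not values from the post:

```python
import base64
import hashlib
import hmac


def build_signature(workspace_id, shared_key, date, content_length):
    # The HTTP Data Collector API authorizes requests with an HMAC-SHA256
    # signature over this canonical string, keyed by the workspace's
    # base64-encoded shared key.
    string_to_hash = "POST\n{}\napplication/json\nx-ms-date:{}\n/api/logs".format(
        content_length, date)
    decoded_key = base64.b64decode(shared_key)
    digest = hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                      hashlib.sha256).digest()
    return "SharedKey {}:{}".format(
        workspace_id, base64.b64encode(digest).decode("utf-8"))


def gpu_record(device_index, temperature_c, utilization_pct):
    # Shape of one custom record we might post; the field names are
    # illustrative (values could come from nvidia-smi or NVML).
    return {"DeviceIndex": device_index,
            "TemperatureC": temperature_c,
            "UtilizationPct": utilization_pct}
```

A real collector would then POST a JSON array of such records to the workspace's `/api/logs` endpoint, sending the returned signature in the `Authorization` header along with `x-ms-date` and a `Log-Type` header naming the custom log.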

Whoa, I’d love to learn more

We wrote this post to showcase the flexibility Log Analytics offers customers in the types of data sources that can be onboarded. Additionally, check out the full walkthrough on the MSOMS blog, which includes Python code examples if you are interested in replicating this type of insight.
Finally, if you are completely new to Log Analytics be sure to try our fully hydrated demo environment located here, or sign up for a free Microsoft Operations Management Suite subscription so you can test out all these capabilities with your own environment.
Quelle: Azure

Azure Stream Analytics support for IoT Hub Operations Monitoring

Azure Stream Analytics is a real-time, highly scalable, and fully managed stream analytics service that can process millions of events per second and perform aggregations and computations in near real-time.

Stream Analytics and Azure IoT Hub have worked really well together for some time, allowing you to easily gather insights over the data your IoT devices send to the IoT Hub. To get going with the IoT Hub, simply configure an input as described in the Create an IoT Hub data stream input documentation. We have many customers using this today with their IoT solutions and it works really well; however, a common ask from customers is how to monitor the status of operations on their IoT hub in real time. Today, we're happy to announce that you can now do this by hooking up to the IoT Hub Operations Monitoring endpoint.

IoT Hub operations monitoring enables you to monitor the status of operations on your IoT hub in real time. IoT Hub tracks events across several categories of operations, and you can opt into sending events from one or more categories to an endpoint of your IoT hub for processing. You can monitor the data for errors or set up more complex processing based on data patterns.

IoT Hub monitors five categories of events:

Device identity operations
Device telemetry
Cloud-to-device commands
Connections
File uploads

For more information on IoT Hub Operations Monitoring please refer to the Introduction to operations monitoring documentation.

A common ask by customers is how to know in near real-time if a device disconnects from your IoT Hub and does not reconnect within a period of time, say a few minutes. When this occurs, you want to send an email or kick off a workflow in near real-time. For some devices it is crucial that these alerts go out as soon as possible, so that maintenance is carried out before a problem develops.

To demonstrate how easy it is to do this with Stream Analytics we’re going to use the IoT Hub Operations Monitoring capabilities and configure the “Connections” event monitoring.

 

As you can see from the image above, enabling this setting starts logging events when devices connect and when they disconnect. To start using it, toggle the Connections switch to Verbose as shown:

 

Once this has been configured, events should start being captured when devices connect to your IoT hub. To verify this, use a tool like the Service Bus explorer and connect it to the "Event Hub – compatible name / endpoint" and view the messages.

A sample message collected could resemble something like this:

{
    "durationMs": 1234,
    "authType": "{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\"}",
    "protocol": "Amqp",
    "time": "2016-09-13T20:00Z",
    "operationName": "deviceConnect",
    "category": "Connections",
    "level": "Error",
    "statusCode": "4XX",
    "statusType": "4XX001",
    "statusDescription": "MessageDescription",
    "deviceId": "device-ID"
}

Similarly, when a device disconnects from the IoT Hub, an event will be captured with the operationName == deviceDisconnect.

Now that we have confirmed these messages are arriving in our IoT Hub, using them in a Stream Analytics job is easy:

1. Create a new Stream Analytics job.

For assistance in creating a new Stream Analytics job, refer to How to create a data analytics processing job for Stream Analytics

2. Create a data stream Input pointed to your IoT Hub. Be sure to select “Operations monitoring” from Endpoint and not “Messaging”.

3. Create the following query:

WITH
Disconnected AS (
    SELECT *
    FROM input TIMESTAMP BY [Time]
    WHERE OperationName = 'deviceDisconnect'
    AND Category = 'Connections'
),
Connected AS (
    SELECT *
    FROM input TIMESTAMP BY [Time]
    WHERE OperationName = 'deviceConnect'
    AND Category = 'Connections'
)

SELECT Disconnected.DeviceId, Disconnected.Time
INTO Output
FROM Disconnected
LEFT JOIN Connected
    ON DATEDIFF(second, Disconnected, Connected) BETWEEN 0 AND 180
    AND Connected.deviceId = Disconnected.deviceId
WHERE Connected.DeviceId IS NULL

This query has two steps: the first gets all disconnect events, and the second gets all connect events.

We then join these two streams together using the Stream Analytics DATEDIFF operation on the LEFT JOIN, and then filter out any records where there was a match. This gives us devices that had a disconnect event, but no corresponding connect event within 180 seconds.
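To make the join semantics concrete, here is a small Python sketch of the same logic, run outside of Stream Analytics (purely illustrative): for every disconnect, look for a connect from the same device within the window, and alert when none exists.

```python
def devices_disconnected_without_reconnect(events, window_seconds=180):
    """events: iterable of (epoch_seconds, device_id, operation_name) tuples.

    Returns (device_id, time) pairs for every disconnect with no matching
    deviceConnect from the same device within window_seconds, mirroring the
    LEFT JOIN + DATEDIFF + IS NULL pattern in the query above.
    """
    events = list(events)
    connects = [(t, d) for t, d, op in events if op == "deviceConnect"]
    alerts = []
    for t, d, op in events:
        if op != "deviceDisconnect":
            continue
        # Equivalent of DATEDIFF(second, Disconnected, Connected) BETWEEN 0 AND 180
        reconnected = any(cd == d and 0 <= ct - t <= window_seconds
                          for ct, cd in connects)
        if not reconnected:
            alerts.append((d, t))
    return alerts
```

The `WHERE Connected.DeviceId IS NULL` clause in the query plays the role of the `not reconnected` check here: only the unmatched disconnects survive.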

The output of this job can now be directed to any of the supported Stream Analytics outputs, including Service Bus queues. Once it lands in a Service Bus queue, it is easy to create an Azure Function, or even an Azure Logic App, which will run as soon as any message is published to the queue.
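As a sketch of that downstream handler, the function below shows how an Azure Function (or any other queue consumer) might parse one of these alert messages. It assumes the message body is the JSON the job emits, with field names matching the query's SELECT list:

```python
import json


def process_alert(message_body):
    """Turn a queued disconnect alert into a human-readable notification.

    Expects a JSON object with DeviceId and Time fields, matching the
    SELECT Disconnected.DeviceId, Disconnected.Time output of the job.
    """
    alert = json.loads(message_body)
    return "Device {} disconnected at {} and did not reconnect".format(
        alert["DeviceId"], alert["Time"])
```

The returned string could then be sent as an email body or forwarded to a maintenance workflow.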

And just like that, with a very simple SQL-like query you can have real-time updates from your IoT Hub Operations Monitoring endpoint.

Sometimes I miss the good old days when coding complex scenarios like this was difficult and time consuming…no, wait, I don’t! Using the PaaS services and serverless computing capabilities of Azure is so much easier and powerful, allowing me to focus on building value add.

Related Services

Microsoft Azure IoT Hub – connect, monitor, and manage millions of IoT assets

Microsoft Azure Event Hubs – ingest data from websites, apps, and devices

Microsoft Azure Service Bus – Keep apps and devices connected across private and public clouds

Microsoft Azure Logic Apps – Quickly build powerful integration solutions

Microsoft Azure Functions – Process events with serverless code

Next Steps

We’re really excited about this close integration with IoT Hubs and hope it will unlock many new, exciting capabilities for you and your IoT applications.

We invite you to provide feedback on our User Voice page about what you want added next to the service!

If you are new to either Microsoft Azure or Stream Analytics, try it out by signing up for a free Azure trial account and create your first Stream Analytics job.

If you need help or have questions, please reach out to us through the MSDN or Stack Overflow forums, or email the product team directly.
Quelle: Azure

Microsoft Azure Germany now available via first-of-its-kind cloud for Europe

Today, Microsoft Azure is generally available from the new Microsoft Cloud Germany, a first-of-its-kind model in Europe developed in response to customer needs. It represents a major accomplishment for our Azure team.

The Microsoft Cloud Germany provides a differentiated option to the Microsoft Cloud services already available across Europe, creating increased opportunities for innovation and economic growth for highly regulated partners and customers in Germany, the European Union (EU) and the European Free Trade Association (EFTA).

Customer data in these new datacenters, in Magdeburg and Frankfurt, is managed under the control of a data trustee, T-Systems International, an independent German company and subsidiary of Deutsche Telekom. Microsoft’s commercial cloud services in these datacenters adhere to German data handling regulations and give customers additional choices of how and where data is processed.

With Azure available in Germany, Microsoft has now announced 34 Azure regions, and Azure is available in 30 regions around the world — more than any other major cloud provider. Our global cloud is backed by billions of dollars invested in building a highly secure, scalable, available, and sustainable cloud infrastructure on which customers can rely.

Built on Microsoft’s Trusted Cloud principles of security, privacy, compliance and transparency, the Microsoft Cloud Germany brings data residency, in transit and at rest in Germany, and data replication across German datacenters for business continuity. Azure Germany offers a comprehensive set of solutions providing customers with the ability to transition to the cloud on their terms through services available today.

For businesses that rely on SAP enterprise applications, including automotive, healthcare and construction, SAP HANA is now certified to run in production on Azure, which will simplify infrastructure management, improve time to market and lower costs. Specifically, customers and partners can now take advantage of storing and processing their most sensitive data.
Addressing the global scale of IoT while ensuring data resides in-country, Azure IoT Suite enables businesses, including the robust industrial and manufacturing sector in Germany, to adopt the latest cloud and IoT solutions. Azure IoT Suite enables enterprises to quickly get started connecting their devices and assets, uncovering actionable intelligence and ultimately modernizing their business.
With Industry 4.0-compatible integration of OPC Unified Architecture into Azure IoT Suite, customers and partners can connect their existing machines to Azure for sending telemetry data for analysis to the cloud and for sending commands to their machines from the cloud (i.e. control them from anywhere in the world) without making any changes to their machines or infrastructure, including firewall settings.
Microsoft, and particularly Azure, has been a significant and growing contributor to open source projects supporting numerous open source programming models, libraries and Linux distributions. Startups, independent software vendors and partners can take advantage of a robust open source ecosystem including Linux environments, Web/LAMP  implementations and e-commerce PaaS solutions from partners. 
Furthermore, with the open source .NET Standard reference stack and sample applications Microsoft has recently contributed to the OPC Foundation’s GitHub, customers and partners can quickly create and save money maintaining cross-platform OPC UA applications, which easily connect to the cloud via the OPC Publisher samples available for .NET, .NET Standard, Java and ANSI-C.
Azure ExpressRoute provides enterprise customers with the option of private connectivity to our German cloud. It offers greater reliability, faster speeds, lower latencies and more predictable performance than typical internet connections and is delivered in partnership with a number of the leading network service providers including Colt Telekom, e-Shelter, Equinix, Interxion and T-Systems International.

The Microsoft Cloud Germany is our response to the growing demand for Microsoft cloud services in Germany and across Europe. Customers in the EU and EFTA can continue to use Microsoft cloud options as they do today, or, for those who want the option, they’re able to use the services from German datacenters.

Read more about customers choosing the Microsoft Cloud Germany at the Microsoft News Centre Europe and learn more about the product at Azure Germany product page.
Quelle: Azure

Bletchley – release & roadmap – Cryptlets deep dive

In the introduction of the Project Bletchley white paper in June, we introduced some of the requirements for building consortium-based blockchains, as well as Cryptlets, a primitive for next-generation blockchain applications. Today, I'm proud to announce the release of Bletchley v1 and the next level of detail regarding the roadmap of features for Cryptlets.

Bletchley v1 is the release of the first consortium blockchain template, which allows customers and partners to spin up a private consortium Ethereum network ranging from a handful of nodes to hundreds of nodes. It reduces the estimated three-week process of setting up a globally distributed, multi-node consortium Ethereum network down to 8 questions and 5-8 minutes. Not only does Bletchley v1 automate the setup of the network infrastructure, but it also sets up a portal for rapidly getting started developing applications on Ethereum.

Additionally, more information about the roadmap for Bletchley with details about Cryptlets is available here. Cryptlets are building blocks for a new layer of capability we are calling the Cryptlet Fabric, where these components can be developed, published and accessed in a standard way.  They will be discoverable within developer, architect, and business process modeling tools for easy use and can be created with an SDK to expose your own logic for reuse and sale.  

Cryptlets provide a common and approachable way for developers to use cross-cutting capabilities, such as integration into existing systems, secure execution and data, privacy, and scalability, in the programming languages enterprise developers use most. Microsoft Azure offers a worldwide footprint that will allow Bletchley to offer a hyper-scale, secure data and execution platform to help build the next generation of applications on any blockchain platform.

Click here to view the Bletchley Roadmap – Cryptlet Deep-Dive Features and Behaviors.
Quelle: Azure

Data Factory supports multiple web service inputs for Azure ML Batch Execution

For orchestrating workloads on Azure ML (Machine Learning) batch execution web services, Azure Data Factory supports a built-in activity, namely Azure ML Batch Execution activity. Customers can leverage this activity to operationalize their ML models at scale.

A little while ago, Azure ML added support for multiple Web Service Inputs for a given experiment. Consequently, customers have been looking to leverage this capability through Azure Data Factory. Data Factory now supports configuring the ML Batch Execution Activity to pass multiple Web Service Inputs to the ML web service.

Suppose you have an Azure ML experiment which accepts more than one Web Service Input.

Note the names of the created Web service inputs, as you must use these names when specifying the endpoints in your Data Factory Pipeline. The name can be found in the Properties pane of the module. By default the first Web Service Input module you create will be named “input1,” the next one “input2,” and so on. If you rename the modules, be sure to update the names in the webServiceInputs property in your Data Factory pipeline accordingly.

In your Azure Data Factory pipeline, you can use the new webServiceInputs property instead of the existing webServiceInput property to specify the inputs into your experiment.

"typeProperties":
{
    "webServiceInputs":
    {
        "trainingData": "NameOfInputDataset1",
        "scoringData": "NameOfInputDataset2"
    },
    "webServiceOutputs":
    {
        "output1": "NameOfOutputDataset"
    },
    "globalParameters": {}
}
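A small helper like the following (illustrative Python, not part of any SDK) could assemble that payload while enforcing the naming rule described earlier, namely that every key in webServiceInputs must match a Web Service Input module name:

```python
def build_ml_batch_type_properties(web_service_inputs, web_service_outputs,
                                   global_parameters=None):
    """Assemble the typeProperties section of an Azure ML Batch Execution
    activity.

    Keys of web_service_inputs must match the Web Service Input module
    names in the experiment ("input1", "input2", or their renamed values);
    the mapped values are Data Factory dataset names.
    """
    for name, dataset in dict(web_service_inputs).items():
        if not name or not dataset:
            raise ValueError("input name and dataset name must be non-empty")
    return {
        "typeProperties": {
            "webServiceInputs": dict(web_service_inputs),
            "webServiceOutputs": dict(web_service_outputs),
            "globalParameters": global_parameters or {},
        }
    }
```

Serializing the returned dictionary to JSON yields a fragment with the same shape as the snippet above.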

For more information on the Azure ML Batch Execution activity in Azure Data Factory, refer to this documentation page.

If you have any feedback on the above capabilities please visit Azure Data Factory User Voice and/or MSDN Forums to reach out. We are eager to hear from you!
Quelle: Azure

Project Bletchley – Blockchain infrastructure made easy

We are very excited to announce the next phase of our support for blockchain on Microsoft Azure with the launch of an early version of the Bletchley v1 infrastructural substrate. We are expanding on the work we have done to create a diverse distributed ledger ecosystem for private dev/test, focusing on the requirements of private multi-node consortium networks.

 

By leveraging the capabilities of Microsoft Azure Resource Manager (ARM), we have shipped an Azure Quickstart Template that makes it significantly easier and quicker to deploy and configure a consortium Ethereum network with minimal Azure and Ethereum knowledge. With a handful of user inputs and a simple single-click deployment, you can provision a fully configured blockchain network topology consisting of many nodes in minutes, using Microsoft Azure compute, networking, and storage services across the globe.

 

Rather than spending hours building out and configuring the infrastructure, we have automated these time-consuming pieces to allow you to focus on your core business – re-envisioning and reinventing business processes to come up with the new stories of tomorrow.

 

An illustration of the deployed network topology is shown below. At a high level, the template provisions and configures a subnet of mining nodes for each consortium member and a set of load-balanced transaction nodes that members can share to communicate with the network. Through the administrator web page, you can configure additional Ethereum accounts to get started with smart contract and, eventually, application development. For additional information, we have published a detailed walkthrough of the template.

 

 

However, this is just the beginning!  We are “releasing early and releasing often” to provide you with the latest updates quickly and to get your feedback throughout the development of the service.  Keep an eye out for further updates including support for additional Microsoft services, like Azure Active Directory and Key Vault, and other blockchain protocols.  In the meantime, we genuinely value your feedback.  Do not hesitate to leave a comment or send me an email with questions, feedback, or additional requests as you begin.  We are excited to be embarking on this journey with you.
Quelle: Azure

Announcing the release of Azure Mobile Apps Node SDK v3.0.0

We are excited to announce the next iteration of the Azure Mobile Apps Node.js Server SDK.  This release includes a stack of new features, improvements and bug fixes.

Data Transforms, Filters and Hooks

One of the great features within the Server SDK was the ability to provide security filtering and record transformation at the server level, allowing the developer to refine the request-response pipeline by writing JavaScript code. With the v3.0 release, we've further refined the extensibility points to allow you to manipulate incoming queries and items, and to trigger functionality after each data operation. You can, of course, create and distribute your own filters, transforms, and hooks. However, we've packaged some common filters that reduce the amount of code you need to write.

Per-User Tables

Perhaps the most common filter request is to provide per-user data.  Per-user tables can be used with authentication to restrict data within the table to individual users.  To use this filter, add perUser = true to the table definition.  For example:

var table = require("azure-mobile-apps").table();
table.access = "authenticated";
table.perUser = true;
module.exports = table;

Web Hooks

Web hooks can be used to call external HTTP endpoints (for example, Azure Functions) after each data operation completes:

var table = require("azure-mobile-apps").table();
table.webhook = { url: "https://function.azurewebsites.net/api/HttpTriggerNodeJS1" };
module.exports = table;

For more information on this feature, including the structure that is posted to the HTTP endpoint, refer to the API Reference.

Record Expiry

Another commonly requested filter is the ability to prevent access to records older than a certain interval. For example, you may want to deny access to records older than 1 day:

var table = require("azure-mobile-apps").table();
table.recordsExpire = { days: 1 };
module.exports = table;

For more information on specifying intervals, see the API reference.

Data Query Improvements

Azure Mobile Apps servers sometimes have to refer to other tables to produce the right results. We've made some improvements to the Query API to make specific common scenarios easier.

Including Soft Deleted Records

When you have soft-delete turned on, records are marked as deleted instead of being actually deleted from the SQL table. This information then flows down to other mobile devices so that they can update their offline cache. You can specify .includeDeleted() in the query to include deleted items:

table.insert((context) => {
    return context.tables('otherTable').includeDeleted().read()
        .then((results) => {
            context.item.count = results.length;
            return context.execute();
        });
});

Retrieving Records by ID

We've added a simple find function to make retrieving records by ID much simpler:

table.insert((context) => {
    return context.tables('otherTable').find(context.item.parentId)
        .then((parent) => {
            context.item.parentName = parent.name;
            return context.execute();
        });
});

Object Queries

Previously, you could use object based queries to query tables, but the same functionality was not available on update and delete operations. This functionality is now available and allows you to, for example, delete dependent records:

table.delete((context) => {
    return context.tables('childTable')
        .delete({ parentId: context.item.id })
        .then(context.execute);
});

Handling Callbacks in Table Functions

Prior SDK releases had no support for callbacks within table operation functions; such methods required you to refactor the code to produce a Promise. In v3.0.0, we directly support callbacks. When the callback completes, call context.next(err), passing in any error. For example:

var mongo = require('mongodb').MongoClient;

table.insert(function (context) {
    context.execute().then(function () {
        mongo.connect('mongodb://localhost:27017/test', function (err, db) {
            db.collection('items').insertOne(context.item, function (err) {
                // signal that the operation is complete, passing in any error that may have occurred
                context.next(err);
            });
        });
    });
});

Breaking Changes

When executing a query with both a skip() and take() clause against SQL Server, an additional column (ROW_NUMBER) was returned that was generated by the underlying query. This column is no longer returned. Because this change was implemented using T-SQL features available in SQL Server 2012 and above, versions of SQL Server prior to 2012 are no longer supported.
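To illustrate why: the OFFSET/FETCH construct introduced in SQL Server 2012 expresses skip/take paging directly, without emitting a synthetic ROW_NUMBER column. The Python snippet below builds such a clause; it is a hand-written illustration of the construct, not the SQL the SDK actually generates:

```python
def paged_query(table, skip, take, order_by="id"):
    """Build a T-SQL paging query using OFFSET/FETCH (SQL Server 2012+).

    OFFSET/FETCH requires an ORDER BY clause and removes the need for a
    ROW_NUMBER() helper column, which is why older SQL Server versions
    are no longer supported.
    """
    return ("SELECT * FROM [{0}] ORDER BY [{1}] "
            "OFFSET {2} ROWS FETCH NEXT {3} ROWS ONLY"
            .format(table, order_by, int(skip), int(take)))
```

For example, `paged_query("todoitem", 10, 50)` skips the first 10 rows and returns the next 50.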

Installing the SDK

Whether you are creating a new Mobile App or upgrading an existing app, the Azure Mobile Apps Server SDK for Node is installed via npm:

npm install --save azure-mobile-apps@3.0

You can find full API documentation at our GitHub repository and a handy HOWTO document explaining how to build a mobile backend.
Quelle: Azure

Get notified at any time with real-time alerts from Azure CDN

In addition to the advanced and real-time analytics capabilities in the Azure CDN premium offering, you can now set up real-time alerts to get notified of anomalies in delivery metrics such as bandwidth, cache statuses, and concurrent connections to your content. This enables continuous monitoring of the health of your CDN service at any time.

Learn how this feature works and set up your first alert today!

 

See also:

Real time analytics

CDN feature overview

Azure CDN UserVoice

Azure CDN MSDN forum
Quelle: Azure

Build cloud apps at warp speed

One of your best customers just tweeted about a problem with your product and you want to respond to them ASAP. It would be great if you could automatically catch this type of communication and automagically respond with either the right documentation or an escalation to your support team. But the thought of writing an application to handle this event, with all that entails – allocating VMs, assigning staff to manage either the IaaS instances or the cloud service, not to mention the cost of development (which might include software licenses) – seems like a lot just to recognize and handle a tweet.

What if you could catch the tweet, direct it to the right person and respond to the customer quickly with no code and no infrastructure hassles: no systems-level programming, no server configuration step, not even code required – just the workflow. Just the business process.

It’s possible in the new era of serverless computing. With newly introduced capabilities in the Microsoft Cloud – Microsoft Flow, Microsoft PowerApps, and Azure Functions – you can design your workflow in a visual designer and just deploy it.

Now in preview, these new cloud offerings foreshadow the future of cloud applications.

Intrigued? Read on.

Take a look to the left. There’s the Microsoft Flow designer being set up to tell your Slack channel any time somebody complains about your product. 

That’s it. One click and voila: your workflow is running!

(And there’s the result in Slack!)

But perhaps your smart support representative contacts the unhappy customer – who it turns out has a valid issue. Your rep takes down the relevant information and starts a new workflow to have the issue looked at.

Need a server for that? No! With Microsoft PowerApps, you can visually design a form for your rep, and it can kick off a Flow. Want that app mobile-enabled on any smartphone? No problem, as you see below. And as it shows, you can use the Common Data Model available in PowerApps, enabling a lingua franca between applications.

If you need more sophisticated, or custom processing, your developers can create Azure Functions on the event, say, updating an on-premises or cloud-based sentiment analysis engine with the tweet, or invoking a marketing application to offer an incentive. Again: no server. (In fact, no IDE either: your devs write their business logic code directly on the Azure portal and deploy from there.)

So why do I say Microsoft Flow, PowerApps and Functions presage a new model of cloud applications? Because increasingly, cloud apps are evolving toward a Lego-block model of “serverless” computing, where you create and pay for only your business logic, and where chunks of processing logic are connected together to create an entire business application.

Infrastructure? Of course it’s there (“serverless” may not be the best term), but it’s under the covers: Azure manages the servers, configures them, updates them and ensures their availability. Your concern is what it should be: your business logic.

This is potentially a seismic shift in how we think about enterprise computing.

Think about it: with PowerApps your business users can quickly create apps, and with Microsoft Flow, create business processes with a few clicks. With Flow’s bigger cousin, Azure Logic Apps, you can quickly connect to any industry-standard enterprise data source such as your local ERP system, a data warehouse, support tools and many others via open protocols and interfaces such as EDIFACT/X.12, AS2, or XML. And you can easily connect to a wide variety of social media and internet assets, like Twitter, Dropbox, Slack, Facebook and many others. With Functions you can catch events generated by Logic Apps and make decisions in real time.

And you haven’t deployed a single server. What code you’ve written is business logic only; not administration scripts or other code with no business value. Your developers have focused on growing your business. And, most importantly, you’ve created a rich, intelligent end-to-end application –by simply attaching together existing blocks of logic.

Like Lego blocks. Other cloud platforms offer serverless options, but none as deep and as varied as Microsoft’s, empowering everyone in your organization, from business analyst to developer, with tools appropriate to their skills. For enterprises, the implications could not be more profound.

Maybe it’s appropriate, on this fiftieth anniversary of Star Trek, that with tools on the Microsoft Cloud, you can run your business at warp speed using Azure.
Quelle: Azure