Introducing the Azure Analysis Services web designer

Today we are releasing a preview of the Azure Analysis Services web designer. This new browser-based experience allows developers to start creating and managing Azure Analysis Services (AAS) semantic models quickly and easily. While SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) remain the primary tools for development, this new experience is intended to make simple changes fast and easy. It is great for getting started on a new model or for quick tasks such as adding a new measure to a development or production AAS model.

This initial release includes three major capabilities: model creation, model editing, and querying. Models can be created directly from Azure SQL Database and SQL Data Warehouse, or imported from Power BI Desktop PBIX files. When creating from a database, you can choose which tables to include and the tool will create a DirectQuery model against your data source. You can then view the model metadata, edit the Tabular Model Scripting Language (TMSL) JSON, and add measures. There are shortcuts to open a model in Power BI Desktop or Excel, or even to open a Visual Studio project created from the model on the fly. You can also run simple queries against the model to see the data or test out a new measure.

Navigate to your Azure Analysis Services server in the Azure Portal and click the “Open” link from the Overview blade.

Once the web designer opens, it will take you directly to your server where you can examine existing models or create new ones.

Now that you have an AAS server, you can add a model directly from a database or import a Power BI Desktop PBIX file. Note that for PBIX import, only the Azure SQL Database, Azure SQL Data Warehouse, Oracle, and Teradata data sources are supported at this time. Also, DirectQuery models are not yet supported for import. We will be adding new connection types for import every month, so let us know if your desired data source is not yet supported.

Now that you have created a new model from a database or PBIX file, it appears in the list of models on the AAS server, where you can edit or browse it. To edit the model, click the pencil icon next to the model name.

When editing, you have access to the full TMSL and can make changes directly to the metadata of the model. Keep in mind that saving changes updates the model in the cloud. It is a best practice to make changes on a development server and propagate them to production with a tool such as BISM Normalizer. For prototyping and development, you can make changes quickly and easily this way.
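As an illustration, a measure added through the editor appears in the model's TMSL JSON in roughly this shape (the table, column, and measure names here are hypothetical, not from the product):

```json
{
  "name": "Total Sales",
  "expression": "SUM('Sales'[SalesAmount])",
  "formatString": "$#,0.00"
}
```

A measure like this lives in the `measures` array of its table's definition, which is why the simple measure editor can list every measure of the model in one place.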

If you click Query, you can view data through a simple view or you can use the “Open in” button to open in Power BI Desktop or Excel. We think of the simple web view as a quick way to check your model or new measures you have created without having to use SSMS.

Finally, you will notice that you can press the blue “Measures” link on the field list to open the simple measure editor. This is a great place to see all of your measures in one place, and also to quickly add measures to test in the UX.

That is a brief introduction to the new Azure Analysis Services web designer. We hope you find the tool useful and we welcome your feedback in the Azure Analysis Services feedback forum. We think this new experience will help you get started quickly and give you new options to make changes right in the web experience. We will be adding additional functionality each month, so let us know what you would like to see next!

Learn more about Azure Analysis Services.
Source: Azure

Instant File Recovery from Azure VM backups is now generally available

Today, we are excited to announce that instant recovery of files and folders from Azure VM backups by Azure Backup is now generally available (GA). This adds to the repertoire of cloud-first features we have been delivering from Azure Backup. We earlier announced that file-folder recovery from Azure Windows VM backups and Linux VM backups was available in preview. We received great feedback during the preview and have enhanced the feature in terms of customer experience, security, and performance.

Value Proposition Recap

To recap the value proposition, with this file recovery feature you can now securely:

Recover files instantly – Instantly recover files from the cloud backups of Azure VMs without any additional infrastructure. Whether it’s accidental file deletion or simply validating the backup, instant restore drastically reduces the time to recover your data.
Open application files without restoring them – Our iSCSI-based approach allows you to open/mount application files directly from cloud recovery points to application instances. You need not restore the entire VM, saving both recovery time and bandwidth. For example, in the case of a backup of an Azure Linux VM running MongoDB, you can mount BSON data dumps from the cloud recovery point and quickly validate the backup or retrieve individual items such as tables without having to download the entire data dump.

Related links and additional content

Want more details about this feature? Check out our Azure Backup File Restore documentation.
Need help? Reach out to Azure Backup forum for support.
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates.
If you are new to Azure Backup, sign up for a free Azure trial subscription.
If you are looking for instant recovery from on-premises backups, refer to the Use Instant Restore documentation.

Source: Azure

Introducing Enterprise Smart Contracts

Introduction

Enterprise Smart Contracts – Full Whitepaper

It has been a little over a year since we announced Project Bletchley, and since that time we have been working directly with our partners and customers to understand how the cloud can help developers build a new generation of modern applications with blockchain as a core data layer.

Our initial goal for “Blockchain as a Service” was to make it easier for developers to set up a lab environment, get their hands dirty, and build something useful for the enterprise using a blockchain. Microsoft Azure makes it ridiculously easy to spin up the blockchain of your choice, including leading platforms such as Ethereum, Quorum (EEA), Hyperledger Fabric, R3 Corda and Chain Core, and we’ve made great progress against those initial goals.

Easing the deployment of this “plumbing” exposed perhaps an even larger challenge: how to build applications for these new environments. Our customers and partners often say to us, “Ok, you’ve made it easy for me to stand up these blockchain networks, but what do I do now?”

This echoes the mid-to-late 1990s, when just getting connected to the internet was made much easier with TCP/IP and a web browser included in your operating system for “surfing” the web. The early “surfing” experience, however, was limited to a small set of static web pages and basic email. There was a long way to go from Windows 95 to the .com craze, as the early software development platforms and tools were simply not designed for the web. The result: it took a lot of learning and iterative innovation before you could stand up a viable e-commerce site. Blockchain platforms are in a similar position today: Azure makes it easier to get connected to the “network,” but you can’t do much with it yet.

So, we took a step back to consider what customers truly wanted to build. We learned from early adopters across industries that the business processes most interesting for blockchain were shared ones: processes that cross both organizational and trust boundaries. Managing interactions with semi-trusted parties means clearly articulating each party’s obligations, the rules for how to meet them, and the penalties for failing to meet them. In short, a contract.

The introduction of Smart Contracts in 2015 was largely responsible for the explosion of interest in the enterprise for blockchains. However, the first versions of “Public Smart Contracts” were not designed for enterprise use and cannot deliver against the requirements of the enterprise.

Enterprise Smart Contracts are “what’s next” in the blockchain revolution.

Enterprise Smart Contracts decompose the “Public Smart Contract” approach, reflecting on the evolution of both the “contract” and the technology, to provide a model for delivering on the promise of blockchain in the enterprise. A critical first step was to introduce separation of concerns in implementation, which modularizes data, logic, contract participants and external dependencies. What Enterprise Smart Contracts deliver is a set of components that can be combined to create contract templates that, when executed, provide the privacy, scale, performance and management capabilities expected in the enterprise.

Enterprise Smart Contract Components

What exactly is an Enterprise Smart Contract? Let’s look at the major components:

Schema – the data elements required for the execution and fulfillment of contract obligations between counterparties and the cryptographic proofs needed to maintain the integrity and trust across counterparties and observers such as auditors or regulators.
Logic – business rules defined in the schema and agreed to by the counterparties and observers. Cryptographic proofs required for the execution, versioning and integrity of both the code and its results are persisted to the blockchain as defined in the schema.
Counterparties – identities of participants (people, organizations and things) agreeing to the terms and execution of the contract, authenticated through cryptographic primitives such as digital signatures.
External Sources – external input of data or triggers required to fulfill the execution requirements of the contract. These external sources and the conditions for interacting with them are agreed to by the counterparties and observers. As with the other components, cryptographic proof is required to establish authenticity and trust in the external sources.
Ledger – the immutable instance of a contract on a distributed ledger (blockchain) containing the data items in the schema to record all contract activities and proofs. This can be either a public “distributed trustless database” or a “shared, permissioned, semi-trusted, discretionarily private database.”
Contract Binding – An Enterprise Smart Contract Binding is the composition of these parts, creating a unique instance of an Enterprise Smart Contract. It is created when a contract begins negotiation between counterparties, and becomes versioned and locked when each counterparty signs the contract. Once signed and locked, the Enterprise Smart Contract begins executing the terms and conditions that lead to fulfillment.
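The components above can be sketched as a simple data model. This is purely illustrative Python, not an actual Microsoft SDK; every name here is invented to mirror the description, including the rule that a binding locks once all counterparties have signed:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Counterparty:
    name: str
    public_key: str  # used to verify this party's digital signatures

@dataclass
class ContractBinding:
    schema: dict                    # data elements and required proofs
    logic: Callable                 # business rules, executed off-chain
    counterparties: List[Counterparty]
    external_sources: List[str]     # attested external data feeds/triggers
    ledger: str                     # target blockchain for recording proofs
    signatures: Dict[str, str] = field(default_factory=dict)
    locked: bool = False

    def sign(self, party: Counterparty, signature: str) -> None:
        """Record a counterparty's signature; lock once everyone has signed."""
        self.signatures[party.name] = signature
        if len(self.signatures) == len(self.counterparties):
            self.locked = True  # versioned and locked: execution can begin
```

The point of the sketch is the separation of concerns: data (schema), logic, participants, external sources, and the target ledger are independent parts composed into one binding.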

Implementing Enterprise Smart Contracts

Now that we have defined what makes up an Enterprise Smart Contract, it’s time to implement them. The cloud is the perfect companion for distributed ledger systems implementing Enterprise Smart Contracts. Since blockchains are composed of distributed nodes that maintain the database, a globally distributed, highly available public cloud provides a great companion platform for services supporting these networks.

“Public Smart Contracts” execute every single transaction on every single computer on the network. This is by design and necessary for trustless networks. However, this need not be true for the enterprise. One of the benefits of Enterprise Smart Contracts is the ability to execute terms and conditions “off-chain,” allowing for vastly more flexible performance, scalability, privacy and versioning, and an enterprise-friendly architecture and development environment. The cloud can provide a shared logic execution platform for Enterprise Smart Contracts at massive scale, while sharing the true cost of running them only between counterparties. I call it “splitting the check.” When two or more people share a contract (dinner) with each other and the bill arrives, you simply split the check. Actually, we can be more discerning than that, but you get the picture. Sharing cloud resources with your counterparties makes “splitting the check” possible without getting mired in whose datacenter the logic for your contract will execute in.

Depending on the underlying blockchain, transactions may or may not be private. That is, unless your blockchain supports an authorization framework that only allows certain identities to read certain data properties, or encrypts data such that only certain identities can decrypt it, your data is in the clear. If you encrypt your data AND execute your Smart Contract logic on the blockchain, then every node on that network needs to be able to decrypt the data in order to compute against it. Some platforms, like Quorum and Hyperledger Fabric, support these capabilities in various ways.

However, executing your logic in an Enterprise Smart Contract means that you can keep not only your data private between counterparties, but your logic as well. The resulting outputs of your logic can be encrypted before posting to the ledger, creating a more discrete and flexible privacy model.
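As a sketch of that idea (hypothetical names, with a plain SHA-256 hash standing in for the full cryptographic proof system, and the encryption itself assumed done elsewhere), only a proof of the encrypted off-chain result needs to reach the shared ledger:

```python
import hashlib

def make_ledger_entry(contract_id: str, encrypted_result: bytes) -> dict:
    """Build the record to post on-chain: a proof of the encrypted payload.

    The contract logic ran off-chain; `encrypted_result` is its output,
    already encrypted for the counterparties. Nodes that cannot decrypt it
    can still verify integrity against the on-chain hash.
    """
    proof = hashlib.sha256(encrypted_result).hexdigest()
    return {"contract": contract_id, "proof": proof}
```

Counterparties who hold the key decrypt the payload off-chain and check it against the on-chain proof; everyone else sees only the hash, which is what keeps both data and logic private.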

This also creates a multi-trust model, where Enterprise Smart Contracts can implement different privacy measures based on the blockchain underneath, utilizing privacy features that may be available for certain blockchains. Without Enterprise Smart Contracts, you are limited to the trust model of the blockchain platform alone.

Azure – The Enabler

The Microsoft Azure platform provides the building blocks needed to deliver the core technical capabilities for implementing Enterprise Smart Contracts. However, composing them together is a complex undertaking. Last year, when the concept of Cryptlets was introduced, the primary focus was the benefit of separating the logic from the data while using the same cryptographic properties as blockchains. However, it is not enough to just share the same cryptographic primitives as blockchains. You need a platform and a framework that provides the infrastructure for creating Enterprise Smart Contracts. This is not something each counterparty in the network should have to build and maintain. A shared platform and framework for building Enterprise Smart Contracts, delivered as PaaS, performs the difficult bits like:

Key Management
Secure integration of enterprise identity to secrets/private keys to allow “on-behalf of” transactions
Cryptographic proof generation across cipher suites (ECC, RSA, etc.) and shared infrastructure for expensive operations like Zero Knowledge Proofs, Ring Signatures and Threshold or Homomorphic encryption
Abstracted integration into any blockchain or distributed ledger so logic is portable across platforms
Enable blockchain-to-blockchain interoperability scenarios, such as an Enterprise Smart Contract that uses multiple different blockchains at the same time, e.g., recording a transaction proof on a public network for a private transaction, and potentially Enterprise Smart Contracts that perform asset transfers from one network to another.
Open APIs for abstracting underlying blockchains for a common way of invoking Enterprise Smart Contracts as well as surfacing blockchain events and data to external systems.
Common enterprise development environments like .NET and Java (CLR, JVM) and newer functional languages and DSLs.
Extensible Data Services plugins to digest and make sense out of blockchain transactions with data lakes, warehouses, BI/Analytics and AI/Machine Learning.
Enterprise Operations and Management tools

Using a combination of enabling technologies in the Azure cloud like Key Vault and Azure Active Directory along with a code attestation engine, which we will detail later, the foundations for this development framework are in place.

Enterprise Smart Contracts – Framework

The Enterprise Smart Contract Framework provides the infrastructure and tools to build on this platform, allowing you to harness existing enterprise investments in infrastructure and development skills. The framework is composed of four major components.


Secrets, Control and Configuration – authorizes access to secrets stored in Azure Key Vault for specific identities (people, cryptlets, IoT devices, etc.) authenticated via Azure Active Directory. Manages the bindings for Enterprise Smart Contracts.
Runtime Environment Services – provides attested execution for Cryptlets and abstracts the underlying identity mapping and key management. This allows developers to write Cryptlets in the language of their choice, focusing on Enterprise Smart Contract logic rather than the underlying “infrastructure.” The services dynamically create attested execution environments for Cryptlets to run in, securely provision secrets to these environments, and generate cryptographic proofs automatically.
Transaction Builder and Router – marshals and formats Cryptlet Messages into blockchain specific formats and routes these transactions to the appropriate blockchain with reliable delivery.
API – exposes a secure, authenticated message-based API for sending messages to and receiving messages from Enterprise Smart Contracts at massive scale.

Summary

Enterprise Smart Contracts and Microsoft Azure provide the infrastructure required to usher in “real world” enterprise applications built on blockchain networks. Cryptlets and the framework give architects and developers a middle-tier platform where existing skills, tools and components can be integrated.

The framework allows enterprises to build blockchain based applications with diverse capabilities:

Publishing of attested data to and from external sources.
Integration point for existing enterprise systems like ERP and CRM.
Logic and behavior for all types of contracts and financial products: derivatives, bonds, insurance policies.
Logic for license contracts: government issued business licenses, certifications, etc.
IoT Integration through Azure IoT suite
Cross blockchain coordination: asset or token transfer between different networks and platforms, e.g. Hyperledger Fabric to Enterprise Ethereum, record private proofs on a public blockchain.
Token or Coin "minter" using off-chain integration, proofs and advanced coinbase behavior.
Personal bot agent for "on behalf of" use cases.
Distributed transaction support between blockchains and other enterprise systems, ledger resource compensation, two-phase commit, etc.

Enterprise Smart Contracts enable these capabilities by providing a secure, confidential, distributed, multi-party application platform for running shared business logic, with a cryptographic proof system that natively integrates with multiple blockchains. Enterprise Smart Contracts and Azure provide a platform that allows the distribution of costs, risk, identity and more for building next-generation distributed applications. These capabilities help unlock the power of blockchain-based applications, while taking advantage of the flexibility and power of the cloud, in a way that works in the modern enterprise environment.

You can learn more about our Enterprise Smart Contract framework in our technical whitepaper.
Source: Azure

How Azure Security Center helps protect your servers with Web Application Firewall

Our adversaries have many tools available on the Internet for use in mounting cyberattacks. Many of these tools enable them to gain access and control of enterprise IT resources. In the meantime, security professionals are not always aware of the vulnerabilities built into the IT resources they are tasked to defend. Azure Security Center (ASC) can help bridge this gap.

This blog post is for IT and security professionals interested in using Azure Security Center (ASC) to detect and protect Azure-based resources from SQL injection attacks, among others. The goal of this post is to 1) explain how this well-known code injection attack occurs and 2) illustrate how ASC detects and resolves it to secure your IT resources.

Tools make SQL injection easy

Servers and applications are easy targets for cybercriminals. One well-known method for attacking data-driven applications is SQL injection, an attack technique where malicious SQL is injected for execution, leading to unintended database access. A popular tool attackers use for malicious injection is sqlmap. Using sqlmap, it is easy to discover vulnerable SQL databases and expose their contents. An attacker only needs to provide the appropriate request headers to authenticate and discover the databases, their tables, and even dump the users and hashed passwords. Once the attacker has this data, the next step is to brute-force the exposed hashes, another built-in feature of the sqlmap tool, to obtain the plaintext user credentials.
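To make the vulnerability concrete, here is a minimal illustration in Python, with sqlite3 standing in for the database and invented table and function names. Building a query by string concatenation is exactly the hole that tools like sqlmap probe for; a parameterized query closes it:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # VULNERABLE: input like "' OR '1'='1" rewrites the query
    # into "WHERE name = '' OR '1'='1'" and returns every row.
    query = "SELECT id FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # SAFE: the driver treats `name` strictly as data, never as SQL.
    query = "SELECT id FROM users WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()
```

With a payload such as `' OR '1'='1`, the unsafe version dumps the whole table while the safe version simply finds no matching user, which is why parameterized queries (alongside a WAF) are the first line of defense.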

Identifying risk with Azure Security Center

Azure Security Center (ASC), available on every subscription tier of Azure including free and trial subscriptions, can help identify connected IT assets with an HTTP endpoint. Additionally, ASC can automate the deployment of a Web Application Firewall (WAF) resource to help protect non-compliant resources, while pointing out detected malicious SQL injection attempts. The list of detections points to unprotected web servers where security remediation is needed. ASC scans virtual machines across an Azure subscription and makes recommendations to add Web Application Firewalls where applicable to at-risk resources.

ASC then offers guidance through the process of deploying and configuring a Web Application Firewall for partner or first party solutions.

Further guidance on tunneling IP traffic through the Web Application Firewall is also provided. This process provides an added layer of protection to the vulnerable web application.

Now that the Web Application Firewall has been added to your resources, Azure Security Center gives you visibility into its protections and detections.

Remedial actions with prevention

Configuring the WAF in prevention mode prevents the sqlmap tool from accessing databases and tables it shouldn’t have access to. Thus, sqlmap can be prevented from even enumerating the type of database running on the backend, let alone traversing the databases for content. In prevention mode, the WAF blocks suspicious activity, and ASC detects and reports on the activity as it is blocked!

Conclusion

Attack tools such as sqlmap are cheap and easily available. A “defense in depth” approach is critical to ensure applications are not vulnerable to SQL injection. Having visibility and control to detect and protect your resources against these attacks is crucial. ASC enables IT and security professionals to scan cloud-based resources for at-risk endpoints. Following recommendations by ASC, detection and protection can be achieved, helping organizations to meet their standards of information security compliance. While ASC is available on all subscription tiers of Azure, those on the Standard tiers can access a deeper level of insights, actions, and threat protection. If your enterprise requires a deep and granular level of cloud security, activate a free trial of Azure Standard to see how ASC can help your business.

This blog post complements a deeper-dive, step-by-step playbook. To learn more, please read the ASC Playbook: Protect Servers with Web App Firewall.

Have questions? Email us at AskCESec@microsoft.com.

– Hayden @hhainsworth
Source: Azure

Build custom video AI workflows with Video Indexer and Logic Apps

With the Video Indexer connector for Logic Apps, you can now set up custom workflows connecting your most used apps with Video Indexer to further automate the process of extracting deep insights from your videos.

In this blog, I will walk you through an example of using the Video Indexer connector in a Logic App to set up a workflow where whenever a new video file is created in a specific folder of your OneDrive, the video is automatically uploaded and indexed. Once completed, the insights of the newly indexed video will be stored as a JSON file in the designated folder of your OneDrive.

Limitations to note

Currently, there is a 50 MB file size limit for the OneDrive and other storage connectors’ triggers. The Video Indexer connector allows you to upload a video either via file content from a storage connector or via a shared access signature URL. Although there is currently no way to get a URL to the video from storage connectors, we are in the process of adding this feature to OneDrive, OneDrive for Business, and Azure Storage. Once implemented, we will be able to work with videos larger than 50 MB; for now, however, we have to work within the limit.

Setting up the Logic App

To begin, log into your Azure Portal and create a new Logic App. You can follow the tutorial to learn how to create and deploy a new Logic App.

Once you have created the Logic App, go to the Logic Apps Designer and select Blank Logic App.


The first thing we will need is a “Trigger” that will fire off an event when a new file has been created in your OneDrive folder for videos.

In the search bar for connectors and triggers, search for “OneDrive”. You will see options for OneDrive (consumer) and OneDrive for Business. You can use either, depending on the account that you have; the steps are similar. In this tutorial, I am using OneDrive.

Click on the OneDrive connector. This will show you all of the triggers available for OneDrive.

Select “When a file is created”. This will fire the trigger in the Logic App each time a new file is dropped into the designated OneDrive folder. Once complete, you will be prompted to sign into your OneDrive account.

After you have signed in, you will see the trigger and its different fields. For the Folder field, click on the folder icon and navigate to your folder for videos. I have selected a folder for videos on my OneDrive called “Video”. You can choose any folder that is appropriate or create a new folder specific to your own workflow. It is important to note that this folder should only contain video or audio files; any other files will result in an upload error in the Video Indexer connector.

You can also set how often you want the trigger to check whether a file has been created in the specified folder. Under the “How often do you want to check for items?” section, I have set Frequency to Minute and Interval to 3 to have my trigger check every 3 minutes.

Next, you will need to set an action that uploads the video that has been created in your OneDrive folder to your Video Indexer portal. Click Next Step and select Add an action.

Search for “Video Indexer” and select the Video Indexer connector. You should see the different actions listed. We currently do not have any triggers for Video Indexer; triggers will come later, when support for WebHooks is added to the Video Indexer API.


You should see two options for uploading a video to your Video Indexer portal. One is called Upload video and index, which lets you upload a video using file content data. The other is called Upload video and index (using a URL), which lets you upload a video using a URL. Both options automatically index the videos upon upload.

In this tutorial we will be uploading the video using file content data, so select Upload video and index.

You should be prompted to create a connection using your Video Indexer API key. Enter a name for the connection as well as your API key. You can follow the tutorial to learn how to subscribe to the Video Indexer API and how to access your API key.

Upon creating the connection, it should open the Upload video and index action. If you click on any of the fields, you should see response elements from the OneDrive trigger. For the File Content field, select the File content response element. For Video Name, you can select the File name response element or type any name that you want. Set your privacy as you want. Here, I have set privacy to “Private”.

Upon clicking Show advanced options, you will see many more fields that you can fill out to provide more information on your video. I will be leaving them blank here because they are not required fields.

The Upload video and index action returns the id of the video upon upload; however, that does not mean that the indexing has completed. For this, you need to add a check that only lets the Logic App move forward once the video has been fully processed. Select New Step and then More. You can then select Add a do until.

You should now see an Until loop. Within the Until loop, select Add an action. Search for the Video Indexer connector again and select the action Get processing state. For the Video Id field, select the Video Id response element from Upload video and index.


For the “Choose a value” field in the Until loop, select the State response element from Get processing state. For the field that says “Choose a value”, type in “Processed”. The State being “Processed” is an indication that the indexing of the video is complete.

Within the Until loop and after the Get processing state action, select Add an action. Search for and select the Delay action (it is a part of the Schedule connector). You will need to set the count and unit fields to essentially determine how often to check if the State is “Processed”. Here, I have set Count to “3” and Unit to “Minute” to check every 3 minutes.
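The Until/Delay pair above amounts to a simple polling loop. As a sketch of the same logic (with a hypothetical `get_state` callable standing in for the Get processing state action, and the 3-minute interval expressed in seconds):

```python
import time

def wait_until_processed(get_state, interval_seconds=180, max_checks=100):
    """Poll `get_state` until it reports "Processed", pausing between checks.

    Mirrors the Logic App's Until loop: check the state, and if indexing
    is not finished yet, Delay before checking again.
    """
    for _ in range(max_checks):
        if get_state() == "Processed":
            return True
        time.sleep(interval_seconds)
    return False  # gave up before the video finished indexing
```

Capping the number of checks is the code equivalent of the loop limits you can configure on the Until action, so a stuck upload cannot poll forever.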

The next step is to obtain the insights of the newly processed video. Outside of the Until loop, select New Step and search for the Video Indexer connector. Select the Get video breakdown action. For the Video Id field, select the Video Id response element from Upload video and index. This action will give you all of the insights of the video.

Now that you have the insights of the video through Get video breakdown, you can create a file with the new insights and store it in an appropriate folder of your OneDrive.

Select Add an action and search for the OneDrive connector. Select the action Create file. For the Folder path field, click on the folder icon and navigate to the appropriate folder for storing the insights of your video. I chose my folder called “Insights”.

For the File name field, type or select a name for the new text file. Here, I selected the Name response element from the Get video breakdown and typed in “Insights” after. For the File content field, select Summarized Insights or whichever specific response element from Get video breakdown that you want to store. Learn more about the response elements.

Save your logic app, and you are done! You should now test the logic app.

Testing the Logic App

Start by selecting Run. Then, to trigger the logic app, upload a video file onto the OneDrive video folder that you specified in the trigger. As mentioned before, there is a 50 MB file size limit for the OneDrive trigger, so you will need to select your video file appropriately. 

You should be able to look at the run details of your Logic App under the Run History section of the Overview page of your logic app.

You are now ready to test out many different combinations of workflows using the Video Indexer connector to find what works best to make your processes automated and more efficient.

You can create custom workflows that integrate live and on demand workflows in Media Services with Video Indexer using samples from the Media Services GitHub site and the Video Indexer connector as long as your video files are within the 50 MB limit for now. You can also create a Logic App to push the insights from Video Indexer into systems like Cosmos DB and use Azure Search to query across the metadata or to join the insights to your own custom metadata.
Source: Azure

Artificial Intelligence tunes Azure SQL Databases

Automatic tuning

To stay competitive, today’s businesses need to make sure that they focus on the core competencies that directly deliver customer value, while relying on their cloud provider to offer an affordable, reliable, and easy-to-use computing infrastructure that scales with their needs. In the world of cloud services, where many aspects of running the application stack are delegated to the cloud platform, having artificial intelligence manage resources is a necessity when dealing with hundreds of thousands of resources. Azure SQL Database continuously innovates by building artificial intelligence into the database engine to improve performance, reduce resource usage, and simplify management. The most prominent use of artificial intelligence is the automatic tuning feature, which has been globally available since January 2016. Automatic tuning uses artificial intelligence to continuously monitor database workload patterns and recognize opportunities to improve database performance. Once confidence is built that a certain tuning action would improve the performance of a database, the Azure SQL Database service automatically applies the tuning action in a safe and managed fashion. The service monitors each tuning action, and the benefits to performance are reported to the database owners. In the infrequent case of a performance regression, the service quickly reverts the tuning action. Click here to read more about automatic tuning.

In this blog, I want to share a few examples of how Azure SQL Database customers have benefited from the automatic tuning feature.

Tailored indexes for each of 28,000 databases

SnelStart is a company from the Netherlands that uses Azure and Azure SQL Database to run their software as a service. Over the last few years, SnelStart has worked closely with the SQL Server product team to leverage the Azure SQL Database platform to improve performance and reduce DevOps costs.
In 2017, SnelStart received the Microsoft Country Partner Award for the Netherlands, reflecting their heavy investment in Azure and collaboration with Microsoft. SnelStart provides an invoicing and bookkeeping application to small and medium-sized businesses. By moving from desktop software to a hybrid software-as-a-service offering built on Azure, SnelStart has drastically decreased time to market, increased feature velocity, and met new demands from their customers. By using the Azure SQL Database platform, SnelStart became a SaaS provider without incurring the major IT overhead that an infrastructure-as-a-service solution requires.

SnelStart uses a database-per-tenant architecture: a new database is provisioned for each business administration, and each database starts off with the same schema. However, each of these thousands of customers has specific scenarios and specific queries. Before automatic tuning, it was infeasible to tune every database to its specific usage pattern. The result was over-indexing from trying to optimize for every usage scenario in one fixed schema. Individual databases did not get the attention they needed, which resulted in less-than-optimal performance for each database workload.

Now, using automatic tuning, all of this is history. SnelStart has about 28,000 databases, and automatic tuning takes care of them all. Automatic tuning focuses on each database individually, monitors its workload pattern, and applies tuning recommendations to each database based on its unique workload. These recommendations are applied safely, at a time when the database is not highly active. All automatic tuning actions are non-blocking, and the database can be fully used before, during, and after each tuning action. Over a period of two months, SnelStart gradually enabled automatic tuning on their databases.
During that period, automatic tuning executed 262 tuning operations on 210 databases, resulting in improved performance for 346 unique queries across these databases. The following chart shows the rollout of automatic tuning across the SnelStart database fleet.

By enabling automatic tuning on their databases, SnelStart effectively got a virtual employee focused on optimizing database performance. In SnelStart’s case, this virtual employee did a great job, optimizing an average of ~3.5 databases per day. SnelStart saved a lot of time and was able to focus on improvements to their core value proposition instead of on database performance.

“Using automated index tuning, we can further maximize the performance of our solution for every individual customer.” – Henry Been, Software Architect at SnelStart

Managed index clean-up

AIMS360 is a cloud-based service provider for fashion businesses that empowers fashion labels to manage and grow their business by giving them more control of and visibility into their operations. AIMS360 also gives their customers time back to focus on fashion instead of business processes. AIMS360 has been in this business for over thirty years, and working with their software is taught in fashion-related schools throughout the United States.

Each fashion business that buys the AIMS360 service gets its own database, and AIMS360 has thousands of databases. The database schema for each database is identical and has evolved over time as new features and capabilities were added to the product. However, trying to optimize the performance of each workflow in the application left the databases with duplicated indexes. SQL Server allows duplicated indexes, and once they exist, every related update must maintain the duplicates as well – resulting in unneeded use of database resources. Over-indexing is a widespread problem that affects large numbers of databases.
The cause is different people working on the same database without the time to analyze and track what happened previously on the database. Looking at the automatic tuning internal statistics, our team was surprised to see that there are twice as many drop-duplicate-index recommendations as create-index recommendations.

AIMS360 enabled automatic tuning across all their databases to take care of this duplicate index problem. Since enabling automatic tuning, the SQL Database service has executed 3,345 tuning actions on 1,410 unique databases, improving 1,730 unique queries across these databases. By choosing the right time to drop duplicated indexes, automatic tuning got the problem safely out of the way. In the background, over a couple of days, automatic tuning dropped all duplicated indexes. Automatic tuning takes care not to put too much pressure on databases or elastic pools: when multiple tuning actions need to be executed on a single database or within a single elastic pool, these actions are queued and executed with safe parallelism.

“Using the automatic tuning feature, our team was able to quickly and efficiently fine-tune our databases. Since we have over 1,400 databases, traditional methods of tuning would be very labor intensive. However, with automatic tuning we were able to analyze thousands of queries and get them tuned instantly.” – Shahin Kohan, President of AIMS360

Reducing resource usage for thousands of applications

Microsoft IT uses Azure SQL Database heavily, with a footprint in the thousands of databases supporting various internal applications at Microsoft. These workloads are diverse – spanning from light, sporadic usage to enterprise-grade workloads using resources in the higher premium tiers. Such a variety of applications is not easy to keep an eye on.
Microsoft is enabling automatic tuning on all internal workloads, including Microsoft IT, to reduce DevOps cost and improve performance across the applications that rely on Azure SQL Database. The same problems are present in any enterprise IT department around the world, and all of them share a set of common goals: reduce the total cost of ownership, reduce DevOps cost, and improve performance. By continuously monitoring and tuning all the databases in parallel, automatic tuning is constantly making progress towards these goals.

Microsoft IT started using automatic tuning as soon as it became available for preview, but the broader push to enable it for all databases started in Q2 2017. Gradually rolling out automatic tuning to different groups within Microsoft IT has enabled us to carefully measure the benefits achieved by each group. Particular success was achieved within the Microsoft IT Finance group. The following chart shows the number of databases tuned each day that belong to the Microsoft IT Finance group. The spike on May 3rd was caused by enabling automatic tuning on all the databases that belong to this group.

After every tuning action, Azure SQL Database measures the performance improvement by comparing the resource usage of queries before and after the tuning action. Verification lasts until a statistically significant sample has been collected, so that improvements in performance can be accurately measured. During this verification, the frequency of the improved queries is measured as well. Using this information, we can calculate the number of CPU hours that have been saved due to tuning actions. The preceding chart shows that the databases belonging to the Microsoft IT Finance group now use ~250 hours less CPU than before enabling automatic tuning.
In addition to improving performance by reducing query duration, this reduction in resource usage translates directly into cost savings – Microsoft IT Finance can now decrease the pricing tier of certain databases while keeping the improved performance. You can find all the details regarding this case in this video.

Summary

Azure SQL Database customers are already relying heavily on automatic tuning to achieve optimal performance. These customer stories show how different applications benefit from automatic tuning – from optimizing similar workloads for SaaS applications to optimizing thousands of different applications for enterprises. Additionally, automatic tuning helps you finally get rid of all those unused and duplicated indexes without any effort! Enable automatic tuning on your database and let the platform do the work for you – click here to read more about how to enable automatic tuning.
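As a sketch of what enabling looks like in T-SQL (the option names below are the documented automatic tuning options; verify the exact set available for your service tier, and note that sys.dm_db_tuning_recommendations surfaces the recommendations the service is tracking):

```sql
-- Enable automatic tuning options for the current database
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (
    FORCE_LAST_GOOD_PLAN = ON,  -- automatically revert plan regressions
    CREATE_INDEX = ON,          -- create beneficial missing indexes
    DROP_INDEX = ON             -- drop duplicate and unused indexes
);

-- Inspect the tuning recommendations the service is tracking
SELECT [name], [type], [reason], [state], [last_refresh]
FROM [sys].[dm_db_tuning_recommendations];
```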
Source: Azure

Azure Cyber Analytics Program for Power and Utilities Customers

The utilities industry is under continuous and persistent threat. The Ukraine attack was a wake-up call for many utilities that would not have considered something as improbable as a serial-to-Ethernet gateway vulnerability to be one of the key factors in allowing hackers to turn off power to more than 230,000 Ukrainian residents. The E-ISAC’s detailed analysis of the attack shows how existing SCADA and communications processes were used to compromise systems. As we learn more about the CrashOverride malware at the heart of this attack, the importance of proactive protection becomes evident. The WannaCry cryptoworm ransomware attack again underscored the importance of updating and patching systems (Microsoft has published guidance for WannaCry), and just days ago, U.S. power firms were the target of attacks which, while not yet fully analyzed, show signs of credential harvesting intended to compromise power facilities, including the Wolf Creek nuclear facility in Kansas. If passive defense is no longer sufficient, how can customers actively protect themselves and their systems?

Commitment to the Industry

Microsoft is deeply aware of the importance of cybersecurity for companies supporting the electric grid and is committed to helping partners and customers secure their nations’ most critical of critical infrastructure. In furtherance of this commitment, we are announcing a cyber program: “Microsoft Azure Certified Elite Partner Program for Cyber Analytics in Power and Utilities”. Microsoft has invested deeply in tools, analytics, cyber intelligence, and services for our own Cloud, and we believe it is imperative we engage customers to put these capabilities to work for them as well. While we are beginning this program in the U.S., there are plans to quickly expand worldwide.

Microsoft is demonstrating a commitment to the industry by covering the initial costs for deploying and running the Operations Management Suite (OMS) for program participants. The program is designed to engage Azure Certified Elite System Integrators to perform the OMS Service integration for utility customers enrolled in the program. What this means to the utilities industry is customers can better track threat actors currently in their network, identify malicious software dialing outbound from their servers, and establish an alerting system to enable active network cyber defense. The program also includes a limited Azure subscription which can be used to support training and development, and for expediting implementation/deployment projects. In short, there is significant upside to this program.

Microsoft Azure Certified Elite Partner Program

The program uses the Microsoft Azure OMS Advanced Log Analytics Service to analyze customer logs uploaded to an Azure Storage Account. This includes acquiring network cyber logs from across the utility enterprise and ICS networks into an Azure repository. Global malicious-site and threat-actor intelligence is used to give utility companies greater visibility into the current security state of their networks. The OMS alerting capability is also used to notify a utility almost immediately if an intrusion or new malware is detected.

OMS Data Collection

Operations Management Suite is a collection of management services that were designed in the cloud from the start. Rather than requiring you to deploy and manage on-premises resources, OMS components are hosted entirely in Azure, so configuration is minimal and you can be up and running in a matter of minutes. Data collected by Log Analytics is stored in the OMS repository hosted in Azure.

Connected sources generate the data that gets collected into the OMS repository. There are many types of connected sources supported:

An agent installed on a Windows or Linux computer connected directly to OMS.
A System Center Operations Manager (SCOM) management group connected to Log Analytics. SCOM agents continue to communicate with management servers which forward events and performance data to Log Analytics. OMS can forward log data via SCOM Agents as well.
An Azure storage account that collects Azure Diagnostics data from a worker role, web role, or virtual machine in Azure.
Various Azure resources (full list here) pushing data as a connector, extension, or via Diagnostics depending on the resource.
O365 Data
Custom logs

Threat intelligence

Microsoft runs dozens of cloud services across dozens of regions throughout the world, creating a truly global scale which enables us to achieve a unique view of the threat landscape. The insights we derive, informed by trillions of signals from billions of sources, create an intelligent security graph that we use to inform how we protect all endpoints, better detect attacks and accelerate our response. Microsoft’s sophisticated tools help us know, for example, where attacks came from, meaning we can better and more quickly identify malicious IP addresses. Our goal is to enable our customers to benefit from this knowledge to help protect their resources.

Antimalware assessment

One of the most important tools to defend your systems is antimalware software. Building upon existing antimalware capabilities in OMS, the antimalware solution has been extended to enable nearly full coverage for Microsoft Antimalware engines, as well as to detect the protection status of antimalware that registers its existence using the Windows Security Center APIs.

If you are interested in participating in this program, please contact your Microsoft Account Executive, or Larry Cochrane (L.Cochrane@Microsoft.com), Azure Energy Principal Program Manager.
Source: Azure

Tableau and Azure SQL DB, a match made in the cloud

In recent years, the Microsoft SQL and Tableau engineering teams have been working closely together to provide a superior user experience across the two platforms. Today we are sharing some advice on how to optimize the connectivity between Azure SQL DB and Tableau.

Our teams previously teamed up for the SQL Server 2016 launch and for the Azure SQL Data Warehouse launch. Today, this partnership builds on the fact that SQL Server is Tableau’s most common data source in combined cloud and on-premises usage, as detailed in the recent Tableau Cloud Data Brief.

Our engineering benchmarks, and several global customer engagements, led us to take a closer look at optimal connectivity and at how to leverage the strengths of both platforms.

Without further ado, here are the main learnings.

Out-of-the-box experience works well

We observed that most customers fared well by simply replicating their on-premises approach. Azure SQL DB uses the same drivers as SQL Server 2016, which inherently reduces complexity. In the Tableau Desktop UI there is a single SQL Server connector for Azure SQL DB, Azure SQL Data Warehouse, and SQL Server 2016, whether running on-premises or in a public cloud like Azure.

Tableau Live Querying provides the best performance

Network bandwidth permitting, Tableau’s Live Query mode lets the heavy lifting occur in Azure SQL DB, while also providing more up-to-date information to Tableau users than extract-based connectivity. This implies doing some sizing and performance testing with different Azure SQL DB SKUs. In our experience, Azure SQL DB latency and throughput can meet the most stringent Tableau requirements.

For example, we advised a joint-customer to move from S0 (10 DTUs) to P1 Premium (125 DTUs), which instantly removed latency issues. The cost impact is commonly offset by an improved user experience and increased customer satisfaction.

Other Tableau best practices

Isolate date calculations: As much as possible, pre-compute information. Tableau will compute it once and the database may be able to use an index.
Use Boolean fields: Don’t use 0 and 1 as indicators for true and false, just use Boolean fields. They are generally faster.
Don’t change case: Don’t put UPPER or LOWER in comparisons when you know the case of the values. 
Use aliases: Where possible, label text using Tableau’s alias feature, rather than in a calculation. Aliases aren’t sent to the database so they tend to be faster.
Use formatting when possible: Don’t use string functions when you can just use formatting. Then use aliases to label the fields.
Replace IF / ELSEIF with CASE: It’s a good idea to do this as CASE statements are generally faster.

Using a query tuning methodology

We used the following methodology to analyze Tableau queries in order to identify and address bottlenecks:

Enable query store
Run the provided workloads
Monitor DTU consumption using dynamic management views to ensure that tier limits are not being reached
Check for index recommendations and usage
Prioritize statements based on highest execution time
Examine top queries and associated execution plans
Apply suggestions
… iterate
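The first and fifth steps above can be sketched in T-SQL. This is only an illustration of the standard Query Store commands and views (not the exact queries we ran), and the 24-hour window is an arbitrary choice:

```sql
-- Step 1: enable Query Store on the database
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Step 5: prioritize statements by total execution time over the last 24 hours
SELECT TOP (10)
       [q].[query_id],
       MAX([qt].[query_sql_text]) AS [query_sql_text],
       SUM([rs].[avg_duration] * [rs].[count_executions]) AS [total_duration],
       SUM([rs].[count_executions]) AS [executions]
FROM [sys].[query_store_query_text] AS [qt]
INNER JOIN [sys].[query_store_query] AS [q]
ON [qt].[query_text_id] = [q].[query_text_id]
INNER JOIN [sys].[query_store_plan] AS [p]
ON [q].[query_id] = [p].[query_id]
INNER JOIN [sys].[query_store_runtime_stats] AS [rs]
ON [p].[plan_id] = [rs].[plan_id]
WHERE [rs].[last_execution_time] > DATEADD(HOUR, -24, SYSUTCDATETIME())
GROUP BY [q].[query_id]
ORDER BY [total_duration] DESC;
```

The execution plans for the top statements can then be examined via sys.query_store_plan before applying suggestions and iterating.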

Key tools for Azure SQL DB Optimization

For Azure SQL Database customers in general, we recommend using the following:

Azure SQL Database Query Performance Insight
SQL Database Advisor

Checking service level constraints

To determine if you are hitting DTU limits for a workload, take a look at the following query:

SELECT [end_time], [avg_cpu_percent], [avg_data_io_percent],
       [avg_log_write_percent], [avg_memory_usage_percent]
FROM [sys].[dm_db_resource_stats];

This returns one row for every 15 seconds over the last hour. We used this while testing the provided workloads to determine whether we needed to bump up to the next tier. For a less granular view of this data, we used the sys.resource_stats catalog view in the master database.
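Because the overall DTU percentage is driven by the highest of the individual resource percentages, the same view can be summarized into a single number per interval. The following is a sketch of that pattern:

```sql
-- Approximate DTU utilization per 15-second interval: the maximum of the
-- CPU, data I/O, and log write percentages
SELECT [end_time],
       (SELECT MAX([v])
        FROM (VALUES ([avg_cpu_percent]),
                     ([avg_data_io_percent]),
                     ([avg_log_write_percent])) AS [value]([v])) AS [avg_DTU_percent]
FROM [sys].[dm_db_resource_stats]
ORDER BY [end_time] DESC;
```

If this value is regularly near 100, the workload is hitting the limits of its current tier.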

Monitoring index recommendations and usage

Periodically check missing index recommendations

For any Tableau customer, given the diverse workload characteristics, it is a good idea to periodically check missing index recommendations. We don’t recommend adding all recommendations arbitrarily, but we do like to periodically assess the cost/benefit of specific recommendations over time. 

SELECT [migs].[group_handle], [migs].[unique_compiles], [migs].[user_seeks],
            [migs].[user_scans], [migs].[last_user_seek], [migs].[last_user_scan],
            [migs].[avg_total_user_cost], [migs].[avg_user_impact],
            [migs].[system_seeks], [migs].[system_scans],
            [migs].[last_system_seek], [migs].[last_system_scan],
            [migs].[avg_total_system_cost], [migs].[avg_system_impact],
            [mig].[index_group_handle], [mig].[index_handle], [mid].[index_handle],
            [mid].[database_id], [mid].[object_id], [mid].[equality_columns],
            [mid].[inequality_columns], [mid].[included_columns],
            [mid].[statement]
FROM [sys].[dm_db_missing_index_group_stats] AS [migs]
INNER JOIN [sys].[dm_db_missing_index_groups] AS [mig]
ON ( [migs].[group_handle] = [mig].[index_group_handle] )
INNER JOIN [sys].[dm_db_missing_index_details] AS [mid]
ON ( [mig].[index_handle] = [mid].[index_handle] );
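A recommendation row translates into a nonclustered index on the equality and inequality columns, with the included columns in an INCLUDE clause. For example (the table, column, and index names here are purely illustrative):

```sql
-- Hypothetical index derived from a missing-index recommendation where
-- equality_columns = [CustomerID], inequality_columns = [OrderDate],
-- and included_columns = [TotalDue]
CREATE NONCLUSTERED INDEX [IX_Orders_CustomerID_OrderDate]
ON [dbo].[Orders] ([CustomerID], [OrderDate])
INCLUDE ([TotalDue]);
```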

Validate index usage over time

Conversely, we recommend making sure that indexes are pulling their weight over the long term. Some indexes may not be useful over a long period of time, so we recommend checking index usage via the following applicable dynamic management views. 

SELECT OBJECT_NAME([s].[object_id]) AS [Table Name],
            [i].[name] AS [Index Name], [s].[user_seeks], [s].[user_scans],
            [s].[user_lookups], [s].[user_updates], [s].[last_user_seek],
            [s].[last_user_scan], [s].[last_user_lookup], [s].[last_user_update],
            [s].[system_seeks], [s].[system_scans], [s].[system_lookups],
            [s].[system_updates], [s].[last_system_seek], [s].[last_system_scan],
            [s].[last_system_lookup], [s].[last_system_update]
FROM [sys].[dm_db_index_usage_stats] AS [s] 
INNER JOIN [sys].[indexes] AS [i] 
ON [s].[object_id] = [i].[object_id]
AND [i].[index_id] = [s].[index_id]
INNER JOIN [sys].[objects] AS [o]
ON [i].[object_id] = [o].[object_id]
WHERE OBJECTPROPERTY([s].[object_id], 'IsUserTable') = 1
ORDER BY [s].[user_updates] DESC;

Best practices

Monitor over time, and do not drop indexes until all representative workloads have been run over the testing period.
Be cautious about dropping indexes that are used to define uniqueness. Such indexes may not be used for traversal, but may still be necessary for estimation and enforcement purposes.
The best scenario for monitoring index usage is for indexes where you are uncertain whether they will be helpful and used once created. You can add the index, run the workload, and then check sys.dm_db_index_usage_stats. You can check the plans too, but for larger workloads, checking the DMV is faster.
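Putting these practices together, one way (a sketch, with the filters as suggested starting points) to list non-unique index drop candidates that are being written to but never read is:

```sql
-- Non-unique nonclustered indexes with updates but no seeks, scans, or lookups
SELECT OBJECT_NAME([i].[object_id]) AS [Table Name],
       [i].[name] AS [Index Name],
       ISNULL([s].[user_updates], 0) AS [user_updates]
FROM [sys].[indexes] AS [i]
LEFT JOIN [sys].[dm_db_index_usage_stats] AS [s]
ON [s].[object_id] = [i].[object_id]
AND [s].[index_id] = [i].[index_id]
WHERE OBJECTPROPERTY([i].[object_id], 'IsUserTable') = 1
  AND [i].[index_id] > 1     -- skip heaps and clustered indexes
  AND [i].[is_unique] = 0    -- keep uniqueness-enforcing indexes
  AND ISNULL([s].[user_seeks], 0) + ISNULL([s].[user_scans], 0)
      + ISNULL([s].[user_lookups], 0) = 0
ORDER BY ISNULL([s].[user_updates], 0) DESC;
```

Remember that usage stats reset on failover or restart, so verify over a full representative period before dropping anything.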

We hope this is useful, and we’re curious to read your comments and feedback on how you use Tableau and Azure SQL DB. If you are new to this scenario, Tableau is available with a trial license key, and you can try the Azure free trial to get started with Azure SQL DB. Tableau Server is also available as a ready-to-spin image on the Azure Marketplace.

If you’re looking at more complex deployment scenarios and want to upgrade your Tableau and Azure skills, we’d recommend a look at our Tableau and Cloudera Quickstart Azure template.

You can also follow and connect with the Azure SQL DB team on Twitter.

Acknowledgments

This article is a collaboration between several people. Special thanks to Dan Cory (Tableau), Nicolas Caudron (Microsoft) and Gil Isaacs (Microsoft).

 

Source: Azure

Resumable Online Index Rebuild is in public preview for Azure SQL DB

We are delighted to announce that Resumable Online Index Rebuild (ROIR) is now available for public preview in Azure SQL DB. With this feature, you can resume a paused index rebuild operation from the point where it was paused, rather than having to restart the operation from the beginning. Additionally, the feature rebuilds indexes using only a small amount of log space. You can use the new feature in the following scenarios:

Resume an index rebuild operation after an index rebuild failure (such as after a database failover or after running out of disk space). There is no need to restart the operation from the beginning. This can save a significant amount of time when rebuilding indexes for large tables.
Pause an ongoing index rebuild operation and resume it later. For example, you may need to temporarily free up system resources in order to execute a high priority task or you may have a single maintenance window that is too short to complete the operation for a large index. Instead of aborting the index rebuild process, you can pause the index rebuild operation and resume it later without losing prior progress.
Rebuild large indexes without using a lot of log space and without a long-running transaction that blocks other maintenance activities. This helps log truncation and avoids out-of-log-space errors that are possible for long-running index rebuild operations.
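As a sketch of the scenarios above (the index and table names are illustrative, and the 60-minute cap is an arbitrary example):

```sql
-- Start an online, resumable rebuild capped at a 60-minute window
ALTER INDEX [IX_Sales_OrderDate] ON [dbo].[Sales]
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the rebuild, e.g. to free up resources for a higher-priority task
ALTER INDEX [IX_Sales_OrderDate] ON [dbo].[Sales] PAUSE;

-- Check the progress of paused or running resumable operations
SELECT [name], [percent_complete], [state_desc]
FROM [sys].[index_resumable_operations];

-- Resume from where the operation was paused
ALTER INDEX [IX_Sales_OrderDate] ON [dbo].[Sales] RESUME;
```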

For more information about ROIR, please review the following documents:

Guidelines for Online Index Operations
ALTER INDEX (Transact-SQL)
sys.index_resumable_operations

For public preview communication on this topic please contact the ResumableIDXPreview@microsoft.com alias.
Source: Azure

Database Scoped Global Temporary Tables in public preview for Azure SQL DB

We are delighted to announce that Database Scoped Global Temporary Tables are in public preview for Azure SQL DB. Like global temporary tables in SQL Server (tables prefixed with ##, for example ##table_name), global temporary tables in Azure SQL DB are stored in tempdb and follow the same semantics. However, rather than being shared across all databases on the server, they are scoped to a specific database and are shared among the sessions of all users within that database. User sessions from other Azure SQL databases cannot access global temporary tables created by sessions connected to a given database. Any user can create global temporary objects.

Example

Session A creates a global temp table ##test in Azure SQL Database testdb1 and adds 1 row

     T-SQL command

CREATE TABLE ##test ( a int, b int);
INSERT INTO ##test values (1,1);

Session B connects to Azure SQL Database testdb1 and can access table ##test created by session A

     T-SQL command

SELECT * FROM ##test;
-- Results
1,1

For more information on Database Scoped Global Temporary Tables for Azure SQL DB see  CREATE TABLE (Transact-SQL).
Source: Azure