Higher database eDTU limits for Standard elastic pools in Azure SQL Database

Until now, the maximum eDTUs per database in a Standard elastic pool in Azure SQL Database was limited to 100. We are pleased to announce the public preview of an increase in this limit to as much as 3,000 eDTUs, with new click-stop choices starting at 200 eDTUs. These higher limits are especially well suited for databases with activity bursts that demand more CPU than Standard pools previously provided. For IO-intensive workloads, Premium pools continue to provide the best performance experience, with lower latency per IO and more IOPS per eDTU.

New choices in database eDTU limits for Standard pools

Learn More

To learn more about SQL Database elastic pools, please visit the SQL Database elastic pool webpage. For pricing information, please visit the SQL Database pricing webpage.
Source: Azure

IoT Hub message routing: now with routing on message body

IoT Hub message routing can now be done on the message body! Thanks to the inundation of feedback from our customers requesting the ability to route messages based on the message body, the team prioritized the work, and it's now available for everyone to use.

Back in December, we released message routing for IoT Hub to simplify IoT solution development. Message routing allows customers to automatically route messages to different services based on customer-defined queries in IoT Hub itself, and we take care of all of the difficult implementation architecture for you. Message routing initially shipped with routing based on message headers, and today I am happy to announce that you can now route messages based on the message body for JSON messages.

Message routing based on headers gives customers the ability to route messages to custom endpoints without the service cracking open the telemetry flowing through it, but it came with a limitation: customers had to add information to the headers that they weren't otherwise including, which limited its usefulness. Many customers wanted to route directly on the contents of the message body, because that's where the interesting information already was. Routing on the message body is intuitive and gives customers full control over message routing.

It’s super simple to route based on message body: just use $body in the route query to access the message body. For example, my device uses the C# device SDK to send messages using this example code:

DeviceClient deviceClient = DeviceClient.CreateFromConnectionString(deviceClientConnectionString);

string messageBody = @"{
    ""Weather"": {
        ""Temperature"": 50,
        ""Time"": ""2017-03-09T00:00:00.000Z"",
        ""PrevTemperatures"": [
            20,
            30,
            40
        ],
        ""IsEnabled"": true,
        ""Location"": {
            ""Street"": ""One Microsoft Way"",
            ""City"": ""Redmond"",
            ""State"": ""WA""
        },
        ""HistoricalData"": [
            {
                ""Month"": ""Feb"",
                ""Temperature"": 40
            },
            {
                ""Month"": ""Jan"",
                ""Temperature"": 30
            }
        ]
    }
}";

// Encode the message body using UTF-8.
byte[] messageBytes = Encoding.UTF8.GetBytes(messageBody);

using (var message = new Message(messageBytes))
{
    // Set the message body type and content encoding.
    message.ContentEncoding = "utf-8";
    message.ContentType = "application/json";

    // Add other custom application properties.
    message.Properties["Status"] = "Active";

    await deviceClient.SendEventAsync(message);
}

 

There are a variety of ways to route messages based on the information provided in the example message. Here are a couple of types of queries you might want to run:

Simple body reference

$body.Weather.Temperature = 50
$body.Weather.IsEnabled
$body.Weather.Location.State = 'WA'

Body array reference

$body.Weather.HistoricalData[0].Month = 'Feb'

Multiple body references

$body.Weather.Temperature >= $body.Weather.PrevTemperatures[0] + $body.Weather.PrevTemperatures[1]
$body.Weather.Temperature = 50 AND $body.Weather.IsEnabled

Combined with built-in functions

length($body.Weather.Location.State) = 2
lower($body.Weather.Location.State) = 'wa'

Combination with message header

$body.Weather.Temperature = 50 AND Status = 'Active'
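To build intuition for how these $body paths resolve, here is a small Python sketch (my own helper, not IoT Hub's actual query engine) that evaluates a few of the example expressions against the sample payload from the device code above:

```python
import json

# The sample payload from the device code above.
payload = json.loads("""
{
  "Weather": {
    "Temperature": 50,
    "Time": "2017-03-09T00:00:00.000Z",
    "PrevTemperatures": [20, 30, 40],
    "IsEnabled": true,
    "Location": {"Street": "One Microsoft Way", "City": "Redmond", "State": "WA"},
    "HistoricalData": [
      {"Month": "Feb", "Temperature": 40},
      {"Month": "Jan", "Temperature": 30}
    ]
  }
}
""")

def body(path, doc=payload):
    """Resolve a dotted path like 'Weather.HistoricalData[0].Month'
    against a parsed JSON body. Illustrative only -- this is not how
    IoT Hub implements its query language."""
    node = doc
    for part in path.split("."):
        if "[" in part:
            name, idx = part[:-1].split("[")
            node = node[name][int(idx)]
        else:
            node = node[part]
    return node

# These mirror the route queries above.
assert body("Weather.Temperature") == 50
assert body("Weather.IsEnabled") is True
assert body("Weather.HistoricalData[0].Month") == "Feb"
assert body("Weather.Temperature") >= body("Weather.PrevTemperatures[0]") + body("Weather.PrevTemperatures[1]")
```

Each query is just a comparison over values pulled out of the JSON tree, which is why well-formed JSON is a prerequisite for body routing.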

In order for IoT Hub to know whether the message can be routed based on its body contents, the message must contain specific headers which describe the content and encoding of its body. In particular, messages must have both these headers for routing on message body to work:

Content type of "application/json"
Content encoding must match one of:

"utf-8"
"utf-16"
"utf-32"

If you are using the Azure IoT Device SDKs, it is pretty straightforward to set the message headers to the required properties. If you are using a third-party protocol library, you can use this table to see how the headers manifest in each of the protocols that IoT Hub supports:

 
Content type
  AMQP: iothub-content-type
  HTTP: iothub-contenttype
  MQTT: RFC 2396-encoded($.ct)=RFC 2396-encoded(application/json)

Content encoding
  AMQP: iothub-content-encoding
  HTTP: iothub-contentencoding
  MQTT: RFC 2396-encoded($.ce)=RFC 2396-encoded(encoding)

The message body has to be well-formed JSON in order for IoT Hub to route based on the message body. Messages can still be routed based on message headers regardless of whether the content type/content encoding are present. Content type and content encoding are only required for IoT Hub to route based on the body of the message.
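These requirements can be checked mechanically before a message is sent. A Python sketch of such a check (an illustrative helper, not part of any IoT SDK):

```python
import json

# Encodings accepted for body routing, per the requirements above.
ROUTABLE_ENCODINGS = {"utf-8", "utf-16", "utf-32"}

def is_body_routable(content_type, content_encoding, body_bytes):
    """Return True if a message meets the documented requirements for
    body-based routing: JSON content type, a supported encoding, and a
    well-formed JSON body. Illustrative helper, not an SDK function."""
    if content_type != "application/json":
        return False
    if (content_encoding or "").lower() not in ROUTABLE_ENCODINGS:
        return False
    try:
        json.loads(body_bytes.decode(content_encoding))
        return True
    except (ValueError, UnicodeDecodeError):
        return False

assert is_body_routable("application/json", "utf-8", b'{"Weather":{"Temperature":50}}')
assert not is_body_routable("text/plain", "utf-8", b'{"ok":true}')
assert not is_body_routable("application/json", "utf-8", b'not json')
```

Messages that fail any of these checks can still be routed on headers; they simply aren't eligible for body-based queries.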

This feature was brought to you in part by the outpouring of feedback we got requesting the ability to route messages based on message body, and I want to send a huge THANK YOU to everyone who requested the functionality. As always, please continue to submit your suggestions through the Azure IoT User Voice forum or join the Azure IoT Advisors Yammer group.
Source: Azure

Run massive parallel R Jobs in Azure, now at a fraction of the price

We continue to add new capabilities to our lightweight R package, doAzureParallel, built on top of Azure Batch that allows you to easily use Azure's flexible compute resource right from your R session. Combined with the recently announced low-priority VMs on Azure Batch, you can now run your parallel R jobs at a fraction of the price. We also included other commonly requested capabilities to enable you to do more, and to do it more easily, with doAzureParallel.

Using R with low priority VMs to reduce cost

Our second major release comes with full support for low-priority VMs, letting R users run their jobs on Azure’s surplus compute capacity at up to an 80% discount.

For data scientists, low-priority VMs are a great way to save costs when experimenting with and testing algorithms, such as parameter tuning (or parameter sweeps) or comparing different models entirely. Azure Batch takes care of any pre-empted low-priority nodes by automatically rescheduling the job to another node.

You can also mix both on-demand nodes and low-priority nodes. Supplementing your regular nodes with low-priority nodes gives you a guaranteed baseline capacity and more compute power to finish your jobs faster. You can also spin up regular nodes using autoscale to replace any pre-empted low-priority nodes to maintain your capacity and to ensure that your job completes when you need it.
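In doAzureParallel, this mix of node types is expressed in the cluster config. A sketch of what the relevant portion of a generated cluster.json can look like (field names vary across package versions, so treat this as illustrative and start from the file that generateClusterConfig() produces):

```json
{
  "name": "my-r-pool",
  "vmSize": "Standard_D2_v2",
  "maxTasksPerNode": 1,
  "poolSize": {
    "dedicatedNodes": { "min": 2, "max": 2 },
    "lowPriorityNodes": { "min": 4, "max": 10 },
    "autoscaleFormula": "QUEUE"
  }
}
```

Here the two dedicated nodes provide the guaranteed baseline, while up to ten low-priority nodes add discounted burst capacity that Batch can grow or shrink with the autoscale formula.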

​Other new features

Aside from the scenarios that low-priority VMs enable, this new release includes additional tools and commonly requested features to help you do the following:

Parameter tuning & cross validation with Caret
Job management and monitoring to make it easier to run long-running R jobs
Leverage resource files to preload data to your cluster
Additional utility to help you read from and write to Azure Blob storage
ETL and data prep with Hadley Wickham’s plyr

​Getting started with doAzureParallel

doAzureParallel is extremely easy to use. With just a few lines of code, you can register Azure as your parallel backend which can be used by foreach, caret, plyr and many other popular open source packages.

Once you install the package, getting started is as simple as a few lines of code:

# 1. Generate your credentials config and fill it out with your Azure information
generateCredentialsConfig("credentials.json")

# 2. Set your credentials
setCredentials("credentials.json")

# 3. Generate your cluster config to customize your cluster
generateClusterConfig("cluster.json")

# 4. Create your cluster in Azure, passing it your cluster config file
cluster <- makeCluster("cluster.json")

# 5. Register the cluster as your parallel backend
registerDoAzureParallel(cluster)

# Now you are ready to use Azure as your parallel backend for foreach, caret, plyr, and many more

For more information, visit the doAzureParallel Github page for a full getting started guide, samples and documentation.

We look forward to you using these capabilities and hearing your feedback. Please contact us at razurebatch@microsoft.com for feedback or feel free to contribute to our Github repository.

Additional information:

Download and get started with doAzureParallel
For questions related to using the doAzureParallel package, please see our docs, or feel free to reach out to razurebatch@microsoft.com
Please submit issues via Github

Additional resources:

See Azure Batch, the underlying Azure service used by the doAzureParallel package
More general purpose HPC on Azure
Learn more about low-priority VMs
Visit our previous blog post on doAzureParallel

Source: Azure

Protect Windows Server 2016 and vCenter/ESXi 6.5 using Azure Backup Server

Azure Backup Server is a cloud-first backup solution that protects business-critical applications as well as virtual machines running on Hyper-V or VMware. With the latest release of Azure Backup Server, you can protect applications such as SQL 2016, SharePoint 2016, Exchange 2016, and Windows Server 2016, locally to disk for short-term retention as well as to the cloud for long-term retention. Azure Backup Server also introduces Modern Backup Storage technology, which reduces overall Total Cost of Ownership (TCO) through storage savings and faster backups. Azure Backup Server also guards your critical data not only against accidental deletion but also against security threats such as ransomware. You also get free restores from cloud recovery points, further reducing backup TCO.

Native Integration of Azure Backup Server with Windows Server 2016

Azure Backup Server natively integrates with Windows Server 2016 capabilities to provide more secure, reliable and efficient backups.

Value Propositions

Efficient: Azure Backup Server’s Modern Backup Storage technology leverages Windows Server 2016 capabilities such as ReFS Block Cloning, VHDX, and Deduplication to reduce storage consumption and improve performance. This leads to 3x faster disk backups and a 50% reduction in on-premises storage consumption. Azure Backup Server’s workload-aware backup storage technology gives you the flexibility to choose appropriate storage for a given data source type. This flexibility optimizes overall storage utilization and thus reduces backup TCO further.
Reliable: Azure Backup Server uses RCT (the native change tracking in Hyper-V), which removes the need for time-consuming consistency checks. Azure Backup Server also uses RCT for incremental backup. It identifies VHD changes for virtual machines and transfers only those blocks that are indicated by the change tracker. With Hyper-V building this tracking feature natively within the platform, you can avoid painful consistency checks that would have led to restarting backups.
Secure: Azure Backup Server can back up and recover Shielded VMs while maintaining their security in the backups as well. Azure Backup Server’s security features are built on three principles – Prevention, Alerting, and Recovery – to help organizations increase preparedness against ransomware attacks and equip them with a robust backup solution.
Flexible: Windows Server 2016 comes with Storage Spaces Direct (S2D), which eliminates the need for expensive shared storage and related complexities. Azure Backup Server can back up Hyper-V VMs on Windows Server 2016 deployed on Storage Spaces Direct. Azure Backup Server can also auto-protect SQL instances and VMware VMs to the cloud. Upgrading Azure Backup Server from an older release is simple and will not disrupt your production servers. After upgrading to the latest version of Azure Backup Server and upgrading the agents on production servers, backups continue without rebooting production servers.

Support for vCenter and ESXi 6.5

VMware VM backup with Azure Backup Server was announced as part of Update 1 of the previous release. There are a couple of enhancements with respect to VMware VM protection with the new version of Azure Backup Server:

Azure Backup Server comes with support for vCenter and ESXi 6.5 along with support for 5.5 and 6.0
Azure Backup Server comes with the added feature of auto protecting VMware VMs to cloud. If VMware VMs are added to a folder, they will be automatically protected to disk and cloud.

If Azure Backup Server is installed on Windows Server 2016, protection of VMware VMs is in preview mode until VMware releases support for VDDK 6.5 for Windows Server 2016.

Learn more about Azure Backup Server

1. How does Modern Backup Storage work with Azure Backup Server?

2. How to backup VMware VMs with Azure Backup Server?

3. Download Azure Backup Server for free and get started!

 

Related links and additional content

Want more details? Check out Azure Backup documentation and Azure Advisor documentation
New to Azure Backup and Azure Advisor? Sign up for a free Azure trial subscription
Need help? Reach out to Azure Backup forum for support
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates

Source: Azure

Introducing Modern Backup Storage with Azure Backup Server on Windows Server 2016

One of the key features announced with the latest release of Azure Backup Server is Modern Backup Storage. Modern Backup Storage is a technology that leverages Windows Server 2016 native capabilities such as ReFS block cloning, deduplication, and workload-aware storage to optimize backup storage and time, delivering nearly 50% disk storage savings and 3x faster backups. With Modern Backup Storage, Azure Backup Server goes a step further in enhancing enterprise backups by completely restructuring the way data is backed up and stored.

How does Modern Backup Storage work?

Add volumes to Modern Backup Storage and configure Workload Aware Storage

Begin backing up by creating a Protection Group with Modern Backup Storage

With these simple steps, you can efficiently store your backups using Modern Backup Storage technology.

Related links and additional content

Want more details? Check out Azure Backup documentation and Azure Advisor documentation
New to Azure Backup and Azure Advisor? Sign up for a free Azure trial subscription
Need help? Reach out to Azure Backup forum for support
Tell us how we can improve Azure Backup by contributing new ideas and voting up existing ones.
Follow us on Twitter @AzureBackup for the latest news and updates

Source: Azure

Announcing public preview of disaster recovery for Azure IaaS virtual machines

I am excited to announce the public preview of disaster recovery for Azure IaaS virtual machines (VMs) using Azure Site Recovery (ASR). You can now easily replicate and protect IaaS-based applications running on Azure to a different Azure region of your choice without deploying any additional infrastructure components or software appliances in your subscription. This new capability, along with Azure Backup for IaaS virtual machines, allows you to create a comprehensive business continuity and disaster recovery strategy for all your IaaS-based applications running on Azure.

As you move production applications to the cloud, Azure natively provides you the high availability and reliability that your mission-critical workloads need. However, compliance requirements such as ISO 27001 still require that you have a provable disaster recovery solution in place as part of a business continuity plan (BCP). The set of features that we are announcing today for Azure IaaS fills this important need.

Disaster recovery for Azure IaaS applications extends ASR’s existing functionality and further simplifies the onboarding experience for customers:

Offered “as-a-Service” – You do not need any additional software infrastructure (VMs or appliances) in your Azure subscription to enable this functionality. You avoid all the complexity and cost associated with deploying, monitoring, patching, and maintaining any DR infrastructure.
Simplified experience – Enabling cross-region DR for an application is so simple that all you need to do is select the VMs you want to protect, choose a target Azure region, select replication settings, and you are good to go.
Application-aware recovery – Whether you are an application owner, disaster recovery admin, or a managed service provider, ASR lets you stay in control at all times – you decide when and how to orchestrate a failover. With support for best-in-class recovery point objective (RPO), recovery time objective (RTO), and ASR’s powerful Recovery Plans, your applications can meet the recovery requirements that your business demands.
Non-disruptive DR drills – With ASR’s test failover capability, you can easily perform a DR drill anytime without any impact to the primary production workload or to ongoing replication, giving you the confidence that your DR solution will work when you need it.

The cross-region DR feature is now available in all Azure public regions where ASR is available. Click here to get started.
Source: Azure

Streamlining Kubernetes development with Draft

I’m Gabe Monroy, the lead PM for containers on Microsoft Azure. I joined Microsoft in April in the acquisition of Deis. At Deis, our team was always laser-focused on making containers easier to use – and easier to deploy on Kubernetes specifically. Now as part of Microsoft, I’m thrilled to continue that mission and share the first piece of open-source software we’re releasing as part of the Azure Container Service team.

Application containers have skyrocketed in popularity over the last few years. In recent months, Kubernetes has emerged as a popular solution for orchestrating these containers. While many turn to Kubernetes for its extensible architecture and vibrant open-source community, some still view Kubernetes as too difficult to use.

Today, my team is proud to announce Draft, a tool that streamlines application development and deployment into any Kubernetes cluster. Using two simple commands, developers can now begin hacking on container-based applications without requiring Docker or even installing Kubernetes themselves.

Draft in action

Draft targets the “inner loop” of a developer’s workflow – while developers write code and iterate, but before they commit changes to version control. Let’s see it in action.

Setting sail for Kubernetes

When developers run “draft create”, the tool detects the application language and writes out a simple Dockerfile and a Kubernetes Helm chart into the source tree. Language detection uses configurable Draft “packs” that can support any language, framework, or runtime environment. By default, Draft ships with support for languages including Node.js, Go, Java, Python, PHP, and Ruby.

You can customize Draft to streamline the development of any application or service that can run on Kubernetes. Draft packs are just a simple detection script, a Dockerfile, and a Helm chart.
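To make the "packs are just a detection script" idea concrete, here is a toy Python sketch of file-marker detection. Real Draft packs ship their own detect scripts, and these marker/pack pairs are only illustrative:

```python
import os

# Marker files that suggest a language pack, in priority order.
# Illustrative pairs only -- real Draft packs use their own detect scripts.
DETECTORS = [
    ("package.json", "node"),
    ("requirements.txt", "python"),
    ("pom.xml", "java"),
    ("Gemfile", "ruby"),
]

def detect_pack(source_dir):
    """Return the first pack whose marker file exists in source_dir,
    or None if nothing matches."""
    for marker, pack in DETECTORS:
        if os.path.exists(os.path.join(source_dir, marker)):
            return pack
    return None

# The same idea, usable without touching the filesystem:
def detect_pack_from_listing(filenames):
    for marker, pack in DETECTORS:
        if marker in filenames:
            return pack
    return None
```

Once a pack matches, its Dockerfile and Helm chart templates are copied into the source tree, which is all the configuration "draft up" needs.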

This simple yet flexible user experience for developers is inspired by PaaS systems like Deis and Cloud Foundry which support the concept of buildpacks. However, Draft differs from buildpack-oriented PaaS systems because it writes out build and deployment configuration into the source tree, making it trivial to construct continuous integration (CI) pipelines that can bring these containers all the way to production.

Edit locally, dev remotely

Once developers run “draft create”, hacking on the application is as simple as typing “draft up”. This ships source code to any Kubernetes cluster (running either locally or in the cloud), builds it in that cluster using the Dockerfile, and deploys it into a dev environment using the Helm chart. Developers can then test their app live, and any changes in their editor or IDE will be made available in seconds. With a Docker registry and a Kubernetes cluster, developers can build on a tight iteration loop from code through deployment, either in concert with an operations team or on their own laptop.

While some developers often start alone, pointing Draft at a Kubernetes cluster running on a laptop, Draft works equally well on a remote Kubernetes cluster. This allows developers to edit code locally, but have their dev environment running in the cloud where they can access all their app’s production dependencies. Three cheers for “dev” and “prod” parity!

Get started today

ACS has a long history of openness, including support for multiple orchestrators and the open-source ACS-Engine project, which helps customers customize their own ACS cluster. We are proud to extend that open spirit to new projects that help developers succeed with container technology. As of today, Draft is open source and available at https://aka.ms/draft. Spin up a Kubernetes cluster on ACS and take Draft for a test drive today.
Source: Azure

Announcing the preview of Azure’s Largest Disk sizes

At the Build conference, we announced the addition of new Azure Disks sizes, which provide up to 4 TB of disk space. These new sizes allow you to perform up to 250 MBps of storage throughput and 7,500 IOPS. The details of the announcement are captured in the Build session here.

We introduced two new disk sizes, P40 (2 TB) and P50 (4 TB), for managed and unmanaged Premium Disks, and S40 (2 TB) and S50 (4 TB) for Standard Managed Disks. For Standard unmanaged disks, you can create disks with a maximum size of 4,095 GB. These new sizes are available to use now in our West Central US region using Azure PowerShell and CLI through ARM. You’ll see us continue to expand availability and roll out Azure Portal support in more regions around the world in the coming month. Along with that, we will release new versions of the Azure tools to support upload of VHDs larger than 1 TB.

New Disk Size Details

The table below provides more details on the exact capabilities of the new disk sizes:

                   P40          P50          S40              S50
Disk Size          2048 GB      4095 GB      2048 GB          4095 GB
Disk IOPS          7,500 IOPS   7,500 IOPS   Up to 500 IOPS   Up to 500 IOPS
Disk Bandwidth     250 MBps     250 MBps     Up to 60 MBps    Up to 60 MBps
Source: Azure

Getting Started with the Video Indexer API

Earlier this month at BUILD 2017, we announced the public preview of Video Indexer as part of Microsoft Cognitive Services. Video Indexer enables customers with digital video and audio content to automatically extract metadata and use it to build intelligent, innovative applications. You can quickly sign up for Video Indexer at https://vi.microsoft.com/ and try the service out for free during our preview period.

On top of using the portal, developers can easily build custom apps using the Video Indexer API. In this blog, I will walk you through an example of using the Video Indexer API to do a search on a keyword, phrase, or detected person’s name across all public videos in your account as well as sample videos and then to get the deep insights from one of the videos in the search results.

Getting Access to the Video Indexer API

To get started with the Video Indexer API, you must sign in using a Microsoft, Google, or Azure Active Directory account. Once signed in with your preferred account, you can easily subscribe to our free preview of the Video Indexer API. The following steps will walk you through the process of registering for access.

To subscribe to the API, go to the Products tab and click Free Preview. On the next page, click the Subscribe button. You should now have access to the API. If you find that you do not have access, contact visupport@microsoft.com.

After getting access, you can then return to the Products tab, and go to the Video Indexer APIs – Production link.

 

You should now see the Video Indexer API documentation page. On the left side of the page, you will see a list of several action options. Each action page contains information about that request including which parameters are optional and which ones are required. You can test any of these by clicking Try it, setting the appropriate parameters, and then clicking Send.

 

To use an external tool like Postman to test the API, you will need to download the Video Indexer APIs – Production Swagger .json file. You can do this by selecting the API definition download button on the top right of the page and choosing Open API to get the Swagger .json file. Save the file somewhere locally on your machine to use in the next section.

Here, I will demonstrate how to use Postman to test the API. To follow along, you can download and install Postman here. Launch Postman and click Import in the top left.

 

Navigate to and choose the Video Indexer APIs – Production Swagger .json file that you previously downloaded and saved locally.

 

You should now see the API actions under Collections.

 

To submit calls to any of the actions, you need a key that is specific to your subscription. You can find it by going back to the Video Indexer APIs – Production page and clicking Try it. On the next page, scroll down to the Headers section and find where it says Ocp-Apim-Subscription-Key. You can see your key by clicking the eye icon.

 

Copy both the name of the key (Ocp-Apim-Subscription-Key) and the key itself because you will need both for Postman.

Running a Search Call Across Videos

Going back to Postman, go to the action call that you want to test out. In this case, I will start with search, which is going to be a typical user interaction with the API. In particular, I’m showing a search on a keyword across all public videos in your account as well as sample videos. Search is an HTTP GET call with the request URL https://videobreakdown.azure-api.net/Breakdowns/Api/Partner/Breakdowns/Search

 

Go to Headers and enter in the name of the key (Ocp-Apim-Subscription-Key) where it says key and the key itself where it says value. You can set a Header Preset on Postman using the key and value to prevent having to type them in every time. It saves time and is easy to set up, so it is definitely worth doing!

 

To set the parameters of the action, click Params and set the values for the parameters you wish to use. Remove the unchanged parameters by hovering over the right corner of each parameter and selecting the x.

In this search example, I’m setting the privacy to “Public”, language to “English”, textScope to “Transcript”, and searchInPublicAccount to “true”. I am also clearing out all the parameters that I have not changed. For query, enter in the word that you would like to search for across the videos. In this example, let’s search for the keyword “Azure”.

 

Upon selecting Send, you will get a JSON response with the results of the search.

 

The JSON response of the search contains a results section that gives back the videos that contain your query term and the relevant time ranges from each resulting video. The section is an array in which each element is a resulting video along with its basic information, social likes, number of views, and search matches with start times.

Below is an outline of some of what to expect in the JSON response for search.

JSON Response for Search

Results (Array – each element has the following info)

Basic video and user information

accountId
id
name
description
userName
createTime
privacyMode
state

social

likes
views

searchMatches (Array – each element has the following info)

startTime
type (tells the user whether match is from audio based transcript or OCR)
text

You should also test this out by uploading and processing a few of your own videos in your account. You can do this on the Video Indexer Preview site if you are logged in, or you can use the Upload HTTP POST call from the Video Indexer API in Postman.  For your search request, set searchInPublicAccount to “false” to only search through the videos on your account. Set the query to a keyword that is more relevant to your videos and privacy to either “Public” or “Private” based on the settings of your video.
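The same search call is easy to script outside Postman. Here is a Python sketch that assembles the request pieces described in the walkthrough above; the endpoint URL and header name come from the text, the parameter casing from the portal's Try it page, and the result can be sent with any HTTP client (e.g. requests.get(url, headers=headers, params=params)):

```python
# Endpoint from the walkthrough above.
SEARCH_URL = "https://videobreakdown.azure-api.net/Breakdowns/Api/Partner/Breakdowns/Search"

def build_search_request(subscription_key, query,
                         privacy="Public", text_scope="Transcript",
                         search_in_public_account=True):
    """Assemble the URL, headers, and query parameters for the Search
    call. Parameter names follow the walkthrough; check the Try it
    page for the full, authoritative list."""
    headers = {"Ocp-Apim-Subscription-Key": subscription_key}
    params = {
        "query": query,
        "privacy": privacy,
        "textScope": text_scope,
        "searchInPublicAccount": str(search_in_public_account).lower(),
    }
    return SEARCH_URL, headers, params

url, headers, params = build_search_request("YOUR-KEY-HERE", "Azure")
assert "Ocp-Apim-Subscription-Key" in headers
assert params["query"] == "Azure"
```

Swap in your own subscription key, and set searchInPublicAccount to False to restrict the search to videos in your account, as described above.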

Next, I will show how to take the results of a search and get the expanded insights of the video.

Running a Breakdown Call on a Video

Take the id of the first result of your search.

 

Now go to the breakdown action. Breakdown is the HTTP GET call with the request URL https://videobreakdown.azure-api.net/Breakdowns/Api/Partner/Breakdowns/:id

You will need to put in your subscription key name and key again. If you have a preset set up with the key, you will just need to select it.

Click the Params button and enter in the id from the search result for the id parameter. Set the language of the breakdown to “English” and click Send.

You should now see the JSON response for the breakdown request.

 

The JSON response of the breakdown contains general information on the video and the account that uploaded it in addition to three sections called summarizedInsights, breakdowns, and social.

The summarizedInsights section holds information on the distinct faces, topics, and audio effects in the video, as well as the different time ranges in which each appears. In addition, the section provides information on positive, negative, and neutral sentiments throughout the video as well as the time ranges for each.

The breakdowns section serves as a more expansive version of the summarizedInsights. Here, you will find transcript blocks, categories of audio effects, and information to allow for content moderation. The breakdowns section also provides more details on topics, faces, and voice participants of the video.

The transcriptBlocks section within breakdowns serves as a timeline of the video. You will find information on lines, OCRs, faces, etc. for each time block. The social section provides data on likes and number of views.

Below is an outline of some of what to expect in the JSON response for breakdown.

JSON Response for Breakdown

Basic video and user information

This section has extensive information on the name, owner, id, etc. of the video.

summarizedInsights

faces
topics
sentiments
audio effects

breakdowns

general information on video

accountID
id
state
processingProgress
externalURL

insights

transcriptBlocks
topics
faces
contentModeration
audioEffectsCategories

social

likes
views

You can get more information on the JSON response for breakdown here. You have the data and are now well on your way to having more insights on your videos and the opportunity to further innovate. Try a few more examples with your own content!
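Given a response shaped like this outline, extracting a quick summary is a short exercise. A Python sketch (the field names follow the outline above; real responses may carry more or slightly different fields, so treat the shapes here as assumptions):

```python
def summarize_breakdown(breakdown):
    """Collect face names, topic names, and like counts from a breakdown
    response shaped like the outline above. The 'name' fields on faces
    and topics are assumed for illustration."""
    insights = breakdown.get("summarizedInsights", {})
    return {
        "faces": [f.get("name") for f in insights.get("faces", [])],
        "topics": [t.get("name") for t in insights.get("topics", [])],
        "likes": breakdown.get("social", {}).get("likes"),
    }

# A tiny hand-built sample following the outline.
sample = {
    "summarizedInsights": {
        "faces": [{"name": "Speaker 1"}],
        "topics": [{"name": "cloud computing"}],
    },
    "social": {"likes": 3, "views": 120},
}
assert summarize_breakdown(sample)["topics"] == ["cloud computing"]
```

The same pattern extends naturally to the breakdowns section, e.g. walking transcriptBlocks to stitch together a full transcript.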

For more details, please take a look at the Video Indexer Documentation. Follow us on Twitter @Video_Indexer to get the latest news on the Video Indexer.

If you have any questions or need help, contact us at visupport@microsoft.com.
Source: Azure

Building an Azure Analysis Services Model for Azure Blobs — Part 2

The May 2017 release of SSDT Tabular introduces support for named expressions in Tabular 1400. The second part of the article “Building an Azure Analysis Services Model on Top of Azure Blob Storage” on the Analysis Services team blog takes advantage of this exciting capability to build streamlined table queries for the more than 1,000 source files in a 1-terabyte TPC-DS data set. The article also outlines and implements an incremental data loading strategy to keep the generated load against Azure Blob storage below throttling thresholds. The initial deployment of the data model on an Azure Analysis Services server with approximately 10 GB of source data finished in roughly 15 minutes. Subsequent load cycles will increase the data volume up to the maximum server capacity in Azure Analysis Services.

For more details, see the blog article “Building an Azure Analysis Services Model on Top of Azure Blob Storage – Part 2” on the Analysis Services team blog.
Source: Azure