Announcing Azure DevTest Labs support for creating environments with ARM templates

Today, we are very excited to announce that Azure DevTest Labs now supports the capability to create your environments using Azure Resource Manager (ARM) templates!

In case you haven’t heard about Azure DevTest Labs: this May, we announced the general availability (GA) of Azure DevTest Labs, your self-service sandbox environment in Azure to quickly create Dev/Test environments while minimizing waste and controlling costs. The goal of this service is to solve the problems that IT and development teams have been facing: delays in getting a working environment, time-consuming environment configuration, production fidelity issues, and high maintenance costs. It has been helping our customers quickly get “ready to test” with a worry-free self-service environment. The reusable templates in DevTest Labs can be used everywhere once created. The public APIs, PowerShell cmdlets, and VSTS extensions make it super easy to integrate your Dev/Test environments from labs into your release pipeline. In addition to the Dev/Test scenario, Azure DevTest Labs can also be used in other scenarios such as training and hackathons. For more information about its value propositions, please check out our GA announcement blog post. If you are interested in how DevTest Labs can help with training, check out this article on using Azure DevTest Labs for training.

 

In the past months, we’ve been talking with a lot of customers and listening to their stories and feedback. It turns out that Azure DevTest Labs provides a solid experience for creating a single VM, but when it comes to multi-VM environments (e.g. multi-tier web apps or a SharePoint farm), customers look for a more productive way to provision and manage those environments. In addition, Azure offers Azure Resource Manager (ARM) templates, which are widely used by customers to define the infrastructure and configuration of an Azure solution and deploy it repeatedly in a consistent state. To address these needs, we implemented this feature, which allows you to create multi-VM environments from your ARM templates. Here are a few highlights of the feature:

ARM templates are loaded directly from your source control repository (GitHub or VSTS Git).
Once set up, DevTest Labs users can create an environment by simply picking an ARM template from the Azure portal, just as they can with other types of bases.
Azure PaaS resources can be provisioned in an environment from an ARM template, in addition to IaaS VMs.
The cost of environments can be tracked in the lab, in addition to the individual VMs created from other types of bases.

To learn more about how to use ARM templates to provision Azure DevTest Labs environments, please read our main announcement on the Azure DevTest Labs blog. You can also see a live demo, along with more features we’ve shipped recently, in our Connect(); 2016 session, What’s New in Azure DevTest Labs.

 

To get the latest information on service releases or our thoughts on DevTest Labs, please subscribe to the RSS feed of this team blog and our Service Updates.

The release of this feature is just one step. There are still a lot of things on our roadmap that we can’t wait to build and ship to our customers. Your opinions are valuable to us in delivering the right solutions for your problems. We welcome ideas and suggestions on what DevTest Labs should support, so please do not hesitate to create an idea at the DevTest Labs feedback forum, or vote on others’ ideas.

If you run into any problems when using the DevTest Labs or have any questions, we are ready at the MSDN forum to help you.
Quelle: Azure

Top Cloud Myths of 2016

This post is authored by Julia White, CVP Azure & Security, Microsoft

Over the last few years, “cloud” has been one of the most used words in tech, and 2016 is no exception – for good reason. Nearly three-fourths (70 percent) of IT professionals report their organizations use public cloud solutions, and nine in 10 (92 percent) say their companies have services that should be running in the public cloud, but aren’t currently (source: IT Pro Cloud Survey). As organizations embrace the cloud globally, we’ve seen digital transformation of entire industries powered by the cloud – from automotive builders creating connected cars to new retail customers leveraging cloud-based data and advanced analytics to personally tailor customer experiences.

Public cloud is not just past the tipping point, it’s now mainstream. Yet, in talking to customers, a few common misconceptions about the cloud persist. As we head toward the new year, I believe it’s important to surface and dispel some of the top cloud myths we’ve seen to address the concerns business and IT decision makers have when it comes to the cloud.

Myth No. 1: Enterprises need only one cloud vendor

Enterprises have diverse needs when it comes to cloud apps, data analytics, development, management, and security. Belief that one cloud can meet all these diverse needs is simply out of touch with reality. While it is easy to believe this for SaaS technology across productivity and business applications, it is just as true for IaaS and PaaS cloud technology. Different business groups have different needs and often turn to two or sometimes three cloud providers to take advantage of differentiated capabilities. Multi-cloud management solutions, as well as open development tools and platforms, are proof that we have moved beyond the period of concerns around vendor lock-in and are instead addressing the needs of IT and developers to manage updates and monitor security across multiple clouds and develop on the platform and with the tools they prefer.

Myth No. 2: Cloud security is riskier than on-premises

Fundamentally, maintaining cybersecurity is about staying multiple steps ahead of the hackers. A public cloud provider has the investment resources to deploy and maintain state-of-the-art security technology, and to employ the world’s leaders in cybersecurity. Additionally, with massive cloud scale and broad geographical presence comes the ability to detect emerging threats quickly and address issues before they gain traction. Beyond security, ensuring compliance with global, local, and industry regulation is a significant burden for individual companies. When organizations turn to a global cloud provider, they inherit the compliance certifications and standards work already put in place for organizations around the globe. Across both security and compliance, global public cloud providers are able to invest massive amounts of resources that exceed what any one individual organization can realistically deploy. Not all cloud vendors deploy resources equally, however, so it is still important to understand and complete due diligence on the security and compliance standards of any public cloud provider you are considering.

Myth No. 3: The main benefit of public cloud is efficiency, more than innovation

Utilizing public cloud infrastructure (IaaS) enables cost reduction and more rapid app deployment, with the ability to instantly tap into essentially unlimited infrastructure capacity. It is easy to view these economy-of-scale benefits as the central cloud benefit. However, because IaaS continues to carry forward some of the traditional IT overhead of infrastructure management and security, it can limit the speed of innovation. Platform services enable developers to focus entirely on application development rather than infrastructure management and maintenance. Fully unlocking the cloud promise means using a combination of cloud infrastructure and platform services, often in concert. Using IaaS is a great first step in cloud adoption, but also taking advantage of managed services and serverless compute technologies enables greater innovation as well as developer and IT productivity. Services such as machine learning, cognitive services, IoT, mobile app development, microservices, and event-driven functions are fueling incredible new innovation and business transformation.

Myth No. 4: Hybrid cloud is the connection of public and private clouds

For most companies, hybrid cloud is a reality, and it’s here to stay rather than being just a transitory state. Recognizing that hybrid is a steady state, not a transition state, is why hybrid cannot just be a network connection between public and private clouds; it must be a consistent end-user, IT management and security, and app development experience across public and private clouds. Hybrid consistency goes beyond network connectivity and the ability to “lift and shift” virtual machines; it means providing IT professional, developer, and end-user experiences that don’t change based on the location of the app or resource. Consistency across a hybrid cloud environment enables uniform development, unified dev-ops and management, common identity and security, and seamless extension of existing applications and infrastructure to the cloud. Without this consistency, hybrid cloud just means dealing with two different environments for the long term.

Myth No. 5: Public cloud leads to vendor lock-in

Cloud infrastructure and app development capabilities generally support all platforms and development languages. Historically, developers have been bound by the languages they code in for specific platforms. With more cloud vendors broadly supporting open source, and frequently open-sourcing their own technology, it is becoming easier to build for any cloud, language, and OS. As such, developers are getting closer to a state where they develop in the language they want and deploy on any platform. Further, container technology supported by cloud providers enables app portability.

Myth No. 6: Open cloud development is a risk to innovation and intellectual property

One of the great benefits of the cloud is providing an environment to test ideas quickly. As more organizations embrace agile development in the cloud, they need to feel empowered to choose the right tools and code to do the job. Increasingly, that code is coming from the open source community. Nine in 10 (91 percent) of IT workers state that for them to be satisfied at work, it is important that they work for an organization that allows them to use open source technologies (source: IT Pro Cloud Survey). Open-sourcing select projects is just one step toward earning the trust and respect of developers and their customers. Cloud providers need to embrace open source as a key component of their own development cycles as well as their customers’ technology infrastructure. This means looking for technologies that can be developed and maintained in partnership with the open source community. Vendors should be openly developing and collaborating with the open source community, while also offering regular contributions to community projects. They should also be actively participating in GitHub projects, helping to set standards with organizations such as The Linux Foundation, and engaging with the community on forums.

Looking Ahead to 2017

As we look ahead to the coming year, we can expect to see more advanced, intelligent cloud technology enter the commercial realm – from bots to genomics and AI applications. The future of cloud will be written by the cloud vendors who demonstrate they are truly global, trusted, hybrid platforms with open development tools and cutting edge platform services.
Quelle: Azure

Application Insights Diagnostics Preview

At our recent Connect() event, we were excited to announce the general availability of Application Insights, with a large set of features now available for general use. In this post, I’d like to share a preview of two new diagnostics features that are coming soon to Application Insights:

Improvements to the Application Map that allow you to see the health status of multiple services, pinpoint issues, and easily find exceptions.
The new Application Insights Profiler, which allows you to see detailed example traces of slow requests and drill into a detailed analysis of code to see where you are spending the most time.

You can get a quick tour of these features by watching our Connect() video.

Multi-service Application Map with Error Pane

We want the Application Map to help you understand the overall health of your service, identify problem spots, and get rich information that helps you triage and root-cause issues. The map previously only showed information about one service at a time, and it took a lot of clicks to get from the map to detailed exception information. To improve this experience, we have made the following improvements available in preview:

You can now see multiple services on the same Application Map.
You can now click on a service to bring up an error pane that summarizes the top errors, and click on the errors to jump right to detailed exception information.

If you have multiple services being monitored by Application Insights, you can see all of these services on the same map, along with the calls between them. We’ve done work in both the .NET and the Node.js SDKs to detect calls between services, and as a result we can show both a Node.js web application and a .NET Web API backend on the same map. In the picture below, teamstandup-web is a Node.js web service, and teamstandup-api is a .NET REST API service.

The map reflects the incoming and outgoing calls for the currently selected service. Above, we can see the web browser calling into the teamstandup-web service. The map also shows an error pane on the right-hand side, summarizing the errors that occurred in that service, grouped by operation name (i.e. the URL) and the function in which the error occurred. If we click on the teamstandup-api service, we can see outgoing database calls made from the .NET API, and that there are 23 exceptions being thrown in the POST checkins method.

Clicking on the ArgumentNullException takes us to a redesigned exception details blade, making it easy to see the exception name and message. From here we can take quick actions like searching for logs related to the exception, creating a new work item, or searching the web.

That was a quick tour of how we’ve made it easy for you to spot issues and drill into top errors in your services. To try out these features and give us feedback, be sure to check out Application Insights Application Map Preview for instructions on updating your Application Insights SDK (required for the multi-service map) and enabling these features in the portal.

Application Insights Profiler

When running in production, there can be a number of unexpected causes of slow requests, and in many cases you have requests that are fast most of the time but occasionally experience slowdowns. The new Application Insights Profiler runs with low overhead: it periodically enables profiling on your production service and collects detailed examples of performance traces for your application when something interesting happens.

This means you’ll have examples of interesting issues with detailed profiles (a code-level breakdown of request execution time) and the details you need to pinpoint where time is spent. We believe the profiler will be useful for several scenarios, from identifying and triaging your slowest requests (even in the 95th percentile long tail), to pinpointing what specifically is slowing down a request using a prerecorded example.

To access the results from the Profiler, you can click on the Examples column in the Performance blade to see examples for a given operation. We have added the 95th percentile column to this table, allowing you to compare the typical vs. long-tail performance of each request. Clicking on examples brings you to a new page that gives you a list of requests at various response percentiles; by default we select the slowest one. For each request, we show you a call tree of the functions called during that request and the elapsed time spent in each function. The chart on the right-hand side provides an over-time visualization of the code.

If you have a service running in Azure App Service and want to find out how to improve its performance, be sure to check out How to enable Application Insights Profiler for enabling the profiler on your account and sending us feedback.

Wrapping Up

We’re excited to share early previews of these new diagnostics features that help you better diagnose and fix issues in your production applications. Let us know what you think by sending feedback from the Azure portal, or by leaving comments here on this post!
Quelle: Azure

New pricing model for encoding with Azure Media Services coming in January 2017

Based on customer demand and feedback, starting January 1, 2017, we will implement a new pricing model for on-demand encoding of media with Azure Media Services. For the Standard Encoder and Premium Encoder, we will be calculating usage based on the total duration of the media files produced as output by the encoder. Further, for Media Reserved Units, we are lowering the rate, and making a change to calculate usage on a per-minute basis. In this post, I’ll be walking you through some examples to show how the new pricing models work.

New per-output-minute model for encoding

Starting on January 1, 2017, we will bill for encoding jobs based on the duration of the media files produced as output by the encoder. No code changes are needed in your application, as the new model will be automatically applied by our service.  Please refer to the official pricing page after January 1, 2017, in order to verify the final price for each data center.

Calculation of output minutes

To calculate the total output minutes for an encoding task, we make use of the following multipliers:

Multipliers for calculating output minutes

Quality                   | Multiplier | Example
Audio only output         | 0.25x      | 20 minutes of audio count as 5 SD minutes
SD (less than 1280×720)   | 1x         | 20 minutes of SD output count as 20 SD minutes
HD (1280×720 – 1920×1080) | 2x         | 20 minutes of HD output count as 40 SD minutes
UHD (up to 4096×2160)     | 4x         | 20 minutes of UHD output count as 80 SD minutes
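As a concrete illustration of how these multipliers map to output resolution, here is a minimal Python sketch (our own illustration under the thresholds quoted above, not an official billing tool) that classifies one encoder output and returns its multiplier:

def output_multiplier(width=None, height=None, audio_only=False):
    """Return the billing multiplier for one encoder output (illustrative only)."""
    if audio_only:
        return 0.25                      # audio-only output
    if width < 1280 or height < 720:
        return 1.0                       # SD (less than 1280x720)
    if width <= 1920 and height <= 1080:
        return 2.0                       # HD (1280x720 - 1920x1080)
    return 4.0                           # UHD (up to 4096x2160)

# Example: 20 minutes of HD output counts as 20 * 2 = 40 SD minutes
print(20 * output_multiplier(width=1920, height=1080))  # 40.0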


Pricing example for Standard Encoder

The Standard Encoder is the best option if your goal is to transcode a wide variety of input video/audio files into an output format suitable for playing back on a variety of devices (smartphones, tablets, PCs, consoles, TVs). You can read more about the capabilities of this (Media Encoder Standard) media processor in our documentation.

To determine the overall cost for using this encoder, you need to know the duration of your input video and its resolution, and the encoding preset. Suppose you have a high-quality QuickTime video at 1920x1080p resolution, which is 20 minutes in duration. If you want to encode this 1080p input for delivery via adaptive streaming protocol to a variety of devices, then you would typically use a preset such as the “H264 Multiple Bitrate 1080p.” Using such a preset will incur billing for each output layer/bitrate, along with a multiplier depending on the video resolution. For this preset, the following multipliers apply:

Layer      | Resolution | Multiplier
HD Video 1 | 1920×1080  | 2x
HD Video 2 | 1920×1080  | 2x
HD Video 3 | 1280×720   | 2x
SD Video 1 | 960×540    | 1x
SD Video 2 | 960×540    | 1x
SD Video 3 | 640×360    | 1x
SD Video 4 | 640×360    | 1x
SD Video 5 | 320×180    | 1x
Audio      | N/A        | 0.25x
Total      |            | 11.25x

Based on the table above, your encoding of the QuickTime video will result in a total of 11.25 * 20 = 225 output minutes. After January 1, 2017, you can refer to the official pricing page for the current rates for one output minute, applicable to your data center, and multiply that by 225 to determine the final cost.
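To make the arithmetic explicit, here is a small Python sketch that reproduces this calculation; the per-layer multipliers come straight from the table above, and the rate is a value you would look up on the official pricing page:

# Multipliers per output layer for the "H264 Multiple Bitrate 1080p" preset (from the table above)
layer_multipliers = [2, 2, 2,        # three HD video layers
                     1, 1, 1, 1, 1,  # five SD video layers
                     0.25]           # one audio track

input_duration_minutes = 20          # duration of the source QuickTime video

total_output_minutes = sum(layer_multipliers) * input_duration_minutes
print(total_output_minutes)          # 11.25 * 20 = 225.0 output minutes

# Final cost = total_output_minutes * (per-output-minute rate from the official pricing page)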

Pricing example for Premium Encoder

If your objective is to transcode between formats common to the broadcast or movie industry, or if your video workflow requires complex logic, then you would need the Premium Encoder. See our documentation for an in-depth comparison of the feature differences between the two encoders. Additionally, this blog offers a flow chart to help choose between the two encoders. Suppose you have a high-quality ProRes/QuickTime video that is 20 minutes in duration, with 1920×1080 video resolution and two audio tracks – one in English and the other in Spanish. You can use the Premium Encoder to transcode this video into a single-bitrate MXF file, with one video track and two audio tracks. In this case, the multiplier would be 2 (for HD video) plus 2 * 0.25 (one 0.25 multiplier for each audio track), adding up to 2.5. Thus, your encoding of the QuickTime video into an MXF file would result in a total of 20 * 2.5 = 50 output minutes. After January 1, 2017, you can refer to the official pricing page for the current rates for one output minute, applicable to your data center, and multiply that by 50 to determine the final cost.

Per-hour pricing for Media Reserved Units

You need to add Media Reserved Units (MRUs) to your Media Services account if your workload requires one or more videos to be processed concurrently. You can increase the overall throughput of the service by (a) increasing the number of MRUs to get more videos processed concurrently, and (b) using faster MRUs (e.g. S3). See the documentation for more information. Based on customer demand and feedback, we are changing the pricing model for MRUs to more closely track the actual usage of such units. As of January 1, 2017, you will be charged based on the time each MRU is active in your account, pro-rated on a per-minute basis.

Example of MRU Pricing

Here is an example that shows how your charges will be based on actual minutes of MRU usage. Suppose you had zero MRUs to begin with, and at 10:00 AM on a particular day you set your account to use 2 S1 MRUs. More videos arrive in the afternoon, so you change your account to use 4 S3 MRUs at 1:15 PM. All the videos are processed by 4:00 PM, and then you turn off the MRUs in your account (set the number of MRUs to zero). Your usage for that day is calculated as follows.

S1 Media Reserved Units: 2 units * 3.25 hours (10:00 AM to 1:15 PM) = 6.5 S1 hours
S3 Media Reserved Units: 4 units * 2.75 hours (1:15PM to 4PM)  = 11 S3 hours

After January 1, 2017, you can refer to the official pricing page for the current rates for one S1 hour and one S3 hour, multiply them by 6.5 and 11 respectively, and add the results to get the total cost.
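The same calculation can be expressed as a short Python sketch. Billing is pro-rated per minute, so the durations below are simply the elapsed times each MRU configuration was active; the per-hour rates are placeholders you would replace with the actual prices for your data center from the official pricing page:

from datetime import datetime

def hours_between(start, end):
    """Elapsed hours between two times, pro-rated to the minute."""
    return (end - start).total_seconds() / 3600

day = "2017-01-02 "
s1_hours = 2 * hours_between(datetime.fromisoformat(day + "10:00"),
                             datetime.fromisoformat(day + "13:15"))   # 2 * 3.25 = 6.5 S1 hours
s3_hours = 4 * hours_between(datetime.fromisoformat(day + "13:15"),
                             datetime.fromisoformat(day + "16:00"))   # 4 * 2.75 = 11 S3 hours

# Placeholder rates; substitute the actual per-hour prices for your data center.
s1_rate, s3_rate = 0.05, 0.20
total_cost = s1_hours * s1_rate + s3_hours * s3_rate
print(s1_hours, s3_hours, total_cost)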

Contact us

Keep monitoring the Azure Media Services blog for updates on encoding capabilities.

Send your feedback and feature requests to our UserVoice page.
Quelle: Azure

New A_v2-Series VM sizes

Today we are releasing 7 new general-purpose compute VM sizes. In this new version of our A-Series VMs, we have raised the amount of RAM per vCPU from 1.75 GiB (standard) and 7 GiB (high memory) to 2 GiB and 8 GiB per vCPU, respectively. We have also improved local disk random IOPS to be 2-10x faster than that of our existing A version 1 sizes.

These new sizes use our new VM naming schema, which is the VM family letter followed by the number of vCPUs of the VM. The ‘m’ identifier after the vCPU count signifies our high-memory offerings (8 GiB/vCPU), e.g. Standard_A8m_v2.
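As a quick illustration of this naming schema, here is a small Python sketch (our own helper, not an Azure API) that pulls the family, vCPU count, and high-memory flag out of a v2 size name:

import re

def parse_a_v2_size(size_name):
    """Parse names like 'Standard_A8m_v2' into (family, vcpus, high_memory)."""
    match = re.fullmatch(r"Standard_([A-Z])(\d+)(m?)_v2", size_name)
    if not match:
        raise ValueError(f"Not a v2 size name: {size_name}")
    family, vcpus, high_memory = match.group(1), int(match.group(2)), match.group(3) == "m"
    return family, vcpus, high_memory

print(parse_a_v2_size("Standard_A8m_v2"))  # ('A', 8, True) -> 8 vCPUs, high memory (8 GiB RAM per vCPU)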

 

Size            | vCPU | RAM (GiB) | Temporary Disk (SSD) | Max Network Bandwidth
Standard_A1_v2  | 1    | 2         | 10 GB                | Moderate
Standard_A2_v2  | 2    | 4         | 20 GB                | Moderate
Standard_A4_v2  | 4    | 8         | 40 GB                | High
Standard_A8_v2  | 8    | 16        | 80 GB                | High
Standard_A2m_v2 | 2    | 16        | 20 GB                | Moderate
Standard_A4m_v2 | 4    | 32        | 40 GB                | High
Standard_A8m_v2 | 8    | 64        | 80 GB                | High

With the new VM naming scheme, it’s important to realize that the old Standard_A8 is succeeded by the new Standard_H8, not the A8_v2. The new A8_v2 and A8m_v2 sizes succeed the old A4 and A7, respectively. To better understand the mapping of v1 to v2, we have provided the table below.

Comparison: A Standard vs. A_v2

 

A-Series size | vCPU | RAM (GiB) | Disk Size    | A_v2-Series size | vCPU | RAM (GiB) | Disk Size
A1            | 1    | 1.75      | 20 GB (HDD)  | A1_v2            | 1    | 2         | 10 GB (SSD)
A2            | 2    | 3.50      | 70 GB (HDD)  | A2_v2            | 2    | 4         | 20 GB (SSD)
A3            | 4    | 7         | 285 GB (HDD) | A4_v2            | 4    | 8         | 40 GB (SSD)
A4            | 8    | 14        | 605 GB (HDD) | A8_v2            | 8    | 16        | 80 GB (SSD)
A5            | 2    | 14        | 135 GB (HDD) | A2m_v2           | 2    | 16        | 20 GB (SSD)
A6            | 4    | 26        | 285 GB (HDD) | A4m_v2           | 4    | 32        | 40 GB (SSD)
A7            | 8    | 52        | 605 GB (HDD) | A8m_v2           | 8    | 64        | 80 GB (SSD)

Geographic availability

Today the A_v2-Series is available in most regions and will shortly be available in all regions.

Learn more

For more information on the A_v2-Series and the variety of VM options available in Azure please see Azure sizes for virtual machines documentation.
Quelle: Azure

Building micro-service-architecture-based cloud applications on Azure

Smart devices and an explosion of data, combined with a one-click-away user experience, are pushing application architectures to be revamped. By 2020, the world is predicted to have four billion connected people, with 25+ million applications processing 40 petabytes of data. Consumers demand variety and choice with always-on services. Traditional software architectures, deployment models, and slow release processes are not going to suffice.

Architectural evolution

Developing and packaging a large monolithic application requires a high level of release coordination across distributed teams. Often, integration issues are not discovered until the last minute, and this can drag out the release. To the rescue comes the micro-service architecture, which provides a mechanism to break down the monolithic silos into distributed, loosely coupled, autonomous services that can be developed, tested, and deployed independently. This helps in the following ways:

Separation of duties: Developers can focus on a specific service and develop the service using a language of their choice. This reduces the complex co-ordination issues across the teams.
Making instant releases: Each service can be packaged, maintained and deployed independently thus enabling just-in-time releases.

Due to the distributed and granular nature of the micro-services, it can pose a few challenges.

Integration and interdependencies across services: Though the services are isolated, they can be functionally dependent on each other. A composition of services needs to be built such that they deliver the required business goals.
Portable deployments: A robust application deployment requires mirroring the production environment across Dev and QA. However, the application still needs to be reconfigured based on other scalability needs. Abstracting the application packaging from the infrastructure dependencies makes the application portable across different environments, and thus enables hassle-free and robust software releases.
Just-in-time releases: Instant just-in-time software releases require continuous integration and deployment along with versioning of the builds.
Measure release effectiveness: Without visibility into release effectiveness, aspects such as hand-offs across teams, code integrity, and build sanity can go unnoticed and unmeasured.
On-demand provisioning of infrastructure: Creating a container or a container cluster requires developers to have knowledge and experience in provisioning and managing infrastructure.

One of our DevOps partners, Bluemeric, has been supporting organizations in attaining “DevOps excellence” and has recently announced the new version of its platform, goPaddle v3. goPaddle is an ALM platform for micro-services that takes an integration-first approach, where a micro-service architecture (design/composition) is created as the first step. Project management tools like Jira or Microsoft Team Foundation Server (TFS) can be used to create and manage software releases. goPaddle ensures that the active releases are ready for deployment at any time. It helps create pipelines and associate them with releases planned in Jira/TFS. These pipelines can be triggered at any time, and the build effectiveness can be monitored.

Developers can now focus on their application development while goPaddle helps to package and build the services in the form of Docker containers, test and deploy applications based on the pre-defined workflows in the release pipeline.

The micro-services are packaged as Docker containers so that the application can be designed once and deployed anywhere. Developers can leverage existing Azure cloud accounts to provision a clustering solution of their choice, like Kubernetes or Docker Swarm, and deploy their services. Azure Container Service (ACS) gives the flexibility to create scalable clusters like Docker Swarm and Mesos on top of Azure using VM scale sets. goPaddle provides a seamless integration with ACS, so that developers can create clusters on Azure in just a few clicks.

Try goPaddle with single sign-on using your Microsoft account, register an existing TFS account, plan software releases, create Docker Swarm or Kubernetes cluster on Azure and deploy scalable applications seamlessly.
Quelle: Azure

Announcing auto-shutdown for VMs using Azure Resource Manager

We are excited to announce that you can set any ARM-based virtual machine to auto-shutdown with a few simple clicks! This feature was originally available only to VMs in Azure DevTest Labs: your self-service sandbox environment in Azure to quickly create Dev/Test environments while minimizing waste and controlling costs.

In case you haven’t heard of it before, the goal of this service is to solve the problems that IT and development teams have been facing: delays in getting a working environment, time-consuming environment configuration, production fidelity issues, and high maintenance costs. It has been helping our customers quickly get “ready to test” with a worry-free self-service environment. The reusable templates in DevTest Labs can be used everywhere once created. The public APIs, PowerShell cmdlets, and VSTS extensions make it super easy to integrate your Dev/Test environments from labs into your release pipeline. In addition to the Dev/Test scenario, Azure DevTest Labs can also be used in other scenarios such as training and hackathons. For more information about its value propositions, please check out our GA announcement blog post. If you are interested in how DevTest Labs can help with training, check out this article on using Azure DevTest Labs for training.

In the past months, we’ve been very happy to see that auto-shutdown is among the most widely used policies by DevTest Labs customers. On the other hand, we also learned from quite a few customers that they have centrally managed Dev/Test workloads already running in Azure and simply want to set auto-shutdown for those VMs. Since those workloads have already been provisioned and are managed centrally, self-service is not really needed, and it is a little bit of overkill to create a DevTest lab in this case just for the auto-shutdown settings. That’s why we are making this popular feature, VM auto-shutdown, available to all ARM-based Azure VMs.

With this feature, setting auto-shutdown couldn’t be easier:

Go to your VM blade in the Azure portal.
Click Auto-shutdown in the resource menu on the left side.
In the auto-shutdown settings page that expands, specify the auto-shutdown time and time zone. You can also configure a notification to be sent to your webhook URL 15 minutes before auto-shutdown. This post illustrates how you can set up an Azure logic app to send auto-shutdown notifications, and a minimal webhook receiver sketch follows at the end of this post.

To learn more about this feature or to see what else Azure DevTest Labs can do for you, please check out our announcement on the Azure DevTest Labs team blog. To get the latest information on service releases or our thoughts on DevTest Labs, please subscribe to the team blog’s RSS feed and our Service Updates.

There are still a lot of things on our roadmap that we can’t wait to build and ship to our customers. Your opinions are valuable to us in delivering the right solutions for your problems. We welcome ideas and suggestions on what DevTest Labs should support, so please do not hesitate to create an idea at the DevTest Labs feedback forum, or vote on others’ ideas. If you run into any problems when using DevTest Labs or have any questions, we are ready at the MSDN forum to help you.
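As referenced above, here is a minimal, illustrative Python (Flask) sketch of a webhook endpoint that could receive the auto-shutdown notification. The payload fields used here (such as vmName) are assumptions made for the example, not the documented notification schema; inspect a real notification before relying on specific fields.

from flask import Flask, request

app = Flask(__name__)

@app.route("/autoshutdown", methods=["POST"])
def autoshutdown_notification():
    # The payload structure is assumed for illustration only.
    payload = request.get_json(silent=True) or {}
    vm_name = payload.get("vmName", "<unknown VM>")
    print(f"VM {vm_name} is scheduled to shut down in about 15 minutes")
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)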
Quelle: Azure

Announcing token authentication with Azure CDN

We are pleased to announce the general availability of token authentication with Azure CDN. This feature is available in the Azure CDN from Verizon Premium offering. We have enabled the feature for all new and existing Verizon Premium customers.

Token-based authentication is a great tool to handle authentication for multiple users. It scales easily and provides security. Main benefits of token authentication include:

Easily scalable, no need to store user login information on the server.
Mobile application ready solution.
Provides security: each request must contain the token, and after the token expires the user needs to log in again.
Prevents attacks such as cross-site request forgery (CSRF, also known as session riding).

By enabling this feature on Azure CDN, each request will be authenticated by the CDN edge POPs before the content is delivered, which prevents Azure CDN from serving assets to unauthorized users. This is typically done to prevent hotlinking of content, where a different website, often a message board, uses your assets without permission; this can have an impact on your content delivery costs.
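To illustrate the general idea of an expiring token, here is a generic HMAC-based Python sketch. This is an illustration of the concept only; the actual token format and key handling used by Azure CDN from Verizon are described in the feature documentation linked below.

import hashlib, hmac, time

SECRET_KEY = b"shared-secret-configured-on-the-cdn"   # illustrative placeholder

def make_token(path, ttl_seconds=300):
    """Create an expiring, signed token for a content path (generic illustration)."""
    expires = int(time.time()) + ttl_seconds
    message = f"{path}|{expires}".encode()
    signature = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return f"{expires}.{signature}"

def is_valid(path, token):
    """What an edge server conceptually checks before serving the asset."""
    expires, signature = token.split(".")
    if int(expires) < time.time():
        return False                                   # token expired: the user must authenticate again
    expected = hmac.new(SECRET_KEY, f"{path}|{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

token = make_token("/videos/clip.mp4")
print(is_valid("/videos/clip.mp4", token))   # True until the token expires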

Please read the full feature documentation to learn how to set up token authentication today!

More Information

CDN overview
Rules engine

Is there a feature you’d like to see in Azure CDN? Give us feedback!
Quelle: Azure

In-Memory OLTP in Azure SQL Database

We recently announced general availability for In-Memory OLTP in Azure SQL Database, for all Premium databases. In-Memory OLTP is not available in databases in the Standard or Basic pricing tiers today.

In-Memory OLTP can provide great performance benefits for transaction processing, data ingestion, and transient data scenarios. It can also help to save cost: you can improve the number of transactions per second, while increasing headroom for future growth, without increasing the pricing tier of the database.

For a sample order processing workload Azure SQL Database is able to achieve 75,000 transactions per second (TPS) in a single database, which is an 11X performance improvement from using In-Memory OLTP, compared with traditional tables and stored procedures. Mileage may vary for different workloads. The following table shows the results for running this workload on the highest available pricing tier, and also shows similar benefits from In-Memory OLTP even in lower pricing tiers.*

 

Pricing tier | TPS for In-Memory OLTP | TPS for traditional tables | Performance gain
P15          | 75,000                 | 6,800                      | 11X
P2           | 8,900                  | 1,000                      | 9X

Table 1: Performance comparison for a sample order processing workload

* For the run on P15 we used a scale factor of 100, with 400 clients; for the P2 run we used scale factor 5, with 200 clients. Scale factor is a measure of database size, where 100 translates to a 15GB database size, when using memory-optimized tables. For details about the workload visit the SQL Server samples GitHub repository.

In this blog post, we are taking a closer look at how the technology works, where the performance benefits come from, and how to best leverage the technology to realize performance improvements in your applications.

Keep in mind that In-Memory OLTP is for transaction processing, data ingestion, data load and transformation, and transient data scenarios. To improve performance of analytics queries, use Columnstore indexes instead. You will find more details about those in the documentation as well as on this blog, in the coming weeks.

How does In-Memory OLTP work?

In-Memory OLTP can provide great performance gains, for the right workloads. One of our customers, Quorum Business Solutions, managed to double a database’s workload while lowering DTU by 70%. In Azure SQL Database, DTU is a measure of the amount of resources that can be utilized by a given database. By reducing resource utilization, Quorum Business Solutions was able to support a larger workload while also increasing the headroom available for future growth, all without increasing the pricing tier of the database.

Now, where does this performance gain and resource efficiency come from? In essence, In-Memory OLTP improves performance of transaction processing by making data access and transaction execution more efficient, and by removing lock and latch contention between concurrently executing transactions: it is not fast because it is in-memory; it is fast because it is optimized around the data being in-memory. Data storage, access, and processing algorithms were redesigned from the ground up to take advantage of the latest enhancements in in-memory and high concurrency computing.

Now, just because data lives in-memory does not mean you lose it when there is a failure. By default, all transactions are fully durable, meaning that you have the same durability guarantees you get for any other table in Azure SQL Database: as part of transaction commit, all changes are written to the transaction log on disk. If there is a failure at any time after the transaction commits, your data is there when the database comes back online. In Azure SQL Database, we manage high availability for you, so you don’t need to worry about it: if an internal failure occurs in our data centers, and the database fails over to a different internal node, the data of every transaction you committed is there. In addition, In-Memory OLTP works with all high availability and disaster recovery capabilities of Azure SQL Database, like point-in-time restore, geo-restore, active geo-replication, etc.

To leverage In-Memory OLTP in your database, you use one or more of the following types of objects:

Memory-optimized tables are used for storing user data. You declare a table to be memory-optimized at create time.
Non-durable tables are used for transient data, either for caching or for intermediate result set (replacing traditional tables). A non-durable table is a memory-optimized table that is declared with DURABILITY=SCHEMA_ONLY, meaning that changes to these tables do not incur any IO. This avoids consuming log IO resources for cases where durability is not a concern.
Memory-optimized table types are used for table-valued parameters (TVPs), as well as intermediate result sets in stored procedures. These can be used instead of traditional table types. Table variables and TVPs that are declared using a memory-optimized table type inherit the benefits of non-durable memory-optimized tables: efficient data access, and no IO.
Natively compiled T-SQL modules are used to further reduce the time taken for an individual transaction by reducing CPU cycles required to process the operations. You declare a Transact-SQL module to be natively compiled at create time. At this time, the following T-SQL modules can be natively compiled: stored procedures, triggers and scalar user-defined functions.

In-Memory OLTP is built into Azure SQL Database, and you can use all these objects in any Premium database. And because these objects behave very similar to their traditional counterparts, you can often gain performance benefits while making only minimal changes to the database and the application. You will find a Transact-SQL script showing an example for each of these types of objects towards the end of this post.

Each database has a cap on the size of memory-optimized tables, which is associated with the number of DTUs of the database or elastic pool. At the time of writing you get one gigabyte of storage for every 125 DTUs or eDTUs. For details about monitoring In-Memory OLTP storage utilization and alerting, see: Monitor In-Memory Storage.
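For example, here is a trivial Python sketch of the storage-cap rule quoted above (1 GB per 125 DTUs/eDTUs at the time of writing); the DTU counts in the example calls are the standard values for those tiers:

def in_memory_oltp_storage_cap_gb(dtus):
    """In-Memory OLTP storage cap, assuming 1 GB per 125 DTUs/eDTUs (rule quoted above)."""
    return dtus / 125

print(in_memory_oltp_storage_cap_gb(250))   # P2: 250 DTUs -> 2.0 GB
print(in_memory_oltp_storage_cap_gb(4000))  # P15: 4000 DTUs -> 32.0 GB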

When and where do you use In-Memory OLTP?

In-Memory OLTP may be new to Azure SQL Database, but it has been in SQL Server since 2014. Since Azure SQL Database and SQL Server share the same code base, the In-Memory OLTP in Azure SQL DB is the same as the In-Memory OLTP in SQL Server. Because the technology has been out for a while, we have learned a lot about usage scenarios and application patterns that really see the benefits of In-Memory OLTP.

Resource utilization in the database

If your goal is to achieve improved performance for the users of your application, whether in terms of the number of requests you can support every second (i.e., workload throughput) or the time it takes to handle a single request (i.e., transaction latency), you need to understand where the performance bottleneck is. In-Memory OLTP lives in the database, and thus it improves the performance of operations that happen in the database. If most of the time is spent in your application code or in network communication between your application and the database, any optimization in the database will have a limited impact on overall performance.

Azure SQL Database provides resource monitoring capabilities, exposed both through the Azure portal and system views such as sys.dm_db_resource_stats. If any of the resources is getting close to the cap for the pricing tier your database is in, this is an indication of the database being a bottleneck. The main types of resources In-Memory OLTP really helps optimize are CPU and Log IO utilization.

Let’s look at a sample IoT workload* that includes a total of 1 million sensors, where every sensor emits a new reading every 100 seconds. This translates to 10,000 sensor readings needing to be ingested into the database every second. In the tests executed below we are using a database with the P2 pricing tier. The first test uses traditional tables and stored procedures. The following graph, which is a screenshot from the Azure portal, shows resource utilization for these two key metrics.

Figure 1: 10K sensor readings per second in a P2 database without In-Memory OLTP

We see very high CPU and fairly high log IO utilization. Note that the percentages here are relative to the resource caps associated with the DTU count for the pricing tier of the database.

These numbers suggest there is a performance bottleneck in the database. You could allocate more resources to the database by increasing the pricing tier, but you could also leverage In-Memory OLTP. You can reduce resource utilization as follows:

CPU:

Replace tables and table variables with memory-optimized tables and table variables, to benefit from the more efficient data access.
Replace key performance-sensitive stored procedures used for transaction processing with natively compiled stored procedures, to benefit from the more efficient transaction execution.

Log IO:

Memory-optimized tables typically incur less log IO than traditional tables, because index operations are not logged.
Non-durable tables and memory-optimized table variables and TVPs completely remove log IO for transient data scenarios. Note that traditional temp tables and table variables have some associated log IO.

Resource utilization with In-Memory OLTP

Let’s look at the same workload as above, 10,000 sensor readings ingested per second in a P2 database, but using In-Memory OLTP.

After implementing a memory-optimized table, memory-optimized table type, and a natively compiled stored procedure we see the following resource utilization profile.

Figure 2: 10K sensor readings per second in P2 database with In-Memory OLTP

As you can see, these optimizations resulted in a more than 2X reduction in log IO and 8X reduction in CPU utilization, for this workload. Implementing In-Memory OLTP in this workload has provided a number of benefits, including:

Increased headroom for future growth. In this example workload, the P2 database could accommodate 1 million sensors with each sensor emitting a new reading every 100 seconds. With In-Memory OLTP the same P2 database can now accommodate more than double the number of sensors, or increase the frequency with which sensor readings are emitted.
A lot of resources are freed up for running queries to analyze the sensor readings, or do other work in the database. And because memory-optimized tables are lock- and latch-free, there is no contention between the write operations and the queries.
In this example you could even downgrade the database to a P1 and sustain the same workload, with some additional headroom as well. This would mean cutting the cost for operating the database in half.

Do keep in mind that the data in memory-optimized tables does need to fit in the In-Memory OLTP storage cap associated with the pricing tier of your database. Let’s see what the In-Memory OLTP storage utilization looks like for this workload:

Figure 3: In-Memory OLTP storage utilization

We see that In-Memory OLTP storage utilization (the green line) is around 7% on average. Since this is a pure data ingestion workload, continuously adding sensor readings to the database, you may wonder, “how come the In-Memory OLTP storage utilization is not increasing over time?”

Well, we are using a memory-optimized temporal table. This means the table maintains its own history, and the history lives on disk. Azure SQL Database takes care of the movement between memory and disk under the hood. For data ingestion workloads that are temporal in nature, this is a great solution to manage the in-memory storage footprint.

* to replicate this experiment, change the app.config in the sample app as follows: commandDelay=1 and enableShock=0; in addition, to recreate the “before” picture, change table and table type to disk-based (i.e., MEMORY_OPTIMIZED=OFF) and remove NATIVE_COMPILATION and ATOMIC from the stored procedure

Usage scenarios for In-Memory OLTP

As noted at the top of this post, In-Memory OLTP is not a magic go-fast button, and is not suitable for all workloads. For example, memory-optimized tables will not really bring down your CPU utilization if most of the queries are performing aggregation over large ranges of data – Columnstore helps for that scenario.

Here is a list of scenarios and application patterns where we have seen customers be successful with In-Memory OLTP. Note that these apply equally to SQL Server and Azure SQL Database, since the underlying technology is the same.

High-throughput and low-latency transaction processing

This is really the core scenario for which we built In-Memory OLTP: support large volumes of transactions, with consistent low latency for individual transactions.

Common workload scenarios are: trading of financial instruments, sports betting, mobile gaming, and ad delivery. Another common pattern we’ve seen is a “catalog” that is frequently read and/or updated. One example is where you have large files, each distributed over a number of nodes in a cluster, and you catalog the location of each shard of each file in a memory-optimized table.

Implementation considerations

Use memory-optimized tables for your core transaction tables, i.e., the tables with the most performance-critical transactions. Use natively compiled stored procedures to optimize execution of the logic associated with the business transaction. The more of the logic you can push down into stored procedures in the database, the more benefit you will see from In-Memory OLTP.

To get started in an existing application, use the transaction performance analysis report to identify the objects you want to migrate, and use the memory-optimization and native compilation advisors to help with migration.

Data ingestion, including IoT (Internet-of-Things)

In-Memory OLTP is really good at ingesting large volumes of data from many different sources at the same time. And it is often beneficial to ingest data into a SQL database compared with other destinations, because SQL makes running queries against the data really fast, and allows you to get real-time insights.

Common application patterns are: Ingesting sensor readings and events, to allow notification, as well as historical analysis. Managing batch updates, even from multiple sources, while minimizing the impact on the concurrent read workload.

Implementation considerations

Use a memory-optimized table for the data ingestion. If the ingestion consists mostly of inserts (rather than updates) and In-Memory OLTP storage footprint of the data is a concern, either

Use a job to regularly batch-offload data to a disk-based table with a Clustered Columnstore index; or
Use a temporal memory-optimized table to manage historical data – in this mode, historical data lives on disk, and data movement is managed by the system.

The following sample is a smart grid application that uses a temporal memory-optimized table, a memory-optimized table type, and a natively compiled stored procedure, to speed up data ingestion, while managing the In-Memory OLTP storage footprint of the sensor data: release and source code.

Caching and session state

The In-Memory OLTP technology makes SQL really attractive for maintaining session state (e.g., for an ASP.NET application) and for caching.

ASP.NET session state is a very successful use case for In-Memory OLTP. With SQL Server, one customer was able to achieve 1.2 million requests per second. In the meantime, they have started using In-Memory OLTP for the caching needs of all mid-tier applications in the enterprise. Details: https://blogs.msdn.microsoft.com/sqlcat/2016/10/26/how-bwin-is-using-sql-server-2016-in-memory-oltp-to-achieve-unprecedented-performance-and-scale/

Implementation considerations

You can use non-durable memory-optimized tables as a simple key-value store by storing a BLOB in a varbinary(max) column. Alternatively, you can implement a semi-structured cache with JSON support in Azure SQL Database. Finally, you can create a full relational cache through non-durable tables with a full relational schema, including various data types and constraints.

Get started with memory-optimizing ASP.NET session state by leveraging the scripts published on GitHub to replace the objects created by the built-in session state provider.

Tempdb object replacement

Leverage non-durable tables and memory-optimized table types to replace your traditional tempdb-based temp tables, table variables, and table-valued parameters.

Memory-optimized table variables and non-durable tables typically reduce CPU and completely remove log IO, when compared with traditional table variables and temp table.

Case study illustrating benefits of memory-optimized table-valued parameters in Azure SQL Database: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/04/07/a-technical-case-study-high-speed-iot-data-ingestion-using-in-memory-oltp-in-azure/

Implementation considerations

To get started see: Improving temp table and table variable performance using memory optimization.

ETL (Extract Transform Load)

ETL workflows often include load of data into a staging table, transformations of the data, and load into the final tables.

Implementation considerations

Use non-durable memory-optimized tables for the data staging. They completely remove all IO, and make data access more efficient.

If you perform transformations on the staging table as part of the workflow, you can use natively compiled stored procedures to speed up these transformations. If you can do these transformations in parallel you get additional scaling benefits from the memory-optimization.

Getting started

The following script illustrates how you create In-Memory OLTP objects in your database.

— memory-optimized table
CREATE TABLE dbo.table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
  c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED=ON)
GO
— non-durable table
CREATE TABLE dbo.temp_table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
  c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED=ON,
      DURABILITY=SCHEMA_ONLY)
GO
— memory-optimized table type
CREATE TYPE dbo.tt_table1 AS TABLE
( c1 INT IDENTITY,
  c2 NVARCHAR(MAX),
  is_transient BIT NOT NULL DEFAULT (0),
  INDEX ix_c1 HASH (c1) WITH (BUCKET_COUNT=1024))
WITH (MEMORY_OPTIMIZED=ON)
GO
— natively compiled stored procedure
CREATE PROCEDURE dbo.usp_ingest_table1
  @table1 dbo.tt_table1 READONLY
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC
    WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT,
          LANGUAGE=N'us_english')

  DECLARE @i INT = 1

  WHILE @i > 0
  BEGIN
    INSERT dbo.table1
    SELECT c2
    FROM @table1
    WHERE c1 = @i AND is_transient=0

    IF @@ROWCOUNT > 0
      SET @i += 1
    ELSE
    BEGIN
      INSERT dbo.temp_table1
      SELECT c2
      FROM @table1
      WHERE c1 = @i AND is_transient=1

      IF @@ROWCOUNT > 0
        SET @i += 1
      ELSE
        SET @i = 0
    END
  END

END
GO
— sample execution of the proc
DECLARE @table1 dbo.tt_table1
INSERT @table1 (c2, is_transient) VALUES (N'sample durable', 0)
INSERT @table1 (c2, is_transient) VALUES (N'sample non-durable', 1)
EXECUTE dbo.usp_ingest_table1 @table1=@table1
SELECT c1, c2 from dbo.table1
SELECT c1, c2 from dbo.temp_table1
GO

A more comprehensive sample leveraging In-Memory OLTP and demonstrating performance benefits can be found at: Install the In-Memory OLTP sample.

The smart grid sample database and workload used for the above illustration of the resource utilization benefits of In-Memory OLTP can be found here: release and source code.

 

Try In-Memory OLTP in your Azure SQL Database today!

Resources to get started:

SQL In-Memory technologies in SQL Database

Quick Start 1: In-Memory OLTP Technologies for Faster T-SQL Performance

Use In-Memory OLTP in an existing Azure SQL Application.

Improving temp table and table variable performance using memory optimization

System-Versioned Temporal Tables with Memory-Optimized Tables

Quelle: Azure

Announcing 4 TB for SAP HANA, Single-Instance SLA and Hybrid Use Benefit Images

Today, I’m excited to announce three new Azure enhancements to enable you to run the largest enterprise workloads, offer you best-in-class support for those workloads, and deliver it at the lowest possible cost.

At Ignite, we announced the general availability of large instances, specifically designed for SAP HANA workloads scaling up to 3TB per node on Azure. We are now announcing even larger scale, offering 4TB on a single node for OLTP scenarios and 32 TB for multi-node scale-out OLAP deployments, both available in December 2016.
We are constantly looking to make it easier for you to move more enterprise workloads into the cloud and run them without any changes. Prior to today, the virtual machine (VM) availability SLA required at least two instances, which presented challenges for some existing on-premises workloads that could not scale-out or where scale and management were expensive and cumbersome. I am excited to announce that, starting today, Azure now offers an SLA availability commitment on single instance VMs.
We are also announcing the availability of several Microsoft Hybrid Use Benefit (HUB) Azure gallery images, making it much easier to take advantage of this benefit. This builds on our announcement a few months ago of support for the Microsoft Azure Hybrid Use Benefit (AHUB), which lets customers use on-premises Windows Server licenses that include Software Assurance to run Windows Server virtual machines on Azure at significant cost savings.

SAP HANA Large Instances

Earlier this year, we announced the availability of purpose-built infrastructure for running SAP HANA applications in Azure. These instances (called SAP HANA Large Instances) can power the largest SAP HANA workloads of any hyperscale public cloud provider. Today, we’re excited to announce two new instance types – a 2TB RAM and a 4 TB RAM type and the first Azure size based on the new Intel® Xeon® Processor E7 v4 family (codename Broadwell) – providing more choice to customers for their SAP HANA workloads, including certified NetWeaver SAP environments.

Here are the details on the additional two new SKUs:

SAP Solution | CPU | RAM | Storage
Optimized for OLAP: SAP BW, BW/4HANA, or SAP HANA for generic OLAP workloads (can also be used for OLTP and multi-tenant mixed workloads) | SAP HANA on Azure S192 – 4 x Intel® Xeon® Processor E7-8890 v4 | 2.0 TB | 8 TB
Optimized for OLTP: SAP Business Suite on SAP HANA or S/4HANA (OLTP), generic OLTP | SAP HANA on Azure S192m – 4 x Intel® Xeon® Processor E7-8890 v4 | 4.0 TB | 16 TB

Learn more about the Large Instance SKUs.

SAP HANA Large Instances offer an availability SLA of 99.99% for an HA pair, the highest among all hyperscale public cloud vendors. These instances provide built-in infrastructure support for backup and restore, high availability (HA) and disaster recovery (DR) scenarios. Additionally, these instances have integrated support with partners, including SUSE Linux Enterprise, Red Hat Enterprise Linux and SAP, so you can confidently bring your production workloads to Azure with HA, DR, and great performance.

Single Instance SLA

Over the last few months, we have done extensive work to improve availability of the Azure infrastructure, including innovative machine-learning to predict failing hardware early and offering premium storage to help improve reliability and performance of attached disks. Today, we are announcing a new 99.9% single-instance availability SLA to better support applications that cannot easily scale beyond single VMs. We hope this enables you to move even more workloads into Azure and take advantage of the agility of the cloud without compromising on your expectations of availability.

To qualify for the single instance virtual machine SLA, all storage disks attached to the VM must be using premium storage, which offers this high level of availability and performance with up to 80,000 IOPS and 2,000 MBps of disk throughput. In addition to this new availability commitment, customers can continue to build for multi-machine high availability by having two or more VMs deployed in the same Availability Set or by utilizing VM Scale Sets which both provide machine isolation, network isolation, and power unit isolation across multiple virtual machines.

For more information about our SLA for Virtual Machines, please visit our SLA page. For additional information on premium storage and how to get started on migrating your workloads, visit the product page.

Hybrid Use Benefit Images

With the Azure Hybrid Use Benefit, you can deploy your Windows Server licenses that include Software Assurance at a large discount, with up to 44% in annual savings. Starting today, instead of requiring you to upload those images, you can now deploy pre-built HUB Windows Server images straight from the Azure Marketplace (search for “[HUB] Windows Server”), or deploy them using your favorite client tools such as the new Azure CLI or Azure PowerShell. You can also use these Microsoft-certified HUB images to leverage the rich automation features in the portal with Azure QuickStart GitHub templates (e.g. storage configuration, automated backup, and automated patching). This provides flexibility, convenience, and big cost savings.

Learn more about the Hybrid Use Benefit on our documentation page.

With these announcements, you can deploy even more applications and solutions into Azure with ease, speed, and low cost. I hope you find these new capabilities valuable and will take advantage of them when deploying on Azure. I am looking forward to hearing about your experiences.

– Corey
Quelle: Azure