Application Insights support for Microservices and Containers

We’ve recently made it much easier to monitor microservices and containerized applications using Azure Application Insights. A single Application Insights resource can be used for all the components of your application. The health of the services and the relationships between them are displayed on a single Application Map. You can trace individual operations through multiple services with automatic HTTP correlation, and metrics from Docker and other containers can be integrated and correlated with application telemetry.

Segmenting Application Insights data by role

Until now, Application Insights has assumed that you create one Application Insights resource, and instrumentation key, for each server component or microservice in your application. With a microservices application, a single application can be composed of many different services, and it can be very time consuming to create and maintain separate resources. It’s also difficult to correlate results between them.

To solve this problem, we have added the capability to segment data in Application Insights by the cloud_RoleName property that is attached to all telemetry. This allows you to send the data from all your servers to a single Application Insights resource and filter on cloud_RoleName to see performance and health information for individual microservices. The cloud_RoleName property is set by the SDKs to represent the appropriate name for your microservice, container, or app name in Azure App Service.

In the Failures, Performance, and Metrics Explorer blades, you will see new Cloud role name properties in the filter menu. In the example above, we filtered the Failures blade to show just the information for our front-end web service, filtering out failures from the CRM API backend.

We’ve also enabled a preview feature that allows the Application Map to segment server nodes by cloud_RoleName. To enable this capability, set Multi-role Application Map to On from the Previews blade.
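Conceptually, role-based segmentation is just a group-by over the cloud_RoleName property carried on each telemetry item. Here is a toy Python sketch of that idea; the record shapes and role names are invented for illustration, and in practice the SDKs attach the property and the portal does the filtering for you:

```python
from collections import defaultdict

def segment_by_role(telemetry_items):
    """Group telemetry records by their cloud_RoleName property,
    mimicking how a single Application Insights resource can be
    filtered per microservice."""
    by_role = defaultdict(list)
    for item in telemetry_items:
        role = item.get("cloud_RoleName", "unknown")
        by_role[role].append(item)
    return by_role

# Hypothetical telemetry from two services sharing one resource
telemetry = [
    {"cloud_RoleName": "frontend-web", "type": "request", "success": True},
    {"cloud_RoleName": "crm-api", "type": "request", "success": False},
    {"cloud_RoleName": "frontend-web", "type": "exception"},
]

segments = segment_by_role(telemetry)
print(sorted(segments))               # ['crm-api', 'frontend-web']
print(len(segments["frontend-web"]))  # 2
```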
After enabling this preview, your map will show one server node for each unique value set in the cloud_RoleName field. The calls between servers are tracked by using correlation IDs passed in the headers of HTTP requests, which we’ll talk about next.

Automatic HTTP correlation that works in containers

With the latest SDKs, 2.4.0-beta3 for ASP.NET and 2.1.0-beta3 for ASP.NET Core, we automatically correlate calls between services by injecting headers into the HTTP requests and responses. Previously this functionality required installing an instrumenting profiler, using the Application Insights Site Extension for App Services, or installing Status Monitor for other Azure compute services. These extensions are difficult to provision in Service Fabric or Docker container environments.

Automatic correlation allows you to see all your microservices and containerized applications on the Application Map, and to see all the telemetry related to cross-server calls in a single view. For example, from an exception in our visitors app sample, you can click through to a correlated list of telemetry for that operation across the front-end web server and the back-end API. To take advantage of this capability, install the current pre-release versions of the Application Insights SDKs available on NuGet.

SDK support for .NET Core 2.0, Service Fabric, and Kubernetes

We have made improvements to our .NET SDKs so that the above features work for .NET applications running in Service Fabric and Kubernetes. Our ASP.NET Core 2.1 SDK now supports both .NET Core 1.1 and .NET Core 2.0 Preview 1, including automatic request and dependency tracking, so you get the full Application Insights experience with the latest versions of .NET Core. If your application runs on Service Fabric, you can use our Service Fabric SDKs for Application Insights.
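The correlation mechanism can be pictured as chaining operation IDs across services: the caller puts an ID in a request header, and the callee derives its own IDs from it so all telemetry lands under one operation. The sketch below only illustrates that parent/child idea; the actual header names and ID formats are defined by the Application Insights SDKs:

```python
import uuid

def new_root_operation():
    """Start a new distributed operation with a fresh root id."""
    return f"|{uuid.uuid4().hex}."

def child_request_id(parent_id, seq):
    """Derive a child id from the incoming parent id, so telemetry
    emitted by the downstream service stays correlated with the
    caller's operation."""
    return f"{parent_id}{seq}."

# Front-end starts an operation, then calls the back-end API,
# passing the id along in an HTTP request header.
root = new_root_operation()
outgoing_headers = {"Request-Id": child_request_id(root, 1)}

# The back-end can recover the root operation from the header,
# so both services' telemetry is grouped under one operation.
print(outgoing_headers["Request-Id"].startswith(root))  # True
```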
Add the Microsoft.ApplicationInsights.ServiceFabric.Native NuGet package for Service Fabric reliable services, and the Microsoft.ApplicationInsights.ServiceFabric package for guest executables and Docker containers. Many Service Fabric applications use EventSource for high-scale logging; you can now log EventSource events to Application Insights by adding the Microsoft.ApplicationInsights.EventSourceListener package. If your application runs in Docker on Kubernetes, you can use our Kubernetes SDK by adding the Microsoft.ApplicationInsights.Kubernetes package to your app. Use the NuGet package manager in Visual Studio, with the “include previews” option checked, to find and add the packages mentioned in this section.

Container metrics for Windows Docker containers

It’s useful to see CPU, memory, and other metrics for your individual Docker containers so that you can understand their health and achieve the right density of containers on physical machines. We have enriched the Windows Azure Diagnostics extension (WAD) with support for collecting metrics from Docker containers running on Windows. Once you have WAD installed, you can simply modify your diagnostics configuration file to collect Docker stats and send them to Application Insights.

Try it out today

We hope these new capabilities will give you a great experience using Application Insights with microservices and containerized applications. Be sure to check out the docs, try out the SDKs, and let us know how we can make Application Insights work better for you in these new environments. In addition to the capabilities listed in this post, we’ve recently announced many other improvements to Application Insights that will help you find and fix issues in your applications. As always, please share your ideas for new or improved features on the Application Insights UserVoice page. For any questions, visit the Application Insights Forum.
Source: Azure

Dear DocumentDB customers, welcome to Azure Cosmos DB!

Dear DocumentDB customers,

We are very excited that you are now a part of the Azure Cosmos DB family!

Azure Cosmos DB, announced at the Microsoft Build 2017 conference, is the first globally distributed, multi-model database service for building planet scale apps. You can easily build globally-distributed applications without the hassle of complex, multiple-datacenter configurations. Designed as a globally distributed database system, Cosmos DB automatically replicates all of your data to any number of regions of your choice, for fast, responsive access. Cosmos DB supports transparent multi-homing and guarantees 99.99% high availability.

Only Cosmos DB allows you to use key-value, graph, and document data in one service, at global scale, and without worrying about schema or index management. Cosmos DB allows you to use your favorite API, including SQL (DocumentDB), JavaScript, Gremlin, MongoDB, and Azure Table storage, to query your data. As the first and only schema-agnostic database, Azure Cosmos DB automatically indexes all your data, regardless of the data model, to eliminate any friction, so you can perform blazing-fast queries and focus on your app.

One of the APIs Azure Cosmos DB supports is the SQL (DocumentDB) API with the document data model. You're already very familiar with it and are already using it to run your current DocumentDB applications. These APIs are not changing – the NuGet package, the namespaces, and all dependencies remain the same. You don't need to change anything to continue running your apps built with the SQL (DocumentDB) API. You are simply now a part of a service that puts more capabilities at your disposal.

Why the move to Azure Cosmos DB?

The Cosmos DB project started in 2010 as “Project Florence” to address developer pain points faced by large Internet-scale applications inside Microsoft. Observing that these problems are not unique to Microsoft’s applications, in 2015 we made Cosmos DB generally available to external developers in the form of Azure DocumentDB – the service you’ve been using. The exponential growth of the service has validated our design choices and the unique tradeoffs we have made.

Azure Cosmos DB is the next big leap in globally distributed, at-scale cloud databases. As part of this release, DocumentDB customers, along with their data, automatically become Azure Cosmos DB customers. The transition is seamless, and you now have access to the breakthrough capabilities offered by Azure Cosmos DB – in the core database engine as well as in global distribution, elastic scalability, and industry-leading, comprehensive SLAs.

Specifically, Cosmos DB is all about providing intelligent choices to developers and enabling you to build planet scale apps.

Cosmos DB exposes multiple well-defined consistency models: Databases today offer only two extreme choices for consistency – “strong” consistency and “eventual” consistency. In contrast, Cosmos DB is the first production globally distributed database service to harvest a set of useful consistency models from decades of research and operationalize them. Cosmos DB offers five well-defined consistency models that provide clear tradeoffs between latency and availability, backed by SLAs.

Cosmos DB allows developers to model the real world in its true form: No data is born relational. Cosmos DB allows developers to store and query their data in its original form. It exposes graph, document, key-value, and column-family data models, and will enable others. The multi-model and multi-API capabilities remove friction, allowing you to build with any data model and API.

Cosmos DB meets developers where they are: Cosmos DB offers a multitude of APIs to access and query data, including SQL and various popular OSS APIs.
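The five consistency models referenced above span a spectrum from the strongest guarantees at the highest latency to the weakest guarantees at the lowest latency. A short sketch of the spectrum, with names per the Cosmos DB documentation and the tradeoff annotations as a simplified summary:

```python
# The five Cosmos DB consistency models, ordered from strongest
# guarantees / highest latency to weakest / lowest latency.
CONSISTENCY_MODELS = [
    "Strong",             # linearizable reads of the latest committed write
    "Bounded Staleness",  # reads lag writes by at most K versions or T time
    "Session",            # read-your-own-writes within a client session
    "Consistent Prefix",  # reads never observe out-of-order writes
    "Eventual",           # no ordering guarantee; lowest latency
]

print(len(CONSISTENCY_MODELS))  # 5
print(CONSISTENCY_MODELS[0], "->", CONSISTENCY_MODELS[-1])
```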

What are the extra capabilities you get?

The current developer-facing manifestation of this work is the new support for the Gremlin and Table storage APIs. And this is just the beginning: we will be adding other popular APIs and newer data models over time, along with further advances in performance and storage at global scale.

It is important to point out that DocumentDB’s SQL dialect has always been just one of the many APIs that the underlying Cosmos DB was capable of supporting. As a developer using a fully managed service like Azure Cosmos DB, the only interface to the service is the APIs exposed by the service. To that end, nothing really changes for you as an existing DocumentDB customer. Azure Cosmos DB offers exactly the same SQL API that DocumentDB did. However, now (and in the future) you can get access to other capabilities, which were previously not accessible.

Another manifestation of our continued work is the extended foundation for global and elastic scalability of throughput and storage. One of the very first manifestations of this is RU/m, with more capabilities to be announced in these areas. These new capabilities help reduce costs for our customers across various workloads; please read our recent blog on RU/m. We have also made several foundational enhancements to the global distribution subsystem. One of the many developer-facing manifestations of this work is the consistent prefix consistency model (bringing the total to five well-defined consistency models), and there are many more interesting capabilities we will release as they mature.

If you still have more questions

Here you can read answers to the questions most frequently asked by other DocumentDB customers about the Cosmos DB experience.

Next Steps

Thank you for being our customers! Cosmos DB wouldn’t be the same without you. We brought together your feedback, decades of distributed-systems research, and superb engineering and craftsmanship to create this service. Azure Cosmos DB is the database of the future – it is what we believe is the next big thing in the world of massively scalable databases! It makes your data available close to where your users are, worldwide. Our mission is to be the most trusted database service in the world and to enable you to build amazingly powerful, cosmos-scale apps, more easily.

Next, we recommend you:

Read the Azure Cosmos DB announcement blog and the technical overview blog
Understand the core concepts of Azure Cosmos DB
Learn more about the service and its capabilities by reading the documentation
Visit the pricing page to understand the billing

Try out the new capabilities in Azure Cosmos DB and let us know what you think! If you need any help or have questions or feedback, please reach out to us through askcosmosdb@microsoft.com. Stay up to date on the latest Azure Cosmos DB news (#CosmosDB) and features by following us on Twitter @AzureCosmosDB and joining our LinkedIn Group. We are really excited to see what you will build with Cosmos DB.

— Your friends at Azure Cosmos DB @AzureCosmosDB
Source: Azure

Azure #CosmosDB: Introducing Per Minute (RU/m) provisioning to lower your cost, increase your performance

Last week at our annual Build conference, we announced Azure Cosmos DB – our globally distributed, multi-model database service – and a set of new capabilities to enable developers to build apps that are out of this world. As a part of those new capabilities, customers can now provision request unit (RU) throughput at a per-minute granularity – we call it RU/m. This new option, currently in preview with a 50% discount, is complementary to the existing request unit per second (RU/s) provisioning model. With RU/s, you get predictable performance at the granularity of a second, but it also means that you must provision for spikes and bursty workloads to avoid throttling. Now, with RU/m, you can consume more of what you provision and save on costs. No need to provision for peak anymore!

By combining provisioning per second with provisioning per minute, you can now:

Address workloads with large spikes
Fit workload patterns that need minute granularity (common in IoT)
Have flexibility in a dev/test environment: the first thing our developers want to do is code, not think about how many request units they need
Substantially lower your per-second provisioning needs and save up to 60% in costs, since you don’t need to provision for your peak workloads anymore

With Azure Cosmos DB, our philosophy is to continuously innovate to deliver more value to our customers at lower cost. This new option combines both. Here is what some of our early adopter customers say about this new and exciting capability:

“RU/m is a real game changer for us, we see more than doubled “performance” in our load tests that simulate typical user’s behavior. And more importantly we are not blocked during temporary spikes of user’s activity.” – Sergii Kram, Lead Software Engineer, Quest

“The RU/m feature is exactly what our project needed.
Previously we had to provision our service to four times our normal max load so that we didn’t throttle requests during spikes in traffic. With the new RU/m feature, we were able to drastically reduce our DocDB cost and completely eliminate throttling during those spikes.” – Tyler Hennessy, Senior Software Engineer, Xbox

“We will definitely use this feature to avoid overprovisioning and save money. Our traffic pattern is very “spiky” (multiple parallel data collection threads dump data hourly) and enabling RU/m provisioning provided the same service quality with a much lower overall throughput. An iterative tuning approach of “adjust-and-monitor” allowed us to scale the setup to a usable production configuration in a few days.” – Andreas Schiffler, Senior Software Engineer, Windows Servicing & Delivery – Data Analytics (WSD DA)

How does RU/m work?

RU/m is aligned with RU/s. Most important, RU/m can be enabled with a click in the portal or with a single line of code by using the SDKs. The amount of RU/m you get is linear with how many RU/s you provision. RU/m is billed hourly, in addition to reserved RU/s. You can think of RU/m as a flexible budget of RUs to consume within a minute. Pricing is fixed, so you always get low cost and financial predictability, without taking on the risk of variable pricing.

RU/m can be enabled at the container level, through the SDKs (Node.js, Java, or .NET) or through the portal (this also includes MongoDB API workloads).
For every 100 RU/s provisioned, you also get 1,000 RU/m provisioned (the ratio is 10x). This means that if you get 1,000 RU/s with 10,000 RU/m for a full month, you will spend $80/month ($60 for 1,000 RU/s + $20 for 10,000 RU/m with preview pricing).
At a given second, a request unit will consume your RU/m provisioning only if you have exceeded your per-second provisioning. Within a 60-second period (UTC), the per-minute provisioning is refilled.
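The budget and pricing mechanics just described can be sketched in a few lines of Python. This is a simplified, illustrative model only, not the service’s actual accounting: it assumes the 10x RU/m ratio and the preview rates implied by the pricing example above ($60 per 1,000 RU/s and $20 per 10,000 RU/m per month), and it omits what happens when the budget is exhausted (throttling):

```python
def monthly_cost(ru_per_sec):
    """Estimated monthly cost under the preview rates quoted above:
    $60 per 1,000 RU/s plus $20 per 10,000 RU/m (RU/m is 10x RU/s)."""
    ru_per_min = ru_per_sec * 10
    return ru_per_sec / 1_000 * 60.0 + ru_per_min / 10_000 * 20.0

def simulate_budget(ru_per_sec, consumption_by_second):
    """Track the RU/m budget: each second, consumption above the
    per-second provisioning draws down a per-minute budget that is
    refilled at every 60-second boundary."""
    ru_per_min = ru_per_sec * 10
    budget = ru_per_min
    history = {}
    for second in sorted(consumption_by_second):
        if second % 60 == 1:                # new minute: budget refilled
            budget = ru_per_min
        overflow = max(0, consumption_by_second[second] - ru_per_sec)
        budget -= overflow                  # a negative budget would mean throttling
        history[second] = budget
    return history

print(monthly_cost(1_000))                  # 80.0, matching the pricing example above
h = simulate_budget(10_000, {1: 9_000, 3: 11_010, 61: 8_000})
print(h[3])                                 # 98990: 1,010 RUs over provisioning deducted
print(h[61])                                # 100000: budget refilled on the new minute
```

The same model reproduces the worked example that follows: with 10,000 RU/s provisioned, a second that consumes 11,010 RUs deducts 1,010 RUs from the 100,000 RU/m budget.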
RU/m can be enabled only on containers with no more than 5,000 RU/s per partition provisioned. You can decide which types of operations can access the RU/m budget; for example, you can use the RU/m budget only for critical operations and disable RU/m for ad-hoc operations such as queries (find more in the documentation).

A concrete example

Below is a concrete example, in which a customer can provision 10K RU/s with 100K RU/m, saving 73% in cost against provisioning for peak (at 50K RU/s). During a 90-second period on a collection that has 10,000 RU/s and 100,000 RU/m provisioned:

Second 1: The RU/m budget is set at 100,000.
Second 3: During that second the consumption of request units was 11,010 RUs, 1,010 RUs above the RU/s provisioning. Therefore, 1,010 RUs are deducted from the RU/m budget, leaving 98,990 RUs available for the next 57 seconds.
Second 29: During that second, a large spike happened (>4x the per-second provisioning) and the consumption of request units was 46,920 RUs. 36,920 RUs are deducted from the RU/m budget, which dropped from 92,323 RUs (28th second) to 55,403 RUs (29th second).
Second 61: The RU/m budget is refilled to 100,000 RUs.

Enabling/Disabling RU/m

You can enable RU/m at the container level through the SDK or the portal. Through the portal, you only need to click Scale, select the container you want, and enable RU/m. To learn how to provision RU/m through the SDK, please refer to the documentation. Currently RU/m is available for the following SDKs:

.NET 1.14.1
Java 1.11.0
Node.js 1.12.0
Python 2.2.0

Support for other SDKs will be added soon.

Scenarios and Impact with Early Adoption Customers

During our beta preview, we identified some interesting and illustrative scenarios to test how big a performance improvement and how much savings our customers were able to achieve with RU/m at scale and worldwide.
By referring to these scenarios and our multi-step approach to gradually optimizing your throughput, we hope you can replicate the same improvements. The documentation shows how the portal metrics can be used to monitor throttling and RU consumption.

Example 1: Leverage RU/m to reduce throttling

In an e-commerce scenario, a retailer may expect spikes when a merchant registers a new batch of items in their inventory. A customer had a container with 400,000 RU/s provisioned, and 1.68% of requests were throttled due to insufficient provisioning for spikes. As soon as RU/m provisioning was enabled, this customer experienced an 88% drop in throttled requests (down to 0.2%). As a second step, this customer lowered their provisioned capacity at per-second granularity – from 400,000 RU/s to 300,000 RU/s, with a throttling rate of 0.25%. As a third step, the customer lowered throughput provisioning to 200,000 RU/s (and 2M RU/m), with a throttling rate of 1.12%. Finally, their ideal provisioning level was found at 250K RU/s with 2.5 million RU/m.

Outcome:
17% cost saving on provisioning
80% of throttling eliminated

Example 2: Reduce throttling and lower provisioning costs with a spiky workload

In this case, a customer was storing telemetry data for devices with very spiky needs due to sporadic queries. This customer had a partitioned container with 100,000 RU/s provisioned. Due to spiky needs, and despite high provisioning, this customer experienced some throttling (0.0109% of requests being throttled). Right after enabling RU/m, the ratio of throttled requests dropped to 0.000567%, a 95% elimination of throttling. As a second step, they lowered the provisioning to 80K RU/s + 800K RU/m and were still able to hold a similar ratio of 0.000677% throttled requests. As a third step, they decreased the provisioning to 50K RU/s + 500K RU/m.
Throttling increased to 0.0121%, so the customer increased the per-second provisioning back to 60K RU/s + 600K RU/m, and throttling dropped back to 0.00199%.

Outcome:
20% cost saving on provisioning
80% of throttling eliminated

Example 3: Lower provisioning cost and eliminate small throttling

A customer from the gaming industry stored data with mostly predictable access and just a few small spikes. They had provisioned 8,000 RU/s for one single partition and experienced a little throttling (0.000053% of requests throttled). RU/m was a perfect capability to eliminate any throttling and give the customer peace of mind. Working together, we also quickly realized that their workload had the potential to be further optimized. First, to enable RU/m, we had to lower their single-partition provisioning to 5,000 RU/s (RU/m works only on partitions with a maximum of 5,000 RU/s). Despite a drop of 3,000 RU/s in provisioning, we were able to eliminate all throttling. Since the consumption of RU/m was minimal, this was a signal that we could lower the provisioning to 4,000 RU/s while keeping RU/m. They didn’t experience any throttling and were able to use more than 18% of what they provisioned. As seen in the graph below, we ended up provisioning only 2,000 RU/s with 20,000 RU/m while eliminating all throttling. Their average cost per consumed RU was lower than any existing cloud service with throughput provisioning or consumption – less than $0.10 per million RUs consumed, 75% cheaper than object store read transactions.
Outcome:
53% cost saving on provisioning
100% of throttling eliminated (initially at a low level)

Example Use-Cases Summary

Example 1: initial 400,000 RU/s → final 250,000 RU/s + 2,500,000 RU/m (17% cost saving, 80% of throttling eliminated)
Example 2: initial 100,000 RU/s → final 60,000 RU/s + 600,000 RU/m (20% cost saving, 80% of throttling eliminated)
Example 3: initial 8,000 RU/s → final 2,000 RU/s + 20,000 RU/m (53% cost saving, 100% of throttling eliminated)

Resources

Our vision is to be the most trusted database service for all modern applications. We want to enable developers to truly transform the world we are living in through the apps they are building, which is even more important than the individual features we are putting into Azure Cosmos DB. We spend limitless hours talking to customers every day and adapting Azure Cosmos DB to make the experience truly stellar and fluid. We hope that the RU/m capability will enable you to do more and will make your development and maintenance even easier! So, what are the next steps you should take?

First, understand the core concepts of Azure Cosmos DB
Learn more about RU/m by reading the documentation: how RU/m works, enabling and disabling RU/m, good use cases, optimizing your provisioning, and specifying access to RU/m for specific operations
Visit the pricing page to understand billing implications

If you need any help or have questions or feedback, please reach out to us through askcosmosdb@microsoft.com. Stay up to date on the latest Azure Cosmos DB news (#CosmosDB) and features by following us on Twitter @AzureCosmosDB and joining our LinkedIn Group.

About Azure Cosmos DB

Azure Cosmos DB started as “Project Florence” in late 2010 to address developer pain points faced by large-scale applications inside Microsoft.
Observing that the challenges of building globally distributed apps are not a problem unique to Microsoft, in 2015 we made the first generation of this technology available to Azure Developers in the form of Azure DocumentDB. Since that time, we’ve added new features and introduced significant new capabilities.  Azure Cosmos DB is the result. It represents the next big leap in globally distributed, at scale, cloud databases.
Source: Azure

New Azure Quickstart template – Cloudera CDH and Tableau Server by Slalom

Developers and IT-Pros usually start their Cloud journey by installing software on virtual machines.  While this makes it easy to create running Infrastructure as a Service (IaaS), it requires additional work to load data and develop a coherent business solution.  Today we’re excited to announce a quick and easy way for the moderately technical user to get started with a pre-bundled visual analytics & big data solution running on Microsoft Azure: The Tableau/Cloudera Quickstart, built by Slalom.

We are also inviting everyone to join us on May 24, 2017, for a Webinar and demonstration of this solution delivered by Slalom and Microsoft. RSVP here – Webinar: Multi-Server Cloudera deployment on Azure.

To process and rationalize the large amounts of data produced in today’s modern businesses, users need flexible, sophisticated visualization tools combined with Internet-scale data processing clusters.  As you may already know, Tableau’s Server is acclaimed for self-service data visualization and collaboration, whereas Cloudera’s Enterprise Data Hub (CDH) framework provides robust data management and processing. Numerous customers are deploying these solutions every day on Azure, and both solutions can be found in the Azure Marketplace.

As a way for our customers to quickly experience the power of Azure and the strengths of our market-leading partners Tableau and Cloudera, Microsoft teamed up with Slalom to develop a sample, integrated business solution. Whether you deploy for trial purposes, a proof of concept, or a production environment, you can get started with this solution with very little configuration and setup effort. Simply follow the Quickstart Cloudera + Tableau Deployment Guide, and you will end up with a pre-configured, Azure-optimized cluster of VMs running Cloudera CDH, pre-loaded with data. This cluster will also contain a Tableau Server VM that is pre-configured with a Tableau retail dashboard that allows you to work with and analyze the Cloudera CDH data.

The solution is designed to configure all the necessary components and the network topology, so you can focus on your data ingestion, management, analytics, and insights. If you have specific environment requirements or you prefer to customize the setup, you also have the flexibility to fork the repository, update the ARM Template and deploy the Cloudera with Tableau solution to fit your needs.

We hope this helps with your Azure journey with Cloudera and Tableau – and we are looking forward to your comments. Please feel free to reach out.

 

Suggested learning path:

Check the GitHub Repo with the Cloudera and Tableau Quickstart
Visualize the Quickstart and see the underlying JSON
Review the Deployment and Usage Guide
Deploy the Quickstart directly to your Azure subscription
Reach out to Slalom if you need help
Explore other Azure Partner Quickstarts to push your Azure skills to the next level

Practical aspects:

(1)   Licenses – Both the Cloudera and Tableau solutions are provided under a Bring-Your-Own-License (BYOL) model, with an initial free trial: 60 days for Cloudera CDH and 14 days for Tableau Server. After the initial trial period, you will need to reach out to the respective ISVs for licenses.

(2)   Versions:

Cloudera CDH 5.4.x Apache Hadoop deployment on CentOS
Tableau Server 10.1 running on Windows

Acknowledgments

This article is a collaboration between several people. Special thanks to Nicolas Caudron and Gil Isaacs.
Quelle: Azure

Optimize content delivery for your scenario with Azure CDN

When delivering content to a large global audience, it is critical to ensure optimized delivery of your content. This new capability accelerates and optimizes performance based on the scenario for which you use Azure CDN, such as general website or web application delivery, media streaming, or file download. Optimization is applied by default to the scenario you specify in the "optimized for" option when you create a CDN endpoint.

The optimizations we apply include caching, object chunking, and origin failure retry policy, among others, depending on the specific scenario. Media streaming is time sensitive: packets arriving late on the client can cause a degraded viewing experience, for example frequent buffering of video content. The new enhancements reduce the latency of media content delivery. For large file download, object chunking is critical: files are requested in smaller chunks from the origin to ensure a smooth download experience. We apply these enhancements based on experience with many customers, and we will continue adding settings to improve content delivery performance.
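Object chunking is easy to picture as splitting a large object into byte ranges that can be fetched from the origin one at a time. A minimal sketch of the idea (the chunk size here is arbitrary; the CDN chooses its own internally):

```python
def chunk_ranges(object_size, chunk_size):
    """Split an object into inclusive byte ranges, the way a CDN edge
    can fetch a large file from the origin in smaller chunks."""
    ranges = []
    start = 0
    while start < object_size:
        end = min(start + chunk_size, object_size) - 1
        ranges.append((start, end))  # usable in a "Range: bytes=start-end" header
        start = end + 1
    return ranges

print(chunk_ranges(10, 4))  # [(0, 3), (4, 7), (8, 9)]
```

Because each chunk is a self-contained request, a failed or slow chunk can be retried without restarting the whole download, which is what makes the download experience smooth.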

You can optimize the CDN endpoint to one of the following scenarios:

General web delivery
General media streaming
Video on demand media streaming
Large file download

When creating a new CDN endpoint, simply select from the drop down that best matches your scenario.

Depending on the optimizations CDN providers support and how they apply enhancements in different scenarios, the "optimized for" options can vary based on the provider you select. Currently, Azure CDN from Akamai supports general web delivery, general media streaming, video on demand media streaming, and large file download. Azure CDN from Verizon supports general web delivery, which you can also use for video on demand and large file download. We highly recommend testing the performance of the different providers to select the optimal one for your delivery.

Read also:

Enable Azure CDN
Azure CDN overview

Is there a feature you'd like to see in Azure CDN? Give us feedback!
Source: Azure

Upgrade classic Backup and Site Recovery vaults to ARM Recovery Services vaults

Today, we are pleased to offer seamless upgrade of classic Backup or Site Recovery vaults to ARM-based Recovery Services vaults.

In May 2016, we announced the general availability of Recovery Services (RS) vaults based on Azure Resource Manager for Azure Backup and Azure Site Recovery. Since then, we have announced many new features for both services that are available only with Recovery Services vaults. The new upgrade feature allows customers on the older classic vaults of both Backup and ASR to seamlessly upgrade to RS vaults with minimal downtime, no loss of data, recovery points, or configuration settings, and take advantage of all the new features available with RS vaults.

New features available only in Recovery Services vaults

Backup

Enhanced capabilities to help secure your backup data: With Recovery Services vaults, Azure Backup provides security capabilities to protect cloud backups. These security features ensure that you can secure your backups and recover data using cloud backups even if production and backup servers are compromised. Learn more
Central monitoring for your hybrid IT environment: With Recovery Services vaults, you can now monitor not only your Azure IaaS VM backups but also your on-premises backups from a central portal. Learn more
Role-based access control: Recovery Services vaults are based on the Azure Resource Manager model and thus bring you the benefits of RBAC, restricting backup and restore access to a defined set of user roles. Learn more
Protect all configurations of Azure Virtual Machines: Recovery Services vaults can protect Resource Manager based V2 VMs including Premium Disks, Managed Disks and Encrypted VMs. This allows you to upgrade your classic V1 VMs to V2 VMs and retain your older recovery points for the V1 VMs as well as configure protection for the newly upgraded V2 VMs in the same vault. Learn more
Instant restore for IaaS VMs: Using Recovery Services vaults, you can now restore files and folders without having to restore the entire IaaS VM, enabling faster restores. This support is available for both Windows and Linux VMs. Learn more

Site Recovery

Azure Resource Manager support: You can protect and fail over your virtual machines and physical machines into the Azure ARM stack. You also get the benefits of RBAC, restricting replication and recovery operations to a defined set of user roles.
Streamlined ‘Getting Started’ experience: Simplicity has been the design goal of Recovery Services vaults. The new Getting Started experience vastly simplifies setting up disaster recovery for your applications.
Exclude Disk: This feature allows you to exclude specific disks that do not contain important data from being replicated. You can save storage and network resources by not replicating unwanted churn.
Support for Premium and Locally-redundant Storage (LRS): You can now protect applications that need higher IOPS by replicating into premium storage.
Support for managed disks: You can attach managed disks to your machines after failover to Azure. Using managed disks simplifies disk management for Azure IaaS VMs by managing the storage accounts associated with the VM disks.

For more details, please refer to this blog.

No impact to ongoing replication or existing backups

The entire upgrade process has been designed to be quick, smooth and easy to perform.

There is no physical data movement between the old vault and the upgraded vault and the upgrade process only involves updating configuration settings.
Once started, the upgrade typically takes about 15-30 minutes. During periods of heavy load, this could take up to 1 hour.
During the upgrade process, replication and scheduled backups will continue to happen, so you will remain protected.
During the upgrade, you will not be able to perform management operations.

For Backup, these include new registrations, configuring new cloud backups, and restoring IaaS VMs.
For Site Recovery, these include operations such as registering a new server, performing a test failover, executing a failover, or failing back.

Post upgrade, all your settings and configuration will be retained, including all backup and recovery points that you created from the classic vault.

How to upgrade?

There is significant demand for upgrading to RS vaults, and we are expecting a large number of customers to sign up. To streamline this process, we will be releasing customers into the upgrade queue in batches. You can sign up for the upgrade using the following links:

Backup vaults: Sign up
Site Recovery vaults: Sign up

Once your subscription has been white-listed for upgrade, Microsoft will contact you to proceed with the upgrade.
Source: Azure

General Availability: Azure Search parses JSON Blobs

Today, we are happy to announce general availability for JSON parsing with Azure Search’s Blob Storage indexer.

Azure Search has long supported indexers for a variety of data sources on Azure: DocumentDB, Azure SQL Database, Tables, and Blobs. Indexers allow Azure Search to automatically pull data (along with changes and deletions) into an Azure Search index without your writing any code. The Blob indexer in particular is interesting because it can crack open and index a multitude of file types: Office documents, PDFs, HTML files, and more.

With today’s announcement, we are releasing the ability for the Blob Storage indexer to parse JSON content stored in blobs. This capability is not currently configurable in the Azure Portal. Note that support for parsing multiple documents from JSON arrays remains in preview.

Indexing JSON objects

With JSON parsing enabled, the Blob Storage Indexer can index properties of JSON objects, like the example below, into separate fields in your search index.

{
"text" : "A hopefully useful article explaining how to parse JSON blobs",
"datePublished" : "2016-04-13",
"tags" : [ "search", "storage", "howto" ]
}

To set up JSON parsing, create a datasource as usual:

POST https://[service name].search.windows.net/datasources?api-version=2016-09-01
Content-Type: application/json
api-key: [admin key]

{
"name" : "my-blob-datasource",
"type" : "azureblob",
"credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=;AccountKey=;" },
"container" : { "name" : "my-container", "query" : "optional, my-folder" }
}
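Before creating the indexer, the target index needs fields whose names and types match the JSON properties you want indexed. As a sketch (the key field and its population are illustrative, and the field names follow the JSON object shown earlier), the index might be defined as:

```
POST https://[service name].search.windows.net/indexes?api-version=2016-09-01
Content-Type: application/json
api-key: [admin key]

{
"name" : "my-target-index",
"fields" : [
{ "name" : "id", "type" : "Edm.String", "key" : true },
{ "name" : "text", "type" : "Edm.String" },
{ "name" : "datePublished", "type" : "Edm.DateTimeOffset" },
{ "name" : "tags", "type" : "Collection(Edm.String)" }
]
}
```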

Then, create an indexer (https://docs.microsoft.com/rest/api/searchservice/create-indexer) and set the parsingMode parameter to json.

POST https://[service name].search.windows.net/indexers?api-version=2016-09-01
Content-Type: application/json
api-key: [admin key]

{
"name" : "my-json-indexer",
"dataSourceName" : "my-blob-datasource",
"targetIndexName" : "my-target-index",
"schedule" : { "interval" : "PT2H" },
"parameters" : { "configuration" : { "parsingMode" : "json" } }
}
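After the indexer is created, you can check whether its last run succeeded and how many documents were processed. A minimal status request, using the names from the example above:

```
GET https://[service name].search.windows.net/indexers/my-json-indexer/status?api-version=2016-09-01
api-key: [admin key]
```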

Azure Search only supports primitive data types, string arrays, and GeoJSON points, which means that the Blob Storage indexer cannot index arbitrary JSON. However, it is possible to select parts of the JSON object and “lift” them to top-level fields of an Azure Search document. To learn more about this, visit our documentation on field mappings. 
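As a sketch of what such a field mapping can look like (the property path and target field name here are hypothetical), a nested JSON property can be lifted to a top-level index field using a JSON Pointer-style source path:

```
{
"name" : "my-json-indexer",
"dataSourceName" : "my-blob-datasource",
"targetIndexName" : "my-target-index",
"parameters" : { "configuration" : { "parsingMode" : "json" } },
"fieldMappings" : [
{ "sourceFieldName" : "/article/author/name", "targetFieldName" : "authorName" }
]
}
```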

Learn More

To read more about Azure Search and its capabilities, visit our documentation. To learn about the various tiers of service that fit your needs, please visit our pricing page.
Source: Azure

Enhancements to Application Insights Smart Detection

We’re happy to introduce two enhancements to Smart Detection in Application Insights, which automatically notifies you if your live web app shows performance issues. You will now get an automatic notification if your live web app dependencies slow down – for example, a database or REST API that your app calls. And you can click through from Smart Detection details to the profiler trace of an operation where the problem has occurred.

Smart Detection of degradation in dependency duration

Smart Detection now detects degradation in the duration of dependency calls. Web applications, and modern services in general, often rely on external services and platforms to power key scenarios. For those applications, dependency call duration is a key factor in overall application performance. Smart Detection will automatically notify you about changes in dependency duration compared to its historical performance. This is supported for all dependency call types. Learn more here.

We also added an analysis of dependency duration to Server Response Time Degradation detections. If a degradation in operation performance is correlated with a degradation in the performance of a related dependent service, this information will be included in the Detection Analysis section.

 

Smart Detection and Profiler integration

Another new enhancement is the integration between Smart Detection and the Profiler. Sometimes the fastest way to find a root cause is to review trace examples collected by the Profiler, which show where time was spent while executing a specific operation. Now you can view the relevant Profiler examples by clicking a link on the Smart Detection details blade.

 

Note: The link to view Profiler examples will appear only if examples were collected for the operation during the detection period.
Source: Azure

The best public cloud for SAP workloads gets more powerful

More and more enterprise customers are realizing the benefits of moving their core business applications to the cloud. Many have moved beyond the conversation of “why cloud” to “which cloud provider.” Customers want the assurance of performance, privacy, and scale for their mission-critical applications. Microsoft Azure sets the bar for scale and performance, leads in compliance and trust measures, and offers the most global reach of any public cloud. Specifically for SAP workloads, our strong partnership with SAP enables us to provide our mutual customers best-in-class support for their most demanding enterprise applications.

We’ve invested deeply to ensure that Azure is the best public cloud for our customers’ SAP HANA workloads. Azure provides the most powerful and scalable infrastructure of any public cloud provider for HANA. Azure also offers customers the ability to extract more intelligence from their SAP solution environments with AI and analytics, and our broad, longstanding partnership with SAP includes integrations with Office 365 to help customers enhance productivity, too. Lastly, we have an enormous partner ecosystem ready to help enterprises succeed with SAP solution workloads.

I’m pleased to announce several new advancements to this key area of focus:

Support for running some of the largest public cloud estates of SAP HANA, across both virtual machines and Large Instance offerings.

To power SAP HANA and other high-end database workloads, we’re introducing M-Series virtual machines, powered by the Intel® Xeon® processor E7-8890 v3, that support single-node configurations of up to 3.5 TB of memory. This will allow customers to quickly spin up a new virtual machine to test a new business process scenario, then turn it off when testing is done to avoid incurring additional costs.
Real-time, transactional business applications need scale-up power within a single node. For customers using OLTP landscapes like SAP S/4HANA or Suite on HANA (SoH) that go beyond the limits of today’s hypervisors, we’re announcing a range of new SAP HANA on Azure Large Instance SKUs, powered by the Intel® Xeon® processor E7-8890 v4, with 4 TB to 20 TB of memory.
SAP business warehousing environments that harness and capture intelligence from massive volumes of data require multi-node, scale-out systems. We’re introducing support for SAP HANA Large Instances with up to 60 TB of memory for potential future use with applications like SAP BW and SAP BW/4HANA.

SAP Cloud Platform: SAP’s platform-as-a-service offering is now available as a public preview hosted on Microsoft Azure. Customers can take advantage of the pre-built SAP Cloud Platform components to build business applications while leveraging the broader toolset of Azure services.
SAP HANA Enterprise Cloud: In cooperation with SAP, we are working to make Azure available as a deployment option for SAP HANA Enterprise Cloud, SAP’s secure managed cloud offering. Customers will benefit from Azure’s enterprise-proven compliance and security, in addition to close connections between their other Azure workloads and SAP solutions running on Azure in SAP HANA Enterprise Cloud.
SAP and Azure Active Directory Single-Sign-On: SAP Cloud Platform Identity Authentication Services are now integrated with Azure Active Directory. This integration enables customers to implement friction-free, web-based, single-sign-on capabilities across all SAP solutions that integrate with SAP Cloud Platform Identity Authentication. In addition, SAP SaaS solutions (e.g. Concur, SAP SuccessFactors, etc.), as well as core SAP NetWeaver-based solutions or SAP HANA, are integrated with Azure Active Directory.

Last year, Satya Nadella took the stage at SAPPHIRE, announcing a new era of partnership with SAP – and at the time we shared that early adopters, Coats LLC and Rockwell Automation, were using Azure Large Instance infrastructure to run their SAP solution environments.  Since then, we’ve seen tremendous momentum with customers choosing to deploy SAP on Azure. Just a few examples of the companies deciding on Azure as their cloud platform for SAP solution landscapes are:

Accenture: This is the largest business warehousing SAP HANA deployment in the public cloud, running Accenture’s own mission-critical financial reporting systems on Azure. For more about Accenture’s use of Azure, don’t miss their session at SAPPHIRE NOW’17 this week.
Pact Group: By choosing to migrate their on-premises SAP servers to Azure, this Asia-Pacific packaging company anticipates annualized savings of 20 percent with their business now running more than 90% of its applications and compute on Azure.
Mosaic: One of the world’s largest producers of phosphate and potash crop nutrients moved all its global financial, commercial, and supply-chain SAP systems to Azure, increasing speed and agility. The company anticipates a year over year cost savings of 20 percent.
IXOM: When they needed to separate from their parent company, water treatment and chemical distributor IXOM chose to move its SAP applications to Microsoft Azure, taking a cloud-first approach to the future.
Subsea7: This world-leading seabed-to-surface engineering, construction and services contractor is working with Accenture to unlock cost savings and greater efficiencies through SAP Business Suite on SAP HANA hosted on Azure. Subsea 7 aims to deliver a simpler, faster and more tightly integrated landscape to its global employees, providing more mobility, agility and an enhanced user experience.

One of the key advantages of Azure is that customers can drive intelligence and insights from their SAP solution environments by integrating with solutions like Power BI and Cortana Intelligence, powering new business opportunities and efficiencies.

Our integration partners are a critical part of SAP solutions on Azure deployments as they help ensure customer success, leveraging their skill and expertise across both the Microsoft and SAP ecosystems. At SAPPHIRE NOW’17, Microsoft will host some of our top Global System Integrator partners including Accenture, Cognizant, HCL, Infosys, TCS and Wipro to showcase the value and operational improvements each of these organizations can provide to customers looking to deploy SAP on Azure.  

For more information on today’s announcements, please go here. And if you’re going to be at SAPPHIRE, be sure to stop by our booth, #468.
Source: Azure

Migrating to Azure Data Sync 2.0

We’ve made improvements to SQL Data Sync, including PowerShell programmability, better security and resilience, enhanced monitoring and troubleshooting, and availability in more regions. As part of these changes, SQL Data Sync will be available only in the Azure portal beginning July 1, 2017. This blog post covers the steps current active users will take to migrate to the new service. This applies only to customers who have used their Sync Groups after March 1, 2017.

For details on the improvements, please see our Data Sync Update blog post. If you are a new user that would like to try Data Sync, look at this blog post on Getting Started with Data Sync. 

Overview

Starting June 1, 2017, existing Sync Groups will begin migrating. Existing Sync Groups will continue to work until the migration is completed. Please see the section below that fits your case for details.

A Sync Database will be created for each region where you have a Sync Group, to store metadata and logs. You will own this database. It will be created on the same server as your Hub Database.

On July 1, 2017, SQL Data Sync will be retired from the Azure classic portal. After July 1, 2017, the original service will continue to run, but you won’t be able to make any changes or have portal access until you complete your migration. 

On September 1, 2017, the original SQL Data Sync service will be retired. If you haven’t migrated your sync groups by September 1, 2017, your Sync Group will be deleted.

What you need to do

Plan and prepare for your migration. You will receive emails in the coming weeks with your migration date and instructions.
Do not make topology changes in the old portal after your migration date. This date will be sent to the subscription email of each Sync Group being migrated.
If you have any on-premises member databases, install and configure the local agent. You need one on each machine or VM hosting a member database. Detailed steps are included below.
Test for successful migration. Do this by clicking “Sync Now” in the Azure Data Sync portal.
After verifying migration, disable scheduled sync in the old service. You must do this before making any topology changes or this could cause errors with Data Sync.

After completing these steps you can use Data Sync in the new portal.

Installing and configuring local agent

Your local agent from the original service can remain running and installed while you do the following steps.

You will need to:
1. Download and install the local agent. Install the local agent in the default location on every machine which hosts a member database.

2. Get the agent key. You can find this in the new Azure Data Sync portal after your migration date. We have also emailed this key to the subscription email of each Sync Group being migrated.

3. Click the "Submit Agent Key" button and connect to the Sync Group and Sync Database. You'll need to enter credentials for your Sync Database. This is the same as your credentials for the server on which the Sync Database is located.

4. Update your credentials.

a. Click the edit credentials button and update the information.

b. Copy the configuration file from the original service.

Copy the “AgentConfigData.xml” from the original service. The path will be as follows: Microsoft SQL Data Sync\data\AgentConfigData.xml 
Put the copy of that file in the new Data Sync folder. The path will be as follows: Microsoft SQL Data Sync Agent 2.0\data\AgentConfigData.xml

Related Links

Getting Started with Data Sync
Data Sync Refresh Blog

If you have any feedback on Azure Data Sync service, we’d love to hear from you! To get the latest update of Azure Data Sync, please join the SQL Advisor Yammer Group or follow us @AzureSQLDB on Twitter.
Source: Azure