Microsoft Azure at NAB Show 2017

The cloud is changing the world and powering the digital transformation of the media industry. We are thrilled to see the significant momentum with which businesses large and small are selecting Azure for their digital transformation needs, whether it be launching a new Over-The-Top (OTT) video service, building web and mobile applications with rich media, or using cutting-edge media AI technologies to unlock insights that enhance content discovery and drive deeper user engagement.

This year at NAB Show 2017, we are showcasing why Microsoft Azure is the trusted, global-scale cloud for the media industry’s needs. At the core are new innovations and enhancements that we are releasing, or have launched in the last few months, to meet our customers’ ever-evolving needs.

Encoding service enhancements

In our quest to be responsive to customer feedback, we have made significant enhancements to the encoding service, including:

Reduced pricing and per-minute billing: We launched a new pricing model based on output minutes instead of GBs processed, which reduces the price by half for typical use cases. Customers can now use our service for content-adaptive encoding, where the encoder generates a ‘bit-rate ladder’ optimized for each input video. Learn more about our new pricing model.
Autoscaling capacity: Our service can now monitor your workload and automatically scale resources up or down, providing increased concurrency/throughput when needed. Combined with Azure Functions and Azure Logic Apps, you can quickly build, test, and deploy custom media workflows at scale. This feature is in preview; please contact amsved@microsoft.com for more information.
DTS-HD surround sound is now available in the Premium Encoder for content creation and streaming delivery to connected devices.

Media analytics

Adding to the growing family of media analytics capabilities, which includes face and emotion detection, motion detection, video OCR, video summarization, content moderation, and audio indexing, we are excited to add the following:

Private Preview of Azure Media Video Annotator: identifies objects in the video such as cars, houses, etc. Information from the Annotator can be used to build deep search applications. This information can also be combined with data obtained from other Media Analytics processors to build custom workflows.
Public Preview of Azure Media Face Redactor: Azure Media Face Redactor enables customers to protect the identities of people before releasing their private videos to the public. We see many use cases in broadcast news and look forward to seeing how our customers will use this new service. Learn more about Azure Media Face Redactor.

Streaming Service Enhancements

In order to simplify our customers’ decisions around configuring streaming origins, we are excited to announce the following:

Autoscale Origins: We have introduced a new Standard Streaming Units offer that has the same features as Premium Streaming Units but scales automatically based on outbound bandwidth. Premium Streaming Units (Endpoints) are suitable for advanced workloads, providing dedicated, scalable bandwidth capacity, whereas Standard Streaming Units operate in a shared pool while still delivering scalable bandwidth. Learn more here. In addition, the streaming team has delivered the following enhancements:

CMAF support – Microsoft and Apple worked closely to define the Common Media Application Format (CMAF) standard and submit it to MPEG. The new standard provides for storing and delivering streaming content using a single encrypted, adaptable multimedia presentation to a wide range of devices. The industry will greatly benefit from this common format, embodied in an MPEG standard, to improve interoperability and distribution efficiency.
DTS-HD surround sound streaming is now integrated with our dynamic packager and streaming services across all protocols (HLS, DASH and Smooth).
FairPlay HLS offline playback – new support for offline playback of HLS content using the Apple FairPlay DRM system.
RTMP ingest improvements – we’ve updated our RTMP ingest support to allow for better integration with open source live encoders such as FFmpeg, OBS, Xsplit and more.
Serverless media workflows using Azure Functions & Azure Logic Apps: Azure offers a serverless compute platform that lets you easily trigger code based on events in Azure, such as when media is uploaded to a folder or through partners like Aspera. We’ve published a collection of integrated media workflows on GitHub to allow developers to get started building codeless and customized media workflows with Azure Functions and Logic Apps. Try it out!

Azure Media Player

The advertising features in Azure Media Player for video on demand are now generally available (GA). This enables the insertion of pre-, mid-, and post-roll advertisements from any VAST-compliant ad server, empowering content owners to monetize their streams. In addition to our GA announcement of VOD ad insertion, we are excited to announce a preview of live ad insertion. Additionally, we have a new player skin with enhanced accessibility features.

Azure CDN

Building on Azure’s unique multi-CDN offering among public cloud platforms, we are excited to add new capabilities, including custom domain SSL and “one click” integration of CDN with streaming origins, storage & web apps.

Custom Domain SSL: Azure CDN now supports custom domain SSL to enhance the security of data in transit. Use of HTTPS protocol ensures data exchanged with a website is encrypted while in transit. Azure CDN already supported HTTPS for Azure provided domains (e.g. https://contoso.azureedge.net) and it was enabled by default. Now, with custom domain HTTPS, you can enable secure delivery for custom domains (e.g. https://www.contoso.com) too. Learn more about Custom Domain SSL.
 “One click” CDN Integration with Streaming Endpoint, Storage & Web Apps: We have added deeper integration of Azure CDN with multiple Azure services to simplify the configuration of CDN. When Content Delivery Network is enabled for a streaming endpoint, network transfer charges from Streaming Endpoint to the CDN are waived. For more information, please visit the Azure Blog.

Growing partner ecosystem

At NAB, we are excited to announce that Avid has selected Microsoft Azure as their preferred partner to power their business in the cloud. Siemens has expanded their Azure integrations with deep integration into Azure media analytics to enhance their Smart video engine product. We have also expanded the partnership with Verizon Digital Media Services with a deeper integration of their CDN services with Azure storage. Ooyala has expanded their integration with Azure to include the Azure media analytics capabilities to enhance their media logistics platform.

While launching products and services is exciting, the goal we strive for is to make our customers successful. It is great to see this in some of the recent case studies we have released on NBC’s streaming of the Rio 2016 Olympics and Lobster Ink.

One common theme that we hear from customers on why they adopt Azure for building media workflows is that it offers an enterprise grade battle tested media cloud platform that is simple, scalable, and flexible.

Come see us at NAB

If you’re attending NAB Show 2017, I encourage you to stop by our booth SL6710 to learn more about Microsoft’s cloud media services and see demos from us and several of our partners. Also, don’t forget to check out Steve Guggenheimer’s blog post and his keynote presentation on digital transformation in media in the Las Vegas Convention Center in rooms N262/N264, followed by a panel discussion.

If you are not attending the conference but would like to learn more about our media services, follow the Azure Blog to stay up to date on new announcements.
Source: Azure

Monetizing your content with Azure Media Player

Video Advertisements: Tell me more!

As the playback of online video content increases in popularity, video publishers are looking to monetize their content via in-stream video advertisements. The growth of online video has been accompanied by a steep rise in the amount of money advertisers are interested in spending on video ads. Content owners can take advantage of this and leverage video advertisements to generate revenue from their media.

As of version 2.1.0 (released this week, check out the blog post for more details!), Azure Media Player supports the insertion of linear advertisements for pre-roll (before regular content), mid-roll (during regular content) and post-roll (after regular content) for video on demand. These linear video advertisements are fetched and inserted into your content using the VAST standard.

Check out our demo for NAB!

To learn more about advertisement standards, see the Interactive Advertising Bureau (IAB).

How do I start inserting ads into my stream?

To enable ads, first update your version of Azure Media Player to 2.1.0 or higher, as older versions of AMP do not support ads.
Next, configure and generate your ad tags from a VAST-compliant ad server like OpenX or AdButler. You can then use the ad tag in your player options so Azure Media Player knows where to request the ad.

Configuring a pre-roll is as simple as:

ampAds: {
    preRoll: {
        sourceUri: "[VAST_FILE.xml]",
        options: {
            skipAd: {
                enabled: true,
                offset: 5
            }
        }
    }
}

This will insert a pre-roll ad requested from the sourceUri that is skippable after 5 seconds. You can find a more complex sample with pre-, mid-, and post-rolls here.
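For reference, a fuller configuration can be sketched as follows. The preRoll and postRoll entries mirror the documented pre-roll shape, while the midRoll array and its startTime field are assumptions for illustration only; consult the linked sample for the exact schema.

```javascript
// Hedged sketch: preRoll/postRoll follow the pre-roll shape shown above;
// the midRoll array form and the "startTime" field are assumptions, not
// the verified AMP schema.
ampAds: {
    preRoll:  { sourceUri: "[PREROLL_VAST.xml]" },
    midRoll:  [ { sourceUri: "[MIDROLL_VAST.xml]", startTime: 120 } ], // assumed timing field
    postRoll: { sourceUri: "[POSTROLL_VAST.xml]" }
}
```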

Interested in Live Ad insertion?

If you are a customer interested in leveraging live ad insertion with Azure Media Services content and Azure Media Player, keep an eye out for my blog post coming out next week on how to insert video ads into your AMS streams on the fly. You can also contact us at ampinfo@microsoft.com to test out Live Ad Insertion (now in preview!).

Calling all ad servers!

If you are an ad server and are interested in developing a custom plugin that supports ad insertion with AMP, please contact ampinfo@microsoft.com.

Providing feedback

As Azure Media Player continues to grow, evolve, and add features enabling new scenarios, you can request new features and provide ideas or feedback via UserVoice. If you have any specific issues or questions, or find any bugs, drop us a line at ampinfo@microsoft.com.

Sign up to stay up-to-date with everything Azure Media Player has to offer.
Source: Azure

Now announcing: Azure Media Player v2.0

Since its release at NAB two years ago, Azure Media Player has grown significantly in robustness and richness of features. We have been working hard addressing feedback from our fantastic customers (that’s you!) to enhance and improve a player that everyone can benefit from. For AMP’s 2nd birthday, I am incredibly excited to announce our first major release since its initial debut; welcome, AMP 2.0!

What’s new in AMP 2.0?

Advertisement support

Azure Media Player version 2.1.0 and higher supports the insertion of pre-, mid-, and post-roll ads in all your on-demand assets. The player inserts ads in accordance with the IAB’s VAST standard and allows you to configure options like ad position and skippability. To learn more about video ads with Azure Media Player, check out my blog post: Monetizing Your Content with Azure Media Player.

A new skin

We released a new skin as a counterpart to “AMP-Default” called “AMP-Flush”. You can enable AMP-Flush by simply changing two points in your code:

1) update the CSS your application loads

from

<link href="//amp.azure.net/libs/amp/2.1.0/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">

to

<link href="//amp.azure.net/libs/amp/2.1.0/skins/amp-flush/azuremediaplayer.min.css" rel="stylesheet">

2) update the class in your video tag

from

<video id="azuremediaplayer" class="azuremediaplayer amp-default-skin amp-big-play-centered" tabindex="0"> </video>

to

<video id="azuremediaplayer" class="azuremediaplayer amp-flush-skin amp-big-play-centered" tabindex="0"> </video>

Making these changes should result in the following new skin:

A more accessible player

We are always working towards creating a more accessible and user friendly player. The team has been working hard to improve your experience with the player in use cases like:

interfacing with assistive technologies (like JAWS or Narrator)
playback in High Contrast mode
navigating without a mouse (or Tab To Navigate)

New plugins

This release comes with some new plugins you can load from our plugin gallery, as well as new functionality baked into the player, like playback speed (icon in the screencap above). The details for these new features and additional APIs can all be found in our documentation. You can use them to customize the player to support the playback scenario you want to achieve. Plugin development is a very community-driven effort; if you have any questions about creating plugins, modifying the ones in the gallery, or contributing them to the player, please email me at saraje@microsoft.com.

Making the Switch to AMP 2.0

Transitioning to AMP 2.0 is an incredibly simple process. Just make sure to update your CDN endpoints to point to 2.1.0, like so:

<link rel="stylesheet" href="http://amp.azure.net/libs/amp/2.1.0/skins/amp-default/azuremediaplayer.min.css">

<script src="//amp.azure.net/libs/amp/2.1.0/azuremediaplayer.min.js"></script>

 

Providing Feedback

Azure Media Player will continue to grow and evolve, adding additional features and enabling new scenarios. You can request new features and provide ideas or feedback via UserVoice. If you have any specific issues or questions, or find any bugs, drop us a line at ampinfo@microsoft.com.

Sign up for the latest news and updates

Sign up to stay up-to-date with everything Azure Media Player has to offer.

Additional resources

Learn more
License
Documentation
Samples
Demo page
Plugin Gallery
UserVoice
Sign up

Source: Azure

See what’s next for Azure at Microsoft Ignite

Get all your Azure questions answered by the experts who build it. This year’s schedule is still in progress, but here are some highlights from last year’s conference:

Deliver more features faster with a modern development and test solution

This session shows how to use the infrastructure provided by Microsoft Azure DevTest Labs to quickly build dev and test environments.

Protect your data using Azure’s encryption capabilities and key management

Cloud security is essential, and this deep dive explores Azure’s built-in encryption and looks at data disposal, key management, and access control.

Build tiered cloud storage in Microsoft Azure

Explore scalable, cost-efficient object storage using Azure’s Blob Storage service.

Register now to join us at Microsoft Ignite to connect with the tech community and discover new innovations.
Source: Azure

“How to make a movie the secure way” at NAB Show 2017

At the NAB Show 2017 Conference this week, we’ll be reprising our session “Securing the Making of the Next Hollywood Blockbuster” (Las Vegas | April 25, 2017 | 1:30 PM – 2:00 PM in the Cybersecurity & Content Protection Pavilion – Booth C3830CS – Central Hall). Azure’s very own Joel Sloss will regale the audience on the transition to secure movie production in the cloud, made possible by Microsoft Azure and a cadre of ISV partners who’ve ported their solutions to the platform.

Content comes in many forms and must be stored in many places, including on servers, workstations, mobile storage, archives, etc., which presents a massive security challenge. Securing this ever-moving data on top of the added challenge of properly handling personal information such as health records, contract details, and paystubs, strains an already complicated data governance situation.

In this session, we’ll look at an end-to-end workflow leveraging Azure and the combined wizardry of Avid, 5th Kind, Contractlogix, Docusign, MarkLogic, and SyncOnSet. Lulu Zezza, Physical Production Executive and the driving force behind the end-to-end workflow, will be co-presenting with Mr. Sloss.  She noted that, “Moving to the cloud is the best way to implement security controls across so many different physical and logical environments, locations, and data types. We have people working all over the world, for different companies, using different systems, all contributing to the same production. In the past, it’s been like a free-for-all, with contractors getting access to things they shouldn’t, information being duplicated and stored in the wrong places, and sensitive content left out in the open.”

The new digital workflow enables a secure “script-to-screen” experience for the management of both production data and the crew’s personal HR information, to which new global privacy standards apply. Metadata captured from contracts, scripts, and camera files is associated with filming days, scenes and takes recorded, and later to the final edit of the film, reducing the need for document sharing and film screenings. Plus communications are kept protected and confidential. It’s a whole new way to make movies.

Join Joel at his session where you’ll hear about:

Architectural considerations for multi-domain cloud environments
Secure access and device management for BYOD users
Content protection and privacy in connected and disconnected networks

Learn more information about Microsoft’s activities at NAB Show 2017.
Source: Azure

How Azure Security Center detects a Bitcoin mining attack

Azure Security Center helps customers deal with a myriad of threats using advanced analytics backed by global threat intelligence. In addition, a team of security researchers often works directly with customers to gain insight into security incidents affecting Microsoft Azure customers, with the goal of constantly improving Security Center detection and alerting capabilities.

In the previous blog post "How Azure Security Center helps reveal a Cyberattack", security researchers detailed the stages of one real-world attack campaign that began with a brute force attack detected by Security Center and the steps taken to investigate and remediate the attack. In this post, we’ll focus on an Azure Security Center detection that led researchers to discover a ring of mining activity, which made use of a well-known bitcoin mining algorithm named Cryptonight.

Before we get into the details, let’s quickly explain some terms that you’ll see throughout this blog. “Bitcoin Miners” are a special class of software that use mining algorithms to generate or “mine” bitcoins, which are a form of digital currency. Mining software is often flagged as malicious because it hijacks system hardware resources like the Central Processing Unit (CPU) or Graphics Processing Unit (GPU) as well as network bandwidth of an affected host. Cryptonight is one such mining algorithm which relies specifically on the host’s CPU. In our investigations, we’ve seen bitcoin miners installed through a variety of techniques including malicious downloads, emails with malicious links, attachments downloaded by already-installed malware, peer to peer file sharing networks, and through cracked installers/bundlers.

Initial Azure Security Center alert details

Our initial investigation started when Azure Security Center detected suspicious process execution and created an alert like the one below. The alert provided details such as date and time of the detected activity, affected resources, subscription information, and included a link to a detailed report about hacker tools like the one detected in this case.

We began a deeper investigation, which revealed the initial compromise was through a suspicious download that got detected as “HackTool: Win32/Keygen".  We suspect one of the administrators on the box was trying to download tools that are usually used to patch or "crack" some software keys. Malware is frequently installed along with these tools allowing attackers a backdoor and access to the box.

Based on our log analysis, the attack began with the creation of a user account named “*server$”.
The “*server$” account then created a scheduled task called “ngm”. This task launched a batch script named “kit.bat” located in the “C:\Windows\Temp\ngmtx” folder.
We then observed a process named “servies.exe” being launched with Cryptonight-related parameters.
Note: ‘bond007.01’ represents the bitcoin user’s account behind this activity and ‘x’ represents the password.

Two days later, we observed the same activity with different file names. In the screenshot below, sst.bat has replaced kit.bat and mstdc.exe has replaced servies.exe. This same cycle of batch-file and process execution was observed periodically.

These .bat scripts appear to be used to connect to the crypto mining pool (XCN or Shark coin) and are launched by a scheduled task that restarts the connections approximately every hour.

Additional observation: The downloaded executables used for connecting to the bitcoin service and generating the bitcoins were renamed from the originals, 32.exe and 64.exe, to “mstdc.exe” and “servies.exe” respectively. This naming scheme is based on an old technique attackers use to hide malicious binaries in plain sight: making files look like legitimate, benign-sounding Windows filenames.

Mstdc.exe: “mstdc.exe” looks like “msdtc.exe”, a legitimate executable on Windows systems, namely the Microsoft Distributed Transaction Coordinator, which is required by applications such as Microsoft Exchange or SQL Server when installed in clusters.
Servies.exe: Similarly, “services.exe” is the legitimate Service Control Manager (SCM), a special system process under the Windows NT family of operating systems that starts, stops, and interacts with Windows service processes. Here again, attackers are trying to hide by using similar-looking binaries. “Servies.exe” and “services.exe” look very similar, don’t they? A great tactic used by attackers.
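Defenders can flag this look-alike naming tactic mechanically. The sketch below is an illustration only, not an actual Security Center detection rule: it computes the edit distance between a process name and a short list of well-known Windows binaries, and treats a near-miss that is not an exact match as suspicious.

```javascript
// Illustrative sketch (not a Security Center rule): flag process names that
// are one or two edits away from a well-known Windows binary name.
var KNOWN = ['services.exe', 'msdtc.exe', 'svchost.exe', 'lsass.exe'];

// Standard Levenshtein edit distance via dynamic programming.
function editDistance(a, b) {
  var dp = [];
  for (var i = 0; i <= a.length; i++) {
    dp[i] = [i];
    for (var j = 1; j <= b.length; j++) {
      dp[i][j] = i === 0 ? j : Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function looksLikeImpostor(name) {
  name = name.toLowerCase();
  if (KNOWN.indexOf(name) !== -1) return false; // exact match: legitimate name
  return KNOWN.some(function (k) { return editDistance(name, k) <= 2; });
}
```

With this check, both “servies.exe” (one edit from “services.exe”) and “mstdc.exe” (two edits from “msdtc.exe”) are flagged, while the legitimate names and unrelated names pass.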

As we did our timeline log analysis, we noted other activity, including wscript.exe using “VBScript.Encode” to execute ‘test.zip’.

On extraction, it revealed an ‘iisstt.dat’ file that was communicating with an IP address in Korea. The ‘mofcomp.exe’ command appears to register the file iisstt.dat with WMI. The mofcomp.exe compiler parses a file containing MOF statements and adds the classes and class instances defined in the file to the WMI repository.

Recommended remediation and mitigation steps

The initial compromise was the result of malware installed through cracked installers/bundlers, which resulted in the complete compromise of the machine. With that, our first recommendation was to rebuild the machine if possible. However, understanding that this sometimes cannot be done immediately, we recommend implementing the following remediation steps:

1. Password Policies: Reset passwords for all users of the affected host and ensure password policies meet best practices.

Best Practices for Enforcing Password Policies 
Selecting Secure Passwords
Using Strong Passwords

2. Defender Scan: Run a full antimalware scan using Microsoft Antimalware or another solution, which can flag potential malware.

3. Software Update Consideration: Ensure the OS and applications are being kept up to date. Azure Security Center can help you identify virtual machines that are missing critical and security OS updates.

4. OS Vulnerabilities & Version: Align your OS configurations with the recommended rules for the most hardened version of the OS. For example, do not allow passwords to be saved. Update the operating system (OS) version for your Cloud Service to the most recent version available for your OS family. Azure Security Center can help you identify OS configurations that do not align with these recommendations, as well as Cloud Services running outdated OS versions.

5. Backup: Regular backups are important not only for the software update management platform itself, but also for the servers that will be updated. To ensure that you have a rollback configuration in place in case an update fails, make sure to back up the system regularly.

6. Avoid Usage of Cracked Software: Using cracked software introduces unwanted risk into your home or business by way of malware and other threats associated with pirated software. Microsoft highly recommends avoiding the use of cracked software and following the legal software policy recommended by your organization.

More information can be found at:

Educate yourself on software piracy risk.
Learn more by reading “SIRv13: Be careful where you go looking for software and media files”. 

7. Email Notification: Finally, configure Azure Security Center to send email notifications when threats like these are detected.

Click on Policy tile in Prevention Section.
On the Security Policy blade, you pick which Subscription you want to configure Email Alerts for.
This brings us to the Security Policy blade. Click on the Email Notifications option to configure email alerting.

An email alert from Azure Security Center will look like the one below.

To learn more about Azure Security Center, see the following:

Azure Security Center detection capabilities — Learn about Azure Security Center’s advanced detection capabilities.
Managing and responding to security alerts in Azure Security Center — Learn how to manage and respond to security alerts.
Managing security recommendations in Azure Security Center — Learn how recommendations help you protect your Azure resources.
Security health monitoring in Azure Security Center — Learn how to monitor the health of your Azure resources.
Monitoring partner solutions with Azure Security Center — Learn how to monitor the health status of your partner solutions.
Azure Security Center FAQ — Find frequently asked questions about using the service.
Azure Security blog — Get the latest Azure security news and information.

Source: Azure

Announcing new Azure Services in the UK

We’re pleased to announce the following services which are now available in the UK!

HDInsight –  HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Azure.

HDInsight is the only fully-managed cloud Hadoop offering that provides optimized open source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server backed by a 99.9% SLA.  Each of these big data technologies and ISV applications are easily deployable as managed clusters with enterprise-level security and monitoring.  Learn more about HDInsight.

The Azure Import/Export service, now also available in the UK, is the perfect companion to HDInsight – the combination allows you to easily ingest, process, and optionally export a limitless amount of data.

Azure Import/Export – The Import/Export service is now live in UK South! The Azure Import/Export service enables you to move large amounts of on-premises data into and out of your Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Azure data centers. Once we receive the drives, we’ll automatically transfer the data to or from your Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost-effectively (and not be constrained by available network bandwidth).

Customers can now use Azure Import/Export Service to copy data to and from Azure Storage by shipping hard disk drives to Azure UK South data center.

For more information about creating import/export jobs see, Use the Microsoft Azure Import/Export Service to transfer data to Blob Storage.

Azure Container Registry –  Azure Container Registry is a private registry for hosting container images. Using the Azure Container Registry, customers can store Docker-formatted images for all types of container deployments. Azure Container Registry integrates well with orchestrators hosted in Azure Container Service, including Docker Swarm, DC/OS and Kubernetes. Users can benefit from using familiar tooling capable of working with the open source Docker Registry v2.

Customers can now create one or more container registries in their Azure subscription. Each registry is backed by a standard Azure storage account in the same location. Take advantage of local, network-close storage of your container images by creating a registry in the same Azure location as your deployments. Learn more about Azure Container Registry.

We are excited about these additions, and invite customers using the UK Azure region to try them today!
Source: Azure

How Microsoft builds massively scalable services using Azure DocumentDB

This week at Microsoft Data Amp, we covered how you can harness the incredible power of data using Microsoft’s latest innovations in its Data Platform. One of the key pieces in the Data Platform is Azure DocumentDB, Microsoft’s globally distributed NoSQL database service. Although it was publicly released in 2015, DocumentDB has been used virtually ubiquitously as a backend for first-party Microsoft services for many years.

DocumentDB is Microsoft’s multi-tenant, globally distributed database system designed to enable developers to build planet-scale applications. DocumentDB allows you to elastically scale both throughput and storage across any number of geographical regions. The service offers guaranteed low latency at P99, 99.99% high availability, predictable throughput, and multiple well-defined consistency models, all backed by comprehensive SLAs. By virtue of its schema-agnostic and write-optimized database engine, DocumentDB by default automatically indexes all the data it ingests and serves SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As a cloud service, DocumentDB is carefully engineered with multi-tenancy and global distribution from the ground up.

In this blog, we cover case studies of first-party applications of DocumentDB by the Windows, Universal Store, and Azure IoT Hub teams, and how these teams harnessed the scalability, low latency, and flexibility of DocumentDB to innovate and bring business value to their services.

Microsoft DnA: How Microsoft uses error reporting and diagnostics to improve Windows

The Windows Data and Analytics (DnA) team in Microsoft implements the crash reporting technology for Windows. One of their components runs as a Windows Service on every Windows device. Whenever an application stops responding on a user’s desktop, Windows collects post-error debug information and prompts the user to ask if they’re interested in finding a solution to the error. If the user accepts, the dump is sent over the Internet to the DnA service. When a dump reaches the service, it is analyzed and a solution is sent back to the user when one is available.

Windows error reporting diagnostic information

 

Windows’ need for fast key-value lookups

In DnA’s terminology, crash reports are organized into “buckets”. Each bucket is used to classify an issue by key attributes such as Application Name, Application Version, Module Name, Module Version, and OS Exception code. Each bucket contains crash reports that are caused by the same bug. With the large ecosystem of hardware and software vendors, and 15 years of collected data about error reports, the DnA service has over 10 billion unique buckets in its database cluster.

One of the DnA team’s requirements was rather simple at face value. Given the hash of a bucket, return the ID corresponding to its bucket/issue if one was available. However, the scale posed interesting technical challenges. There was a lot of data (10 billion buckets, growing at 6 million a day), high volume of requests and global reach (requests from any device running Windows), and low latency requirements (to ensure a good user experience).

To store “Bucket Dimensions”, the DnA team provisioned a single DocumentDB collection with 400,000 request units per second of provisioned throughput. Since all access was by the primary key, they configured the partition key to be the same as the “id”, with a digest of the various attributes as the value. As DocumentDB provided <10 ms read latency and <15ms write latency at p99, DnA could perform fast lookups against buckets and lookup issues even as their data and request volumes continued to grow over time.

Windows cab catalog metadata and query

Aside from fast real-time lookups, the DnA team also wanted to use the data to drive engineering decisions to help improve Microsoft and other vendors’ products by fixing the most impactful issues. For example, the team has observed that addressing the top 1 percent of reliability issues could address 50 percent of customers’ issues. This analysis required storing the crash dump binary files, “cabs”, extracting useful metadata, then running analysis and reports against this data. This presented a number of interesting challenges on its own.

The team deals with approximately 600 different types of reliability-incident data. Managing the schema and indexes required a significant engineering and operational overhead on the team.
The cab metadata was also a big volume of data. There were about 5 billion cabs, and 30 million new cabs were added every day.

The DnA team could migrate their Bucket Dimension and Cab Catalog stores to DocumentDB from their earlier solution based on an on-premises cluster of SQL Servers. Since shifting the database’s heavy lifting to DocumentDB, DnA benefited from the speed, scale, and flexibility offered by DocumentDB. More importantly, they could focus less on maintenance of their database and more on improving user experience on Windows.

You can read the case study at Microsoft’s DnA team achieves planet-scale big-data collection with Azure DocumentDB.

Microsoft Global Homing Service: How Xbox Live and Universal Store build highly available location services

Microsoft’s Universal Store team implements the e-commerce platform that is used to power Microsoft’s storefronts across Windows Store, Xbox, and a large set of Microsoft services. One of the key internal components in the Universal Store backend is the Global Homing Service (GHS), a highly reliable service that provides its downstream consumers with the ability to quickly retrieve location metadata associated with one to many, arbitrary large number of, IDs.

Global Homing Service (GHS) using Azure DocumentDB across 4 regions

GHS is on a hot path for the majority of its consumer services and receives hundreds of thousands of requests per second. Therefore, the latency and throughput requirements for the service are strict. The service had to maintain 99.99% availability and predictable latencies under 300ms end-to-end at the 99.9th percentile to satisfy requirements of its partner teams. To reduce latencies, the service is geo-distributed so that it is as close as possible to calling partner services.

The initial design of GHS was implemented using a combination of Azure Table Storage and various levels of caches. This solution worked well for the initial set of loads, but given the critical nature of GHS and increased adoption of the service from key partners, it became apparent that the existing SLA was not going to meet their partners’ P99.9 requirements of <300ms with a 99.99% reliability over 1 minute. Partners with a critical dependency on the GHS call path found that even if the overall reliability was high, there were periods of time where the number of timeouts would exceed their tolerances and result in a noticeable degradation of the partner’s own SLA. These periods of increased timeouts were given the name “micro-outages” and key partners started tracking these daily.

After investigating many possible solutions, such as LevelDB, Kafka, MongoDB, and Cassandra, the Universal Store team chose to replace GHS’s Azure Table backend and the original cache in front of it with an Azure DocumentDB backend. GHS deployed a single DocumentDB collection with 600,000 request units per second deployed across four geographic regions where their partner teams had the biggest footprint. As a result of the switch of DocumentDB, GHS customers have seen p50 latencies under 30ms and a huge reduction in the number and scale of micro-outages. GHS’s availability has remained at or above 99.99% since the migration. In addition to the increase in service availability, overall latencies significantly improved as well for most of GHS call patterns.

Number of GHS micro-outages before and after DocumentDB migration

Microsoft Azure IoT Hub: How to handle the firehose from billions of IoT devices

Azure IoT Hub is a fully managed service that allows organizations to connect, monitor, and manage up to billions of IoT devices. IoT Hub provides reliable communication between devices, the a queryable store for device metadata and synchronized state information, and provides extensive monitoring for device connectivity and device identity management events. Since IoT Hub is at the ingestion point for the massive volume of writes coming from IoT devices across all of Azure, they needed a robust and scalable database in their backend.

IoT Hub provides device-related information, “device twins”, as part of its APIs that device and back ends can use to synchronize device conditions and configuration. A device twin is a JSON document that includes tags assigned to the device in the backend, a property bag of “reported properties” which include device configuration or conditions, and a property bag of “desired properties” that can be used to notify the device to perform a configuration change. The IoT Hub team choose Azure DocumentDB over Hbase, Cassandra, and MongoDB because DocumentDB provided functionality that the team needed like guaranteed low latency, elastic scaling of storage and throughput, provide high availability via global distribution, and rich query capabilities via automatic indexing.

IoT Hub stores the device twin data as JSON documents and performs updates based on the latest state reported by devices in near real-time. The architecture uses a partitioned collection that uses a compound key constructed by concatenating the Azure account (tenant) ID and the device ID to elastically scale to handle massive volumes of writes. IoT Hub also uses Service Fabric to scale out devices across multiple servers, each server communicating with a 1-N DocumentDB partitions. This topology is replicated across each Azure region that IoT Hub is available.

Next steps

In this blog, we looked at a couple of first-party use cases of DocumentDB and how these Microsoft teams were able to utilize Azure DocumentDB to improve user experience, improve latency, and reliability of their services.

Learn more about global distribution with DocumentDB.
Create a new DocumentDB account from the Azure Portal or download the DocumentDB Emulator.
Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB or reach out to us on the developer forums on Stack Overflow.

Quelle: Azure

How Microsoft builds massively scalable services using Azure DocumentDB

This week at Microsoft Data Amp we covered how you can harness the incredible power of data using Microsoft’s latest innovations in its Data Platform. One of the key pieces of the Data Platform is Azure DocumentDB, Microsoft’s globally distributed NoSQL database service. Publicly released in 2015, DocumentDB has served as a backend for first-party Microsoft services for many years.

DocumentDB is Microsoft’s multi-tenant, globally distributed database system, designed to enable developers to build planet-scale applications. DocumentDB allows you to elastically scale both throughput and storage across any number of geographic regions. The service offers guaranteed low latency at P99, 99.99% high availability, predictable throughput, and multiple well-defined consistency models, all backed by comprehensive SLAs. Thanks to its schema-agnostic, write-optimized database engine, DocumentDB automatically indexes all the data it ingests by default and serves SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As a cloud service, DocumentDB is carefully engineered with multi-tenancy and global distribution from the ground up.

In this blog, we cover case studies of first-party applications of DocumentDB by the Windows, Universal Store, and Azure IoT Hub teams, and how these teams were able to harness the scalability, low latency, and flexibility of DocumentDB to innovate and bring business value to their services.

Microsoft DnA: How Microsoft uses error reporting and diagnostics to improve Windows

The Windows Data and Analytics (DnA) team at Microsoft implements the crash-reporting technology for Windows. One of its components runs as a Windows Service on every Windows device. Whenever an application stops responding on a user’s desktop, Windows collects post-error debug information and asks the user whether they are interested in finding a solution to the error. If the user accepts, the dump is sent over the Internet to the DnA service. When a dump reaches the service, it is analyzed and a solution is sent back to the user when one is available.

Windows error reporting diagnostic information

Windows’ need for fast key-value lookups

In DnA’s terminology, crash reports are organized into “buckets”. Each bucket classifies an issue by key attributes such as Application Name, Application Version, Module Name, Module Version, and OS Exception Code, and contains the crash reports caused by the same bug. With a large ecosystem of hardware and software vendors, and 15 years of collected error-report data, the DnA service has over 10 billion unique buckets in its database cluster.

One of the DnA team’s requirements was rather simple at face value: given the hash of a bucket, return the ID of the corresponding bucket/issue if one is available. The scale, however, posed interesting technical challenges: a lot of data (10 billion buckets, growing by 6 million a day), a high volume of requests with global reach (from any device running Windows), and strict low-latency requirements (to ensure a good user experience).

To store “Bucket Dimensions”, the DnA team provisioned a single DocumentDB collection with 400,000 request units per second of throughput. Since all access was by the primary key, they configured the partition key to be the same as the “id”, with a digest of the various attributes as the value. Because DocumentDB provided <10 ms read latency and <15 ms write latency at P99, DnA could perform fast lookups against buckets and look up issues even as data and request volumes continued to grow over time.
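The key-value pattern described above can be sketched as follows. This is a minimal, hypothetical illustration: the attribute names, digest scheme, and document shape are assumptions for the sketch, not DnA’s actual implementation.

```python
import hashlib
import json

def bucket_digest(attributes: dict) -> str:
    """Compute a stable digest over a bucket's key attributes.

    Attributes are serialized with sorted keys so that the same crash
    signature always hashes to the same value, regardless of dict order.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical bucket attributes for one crash signature.
bucket = {
    "app_name": "contoso.exe",
    "app_version": "1.2.3.4",
    "module_name": "contoso_core.dll",
    "module_version": "1.2.3.0",
    "exception_code": "0xC0000005",
}

# With the partition key equal to "id", looking up a bucket becomes a
# single-partition point read -- the cheapest, fastest DocumentDB operation.
doc_id = bucket_digest(bucket)
document = {"id": doc_id, "partitionKey": doc_id, "issueId": 42}
```

The design choice worth noting is that a deterministic digest turns an attribute tuple into a fixed-size primary key, so the store never needs a secondary index for the hot lookup path.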

Windows cab catalog metadata and query

Aside from fast real-time lookups, the DnA team also wanted to use the data to drive engineering decisions that improve Microsoft’s and other vendors’ products by fixing the most impactful issues. For example, the team has observed that addressing the top 1 percent of reliability issues could resolve 50 percent of customers’ issues. This analysis required storing the crash-dump binary files (“cabs”), extracting useful metadata, and then running analysis and reports against this data, which presented a number of interesting challenges of its own:

The team deals with approximately 600 different types of reliability-incident data; managing the schema and indexes imposed significant engineering and operational overhead.
The cab metadata was also large in volume: there were about 5 billion cabs, and 30 million new cabs were added every day.

The DnA team was able to migrate its Bucket Dimension and Cab Catalog stores to DocumentDB from its earlier solution, an on-premises cluster of SQL Servers. Since shifting the database’s heavy lifting to DocumentDB, DnA has benefited from the speed, scale, and flexibility the service offers. More importantly, the team can focus less on database maintenance and more on improving the user experience on Windows.

You can read the case study at Microsoft’s DnA team achieves planet-scale big-data collection with Azure DocumentDB.

Microsoft Global Homing Service: How Xbox Live and Universal Store build highly available location services

Microsoft’s Universal Store team implements the e-commerce platform that powers Microsoft’s storefronts across Windows Store, Xbox, and a large set of Microsoft services. One of the key internal components in the Universal Store backend is the Global Homing Service (GHS), a highly reliable service that lets its downstream consumers quickly retrieve the location metadata associated with one or many (an arbitrarily large number of) IDs.

Global Homing Service (GHS) using Azure DocumentDB across 4 regions

GHS is on the hot path for the majority of its consumer services and receives hundreds of thousands of requests per second, so its latency and throughput requirements are strict. The service had to maintain 99.99% availability and predictable end-to-end latencies under 300 ms at the 99.9th percentile to satisfy the requirements of its partner teams. To reduce latencies, the service is geo-distributed so that it runs as close as possible to calling partner services.

The initial design of GHS used a combination of Azure Table storage and various levels of caches. This solution worked well for the initial loads, but given the critical nature of GHS and its increased adoption by key partners, it became apparent that the existing design was not going to meet partners’ P99.9 requirement of <300 ms with 99.99% reliability over any one-minute window. Partners with a critical dependency on the GHS call path found that even though overall reliability was high, there were periods when the number of timeouts exceeded their tolerances, resulting in a noticeable degradation of the partners’ own SLAs. These periods of increased timeouts were dubbed “micro-outages”, and key partners started tracking them daily.
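The idea of a “micro-outage” can be made concrete with a small sketch: count request timeouts per one-minute window and flag any window where the count exceeds a partner’s tolerance. The window size and tolerance value here are illustrative assumptions, not GHS’s actual thresholds.

```python
from collections import Counter

def micro_outages(timeout_timestamps, tolerance_per_minute=5):
    """Return the start times of one-minute windows in which the number
    of request timeouts exceeded the partner's tolerance.

    timeout_timestamps: iterable of UNIX timestamps (seconds),
    one entry per timed-out request.
    """
    per_window = Counter(int(ts) // 60 for ts in timeout_timestamps)
    return sorted(w * 60 for w, n in per_window.items()
                  if n > tolerance_per_minute)

# Six timeouts in the first minute, two in the second: only the first
# minute counts as a micro-outage under a tolerance of 5 per minute.
events = [1, 5, 10, 20, 30, 59, 70, 80]
assert micro_outages(events) == [0]
```

This also shows why a 99.99% monthly average can hide real pain: the same total number of timeouts, clustered into a few minutes, breaches a per-minute SLA that a uniform spread would not.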

After investigating many possible solutions, including LevelDB, Kafka, MongoDB, and Cassandra, the Universal Store team chose to replace GHS’s Azure Table backend, and the original cache in front of it, with an Azure DocumentDB backend. GHS deployed a single DocumentDB collection with 600,000 request units per second across the four geographic regions where its partner teams had the biggest footprint. As a result of the switch to DocumentDB, GHS customers have seen P50 latencies under 30 ms and a large reduction in the number and scale of micro-outages. GHS’s availability has remained at or above 99.99% since the migration, and overall latencies have improved significantly for most GHS call patterns as well.

Number of GHS micro-outages before and after DocumentDB migration

Microsoft Azure IoT Hub: How to handle the firehose from billions of IoT devices

Azure IoT Hub is a fully managed service that lets organizations connect, monitor, and manage up to billions of IoT devices. IoT Hub provides reliable communication between devices and the cloud, a queryable store for device metadata and synchronized state information, and extensive monitoring of device connectivity and device identity management events. Since IoT Hub is the ingestion point for the massive volume of writes coming from IoT devices across all of Azure, the team needed a robust and scalable database in its backend.

IoT Hub exposes device-related information, “device twins”, as part of the APIs that devices and back ends use to synchronize device conditions and configuration. A device twin is a JSON document that includes tags assigned to the device by the back end, a property bag of “reported properties” describing device configuration or conditions, and a property bag of “desired properties” that can be used to ask the device to perform a configuration change. The IoT Hub team chose Azure DocumentDB over HBase, Cassandra, and MongoDB because DocumentDB provided the functionality the team needed: guaranteed low latency, elastic scaling of storage and throughput, high availability via global distribution, and rich query capabilities via automatic indexing.
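A device twin following the description above might look like the JSON document below. This is a hypothetical example: the specific tag names and property values are illustrative; only the tags / reported / desired split comes from the text.

```python
import json

# Hypothetical device twin: back-end tags plus the "reported" and
# "desired" property bags described above.
device_twin = {
    "deviceId": "thermostat-42",
    "tags": {"building": "43", "floor": "2"},            # set by the back end
    "properties": {
        "reported": {"firmware": "1.0.3", "temperature": 21.5},
        "desired": {"firmware": "1.1.0"},                # requested change
    },
}

# A back end comparing the two bags can tell that the device still
# needs to apply the desired configuration change.
needs_update = (
    device_twin["properties"]["desired"]["firmware"]
    != device_twin["properties"]["reported"]["firmware"]
)
print(json.dumps(device_twin, indent=2))
```

The split between the two bags is what makes synchronization robust: the back end only ever writes “desired”, the device only ever writes “reported”, and convergence is observable by diffing them.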

IoT Hub stores device twin data as JSON documents and performs updates based on the latest state reported by devices in near real time. To elastically scale for massive volumes of writes, the architecture uses a partitioned collection with a compound key constructed by concatenating the Azure account (tenant) ID and the device ID. IoT Hub also uses Service Fabric to scale out devices across multiple servers, with each server communicating with one or more DocumentDB partitions. This topology is replicated across each Azure region in which IoT Hub is available.
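The compound-key scheme can be sketched with a small helper. The separator and the ID formats here are assumptions for illustration; the text only specifies that the tenant ID and device ID are concatenated.

```python
def twin_partition_key(tenant_id: str, device_id: str) -> str:
    """Build a compound partition key from the Azure account (tenant) ID
    and the device ID.

    Prefixing with the tenant spreads tenants across partitions, while
    keeping all of one device's writes in a single partition so that twin
    updates remain cheap single-partition operations.
    """
    return f"{tenant_id}/{device_id}"

# Hypothetical upsert document keyed by the compound value.
upsert = {
    "id": twin_partition_key("contoso-hub", "thermostat-42"),
    "partitionKey": twin_partition_key("contoso-hub", "thermostat-42"),
    "reported": {"temperature": 21.5},
}
```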

Next steps

In this blog, we looked at several first-party use cases of DocumentDB and how these Microsoft teams were able to use Azure DocumentDB to improve the user experience, latency, and reliability of their services.

Learn more about global distribution with DocumentDB.
Create a new DocumentDB account from the Azure Portal or download the DocumentDB Emulator.
Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB or reach out to us on the developer forums on Stack Overflow.

Source: Azure

Ubuntu 12.04 (Precise Pangolin) nearing end-of-life

Ubuntu 12.04 "Precise Pangolin" has been with us from the beginning, since we first embarked on the journey to support Linux virtual machines in Microsoft Azure. However, as its five-year support cycle is nearing an end in April 2017 we must now move on and say "goodbye" to Precise. Ubuntu posted the official EOL notice back in March. The following is an excerpt from one of the announcements:

This is a reminder that the Ubuntu 12.04 (Precise Pangolin) release is nearing its end of life. Ubuntu announced its 12.04 (Precise Pangolin) release almost 5 years ago, on April 26, 2012. As with the earlier LTS releases, Ubuntu committed to ongoing security and critical fixes for a period of 5 years. The support period is now nearing its completion and Ubuntu 12.04 will reach its end of life near the end of April 2017. At that time, Ubuntu Security Notices will no longer include information or updated packages, including kernel updates, for Ubuntu 12.04.

The supported upgrade path from Ubuntu 12.04 is via Ubuntu 14.04. Users are encouraged to evaluate and upgrade to our latest 16.04 LTS release via 14.04. Ubuntu 14.04 and 16.04 continue to be actively supported with security updates and select high-impact bug fixes.

For users who can't upgrade immediately, Canonical is offering Ubuntu 12.04 ESM (Extended Security Maintenance), which provides important security fixes for the kernel and the most essential user space packages in Ubuntu 12.04. These updates are delivered in a secure, private archive exclusively available to Ubuntu Advantage customers.

Users interested in Ubuntu 12.04 ESM updates can purchase Ubuntu Advantage.

Existing UA customers can acquire their ESM credentials by filing a support request.
Source: Azure