Azure Media Services Live Monitoring Dashboard open-source release

We are excited to announce the open-source release of the Azure Media Services (AMS) Live Monitoring Dashboard on GitHub.

The Live Monitoring Dashboard is a .NET C# web app that enables Azure Media Services (AMS) customers to view the health of their channel and origin deployments. The dashboard captures the state of ingest, archive, encode, and origin telemetry entities, enabling customers to quantify the health of their services with low latency. The dashboard supplies data on the incoming data rate for video stream ingestion, dropped data in storage archive, encoding data rate, and origin HTTP error statuses and latencies.

Special thanks to Prakash Duggaraju for his help and contributions to this project.

Dashboard overview

The image below illustrates the account-level view of the Live Monitoring Dashboard. The upper left pane highlights each deployment’s health status with a different status color. Ingest, archive, origin, and encode telemetry entities are denoted by the abbreviations i, a, o, and e, respectively. The color of each indicator summarizes whether an entity is currently impacted: green denotes healthy, orange mildly impacted, red unhealthy, and gray inactive. You can modify the thresholds at which these flags are raised in the JSON configuration file in the storage account. From the right pane, you can drill down into the detailed views for each deployment by clicking the active status squares.

This dashboard is backed by a SQL database that reads telemetry data from your Azure storage account. Our telemetry release announcement blog post details the types of telemetry data supported today. Every 30 seconds, all views within the dashboard are automatically refreshed with the latest telemetry data.

Channel Detailed View

The channel detailed view provides incoming bitrate, discontinuity count, overlap count, and bitrate ratio data for a given channel. In this view, these fields represent the following:

Bitrate: the expected bitrate for a given track; incoming bitrate is the bitrate that the channel actually receives
Discontinuity count: the count of instances where a fragment was missing in the stream
Overlap count: the count of instances where the channel receives fragments with the same or overlapping stream timestamp
Bitrate ratio: the ratio of incoming bitrate to expected bitrate

Optimally, a channel should have no discontinuities, no overlaps, and a bitrate ratio of one. Flags are raised when these dimensions deviate from their normal values.
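To make this concrete, here is a minimal PowerShell sketch of how such a health flag might be computed from the fields above. The function name and threshold values are illustrative assumptions only, not the dashboard's actual logic; the real thresholds come from the JSON configuration file, and the same pattern applies to the archive, origin, and encode views.

# Hypothetical thresholds for illustration; the dashboard reads its real
# thresholds from the JSON configuration file in the storage account.
function Get-ChannelHealth {
    param(
        [double]$ExpectedBitrate,   # bps, expected for the track
        [double]$IncomingBitrate,   # bps, reported by channel telemetry
        [int]$DiscontinuityCount,
        [int]$OverlapCount
    )
    if ($ExpectedBitrate -le 0 -or $IncomingBitrate -le 0) { return 'Gray (inactive)' }

    $ratio = $IncomingBitrate / $ExpectedBitrate
    if ($DiscontinuityCount -eq 0 -and $OverlapCount -eq 0 -and $ratio -ge 0.95) {
        return 'Green (healthy)'
    }
    if ($ratio -ge 0.75) { return 'Orange (mildly impacted)' }
    return 'Red (unhealthy)'
}

# Example: a track expected at 5 Mbps that is ingesting 4.9 Mbps cleanly.
Get-ChannelHealth -ExpectedBitrate 5000000 -IncomingBitrate 4900000 -DiscontinuityCount 0 -OverlapCount 0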

Archive Detailed View

The archive detailed view provides bitrate, dropped fragment count, and dropped fragment rate for the archive entities backing each track. In this view, these fields represent the following:

Bitrate: the expected bitrate of the given track
Dropped fragment count: the number of fragments dropped in the program
Dropped fragment rate: the number of fragments dropped per minute

Optimally, the dropped fragment count and dropped fragment rate should both be zero.

Origin Detailed View

The origin detailed view provides request count, bytes sent, server latency, end-to-end (E2E) latency, request rate, bandwidth, request ratio, and data output utilization ratio for a given origin. In this view, these fields represent the following:

Request count: the number of times a client requested data from the origin, categorized by the HTTP status code
Bytes sent: the number of bytes returned to the client
Server latency: the time the origin server itself spends responding to a request
End-to-end latency: the total latency for responding to a request
Request rate: the number of requests the origin receives per minute
Bandwidth: the origin response throughput
Request ratio: the percentage of requests for a given HTTP status code
Data output utilization ratio: the percentage of maximum throughput that the origin utilizes

Optimally, origin requests should return only HTTP 200 status codes, and there should be no failed requests (HTTP 4xx and 5xx responses, excluding 412). The data output utilization should preferably not exceed 90–95% of the maximum available throughput.
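As a rough sketch of how these statistics relate, the following PowerShell computes a failed-request ratio and a data output utilization from hypothetical telemetry values; every number and variable name below is made up for illustration.

# Request counts keyed by HTTP status code, from a hypothetical origin
# telemetry snapshot. Per the definition above, 412 responses are not
# counted as failures.
$requestCounts = @{ 200 = 9500; 404 = 30; 412 = 120; 500 = 5 }

$total  = ($requestCounts.Values | Measure-Object -Sum).Sum
$failed = ($requestCounts.GetEnumerator() |
    Where-Object { $_.Key -ge 400 -and $_.Key -ne 412 } |
    Measure-Object -Property Value -Sum).Sum

$failedRatio = $failed / $total               # request ratio for failed requests

# Data output utilization: measured egress against the maximum throughput.
$egressMbps   = 540                           # hypothetical measured bandwidth
$capacityMbps = 600                           # hypothetical origin capacity
$utilization  = $egressMbps / $capacityMbps   # flag if above ~0.90-0.95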

Encode Detailed View

The encode detailed view provides the health status for the inputs, transcoders, and output, as well as the overall health.

Optimally, the encode detailed view should reflect an overall healthy status.

Providing feedback & feature requests

We love hearing from our customers and better understanding your needs! To help serve you better, we are always open to feedback and new ideas, and we appreciate bug reports so that we can continue to provide an amazing service with the latest technologies. To request new features or provide ideas or feedback, please submit them to User Voice for Azure Media Services. If you have any specific issues or questions, or find any bugs, please post your question or feedback to our forum.
Source: Azure

Announcing: New Auto-Scaling Standard Streaming Endpoint and Enhanced Azure CDN Integration

Since the launch of Azure Media Services, our streaming services have been one of the biggest draws for customers to our platform.  They offer the scale and robustness to handle the largest events on the web, including FIFA World Cup matches, streaming coverage of Olympic events, and Super Bowls.  They also offer features that greatly reduce workflow complexity and cost through dynamic packaging into HLS, MPEG-DASH, and Smooth Streaming, as well as dynamic encryption for Microsoft PlayReady, Google Widevine, Apple FairPlay, and AES-128.

However, our origin services (aka Streaming Endpoints) have always been plagued by the usability issue of needing to be provisioned with Streaming Units (each one provides 200 Mbps of egress capacity) based on scale needs.  We continually receive questions from customers and partners asking “how many Streaming Units do I need?”, “how do I know when I need more?”, “can I get dynamic packaging without Streaming Units?”, etc.

Thus, we’re very excited to announce a new Streaming Endpoint option called the Standard Streaming Endpoint, which eliminates this complexity by giving you the scale and robustness you need without having to worry about Streaming Units.  Behind the scenes, we monitor the bandwidth requirements on your Streaming Endpoint and scale out as needed.  This means a Standard Streaming Endpoint can deliver your streams to a wide range of audience sizes, from very small audiences to thousands of concurrent viewers when combined with the integrated Azure CDN services (more on that further below).

More good news! We also heard your request for a free trial period to get familiar with Azure Media Services streaming capabilities. When a new Media Services account is created, a default Standard Streaming Endpoint is automatically provisioned under the account. This endpoint includes a 15-day free trial period, and the trial period starts when the endpoint is started for the first time.

In addition to Standard Streaming Endpoints, we are also pleased to announce enhanced Azure CDN integration. With a single click, you can integrate any of the available Azure CDN providers (Akamai and Verizon) with your Streaming Endpoint, including their Standard and Premium products, and you can manage and configure all the related features through the Azure CDN portal. When Azure CDN is enabled for a Streaming Endpoint using Azure Media Services, data transfer charges between the Streaming Endpoint and the CDN do not apply. Data transferred is instead charged at the CDN edge using CDN pricing.

Comparing Streaming Endpoint Types

Our previous Streaming Endpoints are not going away, which means there are now multiple options, so let’s discuss their attributes.  But first, let me jump to the punch line and give you our recommendation for which Streaming Endpoint type you should use.  We have analyzed current customer usage and determined that the streaming needs of 98% of our customers can be met with a Standard Streaming Endpoint.  The remaining 2% are customers like Xbox Movies and Rakuten Showtime that have extremely large catalogs, massive audiences, and highly unusual origin load profiles.  So unless you feel your service will be in that stratosphere, our recommendation is that you migrate to a Standard Streaming Endpoint.  If you have any concerns that you may fall into that 2%, please contact us and we can provide additional guidance. A good guidepost is to contact us if you expect a concurrent audience size larger than 10,000 viewers.

With that out of the way, here’s some finer grained details on the types and how they can be provisioned.

Our existing Streaming Units have been renamed “Premium Streaming Units”, and any streaming endpoint that has Premium Streaming Units is called a “Premium Streaming Endpoint”.  These Streaming Endpoints behave exactly as they did before and require you to provision them with Streaming Units based on your anticipated load.  As mentioned above, almost everyone should be using a Standard Streaming Endpoint, and you should contact us if you think you need a Premium Streaming Endpoint.
Any newly created Azure Media Services account will by default include a Standard Streaming Endpoint with Azure CDN (S1 Verizon Standard) integration, created in a stopped state.  It is put into a stopped state so that it doesn’t incur any charges until you are ready to begin streaming.
New Streaming Endpoints can also be created as Standard Streaming Endpoints.
Previously, when a new Azure Media Services account was created, a Streaming Endpoint with no Streaming Units (aka a Classic Streaming Endpoint) was created. This was a free service intended to give developers time to develop services before incurring any costs.  However, Streaming Units were needed to turn on many of our critical services, such as dynamic packaging and encryption, so the value was very limited.  Some customers may still have one of these “Classic” Streaming Endpoints in their account.  We recommend customers migrate these to Standard as well; they will not be migrated automatically.  The migration can be done using the Azure management portal or the Azure Media Services APIs.  For more information, please check "Streaming endpoints overview".  As mentioned above, we are offering a 15-day free trial on Standard, which gives developers the same ability to develop services without incurring streaming costs.

| Feature | Standard | Premium |
| --- | --- | --- |
| Free first 15 days* | Yes | No |
| Streaming scale | Up to 600 Mbps when Azure CDN is not used; with Azure CDN enabled, scales to thousands of concurrent viewers | 200 Mbps per streaming unit (SU); scales with CDN |
| SLA | 99.9% | 99.9% (200 Mbps per SU) |
| CDN | Azure CDN, third-party CDN, or no CDN | Azure CDN, third-party CDN, or no CDN |
| Billing is prorated | Daily | Daily |
| Dynamic encryption | Yes | Yes |
| Dynamic packaging | Yes | Yes |
| IP filtering/G20/Custom host | Yes | Yes |
| Progressive download | Yes | Yes |
| Recommended usage | Recommended for the vast majority of streaming scenarios; contact us if you think you may have needs beyond Standard | Contact us |

*Note: The free trial doesn’t apply to existing accounts, and the end date doesn’t change with state transitions such as stop/start. The free trial starts the first time you start the streaming endpoint and ends after 15 calendar days. The free trial only applies to the default streaming endpoint and doesn’t apply to additional streaming endpoints.

When to Use Azure CDN?

As mentioned above, all new Media Services accounts have a Standard Streaming Endpoint with Azure CDN (S1 Verizon Standard) integrated by default. In most cases, you should keep CDN enabled. However, if you anticipate a maximum concurrency of fewer than 500 viewers, we recommend disabling CDN, since CDN scales best with high concurrency.

To migrate your Classic or Premium endpoint to Standard

1. Navigate to the streaming endpoint settings.
2. Toggle your type from Premium to Classic. (If your endpoint doesn’t have any streaming units, the Classic type will be highlighted.)
3. Click "Classic" and save.
4. After saving the changes, the "Opt-in to Standard" button should be visible.
5. Click "Opt-in to Standard".
6. Read the details and click YES. (Note: Migrating from Classic to Standard endpoints cannot be rolled back and has a pricing impact; please check the Azure Media Services pricing page. After migration, it can take up to 30 minutes for full propagation, and dynamic packaging and streaming requests might fail during this period.)
7. When the operation is complete, your Classic endpoint will have been migrated to Standard.

To migrate legacy CDN integration to new CDN integration

1. To migrate to the new CDN integration, you need to stop your streaming endpoint. Navigate to the streaming endpoint details and click Stop.

Note: Stopping the endpoint will delete the existing CDN configuration and stop streaming. Any settings configured manually in the CDN management portal will also be deleted and will need to be reconfigured after enabling the new CDN integration. Please also note that legacy CDN-integrated streaming endpoints don’t have the "Manage CDN" action button in the menu.

2. Click "Disable CDN".

3. Click "Enable CDN", which will trigger the new CDN integration workflow.

4. Follow the steps and select your CDN provider and pricing tier based on your streaming endpoint type.

5. Click "Start".

Note: Starting the streaming endpoint and full CDN provisioning might take up to 2 hours. During this period, you can still use your streaming endpoint; however, it will operate in a degraded mode.

6. After the streaming endpoint is started and the CDN is fully provisioned, you can access CDN management. Click "Manage CDN".

This opens the CDN management section, where you can manage and configure your streaming-integrated CDN endpoint as a regular CDN endpoint.

Note: Data transfer charges from the streaming endpoint to the CDN are only waived if the CDN is enabled through the streaming endpoint APIs or the streaming endpoint section of the Azure management portal. Manually integrating or directly creating a CDN endpoint using the CDN APIs or the CDN portal section will not disable the data charges.

Finally, with the release of Standard Streaming Endpoints, you also get access to all CDN providers and can enable your desired CDN provider, such as Verizon Standard, Verizon Premium, or Akamai Standard, with the simple Enable CDN check box on the streaming endpoint.


You can get more information on Streaming Endpoints from "Streaming endpoints overview" and the "StreamingEndpoint REST" documentation.


We hope you enjoy our new Standard Streaming Endpoint and the other features.

Common questions related to streaming

1) How do I monitor streaming endpoints?

For the last couple of months, we have been running a private preview program for our telemetry APIs. Some of you have already used the private APIs, but for general usage there was no public data, and our streaming endpoints were a black box.

The good news is that we have just released our Telemetry APIs.  With these APIs, you can monitor your streaming endpoints as well as your live channels. For streaming endpoints, you can get the throughput, latency, request count, and error count almost in real time and act based on those values. Please check the blog post “Telemetry Platform Features in Azure Media Services” for details. You can also get more information from the API documentation.
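For instance, because the telemetry is delivered to tables in the monitoring storage account you designate, you can inspect the raw rows with a few lines of PowerShell. This is only a hedged sketch: the account name and the table name pattern below are assumptions, so check the telemetry documentation for the exact schema your account uses.

# Assumed names for illustration; substitute your own monitoring account
# and consult the telemetry documentation for the actual table naming.
$key = "<storage account key>"
$ctx = New-AzureStorageContext -StorageAccountName "mymonitoringaccount" -StorageAccountKey $key
$table = Get-AzureStorageTable -Name "TelemetryMetrics20161231" -Context $ctx

# Pull back a handful of raw telemetry rows to see what is available.
$query = New-Object Microsoft.WindowsAzure.Storage.Table.TableQuery
$table.CloudTable.ExecuteQuery($query) | Select-Object -First 10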

2) How do I determine the number of streaming units?

Unfortunately, there is no simple answer to this question. The answer depends on various factors, such as your catalog size, CDN cache hit ratio, CDN node count, simultaneous connections, aggregate bitrate, protocol mix, DRM usage, etc. Based on these values, you need to do the math and calculate the required streaming unit count.  The good news is that the combination of a Standard Streaming Endpoint and Azure CDN integration will be sufficient for most workloads. If you have an advanced workload, are not sure whether a Standard endpoint is suitable for you, or want more insight into your throughput, you can use the Telemetry APIs to monitor your streaming endpoints. If your load is more than the Standard endpoint's targeted values, or you want to use Premium Streaming Units, do the math on the telemetry values to determine the streaming unit count and scale accordingly. You can start with a high number, then monitor the system and fine-tune it based on the throughput, requests/sec, and latency numbers, as sketched below.
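As a back-of-the-envelope illustration of that math, the sketch below estimates a Premium streaming unit count from hypothetical inputs; every value here is made up, and real sizing should be driven by measured telemetry.

# A rough sizing sketch, not an official formula. All inputs are hypothetical.
$concurrentViewers  = 20000
$averageBitrateMbps = 3.5     # average delivered bitrate per viewer
$cdnCacheHitRatio   = 0.95    # fraction of requests served from the CDN edge

# With a CDN in front, only cache misses reach the origin.
$originEgressMbps = $concurrentViewers * $averageBitrateMbps * (1 - $cdnCacheHitRatio)

# Each Premium Streaming Unit provides 200 Mbps of egress capacity.
$streamingUnits = [math]::Ceiling($originEgressMbps / 200)
"Estimated origin egress: $originEgressMbps Mbps -> $streamingUnits Premium SU(s)"   # 3500 Mbps -> 18 SUs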

3) I don’t see CDN analytics for my existing streaming endpoints in the new portal.

The CDN management portal for existing CDN-integrated streaming endpoints is not available in the new management portal and is deprecated. To access CDN management, you should migrate your streaming endpoint to the new CDN integration.  Please see the migration steps above.

Providing Feedback and Feature Requests

Azure Media Services will continue to grow and evolve, adding more features and enabling more scenarios.  To help serve you better, we are always open to feedback and new ideas, and we appreciate bug reports so that we can continue to provide an amazing service with the latest technologies. To request new features or provide ideas or feedback, please submit them to User Voice for Azure Media Services. If you have any specific issues or questions, or find any bugs, please post your question to our forum.

Source: Azure

HDInsight tools for IntelliJ & Eclipse December Updates

We are pleased to announce the December updates of the HDInsight Tools for IntelliJ & Eclipse. The HDInsight Tools for IntelliJ & Eclipse serve the open-source community and will be of interest to HDInsight Spark developers. The tools run smoothly on Linux, Mac, and Windows. This release focuses on user feedback to ensure a smooth user experience for project creation and submission. The release also covers a couple of new features, including Spark 2.0 support, local run, and a refined Job View & Job Graph.

Support for Spark 2.0

The HDInsight Tools for IntelliJ & Eclipse are now fully compatible with Spark 2.0, allowing you to enjoy the cool features of Spark 2.0, including improved API usability, SQL 2003 support, performance improvements, structured streaming, R UDF support, as well as operational improvements.

Local Run – Use the HDInsight Tools for IntelliJ with the Hortonworks Sandbox

With this feature, the HDInsight Tools for IntelliJ can work with generic Hadoop clusters in addition to submitting Spark jobs to HDInsight clusters. Using the Hortonworks Sandbox allows you to work with Hadoop locally on your development environment. Once you have developed a solution and want to deploy it at scale, you can then move to an HDInsight cluster.

Connect to local sandbox for local run and debug

Job View & Job Graph

The updated Job View provides a slick UI for viewing your job list, job summary, and details for a selected job. The Job Graph also allows you to view the execution details, task summary, and executors view for a job.

Job List and Job Summary

Job Graph

Task Summary

Executors View

Installation

Users can get the latest bits by going to the IntelliJ plugin repository and searching for “HDInsight.” IntelliJ will also prompt users with the latest update if the plugin is already installed.


For more information, check out the following:

IntelliJ HDInsight Spark Local Run: Use HDInsight Tools for IntelliJ with Hortonworks Sandbox   
IntelliJ Remote Debug: Use HDInsight Tools in Azure Toolkit for IntelliJ to debug Spark applications remotely on HDInsight Spark Linux cluster 

Create Spark Applications:

IntelliJ User Guide: Use HDInsight Tools in Azure Toolkit for IntelliJ to create Spark applications for HDInsight Spark Linux cluster
Video: Introducing HDInsight Tools for IntelliJ for Spark Development
Eclipse User Guide: Use HDInsight Tools in Azure Toolkit for Eclipse to create Spark applications
Video: Use HDInsight Tool for Eclipse to create Spark applications


Learn more about today’s announcements on the Azure blog and Big Data blog.

Discover more Azure service updates.


If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.
Source: Azure

Azure Storage Queues New Feature: Pop-Receipt on Add Message

As part of the “2016-05-31” REST API version, we have introduced the pop receipt on add message functionality, which has been a commonly requested feature by our users.

Pop receipt functionality in the Queue service is a great tool for developers to easily identify an enqueued message for further processing. Prior to the “2016-05-31” version, the pop receipt value could only be retrieved when a user got a message from the queue. To simplify this, we now make the pop receipt value available in the Put Message (aka Add Message) response, which allows users to update or delete a message without needing to retrieve the message first.

Below is a short code snippet that makes use of this new feature using the Azure Storage Client Library 8.0 for .NET.

// create initial message
CloudQueueMessage message = new CloudQueueMessage("");

// add the message to the queue, but keep it hidden for 3 min
queue.AddMessage(message, null, TimeSpan.FromSeconds(180));
// message.PopReceipt is now populated, and only this client can operate on the message until the visibility timeout expires
.
.
.
// update the message (now no need to receive the message first, since we already have a PopReceipt for the message)
message.SetMessageContent("");
queue.UpdateMessage(message, TimeSpan.FromSeconds(180), MessageUpdateFields.Content | MessageUpdateFields.Visibility);

// remove the message using the PopReceipt before any other process sees it
await queue.DeleteMessageAsync(message.Id, message.PopReceipt);

A common problem in cloud applications is coordinating updates across non-transactional resources. As an example, an application that processes images or videos may:

1.    Process an image
2.    Upload it to a blob
3.    Save metadata in a table entity

These steps can be tracked using the Queue service as the processes complete successfully using the following flow:

1.    Add a state as a message to the Queue service
2.    Process an image
3.    Upload it to a blob
4.    Save metadata in a table entity
5.    Delete the message if all were successful

Remaining messages in the queue represent images that failed to be processed and can be consumed by a worker for cleanup. The scenario above is now made simpler with the pop receipt on add message feature, since in step 5 the message can be deleted with the pop receipt value returned in step 1.

Quick Sample using the Face API from Azure Cognitive Services

In the following sample, we are going to upload photos from a local folder to the Blob service and also make use of the Face API to estimate each person’s age in the photos, storing the results as an entity in a table. This process is tracked in a queue, and once it completes, the message is deleted with the pop receipt value. The workflow for the sample is:

1.    Find JPG files in ‘testfolder’
2.    For each photo, repeat steps 3-7:
3.    Upload a queue message representing the processing of this photo.  
4.    Call the Face API to estimate the age of each person in the photo.
5.    Store the age information as an entity in the table.
6.    Upload the image to a blob if at least one face is detected.
7.    If both the blob and the table entity operation succeeded, delete the message from queue using the pop receipt.

// Iterate over photos in 'testfolder'
var images = Directory.EnumerateFiles("testfolder", "*.jpg");

foreach (string currentFile in images)
{
    string fileName = currentFile.Replace("testfolder", "");

    Console.WriteLine("Processing image {0}", fileName);

    // Add a message to the queue for each photo. Note the visibility timeout,
    // as the blob and table operations in the following process may take up to 180 seconds.
    // After the 180 seconds, the message will become visible and a worker role can pick up
    // the message from the queue for cleanup. The default time to live for the message is 7 days.
    CloudQueueMessage message = new CloudQueueMessage(fileName);
    queue.AddMessage(message, null, TimeSpan.FromSeconds(180));

    // read the file
    using (var fileStream = File.OpenRead(currentFile))
    {
        // detect faces and estimate their ages
        var faces = await faceClient.DetectAsync(fileStream, false, true, new FaceAttributeType[] { FaceAttributeType.Age });
        Console.WriteLine(" > " + faces.Length + " face(s) detected.");

        CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

        var tableEntity = new DynamicTableEntity(DateTime.Now.ToString("yyMMdd"), fileName);

        // iterate over detected faces
        int i = 1;
        foreach (var face in faces)
        {
            // append the age info as a property on the table entity
            tableEntity.Properties.Add("person" + i.ToString(), new EntityProperty(face.FaceAttributes.Age.ToString()));
            i++;
        }

        // upload the blob if at least one face was detected
        if (faces.Length > 0)
            await blob.UploadFromFileAsync(currentFile);

        // store the age info in the table
        table.Execute(TableOperation.InsertOrReplace(tableEntity));

        // delete the queue message with the pop receipt since the previous operations completed successfully
        await queue.DeleteMessageAsync(message.Id, message.PopReceipt);
    }
}

Check out the full sample in our GitHub sample repository.

As always, if you have any feature requests please let us know by submitting your ideas to Azure Storage Feedback.
Source: Azure

Azure Security Center extends support for Windows Server 2016

Azure Security Center now offers full support for Windows Server 2016. Today, the Azure Monitoring Agent, which Security Center uses to collect security metadata from virtual machines, is compatible with Windows Server 2008 R2 and newer versions, including Windows Server 2016, as well as the most popular Linux distros (see the complete list).

Security Center leverages this metadata to identify security issues, such as missing system updates and vulnerable OS configurations, and applies behavioral analysis to detect malicious activity, such as an attacker executing code or attempting to persist on a compromised VM.

To enable these protections:

Launch Security Center from the Azure portal
Turn on data collection (if you have not done so already) to automatically provision the Monitoring Agent on all supported VMs
Start the 90-Day free trial to enable behavioral analysis and other advanced threat detections

Source: Azure

SoftNAS Cloud® on Azure – Cloud NAS Storage made easy

Today’s post is co-authored by Michael Richtberg, VP at SoftNAS, who contributed heavily to describing much of the technical detail discussed in this document.

What if you could take advantage of the unlimited flexibility offered by an Azure cloud-hosted infrastructure without changing your applications or your data?  Would you consider a move that can keep up with your business as needs change and demands grow, without the strain of rearchitecting your own capital-intensive data centers?  Consider the flexibility and scale of Microsoft Azure's on-demand resources, a pool no single organization could possibly afford on its own, which lets you tap into virtually unlimited adaptability anywhere in the world.


| | Traditional Storage Appliances | Cloud Hosted Virtual Storage Appliances |
| --- | --- | --- |
| Purchasing Terms | Purchase and fill for 3 to 5 years. | Pay for used capacity. |
| Storage Elasticity | Fixed capacity or scale up only. | Flexible capacity – scales up or down as needed. |
| Design Point | Separate products for performance or capacity. | Flexible combinations of performance and capacity workloads. |

IT organizations need the freedom to make the best choices for their business. Demands on enterprise storage capacity continue to grow at an increasing rate. Being able to store more data and enable more applications and users, regardless of access requirements, is essential. With ultra-easy consumption, pay-as-you-grow pricing, and no architectural limits on growth, the appeal of the public cloud consumption model is rising.

SoftNAS Cloud® is a software-only enterprise storage virtual appliance solution that can replace traditional on-premises storage options for applications that typically require NFS, CIFS/SMB, iSCSI, or the Apple Filing Protocol (AFP). Microsoft has partnered with SoftNAS to enable an easy transition to Azure for customers that need storage capacities ranging from terabytes to many petabytes.

How Does SoftNAS Cloud Work?

Unlike traditional storage, which you pick from a list of SKUs and then wait to arrive, SoftNAS Cloud running on Azure takes less than an hour to configure. There are four fundamental steps, all of which occur via the Azure Portal:

Creating the Virtual Storage Appliance – The Azure search function locates the SoftNAS Cloud image and then walks you through selecting an “instance type” for the virtual controller. Options instantly show up in the portal workflow for picking an appropriate compute capacity for loading the SoftNAS Cloud image. Azure instance types provide the physical resources (RAM, local SSD for caching, networking, and CPU) that allow different degrees of performance. Here’s a video for details.

Attaching the Storage Account – Using the flexible options for storage performance and capacity types available on Azure, users can attach the appropriate media provisioned from an Azure storage account. The options range from all-flash to cool Blob (object) storage types. See more on using Azure block storage or adding Blob (object) storage.

High Availability – Using Azure Availability Sets, SoftNAS Cloud runs two instances that serve workloads through a shared virtual IP address. The Availability Set architecture ensures that the two virtual machines running SoftNAS Cloud are not part of the same affinity group. For more information on setting up high availability, please see this video.

Final steps – Confirm the configuration, purchase, and push the setup to deployment. In less than an hour, the NAS storage solution is created and ready to use. Configuring the volumes and LUNs occurs via the SoftNAS StorageCenter™ web console. Here’s a video for an overview of the SoftNAS StorageCenter and more information on configuring pools and volumes.

The resulting configuration leverages the Azure infrastructure that can now service workloads using standard storage protocol interfaces that can adjust over time to match business requirements. SoftNAS Cloud utilizes the Azure block and/or object storage accounts as a storage pool much like traditional storage systems use disk drives.

SoftNAS Cloud and Azure make a great combination for increasing the native Azure file services capacity beyond the 5 TB limit available today. Because SoftNAS Cloud is a software-only means of creating the storage system, customers have the flexibility to choose from a wide range of Azure compute instances to meet varying performance demands. SoftNAS Cloud leverages these combinations to provide flexible cost and performance storage solutions that are often difficult or impossible to obtain using conventional on-premises options. If you are unsure of your future demands, you simply add capacity as your needs change.

Thinking you might be taking a step backwards by shifting your storage and applications to the cloud? SoftNAS Cloud includes all the enterprise features expected from an on-premises network storage solution, including these advanced capabilities:

Data Protection

High availability for a No Downtime Guarantee™
Copy on write file system
On-disk and in-flight encryption for 360-degree™ protection
AD, AAD, and LDAP integration

Lifecycle Management

Instantly writable clones
Snapshotting
Replication

Data Efficiency Services

Deduplication
Compression
… both of which improve cloud storage cost effectiveness

Flexibility

NFS, CIFS/SMB, AFP, and iSCSI protocols
Hybrid, on-premises, or Azure cloud hosted
Scales from terabytes to petabytes

If Azure already provides storage, why do I need SoftNAS Cloud?

Indeed, Azure provides various storage options, but these may not help fill all the needs customers have for making the shift to a public cloud hosted infrastructure. Here are some reasons why and how SoftNAS Cloud complements these Azure offerings:

| Azure Storage | Azure Capabilities | SoftNAS Cloud on Azure |
| --- | --- | --- |
| File Services | CIFS/SMB protocol only; no AD integration; 5 TB limit | Using the options below, SoftNAS Cloud overcomes capacity limits, adds all file protocols, adds full AD/AAD/LDAP user access controls, consumption efficiency, and full-featured data services. |
| Page Blob Premium & Standard Storage | SSD- or hard-disk-based storage; block only; 2–40 TB capacity | Leverages the block storage to present NAS file protocols. Accelerates performance and improves data efficiency. |
| Cool & Hot Block Blob Storage | Object storage only; up to 500 TB per storage account | Leverages the object storage to present NAS file protocols and expands the Azure capacity to petabytes by aggregating multiple storage accounts. Accelerates performance and improves data efficiency. |

Ideal Use Cases

The use cases for software-based virtual storage appliances hosted on Azure span many segments. At the end of the day, all computing resources require and use storage. For unstructured data with ever-increasing file sizes, examples of ideal uses include:

Archive and record retention
User file sharing
Video/media storage
User file directories
Source code repository
Medical records
Legal documents
Energy Industry data
Big Data
Genomics

Hybrid storage, extending on-premises capacity to Azure using standard mountable NAS protocols connected over a VPN WAN connection, is also a common use case. Use your existing data center infrastructure and expand to an Azure-hosted storage option with ease. SoftNAS Cloud provides on-premises-to-cloud capabilities for replication or expansion.

When to Think About Using SoftNAS Cloud

In summary, SoftNAS Cloud fits customer needs for the following scenarios when considering Azure: 

SoftNAS Cloud extends the native capabilities of Azure with CIFS/SMB, NFS, AFP, and iSCSI storage.
Your applications need large file capacity (up to many petabytes) and an easy way to move to Azure.
You are ready to move from traditional on-premises storage to an elastic public model but don’t want the expense of re-engineering the data services.
You need a flexible storage model that can serve different roles, ranging from high-performance all-flash to capacity-oriented Cool Blob.

How to Get Started:

You can get started with SoftNAS Cloud on Azure in multiple ways via the Azure Marketplace:


Free Azure Test Drive: Get started in under 10 minutes using the Azure Test Drive. This option allows you to quickly try SoftNAS Cloud without having to install or configure anything. The SoftNAS Cloud instance loads automatically, connects the Azure storage account, and pre-provisions multiple storage volumes/LUNs using NFS, CIFS/SMB, and iSCSI. No credit card or Azure subscription is required, but the environment is only available for 1 hour from the time you enter the test drive.
Free 30-Day Trial: You can also try the SoftNAS 30-day free trial on your Azure subscription. This will allow you to install, configure, and use SoftNAS Cloud as if you were running in a production environment, letting you explore the product for multiple weeks; it does, however, require an Azure subscription.
Purchase: You can purchase SoftNAS Cloud on the Azure Marketplace. We offer an Express Edition with 1 TB of capacity and a Standard Edition with 20 TB of capacity. Discounted larger deployments, up to many petabytes, are available via a BYOL (Bring Your Own License) model obtained by contacting the SoftNAS sales team or an authorized reselling partner.

You can also find additional helpful information via these resources:

SoftNAS Video Tutorial
User Reference guide
Contact SoftNAS support

You can also learn more about SoftNAS Cloud at our YouTube channel.
Source: Azure

Design Azure infrastructure services to host a multi-tier LOB application

Deploying multi-tier line of business (LOB) applications as virtual machines in Azure is a combination of:

What you already know, which is how to configure the servers and the overall application in your local datacenter.

What you might not already know, which is how to adapt and design the application for the networking, storage, and compute elements of Azure infrastructure services.

To help with what you might not already know, you can step through a design methodology that incorporates the following:

Resource groups
Connectivity
Storage
Identity
Security
Virtual machines

To understand this design methodology and use it for your own LOB application, see the Design and Build an LOB application in Azure IaaS video and slide deck of the November 2016 webinar for the Cloud Adoption Advisory Board (CAAB).

The webinar video has the following sections:

Definitions and assumptions (starts at 3:12)
Design process (starts at 8:05)
Design example (starts at 38:50)
Build with Azure PowerShell (starts at 50:30)

The result of the design process, which incorporates Azure Patterns and Practices recommendations and best practices, is a table of virtual machines and their Azure infrastructure-specific settings, such as the one stepped through in the webinar’s design example.

After you have determined the table entries, it’s much easier to build out the elements and get all the Azure infrastructure settings correct. For example, here is how you might use the table and a PowerShell command block to create a virtual network and its subnets:
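The following is a representative sketch using the AzureRM module; the resource group, location, names, and address prefixes are placeholders standing in for entries from your own table.

# Placeholder values drawn from the design table; substitute your own.
$frontendSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name "FrontEnd" -AddressPrefix "10.0.1.0/24"
$backendSubnet  = New-AzureRmVirtualNetworkSubnetConfig -Name "BackEnd" -AddressPrefix "10.0.2.0/24"

$vnet = New-AzureRmVirtualNetwork -Name "LOBVNet" -ResourceGroupName "LOBResourceGroup" `
    -Location "West US" -AddressPrefix "10.0.0.0/16" -Subnet $frontendSubnet, $backendSubnet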

Additionally, here is how you might use the table and a PowerShell command block to create a virtual machine:
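Again, a hedged sketch only: the VM name, size, image, and credentials are placeholders for your table entries, and $vnet comes from the previous block.

# Placeholder values for illustration; the design table supplies the real ones.
$cred = Get-Credential -Message "Local administrator account"
$nic  = New-AzureRmNetworkInterface -Name "AppVM1-NIC" -ResourceGroupName "LOBResourceGroup" `
    -Location "West US" -SubnetId $vnet.Subnets[0].Id

$vmConfig = New-AzureRmVMConfig -VMName "AppVM1" -VMSize "Standard_DS2_v2" |
    Set-AzureRmVMOperatingSystem -Windows -ComputerName "AppVM1" -Credential $cred |
    Set-AzureRmVMSourceImage -PublisherName "MicrosoftWindowsServer" -Offer "WindowsServer" `
        -Skus "2016-Datacenter" -Version "latest" |
    Add-AzureRmVMNetworkInterface -Id $nic.Id

New-AzureRmVM -ResourceGroupName "LOBResourceGroup" -Location "West US" -VM $vmConfig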

The slide deck has the following appendices that were not covered in the webinar:

PowerShell command blocks: Each slide is a fill-in-the-blanks set of Azure PowerShell commands to build an element of Azure infrastructure services.
Design your naming conventions: Tips for determining how to name your Azure infrastructure elements.

Use this design methodology for accelerated and successful deployments of LOB applications hosted in Azure infrastructure services.

Source: Azure

Blockchain Basics & Partner Strategy

Thank you to all of you who packed the aisles at the Microsoft Worldwide Partner Conference for our Blockchain Basics presentations.  We had overwhelming feedback from you that we needed to make our presentation available to those who were not able to see it in Toronto. Please enjoy! And let us know if you have any feedback or questions.

Blockchain is a secure, shared, distributed ledger that can be public, private, or consortium

Secure – Uses cryptography to create transactions that are impervious to fraud and establishes a shared truth.

Shared – Blockchain value is directly linked to the number of organizations or companies that participate in them. There is huge value for even the fiercest of competitors to participate with each other in these shared database implementations.

Distributed – There are many replicas of the blockchain database. In fact, the more replicas there are, the more authentic it becomes.

Ledger – The database is append only so it is an immutable record of every transaction that occurs.

Microsoft is implementing a three-part strategy

Build and learn from key partner-driven POCs built on top of various blockchain technologies
Grow the blockchain marketplace ecosystem & artifacts together with our partners & customers
Develop key Azure blockchain middleware services to ensure the infrastructure is enterprise ready


Project Bletchley middleware and cryptlets

Project Bletchley is the code name for extending blockchain by creating both new middleware and secure cryptlets. We are connecting to many different ledgers and existing external and internal services to enable a robust blockchain ecosystem for the enterprise.


How do you get started?

– SIGN UP FOR AN AZURE ACCOUNT

– SETUP BLOCKCHAIN ON AZURE & PLAY WITH TEMPLATES

– JOIN OUR BLOCKCHAIN ADVISORY YAMMER GROUP

– ONCE YOU FEEL CONFIDENT, ESTABLISH A LAB

Contact us with any questions (BaaS@Microsoft.com)
Keep up-to-date with Blockchain on Azure

Why Microsoft?

We have world class Identity Services through Azure Active Directory
Our corporate strategy is focused on creating and nurturing an open cloud ecosystem
We have flexible architecture that allows you to avoid vendor lock-in

Thank you, and let us know if you have any questions or requests.
Source: Azure

Protection and recovery of Citrix XenDesktop and XenApp using Azure Site Recovery

I am excited to announce support for the protection and recovery of Citrix XenDesktop and XenApp environments in Azure using Azure Site Recovery (ASR).  We have been working closely with Citrix to validate and provide guidance on leveraging ASR to build a robust, enterprise-grade DR solution for recovery to Azure of on-premises XenDesktop and XenApp environments running on VMware/Hyper-V.

With ASR, you can protect and recover the essential components of your on-premises XenDesktop and XenApp environment, including:

Citrix Delivery Controller
StoreFront Server
XenApp Master Virtual Delivery Agent (VDA)
XenApp License Server
AD DNS Server
SQL Database Server

Additionally, ASR provides you the ability to:

Recover to an application-consistent point in time, which is useful for recovering your multi-tiered Citrix VDI environment to an application-consistent state.
Use flexible recovery plans to customize the order of recovery by grouping together machines that need to fail over together, and add automation scripts and manual actions to be executed on failover.
Perform non-disruptive recovery testing, which lets you test the failover of your Citrix VDI farm to Azure without impacting ongoing replication or the performance of your production environment.

Detailed step-by-step guidance for building a disaster recovery solution using ASR has been developed in close collaboration with Citrix. The whitepaper from Citrix detailing this guidance is available for download.

Ready to start using ASR? Check out additional product information to start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate.

Azure Site Recovery, as part of Microsoft Operations Management Suite, enables you to gain control and manage your workloads no matter where they run (Azure, AWS, Windows Server, Linux, VMware or OpenStack) with a cost-effective, all-in-one cloud IT management solution. Existing System Center customers can take advantage of the Microsoft Operations Management Suite add-on, empowering them to do more by leveraging their current investments. Get access to all the new services that OMS offers, with a convenient step-up price for all existing System Center customers. You can also access only the IT management services that you need, enabling you to on-board quickly and have immediate value, paying only for the features that you use.
Source: Azure

Christian Wade explains the preview of Azure Analysis Services

Christian Wade stops by Azure Friday to speak with Scott about Azure Analysis Services. Built on the proven analytics engine in Microsoft SQL Server Analysis Services, Azure Analysis Services delivers enterprise-grade BI semantic modeling capabilities with the scale, flexibility, and management benefits of the cloud. Azure Analysis Services helps you transform complex data into actionable insights. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc data analysis. By leveraging the skills, tools, and data your team has today, you can get more from the investments you’ve already made.

Watch Christian as he introduces Azure Analysis Services and shows various demos of how to use and manage Analysis Services in the cloud.

Try the preview of Azure Analysis Services and learn about creating your first data model.
Source: Azure