Explore Microsoft Cloud Platform System – delivering Azure experiences in an integrated system

Are you getting ready for your upcoming Ignite trip? Are you ready to learn how Microsoft Cloud Platform System (CPS) can help you get started with the cloud without breaking the integrity of your existing virtualized environments? Join us at the BRK2260 session, “Explore Microsoft Cloud Platform System – delivering Azure experiences in an integrated system,” to learn all about our hybrid cloud vision, new developments, and new possibilities that enable IT organizations to get the best of both public and private cloud infrastructures. You will also learn how to take advantage of various Microsoft technologies today to start your cloud journey and plan your investments so that they are aligned with the future. As part of the session, we’d also like to share some real-life customer examples and use cases based on CPS, as well as best practices.

My name is Cheng Wei, and I’m a program manager on the Azure Stack team. Together with my colleagues Walter Oliver and John Haskin, we can’t wait to share all these exciting topics with you at Ignite, and we’d love to hear what’s on your mind and what you would like to discuss with us around this subject.

During the session, you can expect to hear from us on the following areas:

Explain Microsoft’s hybrid cloud vision
Introduce CPS product family (CPS Premium and CPS Standard)
Explain WAP / CPS and Azure Stack co-existing strategy and experience
Demo the experiences after connecting WAP to Azure Stack

Please note that not everything we’ll share at this session will be available in the Technical Preview 2 release. So don’t miss this opportunity to come learn and see a demo of how to continue your cloud investment with WAP/CPS today and connect them with Azure Stack next year when it’s released!

Again, if you’re coming to Ignite, we’d love to hear your thoughts on anything else you’d like to see and hear in this session, or any specific questions you’d like to start discussing with us. Feel free to follow us @cheng__wei, @walterov, and @AzureStack for more updates on this and other Microsoft Azure Stack session topics.

Thanks, and we look forward to meeting some of you at @MS_Ignite!

Source: Azure

Refreshing user logins in App Service Mobile Apps

Azure App Service’s Easy Auth feature has made enabling app authentication extremely simple, whether you are working with client flow or server flow. Still, if you’ve worked with token-based authentication in the past, token expiry and refresh can be a hassle. Depending on the authentication provider, token expiry can range widely from minutes to months. Facebook has a 60-day expiry, while other common providers like Google, Azure AD, and Azure Mobile Apps have a 1-hour expiry. You have probably had to handle these in your code to preserve app user authentication and the client experience, similar to what Adrian Hall detailed in his 30 Days of Azure Mobile Apps: Day 7 – Refresh Tokens post.

To simplify this token refresh experience, we recently baked OAuth 2.0’s refresh tokens into Easy Auth’s client SDKs! Instead of adding your own refresh logic for authentication, here’s how you can use the built-in token refresh feature in our Managed Azure Mobile Client SDK 2.1.0 or later to keep app users logged in.

This feature is only available for the server-managed authentication flow. To balance security against an app’s possible inactivity (for example, over a weekend), refresh tokens can be obtained as long as the Mobile Apps authentication token has not been expired for more than 72 hours (see Chris Gillum’s post for more details).

How to Use Refresh Tokens with Your Identity Provider

We assume that you have successfully set up desired identity providers with your Mobile App following how-tos for Microsoft Account, Google, or Azure Active Directory (Facebook and Twitter are not supported). 

Microsoft Account

Enable the wl.offline_access scope under Portal > Settings > Easy Auth > Authentication / Authorization > Microsoft Account:

Then the following snippets will help you refresh users in a server-managed authentication workflow:

MobileServiceUser user = await client.LoginAsync(MobileServiceAuthenticationProvider.MicrosoftAccount);
// ...
user = await client.RefreshUserAsync();

Google

In a server-managed authentication workflow, pass the additional parameter access_type=offline to MobileServiceClient.LoginAsync():

MobileServiceUser user = await client.LoginAsync(MobileServiceAuthenticationProvider.Google,
    new Dictionary<string, string>() {{ "access_type", "offline" }});
// ...
user = await client.RefreshUserAsync();

AAD

After configuring your AAD client secret in Azure Resource Explorer (see the Azure Resource Explorer snippets here if you don’t know how), pass an additional parameter to MobileServiceClient.LoginAsync() in your server-managed authentication flow:

MobileServiceUser user = await client.LoginAsync(MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory,
    new Dictionary<string, string>() {{ "response_type", "code id_token" }});
// ...
user = await client.RefreshUserAsync();

 

Handling Refresh Failures

RefreshUserAsync works only if all of the following requirements are satisfied:

The identity provider supports OAuth 2.0 refresh tokens. Microsoft Account, Google, and Azure Active Directory support refresh tokens, while Facebook and Twitter do not.
The permission/scope required for using refresh tokens has been granted by the developer, e.g. the wl.offline_access scope for Microsoft Account, the offline access_type for Google, and the code id_token response_type for Azure Active Directory.
Neither the access token nor the refresh token has been revoked by the developer.
The MobileServiceAuthenticationToken has not been expired for more than 72 hours.

Here are some errors that you may encounter with the refresh call.

400 Bad Request
Why: The offline permission/scope was not granted, or the identity provider (Facebook, Twitter) does not support refresh tokens.
What to do: Prompt the user to log in again.

401 Unauthorized
Why: The MobileServiceAuthenticationToken is invalid, or it has been expired for more than 72 hours.
What to do: Prompt the user to log in again.

403 Forbidden
Why: The access token, the refresh token, or the user’s permission was revoked.
What to do: Prompt the user to log in again.

 

Give it a try and let us know what you think!
Source: Azure

Create an Office 365 dev/test environment in Azure

With the Office 365 dev/test environment in Azure, you can follow step-by-step instructions to configure a simplified intranet in Azure infrastructure services, an Office 365 Enterprise E5 subscription, and directory synchronization for Azure Active Directory (AD). With this new dev/test environment, you can:

Perform Office 365 application development and testing in an environment that simulates an enterprise organization.
Learn about Office 365 Enterprise E5 features, experiencing them in a consequence-free environment that is separate from your organization’s infrastructure, your production Office 365 subscription, and your personal computer.
Gain experience setting up directory synchronization between a Windows Server AD forest and the Azure AD tenant of an Office 365 subscription.

Do all of this for free with Office 365 Enterprise E5 and Azure trial subscriptions.

Build out the Office 365 dev/test environment with these steps:

Create a simulated intranet in Azure infrastructure services.
Add an Office 365 Enterprise E5 subscription.
Configure and test directory synchronization between the Windows Server AD forest of your simulated intranet and the Office 365 subscription.

Here is the progression:

Once complete, you can connect to any of the computers on the simulated intranet with Remote Desktop connections to perform administration, app development, and app installation and testing.

This dev/test environment can also be extended with an Enterprise Mobility Suite (EMS) trial subscription, resulting in the following:

With the Office 365 and EMS dev/test environment, you can test scenarios or develop applications for a simulated enterprise that is using both Office 365 and EMS.
Source: Azure

Azure DocumentDB powers the modern marketing intelligence platform

Affinio is an advanced marketing intelligence platform that enables brands to understand their users at a deeper, richer level. Affinio’s learning engine extracts marketing insights for its clients by mining billions of points of social media data. To store and process billions of social network connections without the overhead of database management, partitioning, and indexing, the Affinio engineering team chose Azure DocumentDB.

You can learn more about Affinio’s journey in this newly published case study.  In this blog post, we provide an excerpt of the case study and discuss some effective patterns for storing and processing social network data.

 

Why are NoSQL databases a good fit for social data?

Affinio’s marketing platform extracts data from social network platforms like Twitter and other large social networks in order to feed into its learning engine and learn insights about users and their interests. The biggest dataset consisted of approximately one billion social media profiles, growing at 10 million per month. Affinio also needs to store and process a number of other feeds including Twitter tweets (status messages), geo-location data, and machine learning results of which topics are likely to interest which users.

A NoSQL database is a natural choice for these data feeds for a number of reasons:

The APIs of popular social networks produce data in JSON format.
The data volume is in the TBs, and needs to be refreshed frequently (with both the volume and frequency expected to increase rapidly over time).
Data from multiple social media producers is processed downstream, and each social media channel has its own schema that evolves independently.
And crucially, a small development team needs to be able to iterate rapidly on new features, which means that the database must be easy to setup, manage, and scale.

Why does Affinio use DocumentDB over AWS DynamoDB and Elasticsearch?

The Affinio engineering team initially built their storage solution on top of Elasticsearch on AWS EC2 virtual machines. While Elasticsearch addressed their need for scalable JSON storage, they realized that setting up and managing their own Elasticsearch servers took away precious time from their development team. They then evaluated Amazon’s DynamoDB service which was fully-managed, but it did not have the query capabilities that Affinio needed.

Affinio then tried Azure DocumentDB, Microsoft’s planet-scale NoSQL database service. DocumentDB is a fully managed NoSQL database with automatic indexing of JSON documents, elastic scaling of throughput and storage, and rich query capabilities, which met all of Affinio’s requirements for functionality and performance. As a result, Affinio decided to migrate its entire stack off AWS and onto Microsoft Azure.

“Before moving to DocumentDB, my developers would need to come to me to confirm that our Elasticsearch deployment would support their data or if I would need to scale things to handle it. DocumentDB removed me as a bottleneck, which has been great for me and them.”

-Stephen Hankinson, CTO, Affinio

Modeling Twitter Data in DocumentDB – An Example

As an example, let’s take a look at how Affinio stores data from Twitter status messages in DocumentDB. Here’s a sample JSON status message (truncated for brevity):

{
  "created_at": "Fri Sep 02 06:43:15 +0000 2016",
  "id": 771599352141721600,
  "id_str": "771599352141721600",
  "text": "RT @DocumentDB: Fresh SDK! DocumentDB SDK v1.9.4 just released!",
  "user": {
    "id": 2557284469,
    "id_str": "2557284469",
    "name": "Azure DocumentDB",
    "screen_name": "DocumentDB",
    "location": "",
    "description": "A blazing fast, planet scale NoSQL service delivered by Microsoft.",
    "url": "http://t.co/30Tvk3gdN0"
  }
}

Storing this data in DocumentDB is straightforward. As a schema-less NoSQL database, DocumentDB consumes JSON data directly from the Twitter APIs without requiring schema or index definitions. As a developer, the primary considerations for storing this data in DocumentDB are the choice of partition key and addressing any unique query patterns (in this case, searching within text messages). We’ll look at how Affinio addresses these two.

Picking a good partition key:  DocumentDB partitioned collections require that you specify a property within your JSON documents as the partition key. Using this partition key value, DocumentDB automatically distributes data and requests across multiple physical servers. A good partition key has a number of distinct values and allows DocumentDB to distribute data and requests across a number of partitions. Let’s take a look at a few candidates for a good partition key for social data like Twitter status messages.

"created_at" – has a number of distinct values and is useful for accessing data for a certain time range. However, since new status messages are inserted based on the created time, this could result in hot spots for certain time values, like the current time.
"id" – this property corresponds to the ID of a Twitter status message. It is a reasonable candidate for a partition key, because there are a large number of unique status messages, and they can be distributed fairly evenly across any number of partitions/servers.
"user.id" – this property corresponds to the ID of a Twitter user. This was ultimately the best choice for a partition key, because not only does it allow writes to be distributed, it also allows reads for a certain user’s status messages to be served efficiently via queries against a single partition.

With "user.id" as the partition key, Affinio created a single DocumentDB partitioned collection provisioned with 200,000 request units per second of throughput (both for ingestion and for querying via their learning engine).

Searching within the text message: Affinio needs to be able to search for words within status messages, but does not need to perform advanced text analysis like ranking. Affinio runs a Lucene tokenizer on the relevant fields when it needs to search for terms, and it stores the terms as an array inside a JSON document in DocumentDB. For example, "text" can be tokenized as a "text_terms" array containing the tokens/words in the status message. Here’s an example of what this would look like:

{
  "text": "RT @DocumentDB: Fresh SDK! DocumentDB dotnet SDK v1.9.4 just released!",
  "text_terms": [
    "rt",
    "documentdb",
    "dotnet",
    "sdk",
    "v1.9.4",
    "just",
    "released"
  ]
}
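As a rough illustration of how such an array might be produced, here is a simplified stand-in for a real Lucene tokenizer (which applies far more sophisticated analysis rules); the helper name and the exact trimming logic are our own:

```python
def tokenize(text):
    """Lowercase the text and split it into search terms, trimming surrounding
    punctuation and dropping @/# markers (a simplified Lucene-style analyzer)."""
    terms = []
    for raw in text.lower().split():
        term = raw.strip('.,!?:;"()').lstrip('@#')
        if term:
            terms.append(term)
    # keep only the first occurrence of each term, preserving order
    return list(dict.fromkeys(terms))

status = {"text": "RT @DocumentDB: Fresh SDK! DocumentDB dotnet SDK v1.9.4 just released!"}
status["text_terms"] = tokenize(status["text"])
print(status["text_terms"])
```

The enriched document is then written to DocumentDB as-is; no schema change is needed to add the extra array.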

Since DocumentDB automatically indexes all paths within JSON including arrays and nested properties, it is now possible to query for status messages with certain words in them like “documentdb” or “dotnet” and have these served from the index. For example, this is expressed in SQL as:

SELECT * FROM status_messages s WHERE ARRAY_CONTAINS(s.text_terms, "documentdb")

Next Steps

In this blog post, we looked at why Affinio chose Azure DocumentDB for their market intelligence platform, and some effective patterns for storing large volumes of social data in DocumentDB.

Read the Affinio case study to learn more about how Affinio harnesses DocumentDB to process terabytes of social network data, and why they chose DocumentDB over Amazon DynamoDB and Elasticsearch.
Learn more about Affinio from their website.
If you’re looking for a NoSQL database to handle the demands of modern marketing, ad-technology and real-time analytics applications, try out DocumentDB using your free trial, or schedule a 1:1 chat with the DocumentDB engineering team.  
Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB.

Source: Azure

Announcing Update 3.0 for StorSimple 8000 series

We are pleased to announce Update 3.0 for the StorSimple 8000 series. This release has the following new features and improvements (for the full list, see the release notes linked below):

Improves read and write performance to the cloud, resulting in faster backups to the cloud, faster reads of tiered data from the cloud, and faster reads/restores of data after failover during a disaster recovery operation
Enables the standby controller on the 8000 series physical appliance to perform space reclamation, ensuring that the active controller’s resources stay focused on serving data for active I/O
Improves the monitoring charts available in the StorSimple Management Service

The automated update is being released in a phased approach over the coming months and will be available for all customers to apply from the StorSimple Management Service in Azure. The update can also be applied manually using the hotfix method (see the link below).

Next steps:

StorSimple 8000 Series Update 3 release notes

Install Update 3 on your StorSimple device
Source: Azure

Join the Microsoft Azure Stack Meetup @ Ignite 2016

The Microsoft Azure Stack product team is hosting a meetup at the Ignite 2016 conference in Atlanta on Monday, Sept. 26th. This is a great opportunity to network with others who are interested in Azure Stack and provide the product team feedback on features and scenarios for Azure Stack.

I am sure this will be my favorite event of the entire conference.

We will meet at the Hilton Garden Inn – just a 10-minute walk from the Georgia World Congress Center.

We expect a lot of attendees, so please take a few seconds to register at our meetup registration page.
Source: Azure

Azure Event Hubs Archive is now in public preview, providing efficient micro-batch processing

Azure Event Hubs is a real-time, highly scalable, and fully managed data-stream ingestion service that can ingress millions of events per second and stream them through multiple applications. This lets you process and analyze massive amounts of data produced by your connected devices and applications.

Among the many key scenarios for Event Hubs are long-term data archival and downstream micro-batch processing. Customers typically use compute (Event Processor Host/event receivers) or Stream Analytics jobs to perform these archival or batch-processing tasks. These, along with other custom downstream solutions, involve significant overhead for scheduling and managing batch jobs. Why not have something out of the box that solves this problem? Well, look no further – there’s now a great new feature called Event Hubs Archive!

Event Hubs Archive addresses these important requirements by archiving the data directly from Event Hubs to Azure storage as blobs. ‘Archive’ will manage all the compute and downstream processing required to pull data into Azure blob storage. This reduces your total cost of ownership, setup overhead, and management of custom jobs to do the same task, and lets you focus on your apps!

Benefits of Event Hubs Archive

Simple setup

It is extremely straightforward to configure your Event Hubs to take advantage of this feature.

Reduced total cost of ownership

Since Event Hubs handles all the management, there is minimal overhead involved in setting up your custom job processing mechanisms and tracking them.

Cohesive with your Azure Storage

By just choosing your Azure Storage account, Archive pulls the data from Event Hubs to your containers.

Near-real-time batch analytics

Archived data is available within minutes of ingress into Event Hubs. This enables the most common near-real-time analytics scenarios without having to construct separate data pipelines.

A peek inside the Event Hubs Archive

Event Hubs Archive can be enabled in one of the following ways:

With just a click on the new Azure portal on an Event Hub in your namespace

Azure Resource Manager templates

Once the Archive is enabled for the Event Hub, you need to define the time and size windows for archiving.

The time window allows you to set the frequency with which the archival to Azure Blobs will happen. The frequency range is configurable from 60 – 900 seconds (1 – 15 minutes), both inclusive, with a granularity of 1 second. The default setting is 300 seconds (5 minutes).

The size window defines the amount of data built up in your Event Hub before an archival operation. The size range is configurable between 10MB – 500MB (10485760 – 524288000 bytes), both inclusive, at byte level granularity.

The archive operation kicks in when either the time window or the size window is exceeded, whichever comes first. Once the time and size settings are configured, the next step is to configure the destination, which is the storage account of your choosing.
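The trigger rule the two windows define can be sketched as follows (an illustrative sketch only; the size-window default below is an example value we chose, not the service’s):

```python
def should_archive(elapsed_seconds, buffered_bytes,
                   time_window_seconds=300, size_window_bytes=100 * 1024 * 1024):
    """Return True when an archive write should be triggered: the configured
    time window has elapsed, or the buffered data has reached the size window,
    whichever happens first."""
    return (elapsed_seconds >= time_window_seconds
            or buffered_bytes >= size_window_bytes)

# The 5-minute default time window fires even if little data has arrived:
print(should_archive(elapsed_seconds=300, buffered_bytes=1024))  # True
```

Because the time window fires on its own, a quiet Event Hub still produces a blob for every window, which is why Archive creates empty blobs when no events arrive.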

That’s it! You’ll soon see blobs being created in the specified Azure Storage account’s container.

The blobs are created with the following naming convention:

<Namespace>/<EventHub>/<Partition>/<YYYY>/<MM>/<DD>/<HH>/<mm>/<ss>

For example: Myehns/myhub/0/2016/07/20/09/02/15. The blobs are in the standard Avro format.
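A downstream job can recover the partition and the window timestamp directly from this naming convention; for example (a minimal sketch, with the helper name being our own):

```python
from datetime import datetime, timezone

def parse_archive_blob_name(name):
    """Split an Event Hubs Archive blob name of the form
    <Namespace>/<EventHub>/<Partition>/<YYYY>/<MM>/<DD>/<HH>/<mm>/<ss>
    into its namespace, event hub, partition ID, and window timestamp (UTC)."""
    namespace, event_hub, partition, *stamp = name.split("/")
    year, month, day, hour, minute, second = (int(part) for part in stamp)
    window = datetime(year, month, day, hour, minute, second, tzinfo=timezone.utc)
    return namespace, event_hub, int(partition), window

ns, hub, partition, window = parse_archive_blob_name("Myehns/myhub/0/2016/07/20/09/02/15")
print(ns, hub, partition, window.isoformat())  # Myehns myhub 0 2016-07-20T09:02:15+00:00
```

This makes it easy to, say, list only the blobs for a given partition and hour when replaying data.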

If there is no event data in the specified time and size window, empty blobs will be created by Archive.

Pricing

Archive is an option when creating an Event Hub in a namespace and is limited to one per Event Hub. It is billed on top of the throughput unit charge and is thus based on the number of throughput units selected for the Event Hub.

Opting into Archive involves 100% egress of the ingested data, and the cost of storage is not included. This means the charge is primarily for compute (hey, we are handling all this for you!).

Check out the price details on Azure Event Hubs pricing.

Let us know what you think about new sinks and new serialization formats.

Start enjoying this feature, available today.

If you have any questions or suggestions, leave us a comment below.

 
Source: Azure

StorSimple Virtual Array – ROBO Scenario

StorSimple solutions address some of the biggest problems faced by IT organizations: double-digit data growth, storage capacity, and the data protection complexities that come with them. The StorSimple 8000 series solution has addressed this problem in datacenters by storing the most used data on an on-premises physical array and automatically tiering cold, unused data to the cloud. Customers have requested the ability to use StorSimple solutions in their remote and branch offices for smaller amounts of data, with native file server capability. We announced StorSimple Virtual Array general availability in March 2016.

StorSimple Virtual Array is a software version of the StorSimple solution designed specifically to address the data growth problems in smaller remote and branch office scenarios. The virtual array is implemented in the form of a virtual machine that runs on a hypervisor (Hyper-V or VMware) in your branch and remote offices where deploying a physical appliance is not practical as the amount of data managed is small (500 GB to 5 TB). The StorSimple Virtual Array can be configured to run as an iSCSI server or as a native File Server. Check out the StorSimple Virtual Array – File Server or iSCSI Server blog post to learn more about when and where to use them.

Remote Office Branch Office (ROBO) Scenario

Customers with distributed environments have a Windows file server serving the user and department file shares in their remote and branch office locations. A typical remote or branch office has 10 – 200 users, with storage requirements of 500 GB to 5 TB in the remote or branch location for quicker local access to the data. These servers are regularly backed up to a central datacenter using backup software or by replicating the data for recovery and disaster-recovery purposes. Some of the problems faced in these remote and branch offices are:

Data is growing at a fast rate and the infrastructure cannot accommodate the data growth
Stale data is sitting idle on the infrastructure and taking up valuable storage space
Backups are getting complex and are not finishing in time
Recovery is complex if the user wants a file or folder restored
Disaster recovery is untested, and recovering from a disaster takes a long time, causing users to lose access to their data for an extended period

StorSimple Virtual Array addresses all these problems and provides enterprises an easy way to manage data growth, backup, recovery, and disaster recovery. The StorSimple virtual array reserves 10% of the provisioned space of a share or volume locally to serve hot data from local disk and automatically tiers the older data to the cloud. This allows users to have quick access to most used data and fetch any cold data from cloud on demand.

Cloud snapshots provide the daily backups of data stored on the StorSimple Virtual Array. The cloud snapshot identifies the changed data from the previous backups and transfers it to the cloud, providing a quick way to back up the data. This eliminates the need for having a backup solution and policy in the remote or branch office for user data stored on the StorSimple Virtual Array. The backups use the available Internet bandwidth to copy the data to the cloud and free up traffic on the dedicated bandwidth between the central datacenter and remote office. These backups can be used for recovery with the click of a button, which exposes a new share or volume on the same virtual array and provides the user an easy way to recover any deleted data. In a file server role, the StorSimple Virtual Array also enables the user to do self-service restores from previous five backups.

In the event of a disaster, the cloud snapshots taken from the StorSimple Virtual Array provide an easy way to fail over to another StorSimple Virtual Array managed by the same StorSimple Manager. The target virtual array can be running in any remote or branch office, or in a datacenter. Once the failover is performed, users have immediate access to their data.

To get maximum benefits from the StorSimple Virtual Array, use the virtual array in remote and branch offices that meet the following criteria:

Data size in the branch office is ~500 GB to 10 TB, with a 2 – 3% daily change rate
Number of files in a share is < 1 million
RPO of 1 day

To get started:

Provision a StorSimple Virtual Array on Hyper-V

Provision a StorSimple Virtual Array on VMware

Set up StorSimple Virtual Array as file server

Set up StorSimple Virtual Array as an iSCSI server

Best practices for deploying the StorSimple Virtual Array
Source: Azure

Microsoft OMS + System Center product meetup at Ignite 2016

Would you like to participate in a discussion on IT management? This is an opportunity for you to meet the Microsoft leadership team and influence the direction of the management products (OMS and System Center) at Microsoft. We will have dedicated tables to discuss various topics with members of the product team, and you will get a chance to speak directly with the Microsoft directors. This is a FREE event for Ignite attendees, and dinner will be provided. You can participate by filling out this short survey; if selected, and depending on capacity, we will send you a meeting invite with details on the time and location of the meetup at Ignite.

 

Link to survey: https://www.surveymonkey.com/r/OMSIgniteMeetup

 

We are looking forward to meeting you!
Source: Azure

StorSimple Virtual Array – File Server or iSCSI Server?

StorSimple Virtual Array (SVA) can be configured as a file server or as an iSCSI server. Configured as a file server, the StorSimple Virtual Array provides native shares that users can access to store their data. Configured as an iSCSI server, the StorSimple Virtual Array provides volumes (LUNs) that can be mounted on an iSCSI initiator (typically a Windows Server). This blog post looks at the various requirements that should be considered when choosing a configuration of the StorSimple Virtual Array for a remote or branch office.

Architecture


Number of shares
The SVA file server supports a maximum of 16 shares. If the number of shares in the remote or branch office is larger than 16, we recommend using the SVA iSCSI server.

User self-restore
The SVA file server allows users to restore their data from the previous five backups via the .backups folder available in the share. With the SVA iSCSI server, an administrator must restore the cloud snapshot as a new volume and then restore data from the restored volume.

Number of files in a share
The SVA file server supports up to a maximum of 1 million files per share (a maximum of 4 million files in total on the file server). The SVA iSCSI server works at the block level and has no limitation on the number of files.

Maximum size of data
The SVA file server supports a maximum share size of 2 TB for locally pinned shares and a maximum of 20 TB for tiered shares. The SVA iSCSI server supports a maximum volume size of 500 GB for locally pinned volumes and a maximum of 5 TB for tiered volumes.

Failover time
The SVA file server’s failover time depends on the number of files in a share. During failover, the directory structure is recreated, which may take additional time depending on the number of files; you can approximate the failover time as 20 minutes per 100,000 files. The SVA iSCSI server provides nearly instant failover (minutes to make the volume available): only the metadata is downloaded during failover, and the volume is made available for use right afterward. In both cases, hot data is downloaded in the background based on the heat map.

Active Directory domains
The SVA file server must be joined to an AD domain. The SVA iSCSI server can optionally be joined to an AD domain, but this is not required; the iSCSI initiator may be joined to the domain or can be part of a workgroup in non-Active Directory environments.

File Server Resource Manager (FSRM) features (quotas, file blocking, etc.)
The SVA file server does not support FSRM features. FSRM features can be enabled on the Windows Server iSCSI initiator connected to an SVA iSCSI server.
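Since the file server failover estimate above (about 20 minutes per 100,000 files) scales linearly with the number of files in the share, a quick back-of-the-envelope calculation can set expectations before a planned failover:

```python
def estimated_failover_minutes(file_count, minutes_per_100k_files=20):
    """Approximate SVA file server failover time, at roughly
    20 minutes per 100,000 files in the share."""
    return file_count / 100_000 * minutes_per_100k_files

# At the 1-million-file per-share maximum, expect on the order of 200 minutes:
print(estimated_failover_minutes(1_000_000))  # 200.0
```

If that estimate is too long for your recovery objectives, the iSCSI server configuration, whose failover downloads only metadata, is the better fit.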

 

Useful Links:

StorSimple Virtual Array Overview

StorSimple Virtual Array Deployment Videos

StorSimple Virtual Array Best practices
Source: Azure