Adding #AzureSearch to #DocumentDB collections with a click of a button

Our customers love how easy it is to use Azure Search and DocumentDB together to meet business goals. Tight integration through Indexers simplifies the task of indexing and searching in a variety of verticals from ecommerce to business applications. With the ability to load data with zero code, it’s even easier. We’re always looking for ways to boost developer productivity, so today we’re happy to announce the ability to add Search to a collection directly from DocumentDB with a click of a button.

Seamlessly select or create a Search service, and your DocumentDB configuration will be populated automatically. You’ll have all the search power you’ve come to expect. Schema inference provides an excellent starting point to easily add features like faceted navigation, intelligent language processing, and suggestions.

All of this is built on the tried and true indexer infrastructure, so you can expect a mature solution that’s in use by many customers and will get the task done smoothly and reliably. Indexers and Search + DocumentDB enable more complex scenarios as well. Because DocumentDB is a globally distributed NoSQL database, you can create Azure Search service instances in as many regions as you want. Create an indexer in each Search service, all pointing at the same DocumentDB account, for a simple and rock-solid backend for low-latency, geo-distributed search applications.
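To make the multi-region pattern concrete, here is a rough sketch using the Azure Search .NET SDK (Microsoft.Azure.Search). The service, key, account, and collection names are placeholders, and exact type names can vary across SDK versions, so treat this as an outline rather than the definitive API:

using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

// One regional Search service; repeat for each region you deploy to.
var searchService = new SearchServiceClient("my-search-eastus", new SearchCredentials("<admin-api-key>"));

// Data source pointing at the shared DocumentDB account (placeholder connection string).
searchService.DataSources.CreateOrUpdate(new DataSource
{
    Name = "docdb-datasource",
    Type = DataSourceType.DocumentDb,
    Credentials = new DataSourceCredentials(
        "AccountEndpoint=https://myaccount.documents.azure.com;AccountKey=<key>;Database=mydb"),
    Container = new DataContainer("mycollection")
});

// The indexer crawls the collection and populates the target index.
searchService.Indexers.CreateOrUpdate(new Indexer
{
    Name = "docdb-indexer",
    DataSourceName = "docdb-datasource",
    TargetIndexName = "myindex"
});

Running the same data source and indexer creation against a Search service in each region gives every region its own locally served index, all fed from the one DocumentDB account.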

We can’t wait to see what you build with DocumentDB and Azure Search! As always we’d love to hear from you on Twitter, User Voice, or the comments below. Happy coding!
Source: Azure

Announcing Azure SQL Database Threat Detection general availability coming in April 2017

Today we are happy to announce that Azure SQL Database Threat Detection will be generally available in April 2017. Over the course of the preview we optimized our offering, and it has received 90% positive feedback from customers regarding the usefulness of SQL threat alerts. At general availability, SQL Database Threat Detection will cost $15 per server per month. We invite you to try it out for 60 days for free.

What is Azure SQL Database Threat Detection?

Azure SQL Database Threat Detection provides an additional layer of security intelligence built into the Azure SQL Database service. It helps customers using Azure SQL Database to secure their databases within minutes without needing to be an expert in database security. It works around the clock to learn, profile and detect anomalous database activities indicating unusual and potentially harmful attempts to access or exploit databases.

How to use SQL Database Threat Detection

Just turn it ON – SQL Database Threat Detection is incredibly easy to enable. You simply switch on Threat Detection from the Auditing & Threat Detection configuration blade in the Azure portal, select the Azure storage account (where the SQL audit log will be saved) and configure at least one email address for receiving alerts.

Real-time actionable alerts – SQL Database Threat Detection runs multiple sets of algorithms which detect potential vulnerabilities and SQL injection attacks, as well as anomalous database access patterns (such as access from an unusual location or by an unfamiliar principal). Security officers or other designated administrators get email notification once a threat is detected on the database. Each notification provides details of the suspicious activity and recommends how to further investigate and mitigate the threat.

Live SQL security tile – SQL Database Threat Detection integrates its alerts with Azure Security Center. A live SQL security tile within the database blade in Azure portal tracks the status of active threats. Clicking on the SQL security tile launches the Azure Security Center alerts blade and provides an overview of active SQL threats detected on the database. Clicking on a specific alert provides additional details and actions for investigating and preventing similar threats in the future.

Investigate SQL threat – Each SQL Database Threat Detection email notification and Azure Security Center alert includes a direct link to the SQL audit log. Clicking on this link launches the Azure portal and opens the SQL audit records around the time of the event, making it easy to find the SQL statements that were executed (who accessed the data, what they did, and when) and determine if the event was legitimate or malicious (e.g. an application vulnerability to SQL injection was exploited, someone breached sensitive data, etc.).

Recent customer experiences using SQL Database Threat Detection

During our preview, many customers benefited from the enhanced security SQL Database Threat Detection provides.

Case: Anomalous access from a new network to production database

Justin Windhorst, Head of IT North America at Archroma

“Archroma runs a custom built ERP/e-Commerce solution, consisting of more than 20 Web servers and 20 Databases using a multi-tier architecture, with Azure SQL Database at its core.  I love the built-in features that bring added value such as the enterprise level features: SQL Database Threat Detection (for security) and Geo Replication (for availability).  Case in point: With just a few clicks, we successfully enabled SQL Auditing and Threat Detection to ensure continuous monitoring occurred for all activities within our databases.  A few weeks later, we received an email alert that “Someone has logged on to our SQL server from an unusual location.” The alert was triggered as a result of unusual access from a new network to our production database for testing purposes.  Knowing that we have the power of Microsoft behind us, automatically bringing to light anomalous activities such as these, gives Archroma incredible peace of mind, and thus allows us to focus on delivering a better service.”

Case: Preventing SQL Injection attacks

Fernando Sola, Cloud Technology Consultant at HSI

“Thanks to Azure SQL Database Threat Detection, we were able to detect and fix vulnerabilities to SQL injection attacks and prevent potential threats to our database. I was very impressed with how simple it was to enable threat detection using the Azure portal. A while after enabling Azure SQL Database Threat Detection, we received an email notification that ‘An application generated a faulty SQL statement on our database, which may indicate a vulnerability of the application to SQL injection.’  The notification provided details of the suspicious activity and recommended actions for observing and fixing the faulty SQL statement in our application code using the SQL audit log. The alert also pointed me to Microsoft documentation that explained how to fix application code that is vulnerable to SQL injection attacks. SQL Database Threat Detection and Auditing help my team secure our data in Azure SQL Database within minutes, without needing to be experts in databases or security.”

Summary

We would like to thank all of you that provided feedback and shared experiences during the public preview. Your active participation validated that SQL Database Threat Detection provides an important layer of security built into the Azure SQL Database service to help secure databases without the need to be an expert in database security.

Click the following links for more information:

Learn more about Azure SQL Database Threat Detection

Learn more about Azure SQL Database Auditing
Learn more about Azure SQL Database
Learn more about Azure Security Center

Source: Azure

Preview the new enhancements to Azure Security Center

While the cloud may have initially raised some security concerns among enterprises, Microsoft is changing those dynamics. By tapping into the collective power of millions of cloud customers, Microsoft can help each customer more effectively defend against the increasing volume and sophistication of attacks. Azure Security Center has released a number of new capabilities that leverage this collective intelligence to not only detect threats, but also do a better job of preventing them.

Advanced cloud defenses  

Some traditional security controls deliver important protection from threats, but have proved to be too costly to configure and maintain. By applying prescriptive analytics to application and network data, learning the behavior of a machine or a group of machines, and combining these insights with broad cloud reputation, Azure Security Center empowers customers to realize the benefits of these controls without introducing any management overhead.

Application Whitelisting – Once compromised, an attacker will likely execute malicious code on a VM as they take action toward their objectives. Whitelisting legitimate applications helps block unknown and potentially malicious applications from running, but historically managing and maintaining these whitelists has been problematic. Azure Security Center can now automatically discover applications, recommend a whitelisting policy for a group of machines, and apply these settings to your Windows VMs using the built-in AppLocker feature. After applying the policy, Azure Security Center continues to monitor the configuration and suggests changes, making it easier than ever before to leverage the powerful security benefits of application whitelisting.
Just-In-Time (JIT) Network Access to VMs – Attackers commonly target open network ports (RDP, SSH, etc.) with Brute Force attacks as a means to gain access to VMs running in the cloud. By only opening these ports for a limited time when needed to connect remotely to the VM, Azure Security Center can significantly reduce the attack surface and subsequently the risk that the VM will be compromised.

For an early preview, join the Azure Advisors community and then the Azure Security Center Advisors group.

Advanced threat detection

Our security research and data science teams are constantly monitoring the threat landscape and adding new or enhancing current detection algorithms. Azure Security Center customers benefit from these innovations as algorithms are continuously released, validated, and tuned without the need to worry about keeping signatures up to date. Here are some of the most recent updates:

Harnessing the Power of Machine Learning – Azure Security Center has access to a vast amount of data about cloud network activity, which can be used to detect threats targeting your Azure deployments. For example:

Brute Force Detections – Machine learning is used to create a historical pattern of remote access attempts, which allows it to detect brute force attacks against SSH, RDP, and SQL ports. In the coming weeks, these capabilities will be expanded to also monitor for network brute force attempts targeting many applications and protocols, such as FTP, Telnet, SMTP, POP3, SQUID Proxy, MongoDB, Elastic Search, and VNC.
Outbound DDoS and Botnet Detection – A common objective of attacks targeting cloud resources is to use the compute power of these resources to execute other attacks. New detection algorithms, now generally available in Azure Security Center, cluster virtual machines together according to network traffic patterns and use supervised classification techniques to determine whether they are taking part in a DDoS attack. Also in private preview are new analytics that detect whether a virtual machine is part of a botnet; they work by joining network data (IPFIX) with passive DNS information to obtain a list of domains accessed by the VM and using it to detect malicious access patterns.

New Behavioral Analytics for Servers and VMs – Once a server or virtual machine is compromised, attackers employ a wide variety of techniques to execute malicious code on that system while avoiding detection, ensuring persistence, and circumventing security controls. Additional behavioral analytics are now generally available in Azure Security Center to help identify suspicious activity, such as process persistency in the registry, processes masquerading as system processes, and attempts to evade application whitelisting. In addition, new analytics have been released to public preview that are designed specifically for Windows Server 2016, for example activity related to SAM and admin account enumeration. Over the next few weeks, many of the behavioral analytics available for Windows VMs will be available for Linux VMs as well. Operations Management Suite Security users will also benefit from these new detections for non-Azure servers and VMs.
Azure SQL Database Threat Detection – Threat Detection for Azure SQL Database, which identifies anomalous database activities indicating unusual and potentially harmful attempts to access or exploit databases, was just announced for general availability in April 2017. You can view alerts from SQL Database Threat Detection in Azure Security Center, along with additional details and actions for investigating and preventing similar threats in the future.

To take advantage of these and other advanced detection capabilities, select the Standard tier or free 90 Day Trial from the Pricing Tier blade in the Security Center Policy. Learn more about pricing.

Integrated partners   

Azure Security Center makes it easy for you to bring your trusted cloud security vendors with you to the cloud. Recent additions include:

Fortinet NGFW and Cisco ASA – In addition to solutions from Check Point and Barracuda, ASC now features integration with Fortinet and Cisco ASA next generation firewalls. ASC automatically discovers deployments where these solutions are recommended (based on the policy you set), streamlines deployment and monitoring, and integrates security alerts from these partner solutions – making it easier than ever to bring your trusted security solutions with you to the cloud.

Azure Security Center requires zero setup – simply open Security Center in the Azure Portal. Use the free version or upgrade to the 90 Day Trial to enable advanced prevention and threat detection.
Source: Azure

Azure Backup and Azure Site Recovery now available in UK

We’re pleased to announce that Azure Backup and Azure Site Recovery are now available in the UK.

Azure Backup – The Azure-based service you can use to back up (or protect) and restore your data in the Microsoft cloud. Azure Backup enables Azure IaaS VM backup and can replace your existing on-premises or off-site backup solution with a cloud-based solution that is reliable, secure, and cost-competitive. Learn more about Azure Backup.

Azure Site Recovery – Contributes to your BCDR strategy by orchestrating replication of on-premises virtual machines and physical servers. You replicate servers and VMs from your primary on-premises datacenter to the cloud (Azure), or to a secondary datacenter. Learn more about Azure Site Recovery.

We are excited about these new Azure services, and invite customers in the UK regions to try them today!
Source: Azure

Bletchley – The Cryptlet Fabric & Evolution of blockchain Smart Contracts

Anatomy of a Smart Contract

The concept of a Smart Contract has been around for a while and is largely attributed to Nick Szabo’s work in the late 1990s. However, it remained an abstract concept until the summer of 2015 with the Frontier release of Ethereum as its first implementation. The promise of Smart Contracts is sprawling and has gotten the attention of every industry as a revolutionary disrupter that can change the way business is done forever. That remains to be seen, but like most first implementations of significantly important technology, there are some early lessons learned and some introspection about how improvements can be made.

I have written a paper that describes at a high level how Smart Contracts are implemented today and how they can be refactored to significantly improve their performance, security, scalability, manageability, versioning, and reuse in the near future. This paper describes the thought process and historical context for a new architectural approach that focuses on separation of concerns and implementing a 3 Layered/Tiered Smart Contract architecture.

To understand the context and exactly what a “3-Layered & Tiered” Smart Contract architecture means, please give this paper a read: Anatomy of a Smart Contract.

If you want the short and sweet answer, it is this: Smart Contracts designed for semi-trusted enterprise consortium networks should be separated into 3 main layers:

Data Layer – The definition of data schema and only the data logic for validation of inserts (appends) and optimization of reads. In platforms like Ethereum or Chain, languages like Solidity and Ivy can be used at this layer. This is similar to how relational databases use the SQL language and stored procedures.
Business Layer – All business logic for Smart Contracts and surface-level APIs for interacting with Smart Contracts from the Presentation layer (UI) or other external applications. Cryptlets are written in any language targeting the runtimes supported by the Cryptlet Fabric (.NET, .NET Core, JVM, native).
Presentation Layer – User Interface platforms and other applications built using the APIs exposed by Cryptlets.

These Layers can then be deployed, optimized, and scaled in their respective tiers: Presentation, Middle (Business), and Data tiers.

*Note, this approach is not generally valid for trustless implementations of Smart Contracts; it is targeted at enterprise consortium blockchains.

The Cryptlet Fabric – Update

As we get closer to the first public preview of the Cryptlet Fabric, I thought it was a good time to quickly highlight what blockchain enthusiasts, professionals, and developers should be looking forward to. The Cryptlet Fabric is designed to be the middle tier in the newly proposed 3-Layered & Tiered Smart Contract architecture, so it will deliver the things you would expect from such an implementation.  It manages scale, failover, caching, monitoring, management…a long list of features. But what is new:

Cryptographic primitives, secure execution enclaves, and a runtime secure key (secrets) platform that allows for the creation, persistence, and dynamic reanimation of keys for performing cryptographically secure operations at scale.

This allows Cryptlets to create, store, and use key pairs in a secure execution environment to do all sorts of “blockchain-ey” things like digital signatures, zero knowledge proofs, ring signatures, threshold signatures, and even homomorphic encryption, all within secure enclaves. Cryptlets will come in 2 basic types, utility and contract, which I went into in some detail in Cryptlets in Depth.

The Cryptlet Fabric provides a blockchain router layer, abstracting away different transaction messages and cryptographic signing from the code implemented in Cryptlets, as well as integration with existing systems. For example, a Cryptlet’s output only contains the information to be sent to the blockchain, like market data or business logic results, and is packaged into the actual blockchain-specific transaction by the blockchain router and blockchain platform-specific providers in the Cryptlet Fabric below it.  This encapsulation technique has been used for decades, allowing technologies like TCP/IP to work on all sorts of networks (LAN, WAN, Internet and Mobile) and all kinds of applications. This is what allows Cryptlets to be reused across different types of blockchains.

Utility Cryptlets (oracles)

Ok, I admit that being a Microsoft guy makes it hard for me to call anything an oracle without associating it with the company. The fact of the matter is that Utility Cryptlets can largely be thought of as a more scalable, standard, secure, and discoverable way to write blockchain oracles. The majority of requests that we get for Cryptlets are initially in this category: providing a secure, attested data source for blockchain applications. You will be able to write cryptlet oracles using the language of your choice targeting (initially) .NET, .NET Core, JVM and bare metal runtimes, so C#, Java, C++, F#, VB, etc… Your cryptlet oracle can publish any type of data you want, including:

Market prices
Package delivery notifications
Workflow next steps
Weather alerts
Counterparty updates like a credit downgrade

and define your subscription parameters like:

Time and date
Event driven triggers from the blockchain or listening to external sources
Conditional – if “this” and “that” are true, evaluation combinations if, else if, switches, etc.
Combination – if the time is 4PM EST & the NYSE was open today then send me the LIBOR and the price of x

As a developer or provider of cryptlet oracles you can publish them as libraries into the Azure Marketplace for discovery and acquiring customers. We expect to see a robust catalog of Cryptlet libraries from our partners big and small.

However, there is more to Utility Cryptlets than just providing data. You can use them to expose a secure channel into your blockchain based applications for other applications and user interfaces. An ERP or CRM system can read data from the blockchain without having to know any of the details, and they can also securely fire data and events into the blockchain to ease integration of blockchains into existing enterprise infrastructure. Cryptlets can also provide services to blockchain applications and analytics platforms by triggering events into existing systems by watching blockchain transactions and issuing an index request or data pull.

I’m sure that customers and partners will find many uses for Utility Cryptlets, but the first and most obvious is for blockchain oracles.

Contract Cryptlets

These Cryptlets are key to fully realizing the refactoring of Smart Contracts into a 3-Layered/Tiered architecture. These Cryptlets differ from Utility Cryptlets in many ways. Utility Cryptlets can have many instances at once, each handling many different subscriptions at the same time, whereas Contract Cryptlets are single, paired, or ring instanced and bound to a specific Smart Contract. Contract Cryptlets contain the business logic, rules, and external API for interaction for executing a Smart Contract between counterparties.  In a private, semi-trusted consortium blockchain network where there are strong identities or trust between counterparties, the Smart Contract logic does not need to exist “on-chain” or be executed by every node on the network. Separation of concerns between the data layer on the blockchain and the business and presentation layers above isolates code between layers, abstracting implementations for reuse and optimization.

Contract Cryptlets are themselves digitally signed and run within an enclave that attests that the code they contain ran unaltered, without tampering, in private.  This allows the code in the Cryptlet to be private between counterparties and even be encrypted to protect intellectual property.  Counterparties can write and review their own Contract Cryptlets, with all parties inspecting and verifying the code before it is compiled and packaged for implementation, or they can choose from certified Contract Cryptlets that a vendor or reputation stands behind.

Contract Cryptlets can be instantiated for each counterparty participating in a specific Smart Contract instance.  These Cryptlet Pairs and Rings are optional, but allow each Contract Cryptlet to hold secrets like private keys for signing or encryption belonging to that counterparty, avoiding co-mingling the secrets of counterparties in the same address space.  This capability allows for more advanced transaction scenarios, like encrypting results to be stored on the blockchain using threshold or ring encryption schemes.

Contract Cryptlets can use private keys and secrets they have permission to use, fetched from the fabric’s “secret” services.  These secrets are controlled completely by the counterparty via Azure Key Vault and remain inaccessible to other participants, including Microsoft.

This approach allows code defining a Smart Contract’s business and integration logic to be run on scaled up resources and collocated near data for maximum performance.  The data that they persist to the blockchain is still subject to the data logic “on-chain” and is reconciled on the network through its consensus process.  The digital signatures from the Cryptlet, enclave and blockchain provider are used for validation and attestation, and can be stored along with each transaction on the blockchain as proofs.

Contract Cryptlets also use Utility Cryptlet services, however they can interact directly within the fabric via a subscription or direct API access.

Using Cryptlets allows for a three layered architecture where:

The Smart Contract on the blockchain defines schema and data logic, which is deployed to the blockchain network representing the Data Tier.
Contract Cryptlets define business logic and a Surface Level API for User Interfaces and external applications deployed to the Cryptlet Fabric representing the Middle Tier.
Utility Cryptlets provide reusable data sources and events also in the Middle Tier.
The Presentation Tier communicates with the Surface Level API of the Cryptlet Fabric using standard UI technologies or integration platforms deployed to web servers, mobile devices, service bus, etc.

This architectural approach provides abstraction of the blockchain implementation from its clients, as well as tuning and scaling the blockchain and Cryptlet Fabric independently.

Cryptlet Fabric Diagram

Here is an updated diagram of the Cryptlet Fabric.  We will cover its parts in more detail in subsequent posts and white papers, which will also be included in its releases.

Source: Azure

Automating Azure Analysis Services processing with Azure Functions

In this post, we’ll walk through a simple example on how you can setup Azure Functions to process Azure Analysis Services tables.

Azure Functions is perfect for running small pieces of code in the cloud. To learn more about Azure Functions, see Azure Functions Overview and Azure Functions pricing.

Create an Azure Function

To get started, we first need to create an Azure Function

1. Go to the portal and create a new Function App.

2. Type a unique Name for your new function app, and choose the Resource Group and Location. For the Hosting plan, use App Service Plan.

Note: As the duration of processing Analysis Services tables and models may vary, use a Basic or Standard App Service Plan and make sure that the Always On setting is turned on, otherwise the Function may time out if the processing takes longer.

Click Create to deploy the Function App.

3. In the Quickstart tab, click Timer and C#, and then click Create this function.

Configure timer settings

Now that we’ve created our new function, we need to configure some settings. First, let’s configure a schedule.

1. Go to Integrate > Timer > Schedule.

The default schedule has a CRON expression for every 5 minutes.

Change this to any setting you would like. In the example below I used an expression to trigger the function at 3AM every day. Click Documentation to see a description and some examples for CRON expressions.
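For reference, the Functions timer trigger uses a six-field CRON format: {second} {minute} {hour} {day} {month} {day-of-week}. The default every-five-minutes schedule is 0 */5 * * * *, and an expression such as 0 0 3 * * * fires at 3:00 AM every day.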

2. Click Save.

Configure application settings

Before we begin writing our code, we need to configure the application.

Important: Make sure you have the latest data providers installed on your computer. To get more info and download, see Data providers for connecting to Azure Analysis Services.

After installing the providers, you’ll need these two files in the next step:

C:\Program Files\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.AnalysisServices.Core.dll
C:\Program Files\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.AnalysisServices.Tabular.dll

1. In the new Azure Function, go to Function app settings > Go to Kudu to open the debug console.

2. Navigate to the function in D:\home\site\wwwroot\yourfunctionname, and then create a folder named bin.

3. Navigate to the newly created bin folder, and then drop in the two files specified in the previous step. It should look like this:

4. Refresh your browser. In Develop > bin, you should see the two files (if you don’t see the file structure, click View files).

5. Before we write our code, we need to create a connection string. In Function app settings, click Configure app settings.

6. Scroll to the end of the Application settings view, to the Connection strings section, and then create a Custom connection string.

Provider=MSOLAP;Data Source=asazure://region.asazure.windows.net/servername; Initial Catalog=dbname;User ID=user@domain.com;Password=pw
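Note the name you give this connection string: the function code in the next section looks it up by name, and the example below assumes it is called AzureASConnString.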

Add code

Now that we have our function’s configuration settings in-place, we can enter the code. You’ll need to reference the DLLs we uploaded, but other than that it looks like any other .Net code.

Note: In this example, I included some commented lines to only process a table or the model.

"Microsoft.AnalysisServices.Tabular.DLL"

r "Microsoft.AnalysisServices.Core.DLL"

r "System.Configuration"

using System;

using System.Configuration;

using Microsoft.AnalysisServices.Tabular;

public static void Run(TimerInfo myTimer, TraceWriter log)

{

    log.Info($"C# Timer trigger function started at: {DateTime.Now}");  

    try

            {

                Microsoft.AnalysisServices.Tabular.Server asSrv = new Microsoft.AnalysisServices.Tabular.Server();

                var connStr = ConfigurationManager.ConnectionStrings["AzureASConnString"].ConnectionString;

                asSrv.Connect(connStr);

                Database db = asSrv.Databases["AWInternetSales2"];

                Model m = db.Model;

                //db.Model.RequestRefresh(RefreshType.Full);     // Mark the model for refresh

                //m.RequestRefresh(RefreshType.Full);     // Mark the model for refresh

                m.Tables["Date"].RequestRefresh(RefreshType.Full);     // Mark only one table for refresh

                db.Model.SaveChanges();     //commit  which will execute the refresh

                asSrv.Disconnect();

            }

            catch (Exception e)

            {

                log.Info($"C# Timer trigger function exception: {e.ToString()}");

            }

    log.Info($"C# Timer trigger function finished at: {DateTime.Now}"); 

}

 

Click Save to save the changes, and then click Run to test the code. You’ll get an output window where you will be able to see the log information and exceptions.

Learn more on Azure Analysis Services and Azure Functions.
Source: Azure

Azure IoT Hub Connector to Cassandra now available

As part of Microsoft’s ongoing commitment to open source and interoperability in IoT, I’m pleased to announce the release of a new open source project to enable devices that are connected to Azure IoT Hub to store data in a Cassandra database. The code can be found on GitHub.

With this new Cassandra connector, developers can easily build solutions that harness IoT-scale fleets of devices and store data from them in Cassandra tables for later analysis. The library can also be fully customized, if needed. Developers can define the schema of Cassandra tables and specify how data should be stored, whether based on the type of the message or by splitting the incoming data into multiple tables for easier analysis. In addition to the Cassandra connector, we’ve also released a Docker container to make the deployment and testing of the new library a matter of a few minutes.

We invite you to follow us on GitHub as we release more libraries, samples and demo applications in the coming months.
Source: Azure

StorSimple Virtual Array available for Cloud Solution Provider (CSP) partners

StorSimple Virtual Array (VA) is now available for Cloud Solution Provider (CSP) partners. We are enabling CSP partners to resell and own the end-to-end customer lifecycle with direct provisioning, billing, and support of StorSimple VA. CSP partners can deploy StorSimple VA from the Azure Management Portal using Partner Center. The usage of the virtual array and Azure storage is metered and billed separately, and StorSimple VA deployed by CSP is eligible for a wholesale discount under the CSP program. For information on other partner incentives, go to CSP program incentives. Learn more about the StorSimple Virtual Array and its deployment, and go to Customer Support for CSP to learn about the partner support model. Join the Azure Advisors Yammer group, StorSimple Partner Advisors, to find answers to commonly asked questions.
Source: Azure

10 GitHub samples with Azure DocumentDB you shouldn’t miss!

Azure DocumentDB is a fully managed, multi-model, scalable, queryable, schema-free NoSQL database service built for modern applications: mobile, web, IoT, bots, AI, etc. Recently, I went on GitHub and have found a lot of useful material and links to step-by-step tutorials and examples. Below are the top 10 that anyone starting to build an app backed by planet-scale NoSQL should know about. There is lots more. So head on over and learn about this cool new NoSQL planet-scale database service.

1. Azure/azure-documentdb-dotnet

In this repo, you can find the samples and utilities relating to Azure DocumentDB and the .NET SDK and how to use them. The samples demonstrate how to use every method and operation of the .NET SDK, and searchabletodo is a sample ASP.NET MVC web application that shows how to build an ASP.NET MVC web application with DocumentDB and then further enrich it with Azure Search. Another great example in this repo is a Xamarin sample which illustrates how to use DocumentDB’s built-in authorization engine to implement the per-user data pattern for a Xamarin mobile app. It is a simple multi-user ToDo list app allowing users to log in using Facebook Auth and manage their to-do items. After playing with this sample, you can then go further with Xamarin and build any iOS or Android app on top of DocumentDB.

The samples will walk you through how best to interact with the service using the client SDK. Specifically:

CollectionManagement – shows CRUD operations on DocumentCollection resources.
DatabaseManagement – shows CRUD operations on Database resources.
DocumentManagement – shows CRUD operations on Document resources.
IndexManagement – shows samples on how to customize the Indexing Policy for a Collection should you need to.
Partitioning – includes samples for common partitioning scenarios using the .NET SDK.
Queries –  shows how to query using LINQ and SQL.
ServerSideScripts – shows how to create and execute Stored Procedures, Triggers and User Defined Functions.
UserManagement – shows CRUD operations on User and Permission resources.
Spatial – shows how to work with GeoJSON and DocumentDB geospatial capabilities.

After walking through these samples, you should have a good idea of how to get going and how to make use of the various APIs interacting with the NoSQL service in Azure.
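If you want a taste before diving in, here is a minimal sketch using the Microsoft.Azure.DocumentDB .NET SDK (the endpoint, key, and database/collection names are placeholders; the ...IfNotExists methods assume SDK 1.12 or later):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class QuickStart
{
    static async Task Main()
    {
        // Placeholder endpoint and key; substitute your own account values.
        using (var client = new DocumentClient(
            new Uri("https://myaccount.documents.azure.com:443/"), "<primary-key>"))
        {
            await client.CreateDatabaseIfNotExistsAsync(new Database { Id = "ToDoList" });
            await client.CreateDocumentCollectionIfNotExistsAsync(
                UriFactory.CreateDatabaseUri("ToDoList"),
                new DocumentCollection { Id = "Items" });

            // Documents are schema-free JSON; any serializable object works.
            await client.CreateDocumentAsync(
                UriFactory.CreateDocumentCollectionUri("ToDoList", "Items"),
                new { id = "1", description = "Pick up milk", isComplete = false });
        }
    }
}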

2. mingaliu/DocumentDBStudio

This repo contains DocumentDBStudio –  a client management viewer/explorer for DocumentDB service. Currently it supports:

Easy browsing of DocumentDB resources, which enables you to learn the DocumentDB resource model very quickly.
Create, Read, Update, Delete (CRUD) and Query operations for every DocumentDB resource and resource feed.
Support for SQL and UDF queries. You can execute JavaScript stored procedures or triggers right from DocumentDBStudio.
Inspection of headers (for quota, usage, RU charge, etc.) for every request operation. It also supports three connection modes: TCP, HTTPDirect, and Gateway.
Support for various RequestOptions (for pre/post triggers, sessionToken, consistency model, etc.), FeedOptions (for paging, enableScanInQuery, etc.), and IndexingPolicy (for indexingMode, indexingType, indexingPath, etc.).
PrettyPrint of the output JSON.
Bulk import of JSON files.

It is simply a “good IDE” for the “natives” of DocumentDB. Give it a try.

3. Azure/azure-documentdb-node

This repo provides a Node.js module that makes it easy to interact with Azure DocumentDB using Node.js – an open-source, cross-platform JavaScript runtime environment suited for developing a diverse variety of tools and applications. Node.js aims to optimize throughput and scalability in Web applications with many input/output operations, as well as for real-time Web applications (e.g., real-time communication programs and browser games). Combining it with DocumentDB service gives you a really powerful combination and agility in building up an app and then scaling it up very quickly.

If you are developing using Node.js and combining it with DocumentDB, see the Node.js Developer Center and the Microsoft Azure DocumentDB Node.js SDK Documentation. Also, to get started, watch this YouTube video. The samples in the repo were built using the Node.js Tools for Visual Studio and include njsproj files accordingly. However, you do not need Visual Studio to run these samples. Just ignore the njsproj files, if you wish, and open app.js in your choice of editor such as Visual Studio Code, or even a text editor, such as Sublime. The choice is yours!

4. Azure/azure-documentdb-datamigrationtool

This repo contains the DocumentDB Data Migration Tool – an open source solution to import data to DocumentDB from a variety of sources with ease and simplicity. The migration tool supports migration of data from the following sources:

Azure Tables
JSON files
MongoDB
SQL Server
CSV files
RavenDB
Amazon DynamoDB
HBase
DocumentDB collections

While the import tool includes a graphical user interface (dtui.exe), it can also be driven from the command line (dt.exe). In fact, there is an option to output the associated command after setting up an import through the UI. Tabular source data (e.g. SQL Server or CSV files) can be transformed such that hierarchical relationships (sub-documents) can be created during import. Check it out to learn more about data source options, sample command lines to import from each source, target options, and viewing import results.
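As an illustration, a bulk import from a JSON file of the kind the UI emits might look like the following command line (the values here are placeholders; see the tool’s documentation for the full option list):

dt.exe /s:JsonFile /s.Files:C:\data\items.json /t:DocumentDBBulk /t.ConnectionString:"AccountEndpoint=https://myaccount.documents.azure.com:443/;AccountKey=<key>;Database=mydb" /t.Collection:mycollection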

5. Azure/azure-documentdb-python

This repo contains Python sample solutions showing common operations on Azure DocumentDB. You will learn how to use Azure DocumentDB to store and access data from a Python web application hosted on Azure; the walkthrough presumes some prior experience using Python and Azure websites. Another good tutorial to follow up with is Python Flask Web Application Development with DocumentDB, where you will build a simple voting application that allows you to vote in a poll using Python against DocumentDB.

6. Azure/azure-documentdb-node-q

This repo has the DocumentDB Node.js Q promises wrapper. If you don’t know anything about Q promises, read Promises in JavaScript With Q. The repo provides a “Hello world” example using Q promises that makes it very easy to interact with Azure DocumentDB. You will seriously witness here that DocumentDB is built with a deep commitment to JSON and JavaScript. This approach of “JavaScript as a modern day T-SQL” frees application developers from the complexities of type system mismatches and object-relational mapping technologies. The samples in this repo will help you get going with the JavaScript SDK to interact with the Azure DocumentDB service.

7. Azure/azure-documentdb-js-server

Before you head to this repo, maybe watch this video first to get a brief introduction to Azure DocumentDB’s server-side programming model. You will learn how DocumentDB’s language-integrated, transactional execution of JavaScript lets developers write stored procedures, triggers and user defined functions (UDFs) natively in JavaScript. This allows developers to write application logic which can be shipped and executed directly on the database storage partitions.

8. Azure/azure-documentdb-java

This project provides a client library in Java that makes it easy to interact with Azure DocumentDB. In this repo, you will find a number of Java code samples working with DocumentDB. If you feel comfortable and up to it, you can build the entire Java web application using DocumentDB in just a few steps. For documentation please see the Microsoft Azure Java Developer Center and the JavaDocs.

9. Azure/azure-documentdb-hadoop

This repo provides a client library in Java that allows Microsoft Azure DocumentDB to act as an input source or output sink for Hadoop MapReduce, Hive and Pig jobs. This tutorial shows you how to run Apache Hive, Apache Pig, and Apache Hadoop MR jobs on Azure HDInsight with DocumentDB’s Hadoop connector. DocumentDB’s Hadoop connector allows DocumentDB to act as both a source and sink for Hive, Pig, and MapReduce jobs. This tutorial uses DocumentDB as both the data source and destination for Hadoop jobs, and shows how to do it. I recommend getting started by watching the following video, where we run through a Hive job using DocumentDB and HDInsight.


10. Azure-Samples/documentdb-node-todo-app

Finally, this repo contains the source code for a complete application. The sample shows how to use the Microsoft Azure DocumentDB service to store and access data from a Node.js Express application hosted on Azure Websites.

For a complete end-to-end walk-through of creating this application, please read the full tutorial on the Azure documentation page. The code included in this sample is intended to get you going with a simple Node.js Express application that connects to Azure DocumentDB and shows how to interact with DocumentDB using the documentdb npm package. It is not intended to be a set of best practices for building scalable enterprise-grade web applications, but it’s a great start.

@rimmanehme

P.S. If you’ve never even heard the word “NoSQL”, first of all – wow! You are at the end of the blog, and still paying attention. That’s awesome! Second, a quick way to learn about DocumentDB and see it in action is to follow these three steps:

Watch the two minute What is DocumentDB? video, which introduces the benefits of using DocumentDB.
Watch the three minute Create DocumentDB on Azure video, which highlights how to get started with DocumentDB by using the Azure Portal.
Visit the Query Playground, where you can walk through different activities to learn about the rich querying functionality available in DocumentDB. Then, head over to the Sandbox tab and run your own custom SQL queries and experiment with DocumentDB.

Source: Azure

Announcing general availability of Managed Disks and larger Scale Sets

I love it that so many of you enjoy both our Infrastructure-as-a-service (IaaS) and Platform-as-a-service (PaaS) services. In fact, over 55% of our IaaS customers also benefit from PaaS services. With this deep experience in both PaaS and IaaS, we can take insights from our PaaS services, like the agile benefit of automated management and scale, and apply them to improve our infrastructure services. Today, I am excited to announce one such insight with the general availability of Managed Disks. With this PaaS-like support, you no longer need to be concerned with the complexity of storage management nor worry about storage as you scale. Yet, you still have the full power and control you expect and love with Azure VMs – a "PaaS bridge" on our IaaS VMs.

This not only simplifies the management of every VM created, it is exponentially helpful when deploying at cloud-scale with VM Scale Sets. Azure VM scale sets (VMSS) are a powerful way to reliably deploy massive cloud infrastructure without the overhead of coordinating multiple resources. With Scale sets, you can simplify the management of your applications with automated application scale and load-balancer integration. Today’s announcement extends these platform features to include automated disk management, enabling simpler storage management and even larger scale. With Managed Disks, you can now attach data disks to every instance and create a VM scale set of up to 1,000 VMs, a 10X increase.

These capabilities are just the beginning to bring the agility of PaaS to the comfort of IaaS. I look forward to announcing additional capabilities coming later this year, including OS patching support, application lifecycle integration, application health monitoring, and load-balancer app health integration.

The sections below elaborate on the key benefits of Managed Disks and VM Scale Sets.

Management

Managed Disks free you from storage account scale management.  Managed Disks are Azure Resource Manager (ARM) resources, can be fully templatized, and support both Standard and Premium Disk types. You only need to specify the size and type of the disk you want. You can create thousands of Managed Disks without worrying about the storage account and without having to specify any disk details. You can create a blank disk, create one from a VHD in a storage account, or create one from an image as part of VM creation. You can even migrate an existing Azure Resource Manager VM to a VM with managed disks with just a single reboot. You don’t have to worry about reconfiguring your networking or your security rules to start using the powerful new capability.
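As an illustration of how little you specify, here is a sketch that creates a standalone 128 GB Premium managed data disk with the fluent Azure Management Libraries for .NET. The auth file path and resource names are placeholders, and fluent method names can differ slightly between library versions:

using Microsoft.Azure.Management.Compute.Fluent.Models;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

// Authenticate from an auth file (placeholder path).
var azure = Azure.Authenticate("my.azureauth").WithDefaultSubscription();

// Size and type only; no storage account anywhere.
var disk = azure.Disks.Define("myDataDisk")
    .WithRegion(Region.USEast)
    .WithNewResourceGroup("myResourceGroup")
    .WithData()
    .WithSizeInGB(128)
    .WithSku(DiskSkuTypes.PremiumLRS)
    .Create();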

In addition to the new Managed Disk resource, we have also added Snapshots and Images as Azure Resource Manager resources.

Snapshot and Backup

With today’s launch, you can also take a Disk Snapshot and maintain it as an Azure Resource Manager resource. With snapshots, you can back up your Managed Disks at any point-in-time. These Snapshots exist independent of the source disk and can be used to create new Managed Disks. You can also use the Azure Backup service with Managed Disks to create a backup job with time-based backups, easy VM restoration, and backup retention policies.

Image

Managed Disks also support creating a managed custom Image. You can create an Image from your custom VHD in a storage account or directly from a running VM. This captures, in a single Image, all Managed Disks associated with a running Virtual Machine, including both the OS and Data Disks. This even enables deploying a large VM Scale Set with hundreds of VMs using your custom Image, without the need to copy to or manage any storage accounts.

“The new Managed Disks significantly reduces code complexity and simplifies the management of disks. Managed Disks will enable us to expand upon our current database-as-a-service offering on Azure with new MongoDB plan types to help meet the demanding workloads of our users.” – Sean Hyrich, mLab

Creating a VM

Using Azure PowerShell or the Azure CLI, you can easily create a VM with Managed Disks. Below is an example of a CLI command to create a VM with a managed OS disk and a 128 GB managed data disk. You no longer specify a storage account for either the OS disk or the data disk.

$ az vm create -g myResourceGroup -n myLinuxVM --image RHEL --size Standard_DS3_v2 --data-disk-sizes-gb 128

You can also easily create a VM with Managed Disks in the portal by selecting “Use managed disks”:

Here is an example of creating a Windows VM with a managed OS disk using Azure PowerShell. You will notice that you no longer specify a storage account. You can also programmatically create a virtual machine with Managed Disks using the Azure Management Libraries for Java and .NET.
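As a sketch of that programmatic route with the fluent .NET libraries (the credentials, resource names, and image alias below are illustrative, and exact fluent method names may vary by version):

using Microsoft.Azure.Management.Compute.Fluent;
using Microsoft.Azure.Management.Compute.Fluent.Models;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

var azure = Azure.Authenticate("my.azureauth").WithDefaultSubscription();

// Windows VM with an implicitly managed OS disk plus a 128 GB managed data disk.
var vm = azure.VirtualMachines.Define("myWindowsVM")
    .WithRegion(Region.USEast)
    .WithExistingResourceGroup("myResourceGroup")
    .WithNewPrimaryNetwork("10.0.0.0/28")
    .WithPrimaryPrivateIPAddressDynamic()
    .WithoutPrimaryPublicIPAddress()
    .WithPopularWindowsImage(KnownWindowsVirtualMachineImage.WindowsServer2012R2Datacenter)
    .WithAdminUsername("azureuser")
    .WithAdminPassword("<password>")
    .WithNewDataDisk(128)     // managed data disk, no storage account
    .WithSize(VirtualMachineSizeTypes.StandardDS3V2)
    .Create();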

Scale

Today, you can now deploy up to 1,000 VMs in a Scale Set based on platform images, a 10x scale improvement. This enables you to deploy and manage a single cluster, like a large-scale Hadoop, DataSynapse, Cassandra, or IIS deployment. Additionally, if you need load balancing at this scale, you can deploy with an Azure Application Gateway for layer-7 load-balancing.

“Elasticsearch, the leading open source data search solution, allows companies to explore anywhere from gigabytes to petabytes of data in real-time. Managed Disks and expanded VM Scale Sets will help Azure users deploy Elasticsearch clusters at very large scale” – Martijn Laarman, Elastic Software Developer

Here is how you can easily create a 1,000 VM scale set:

$ az vmss create -g myResourceGroup -n myVMSSName --image UbuntuLTS --instance-count 1000

Application support for scale sets

VM scale sets now support attached data disks. When you define a scale set you can create as many attached disks as the VM size supports. This allows more data-intensive analytics and search applications to take advantage of the management and scalability benefits of scale sets.

Security

Because Managed Disks, Snapshots, and Images are Azure Resource Manager resources, you can now apply granular access control using Azure RBAC to each. Managed Disks expose a variety of operations, such as read, write (create/update), delete, and export. You can now grant access to only the operations that a person needs to perform their job. You can even create custom roles and grant only the permissions that best suit your requirements.
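For instance, granting someone read-only access to a single managed disk with the Azure CLI could look like this (a sketch; the subscription ID and names are placeholders):

$ az role assignment create --assignee user@contoso.com --role Reader --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myDataDisk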

You can also encrypt Managed Disks using Azure Disk Encryption with Customer-managed or Microsoft-managed keys.

"Through providing disk as a managed logical resource, Managed Disk brings our customers an enhanced, easy to use and more secure experience. We can now define role-based access controls for the Disks." – Jesper Jensen, Solvo IT

Migration

Managed Disks comes with an easy migration capability from existing unmanaged Azure Resource Manager VMs to Managed Disks VMs, without the need to recreate the VM and preserving the configuration and security of that VM. After initiating migration, the VM becomes available immediately after rebooting. You have full control of the migration and can choose to migrate one VM at a time or script the migration across all of your VMs at once.

You can now also migrate your Managed Disks from Standard to Premium easily. With Managed Disks, if you stop your VM, you can change the account type of your disks without deleting or reconfiguring the VM. The changed disks become available immediately when you restart your VM. Here are more details on how to execute the above migration options.

“Performance limits and proper naming to distribute accounts in different stamps need to be considered for a proper design. With Managed Disks, we’ve seen the light, they put to rest all these concerns and let us and our customers concentrate on our business. That’s not all, Managed Disks come with the migration capability, where I can migrate my Virtual Machines to Managed Disks with a single reboot.” –Daniele Grandini, Progel

Pricing

You can visit the Azure Storage Pricing page for more details on Managed Disks. The pricing of Premium Managed Disks is the same as Premium Unmanaged Disks. Standard Managed Disks have a slightly different pricing model than Standard Unmanaged Disks, with pricing based on the provisioned disk size. Given the change, we are offering a 50% promotional discount on Standard Managed Disks for the first six months.

Availability Regions

As of today, Managed Disks are available in all global regions. Sovereign clouds will have this support in the coming weeks. Discover the availability of Managed Disks by region.

Getting Started

I hope you enjoyed this post and really enjoy this new service offering bringing the agility and ease of PaaS management and scale to your IaaS VMs. Give the service a try! We would love to hear your feedback and comments. Deploying infrastructure has never been so much fun.

See ya around,

Corey

 

Learn more about Managed Disks:

Getting started
Azure Management Libraries for Java
Azure Management Libraries for .NET
Azure CLI V2
Premium Managed Disks
Standard Managed Disks
Managed Disks FAQ
VM Scale Set overview
Migrate to Managed Disks
Azure VM scale sets and attached data disks

Source: Azure