Azure Web Application Firewall (WAF) Generally Available

Last September at Ignite we announced plans for better web application security by adding Web Application Firewall to our layer 7 Azure Application Gateway service. We are now announcing the General Availability of Web Application Firewall in all Azure public regions.

Web applications are increasingly targets of malicious attacks that exploit common known vulnerabilities, such as SQL injection and cross site scripting attacks. Preventing such exploits in the application requires rigorous maintenance, patching, and monitoring at multiple layers of the application topology. A centralized web application firewall (WAF) protects against web attacks and simplifies security management without requiring any application changes. Application and compliance administrators get better assurance against threats and intrusions.

Azure Application Gateway is our Application Delivery Controller (ADC) layer 7 network service, offering capabilities including SSL termination, true round robin load distribution, cookie-based session affinity, multi-site hosting, and URL path based routing. Application Gateway provides SSL policy control and end-to-end SSL encryption for better application security hardening. These capabilities allow backend applications to focus on core business logic while leaving costly encryption/decryption, SSL policy, and load distribution to the Application Gateway. Web Application Firewall, integrated with Application Gateway’s core offerings, further strengthens the security portfolio and posture of applications, protecting them from many of the most common web vulnerabilities as identified in the Open Web Application Security Project (OWASP) top 10. Application Gateway WAF comes pre-configured with the OWASP ModSecurity Core Rule Set (3.0 or 2.2.9), which provides baseline security against many of these vulnerabilities. With simple configuration and management, Application Gateway WAF provides rich logging capabilities and selective rule enablement.

Benefits

Following are the core benefits that Web Application Firewall provides:

Protection

Protect your application from web vulnerabilities and attacks without modifying backend code. WAF addresses various attack categories including:

SQL injection
Cross site scripting
Common attacks such as command injection, HTTP request smuggling, HTTP response splitting, and remote file inclusion attacks
HTTP protocol violations
HTTP protocol anomalies
Bots, crawlers, and scanners
Common application misconfigurations (e.g. Apache, IIS, etc.)
HTTP Denial of Service

Protect multiple web applications simultaneously. Application Gateway supports hosting up to 20 websites behind a single gateway that can all be protected against web attacks.

Ease of use

Application Gateway WAF is simple to configure, deploy, and manage via the Azure Portal and REST APIs. PowerShell and CLI support will soon be available.
Administrators can centrally manage WAF rules.
Existing Application Gateways can be simply upgraded to include WAF. WAF retains all standard Application Gateway features in addition to Web Application Firewall.

Monitoring

Application Gateway WAF provides the ability to monitor web applications against attacks using a real-time WAF log that is integrated with Azure Monitor to track WAF alerts and easily monitor trends. The JSON-formatted log goes directly to the customer’s storage account. Customers have full control over these logs and can apply their own retention policies. Customers can also ingest these logs into their own analytics system. WAF logs are also integrated with Operations Management Suite (OMS), so customers can use OMS Log Analytics to execute sophisticated, fine-grained queries.

Application Gateway WAF will shortly be integrated with Azure Security Center to provide a centralized security view of all your Azure resources. Azure Security Center scans your subscriptions for vulnerabilities and recommends mitigation steps for detected issues. One such vulnerability is the presence of web applications that are not protected by a WAF.

Customization

Application Gateway WAF can run in detection or prevention mode. A common use case is for administrators to run in detection mode to observe traffic for malicious patterns; once potential exploits are detected, switching to prevention mode blocks suspicious incoming traffic.
Customers can customize WAF RuleGroups to enable or disable broad categories or sub-categories of attacks. For example, an administrator can enable or disable the RuleGroups for SQL Injection or Cross Site Scripting (XSS). Customers can also enable or disable specific rules within a RuleGroup; the Protocol Anomaly RuleGroup, for instance, is a collection of many rules that can be selectively enabled or disabled.
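As an illustration, these mode and rule selections surface in the Application Gateway’s ARM resource as a webApplicationFirewallConfiguration block. The property names below follow the Microsoft.Network/applicationGateways schema, but the rule group name and rule IDs are placeholders; check the ARM reference for your API version before using them:

```json
{
  "webApplicationFirewallConfiguration": {
    "enabled": true,
    "firewallMode": "Detection",
    "ruleSetType": "OWASP",
    "ruleSetVersion": "3.0",
    "disabledRuleGroups": [
      {
        "ruleGroupName": "REQUEST-920-PROTOCOL-ENFORCEMENT",
        "rules": [920300, 920330]
      }
    ]
  }
}
```

Changing firewallMode from "Detection" to "Prevention" is the switch described above; omitting the rules array for a group would disable the whole group.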

Embracing Open Source

Application Gateway WAF uses one of the most popular WAF rule sets, the OWASP ModSecurity Core Rule Set, to protect against the most common web vulnerabilities. These rules, which conform to rigorous standards, are managed and maintained by the open source community. Customers can choose between rule sets CRS 2.2.9 and CRS 3.0. Since CRS 3.0 offers a dramatic reduction in false positives, we recommend using CRS 3.0.

Summary and next steps

General availability of Web Application Firewall is an important milestone in our Application Gateway ADC security offering. We will continue to enhance the WAF feature set based on your feedback. You can try Application Gateway Web Application Firewall today using the Azure portal or ARM templates. Further information and detailed documentation links are provided below.

Application Gateway WAF pricing
More technical details on Application Gateway WAF
A comprehensive list of WAF rule schemas and RuleGroup/Rules
Step by step guide to create and customize
Deployment by an ARM template
ARM API
PowerShell and CLI support will be available soon

Source: Azure

Use BigDL on HDInsight Spark for Distributed Deep Learning

Deep learning is impacting everything from healthcare and transportation to manufacturing, and more. Companies are turning to deep learning to solve hard problems like image classification, speech recognition, object recognition, and machine translation. In this blog post, Intel’s BigDL team and the Azure HDInsight team collaborate to provide the basic steps for using BigDL on Azure HDInsight.

What is Intel’s BigDL library?

In 2016, Intel released its BigDL distributed deep learning project into the open-source community (see the BigDL GitHub repository). It natively integrates into Spark, supports popular neural net topologies, and achieves feature parity with other open-source deep learning frameworks. BigDL also provides 100+ basic neural network building blocks, allowing users to create novel topologies to suit their unique applications. Thus, with Intel’s BigDL, users can leverage their existing Spark infrastructure to enable deep learning applications without having to invest in bringing up separate frameworks to take advantage of neural network capabilities.

Since BigDL is an integral part of Spark, a user does not need to explicitly manage distributed computations. While providing high-level control “knobs” such as the number of compute nodes, cores, and batch size, a BigDL application leverages the stable Spark infrastructure for node communication and resource management during its execution. BigDL applications can be written in either Python or Scala and achieve high performance through both algorithm optimization and intimate integration with Intel’s Math Kernel Library (MKL). Check out Intel’s BigDL portal for more details.

Azure HDInsight

Azure HDInsight is the only fully-managed cloud Hadoop offering that provides optimized open-source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server, backed by a 99.9% SLA. In addition, HDInsight is an open platform for third-party big data applications from ISVs, as well as custom applications such as BigDL.

In this blog post, the BigDL team and the Azure HDInsight team give a high-level view of how to use BigDL with Apache Spark for Azure HDInsight. You can find a more detailed walkthrough of using BigDL to analyze the MNIST dataset in the engineering blog post.

Getting BigDL to work on Apache Spark for Azure HDInsight

BigDL is very easy to build and integrate. There are two major steps:

Get BigDL source code and build it to get the required jar file
Use Jupyter Notebook to write your first BigDL application in Scala 

Step 1: Build BigDL libraries

The first step is to build the BigDL libraries and get the required jar file. You can simply SSH into the cluster head node and follow the build instructions in the BigDL documentation. Note that you need to install Maven on the head node to build BigDL, and then copy the jar file (dist/lib/bigdl-0.1.0-SNAPSHOT-jar-with-dependencies.jar) to the default storage account of your HDInsight cluster. Please refer to the engineering blog for more details.

Step 2: Use Jupyter Notebook to write your first application

The HDInsight cluster comes with Jupyter Notebook, which provides a nice notebook-like experience for authoring Spark jobs. Here is a snapshot of a Jupyter Notebook running BigDL on Apache Spark for Azure HDInsight. For a detailed step-by-step example of training on the popular MNIST dataset using the LeNet model, please refer to Microsoft’s engineering blog post. For more details on how to use Jupyter Notebooks on HDInsight, please refer to the documentation.

BigDL workflow and major components

Below is a general workflow of how BigDL trains a deep learning model on Apache Spark. As shown in the figure, BigDL jobs are standard Spark jobs. In a distributed training process, BigDL launches Spark tasks in each executor (each task leverages Intel MKL to speed up the training process).

A BigDL program starts with import com.intel.analytics.bigdl._ and then initializes the Engine, specifying the number of executor nodes and the number of physical cores on each executor.

If the program runs on Spark, Engine.init() will return a SparkConf with the proper configurations populated, which can then be used to create the SparkContext. In this particular case, the Jupyter Notebook automatically sets up a default Spark context, so you do not need to do the above configuration, but you do need to set a few other Spark-related configurations, which are explained in the sample Jupyter Notebook.
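A minimal sketch of this initialization in Scala might look like the following. The Engine.init signature shown matches the BigDL 0.1 API as we understand it, and the node and core counts are hypothetical; check the BigDL documentation for your version:

```scala
import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.utils.Engine
import org.apache.spark.SparkContext

// Hypothetical cluster shape: 2 executor nodes with 4 physical cores each.
// On Spark, Engine.init returns a SparkConf with BigDL settings populated.
val conf = Engine.init(nodeNumber = 2, coreNumber = 4, onSpark = true)
  .map(_.setAppName("BigDL example"))
  .getOrElse(throw new IllegalStateException("Engine.init returned no SparkConf"))
val sc = new SparkContext(conf)
// ... define a model and an Optimizer here to train it ...
```

In a Jupyter Notebook on HDInsight the SparkContext already exists, so only the Engine initialization and the extra Spark settings mentioned above are needed.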

Conclusion

In this blog post, we have demonstrated the basic steps to set up a BigDL environment on Apache Spark for Azure HDInsight; you can find a more detailed walkthrough of using BigDL to analyze the MNIST dataset in the engineering blog post “How to use BigDL on Apache Spark for Azure HDInsight.” Leveraging the BigDL Spark library, a user can easily write scalable distributed deep learning applications within familiar Spark infrastructure without intimate knowledge of the configuration of the underlying compute cluster. The BigDL and Azure HDInsight teams have been collaborating closely to enable BigDL in the Apache Spark for Azure HDInsight environment.

If you have any feedback for HDInsight, feel free to drop an email to hdifeedback@microsoft.com. If you have any questions for BigDL, you can raise your questions in BigDL Google Group.

Resources

Learn more about Azure HDInsight
Artificial Intelligence Software and Hardware at Intel
BigDL introductory video

Source: Azure

Azure Analysis Services now available in Japan East and UK South

Last October we released the preview of Azure Analysis Services, which is built on the proven analytics engine in Microsoft SQL Server Analysis Services. With Azure Analysis Services you can host semantic data models in the cloud. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc data analysis.

We are excited to share with you that the preview of Azure Analysis Services is now available in two additional regions: Japan East and UK South. This means that Azure Analysis Services is now available in the following regions: Australia Southeast, Canada Central, Brazil South, Southeast Asia, North Europe, West Europe, West US, South Central US, North Central US, East US 2, West Central US, Japan East, and UK South.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.
Source: Azure

TIBCO DataSynapse launches its Cloud Adapter for autoscaling in Azure

Since our launch of the TIBCO DataSynapse GridServer Engine image in the Azure Marketplace back in August, we have continued to pursue feature updates that improve the integration and experience of scheduling jobs and tasks into Azure. With the increased regulatory requirements of the Fundamental Review of the Trading Book (FRTB), the ability to burst into Azure for additional compute capacity is a hot topic within the Financial Services industry.

This week, TIBCO DataSynapse released the High Performance Adapter (HPCCA) to extend the functionality of GridServer 6.2.0 and enable customers to temporarily increase their compute capacity in the cloud.

The hybrid burst scenario takes a step forward with the ability to dynamically create and provision both Linux and Windows VMs directly into Azure. By implementing HPCCA as a Manager Hook into the Broker, it is able to review Broker events and take the right course of action through a simple cloud management algorithm.

Based on the number of events experienced, a formula predicts the number of VMs required and a customer created image is initiated. HPCCA can configure the current deployment and remote-start the Engine Daemons ready to process these events.

HPCCA also has the ability to shut down the Engine Daemons as soon as the events begin to decrease. By utilizing the same set of Azure APIs, the Engine idle time following event execution triggers this action.
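TIBCO has not published the exact formula HPCCA uses, but a scale-out calculation of the kind described above can be sketched as follows (the function name, inputs, and the cap are all illustrative assumptions, not HPCCA code):

```python
import math

def required_vms(pending_events, events_per_vm, max_vms):
    """Predict how many VMs are needed to absorb the current event backlog.

    Illustrative only: provision enough VMs to cover the backlog, capped at a
    customer-defined maximum. Returning 0 when there is no backlog models the
    engine daemons being shut down as events decrease.
    """
    if pending_events <= 0:
        return 0
    return min(max_vms, math.ceil(pending_events / events_per_vm))
```

For example, with 95 pending events and 10 events per VM this asks for 10 VMs, while a backlog of 500 stays at the configured cap.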

If you would like to hear more about this and you are based in the New York City area, please join us on Wednesday, April 19th at the Microsoft office to view a demonstration.

Register now for the TIBCO and Microsoft Azure Workshop!

TIBCO DataSynapse GridServer is a service execution platform for dynamically scaling any application at any time across grid infrastructure. Because of the improvements in productivity, performance, and uptime it delivers, it is used heavily within Financial Services for parallelizing certain risk applications.
Source: Azure

How Azure Security Center helps reveal a Cyberattack

The Azure Security Center (ASC) analyst team reviews and investigates ASC alerts to gain insight into security incidents affecting Microsoft Azure customers, helping improve Azure security alerts and detections. ASC helps customers keep pace with rapidly evolving threats by using advanced analytics and global threat intelligence.

Although we have come a long way as far as cloud security is concerned, security remains a heavily discussed factor as companies consider moving their assets to the cloud. The Azure Security Center team understands how critical it is for our customers to be assured that their Azure deployments are secure, not only from advanced attacks but also from ones that are not necessarily new or novel. The beauty of ASC lies in its simplicity. Although ASC uses machine learning, anomaly detection, and behavioral analysis to determine suspicious events, it still addresses simple things like the SQL brute force attacks that bad guys and script kiddies use to break into Microsoft SQL servers.

In this blog, we’ll map out the stages of one real-world attack campaign that began with a SQL Brute Force attack, which was detected by the Security Center, and the steps taken to investigate and remediate the attack. This case study provides insights into the dynamics of the attack and recommendations on how to prevent similar attacks in your environment.

Initial ASC alert and details

Hackers are always trying to target internet connected databases. There are tons of bad guys trying to discover IP addresses that have SQL Server running so that they can crack their password through a brute force attack. The SQL database can contain a wealth of valuable information for the attackers, including personally identifiable information, credit card numbers, intellectual property, etc. Even if the database doesn’t have much information, a successful attack on an insecurely configured SQL installation can be leveraged to get full system admin privileges.

Our case started with an ASC alert notification to the customer detailing malicious SQL activity. A command line “ftp -s:C:\zyserver.txt” launched by the SQL service account was unusual and was flagged by ASC.

The alert provided details such as date and time of the detected activity, affected resources, subscription information, and included a link to a detailed report of the detected threat and recommended actions.


Through our monitoring, the ASC analyst team was also alerted to this activity and looked further into the details of the alert. What we discovered was that the SQL service account (SQLSERVERAGENT) was creating FTP scripts (i.e., C:\zyserver.txt), which were used to download and launch malicious binaries from an FTP site.

The initial compromise

A deeper investigation into the affected Azure subscription began with inspection of the SQL error and trace logs, where we found indications of SQL brute force attempts. In the SQL error logs, we encountered hundreds of “Audit Login Failed” logon attempts for the SQL admin ‘sa’ account (the built-in SQL Server administrator account), which eventually led up to a successful login.

These brute force attempts occurred over TCP port 1433, which was exposed on a public facing interface. TCP port 1433 is the default port for SQL Server.

Note: It is a very common recommendation to change the default SQL port 1433, but this may impart a false sense of security, because many port scanning tools can scan a range of network ports and eventually find SQL listening on a port other than 1433.

Once the SQL Admin ‘sa’ account was compromised by brute force, the account was then used to enable the ‘xp_cmdshell’ extended stored procedure as we’ve highlighted below in a SQL log excerpt.

The ‘xp_cmdshell’ stored procedure is disabled by default and is of particular interest to attackers because of its ability to invoke a Windows command shell from within Microsoft SQL Server. With ‘xp_cmdshell’ enabled, the attacker created SQL Agent jobs that invoked ‘xp_cmdshell’ and launched arbitrary commands, including the creation and launch of FTP scripts which, in turn, downloaded and ran malware.

Details of malicious activity

Once we determined how the initial compromise occurred, our team began analyzing Process Creation events to determine other malicious activity. The Process Creation events revealed the execution of a variety of commands, including downloading and installing backdoors and arbitrary code, as well as permission changes made on the system.

Below we have detailed a chronological layout of process command lines that we determined to be malicious:

A day after the initial compromise, we began to see the modification of ACLs on files/folders and registry keys with the use of Cacls.exe (which appears to have been renamed to osk.exe and vds.exe).

Note: Osk.exe is the executable for the accessibility On-Screen Keyboard and Vds.exe is the Virtual Disk Service executable, both typically found on a Windows installation. The command lines and command switches detailed below, however, are not used by Osk.exe or Vds.exe and are associated with Cacls.exe.

The Cacls.exe command switches /e /g are used to grant the System account full (:f) access rights to ‘cmd.exe’ and ‘net.exe’.

A few seconds later, we see the termination of known antivirus software using the Windows-native “taskkill.exe”.

This was followed by the creation of an FTP script (C:\zyserver.txt), which was flagged in the original ASC alert. This FTP script appears to download malware (C:\stserver.exe) from a malicious FTP site and subsequently launch the malware.

A few minutes later, we see the “net user” and “net localgroup” commands used to accomplish the following:

a. Activate the built-in guest account and add it to the Administrators group

b. Create a new user account and add the newly created user to the Administrators group

A little over two hours later, we see the regini.exe command, which appears to be used to create, modify, or delete registry keys. Regini can also set permissions on registry keys as defined in the noted .ini file. We then see regsvr32.exe silently (/s switch) registering DLLs related to the Windows shell (urlmon.dll, shdocvw.dll) and Windows scripting (jscript.dll, vbscript.dll, wshom.ocx).

This is immediately followed by additional modification of permissions on various Windows executables, essentially resetting each to default with the “icacls.exe” command.

Note: The /reset switch replaces ACLs with default inherited ACLs for all matching files.

Lastly, we observed the modification of the “Terminal Server” fDenyTSConnections registry value, which controls Terminal Server connection restrictions. This led us to believe that malicious RDP connections might be the attacker’s next step to access the server. Inspection of logon events, however, did not reveal any malicious RDP attempts or connections:

Disabling of Terminal Server connection restrictions by overwriting values in the “Terminal Server” registry key
reg.exe ADD "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 00000000 /f

We also noticed a scheduled task being created. This task referenced a binary named “svchost.exe” to be launched out of the C:\RECYCLER folder, which is suspicious.

Note that the legitimate “svchost.exe” files are located in “Windows\System32” and “Windows\SysWOW64”. Svchost.exe running from any other directory should be considered suspicious.

Persistence mechanism – Task Scheduler utility (schtasks.exe) used to set a recurring task
C:\Windows\System32\schtasks.exe /create /tn "45645" /tr "C:\RECYCLER\svchost.exe" /sc minute /mo 1 /ru "system"

Recommended remediation and mitigation steps

Once we understood the extent and the details of the attack, we recommended the following remediation and mitigation steps to be taken.

First, if possible, we recommended backing up and rebuilding the SQL Server and resetting all user accounts. We then recommended implementing the following mitigation steps to help prevent further attacks.

1. Disable the ‘sa’ account and use the more secure Windows Authentication

To disable the ‘sa’ login via SQL, run the following commands as a sysadmin:

ALTER LOGIN sa DISABLE

GO

2. To help prevent attackers from guessing the ‘sa’ account, rename the ‘sa’ account
To rename the ‘sa’ account via SQL, run the following as a sys admin:

ALTER LOGIN sa WITH NAME = [new_name];

GO

3. To prevent future brute force attempts, change and harden the ‘sa’ password and set the sa Login to ‘Disabled’.

Learn how to verify and change the system administrator password in MSDE or SQL Server 2005 Express Edition.

4. It’s also a good idea to ensure that ‘xp_cmdshell’ is disabled. Again, note that it is disabled by default.
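To check this on the server, you can inspect the setting with sp_configure, standard T-SQL run as a sysadmin (the same procedure can disable it again if an attacker has turned it on):

```sql
-- Show the current state of xp_cmdshell (run_value 0 = disabled)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell';

-- If it was found enabled, disable it
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
```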

5. Block TCP port 1433 if it does not need to be open to the internet. From your Azure portal, take the following steps to configure a rule to block 1433 in the Network Security Group:

a. Open the Azure portal

b. Navigate to More services > Network security groups

c. If you have opted into the Network Security option, you will see an entry for <ComputerName-nsg>; click it to view your Security Rules

d. Under Settings click "Inbound security rules" and then click +Add on the next pane

e. Enter the rule name and port information. Under the ‘Service’ pulldown, choose MS SQL and it will automatically select Port range = 1433 as detailed below.

f. Then apply the newly created rule to the subscription

6. Inspect all stored procedures that may have been enabled in SQL and look for stored procedures that may be implementing ‘xp_cmdshell’ and running unusual commands.

For example, in our case, we identified the following commands:

7. Lastly, we highly recommend configuring Azure subscription(s) to receive future alerts and email notifications from Microsoft Azure Security Center. To receive alerts and email notifications of security issues like this in the future, we recommended upgrading from ASC “Free” (basic detection) tier to ASC “Standard” (advanced detection) tier.

Below is an example of the email alert received from ASC when this SQL incident was detected:

Learn more about SQL detection

Azure SQL Database Threat Detection–Advanced DB Security in the Cloud
Protect Azure SQL Databases with Azure Security Center
SQL Threat Detection – Your built-in security expert

Source: Azure

Up your SaaS application game

Frustrated with ever-changing customer expectations? Yesterday’s investments losing steam as the market turns to the next shiny thing? How do you scale across cloud and mobile, stay on top of data and security, and ultimately sell successfully in a rapidly transforming landscape?

Microsoft App Accelerate targets exactly these issues for application creators and ISVs (independent software vendors). Whatever type of solution your company delivers, the program provides the resources to help you plan, deploy, launch, and grow your applications.

Learn more by reading Build intelligent applications with help from Microsoft App Accelerate.
Source: Azure

One-click disaster recovery of applications using Azure Site Recovery

Disaster recovery is not only about replicating your virtual machines, but also about end-to-end application recovery that is tested multiple times and is error free and stress free when disaster strikes; that is the Azure Site Recovery promise. If you have never seen your application run in Microsoft Azure, chances are that when a real disaster happens, the virtual machines may boot, but your business may remain down. The importance and complexity involved in recovering applications was described in the previous blog of this series, Disaster recovery for applications, not just virtual machines using Azure Site Recovery. This blog covers how you can use the Azure Site Recovery construct of recovery plans to fail over or migrate applications to Microsoft Azure in the most tested and deterministic way, using an example of recovering a real-world application to the public cloud.

Why use Azure Site Recovery “recovery plans”?

Recovery plans help you plan for a systematic recovery process by creating small independent units that you can manage. These units will typically represent an application in your environment. A recovery plan not only allows you to define the sequence in which the virtual machines start, but also helps you automate common tasks during recovery.

Essentially, one way to check that you are prepared for disaster recovery is by ensuring that every application of yours is part of a recovery plan and each of the recovery plans is tested for recovery to Microsoft Azure. With this preparedness, you can confidently migrate or failover your complete datacenter to Microsoft Azure.
 
Let us look at the three key value propositions of a recovery plan:

Model an application to capture dependencies
Automate most recovery tasks to reduce RTO
Test failover to be ready for a disaster

Model an application to capture dependencies

A recovery plan is a group of virtual machines, generally comprising an application, that fail over together. Using the recovery plan constructs, you can enhance this group to capture your application-specific properties.
 
Let us take the example of a typical three tier application with

one SQL backend
one middleware
one web frontend

The recovery plan can be customized to ensure that the virtual machines come up in the right order after a failover. The SQL backend should come up first, the middleware should come up next, and the web frontend should come up last. This order makes certain that the application is working by the time the last virtual machine comes up. For example, when the middleware comes up, it will try to connect to the SQL tier, and the recovery plan has ensured that the SQL tier is already running. Frontend servers coming up last also ensures that end users do not connect to the application URL by mistake until all the components are up and running and the application is ready to accept requests. To build these dependencies, you can customize the recovery plan to add groups, then select a virtual machine and change its group to move it between groups.


Once you complete the customization, you can visualize the exact steps of the recovery. Here is the order of steps executed during the failover of a recovery plan:

First, there is a shutdown step that attempts to turn off the virtual machines on-premises (except in test failover, where the primary site needs to continue running)
Next, it triggers failover of all the virtual machines of the recovery plan in parallel. The failover step prepares the virtual machines’ disks from replicated data.
Finally, the startup groups execute in order, starting the virtual machines in each group: Group 1 first, then Group 2, and finally Group 3. If there is more than one virtual machine in any group (for example, a load-balanced web frontend), all of them are booted up in parallel.

Sequencing across groups ensures that dependencies between various application tiers are honored and parallelism where appropriate improves the RTO of application recovery.
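The sequencing just described, strict ordering across groups with parallelism within a group, can be sketched in a few lines of Python. This is purely illustrative of the boot logic, not Azure Site Recovery code; start_vm is a stand-in for the actual boot operation:

```python
from concurrent.futures import ThreadPoolExecutor

def start_vm(name):
    # Stand-in for the real "boot this VM from replicated disks" operation.
    return f"{name} started"

def run_recovery_plan(groups):
    """Boot groups strictly in order; VMs within a group boot in parallel."""
    results = []
    for group in groups:  # Group 1, then Group 2, then Group 3, ...
        with ThreadPoolExecutor() as pool:  # parallelism within the group
            results.extend(pool.map(start_vm, group))  # waits for the whole group
    return results

# Three-tier example from above: SQL first, middleware next, web frontends last
plan = [["sql-backend"], ["middleware"], ["web-frontend-1", "web-frontend-2"]]
print(run_recovery_plan(plan))
```

Because each group must finish before the next begins, dependencies are honored, while booting the load-balanced frontends in parallel keeps the overall RTO low.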

Automate most recovery tasks to reduce RTO

Recovering large applications can be a complex task. It is also difficult to remember the exact customization steps post failover. Sometimes it is not you, but someone else unaware of the application intricacies, who needs to trigger the failover. Remembering too many manual steps in times of chaos is difficult and error prone. A recovery plan gives you a way to automate the required actions at every step by using Microsoft Azure Automation runbooks. With runbooks, you can automate common recovery tasks like the examples given below. For tasks that cannot be automated, recovery plans also provide the ability to insert manual actions.

Tasks on the Azure virtual machine post failover – these are typically required so that you can connect to the virtual machine, for example:

Create a public IP on the virtual machine post failover
Assign an NSG to the failed over virtual machine’s NIC
Add a load balancer to an availability set

Tasks inside the virtual machine post failover – these reconfigure the application so that it continues to work correctly in the new environment, for example:

Modify the database connection string inside the virtual machine
Change web server configuration/rules

For many common tasks, you can use a single runbook and pass parameters to it for each recovery plan, so that one runbook can serve all your applications. To deploy these scripts yourself and try them out, click the button below to import popular scripts into your Microsoft Azure Automation account.

With a complete recovery plan that automates the post recovery tasks using automation runbooks, you can achieve one-click failover and optimize the RTO. 

Test failover to be ready for a disaster

A recovery plan can be used to trigger either a failover or a test failover. You should always complete a test failover on the application before doing a failover. A test failover helps you check whether the application will come up on the recovery site. If you have missed something, you can easily trigger cleanup and redo the test failover. Do the test failover multiple times until you know with certainty that the application recovers smoothly.


Each application is different, and you need to build a recovery plan customized for each one. In a dynamic datacenter, applications and their dependencies also keep changing, so run a test failover of your applications once a quarter to check that the recovery plan is still current.

Real-world example – WordPress disaster recovery solution

Watch a quick video of a two-tier WordPress application failover to Microsoft Azure and see the recovery plan with automation scripts, and its test failover in action using Azure Site Recovery.

The WordPress deployment consists of one MySQL virtual machine and one frontend virtual machine with Apache web server, listening on port 80.
WordPress deployed on the Apache web server is configured to communicate with MySQL via the IP address 10.150.1.40.
Upon test failover, the WordPress configuration needs to change to communicate with MySQL on the failover IP address 10.1.6.4. To ensure that MySQL acquires the same IP address on every failover, we configure the virtual machine properties with a preferred IP address of 10.1.6.4.
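The in-guest reconfiguration step can be sketched like this. It is a minimal Python sketch, assuming WordPress's standard `wp-config.php` conventions (the `DB_HOST` constant); the IP addresses are the ones from the example above, and in practice the script would rewrite the file on disk inside the failed-over virtual machine.

```python
# Sketch: rewrite the DB_HOST entry in wp-config.php so WordPress talks to
# MySQL at its failover address. Addresses come from the example above.
import re

OLD_HOST, NEW_HOST = "10.150.1.40", "10.1.6.4"

def repoint_wp_config(text: str) -> str:
    # Turns: define('DB_HOST', '10.150.1.40');
    # into:  define('DB_HOST', '10.1.6.4');
    pattern = r"(define\(\s*'DB_HOST'\s*,\s*')%s('\s*\)\s*;)" % re.escape(OLD_HOST)
    return re.sub(pattern, r"\g<1>%s\g<2>" % NEW_HOST, text)

if __name__ == "__main__":
    sample = "define('DB_HOST', '10.150.1.40');"
    print(repoint_wp_config(sample))  # define('DB_HOST', '10.1.6.4');
```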

With a relentless focus on ensuring that you succeed with full application recovery, Azure Site Recovery is the one-stop shop for all your disaster recovery needs. Our mission is to democratize disaster recovery with the power of Microsoft Azure: not just to give elite tier-1 applications a business continuity plan, but to offer a compelling solution that empowers you to set up a working end-to-end disaster recovery plan for 100% of your organization's IT applications.

You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Azure Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate, whether it is running on VMware or Hyper-V. To learn more about Azure Site Recovery, check out our How-To Videos. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the Azure Site Recovery User Voice to let us know what features you want us to enable next.
Source: Azure

Adding offerings and UK Region: Azure rolls deep with PCI DSS v3.2

Check out our AoC

Download Azure's Payment Card Industry Data Security Standard (PCI DSS) v3.2 Attestation of Compliance (AoC)! Customers who want or need to operate in a cloud environment while also adhering to the global standards designed to prevent credit card fraud need look no further than Azure.

Why put off until 2018 what you can do today?

When it comes to security and compliance, we are always ready to act. PCI DSS v3.2 contains several requirements that don't take effect until January 2018, and while it is possible to get a v3.2 certification without meeting these future requirements, Azure has already adopted them and is compliant with all of the new requirements today!

UK, too!

Azure has also added the UK region to our list of PCI-certified datacenters, while expanding coverage within previously certified regions around the world. A new version of our PCI Responsibility Matrix will be released shortly; keep an eye out for that announcement.

More services = More options for customers

Azure has again increased the coverage of our attestation to keep up with customer needs, and we continue to be unmatched among cloud service providers in the depth and breadth of offerings covered by PCI DSS v3.2. A sample of the services added in this attestation includes:

Note: Refer to the latest AoC for the full list of services and regions covered. 

DocumentDB
Data Catalog
Machine Learning
Functions
Virtual Machine Scale Sets

AoC FAQs and Highlights

Why does the AoC say “April 2016”?

The front page and footer of the AoC say "April 2016." This is the date the template was published by the PCI SSC, not the date of our AoC. Many customers are confused by this, but we are not able to modify the AoC template. Refer to page 76 of the AoC for the date it was actually signed and issued.

How should I interpret the service listing in the AoC?

We have received feedback in the past that it was difficult to understand what services were covered in the AoC. This was mainly because the services were listed under the groupings and internal names our Qualified Security Assessor (QSA) used for the assessment, along with the fact that many services got re-branded shortly after our 2015 AoC was released.

We incorporated that feedback in the release of our 2016 AoC, and have again updated the service listing in the 2017 AoC to reflect the current set of Azure offerings. Please be aware that if an Azure service is re-branded we are not able to retroactively update the AoC.  If you have questions about the status of an Azure service, please contact Azure support or your TAM. 

Why isn’t Azure assessed as a “Shared Hosting Provider”?

The shared hosting provider designation in PCI DSS is for situations where multiple customers are hosted on a single server, and it doesn't take into account the hosting of isolated virtualized environments. An example of shared hosting is a service provider hosting multiple customer websites on a single physical web server; in that situation, there is no segregation between the customer environments. Azure is not considered a shared hosting provider for PCI because customer VMs and environments are segregated and isolated from each other. Changes made to "Customer X's" VM do not affect "Customer Y's" VM, even when both VMs are hosted on the same physical host.

Azure Data Factory’s Data Movement is now available in the UK

Data Movement is a feature of Azure Data Factory, the cloud-based data integration service that orchestrates and automates the movement and transformation of data. You can now create data integration solutions with Azure Data Factory that ingest data from various data stores, transform/process the data, and publish the results back to data stores.

Moreover, you can now use Azure Data Factory for both your cloud and hybrid data movement needs with data stores in the UK. For instance, when copying data from a cloud data source to an Azure store located in the UK, the Data Movement service in UK South performs the copy and ensures compliance with data residency requirements.

Note: Azure Data Factory itself does not store any data, but instead lets you create data-driven flows to orchestrate movement of data between supported data stores and the processing of data using compute services in other regions or in an on-premises environment.

To learn more about using Azure Data Factory for data movement, view the Move data by using Copy Activity article. 

You can also go to Azure.com to learn more about Azure Data Factory, or view the more in-depth Azure Data Factory documentation.

Azure Relay Hybrid Connections is generally available

The Azure Relay service was one of the first core Azure services, and today's announcement shows that it has grown nicely with the times. For those familiar with the WCF Relay feature of Azure Relay, rest assured that it will continue to function, but its dependency on Windows Communication Foundation is not for everyone. The Hybrid Connections feature of Azure Relay sheds this dependency by using open, standards-based protocols.

Hybrid Connections offers much of the same functionality as WCF Relay, including:

Secure connectivity of on-premises assets and the cloud
Firewall friendliness as it utilizes common outbound ports
Network management friendliness that won't require a major reconfiguration of your network

The differences are where Hybrid Connections shines:

An open, standards-based protocol (WebSockets) rather than a proprietary one (WCF)
Cross-platform support: Windows, Linux, or any platform that supports WebSockets
Support for .NET Core, JavaScript/Node.js, and multiple RPC programming models to achieve your objectives
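Whichever platform you use, a listener or sender authenticates to the relay by presenting a Shared Access Signature (SAS) token when it opens its WebSocket. The sketch below generates such a token following the standard Azure Service Bus SAS scheme; the namespace, key name, and key are placeholders, not real values.

```python
# Sketch: build a Service Bus-style SAS token for a Hybrid Connection.
# string-to-sign = urlencoded-resource-uri + "\n" + expiry (Unix seconds),
# signed with HMAC-SHA256 over the shared access key.
import base64
import hashlib
import hmac
import time
import urllib.parse

def sas_token(resource_uri: str, key_name: str, key: str, ttl_seconds: int = 3600) -> str:
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name
    )

if __name__ == "__main__":
    # Placeholder namespace and key for illustration only.
    print(sas_token("sb://contoso.servicebus.windows.net/myhc",
                    "RootManageSharedAccessKey", "placeholder-key"))
```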

Getting started with Azure Relay Hybrid Connections is simple and easy with steps here for .NET and Node.js.

If you want to try it (and we hope you do), you can find out more about Hybrid Connections pricing and the Azure Relay offering.