Announcing Azure Blob storage events preview

In conjunction with the recently announced Azure Event Grid preview, we are pleased to announce the preview of Azure Blob storage events.

Azure Blob storage is a massively scalable object storage platform. With exabytes of capacity, Azure Blob storage easily and cost-effectively stores anywhere from hundreds to billions of objects, in hot or cool tiers, and supports any type of data—images, videos, audio, documents, and more.

Blob storage events allow applications to react to the creation and deletion of blobs without the need for complicated code and expensive, inefficient polling services. Instead, events are pushed directly to event handlers such as Azure Functions, Azure Logic Apps, or your own custom HTTP listener.

Blob storage events are made possible by Azure Event Grid, which enables event-based programming in Azure by providing reliable distribution of events for Azure services and third-party services. With publisher/subscriber semantics, event sources like Azure Blob storage push events to Event Grid, which routes, filters, and reliably distributes them to subscribers with WebHooks, queues, and Event Hubs as endpoints. Event Grid is baked into the Azure ecosystem, so connecting your storage account to your event handler is as easy as point and click. Azure Event Grid has a pay-per-event pricing model, so you only pay for what you use. Additionally, to help you get started quickly, the first 100,000 operations per month are free. Beyond that, pricing is $0.30 per million operations during the preview. More details can be found on the pricing page.
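If you point a subscription at your own custom HTTP listener, the endpoint must answer Event Grid's subscription validation handshake and then process the blob events it receives. A minimal sketch of the JSON handling is shown below; the field names follow the Event Grid event schema, but treat them as assumptions to verify against the payloads your subscription actually delivers:

```python
import json

def handle_event_grid_request(body: str) -> dict:
    """Process one Event Grid delivery; returns the JSON reply to send back."""
    for event in json.loads(body):
        if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
            # Echo the validation code so Event Grid accepts this endpoint.
            return {"validationResponse": event["data"]["validationCode"]}
        if event["eventType"] == "Microsoft.Storage.BlobCreated":
            print("blob created:", event["data"]["url"])
        elif event["eventType"] == "Microsoft.Storage.BlobDeleted":
            print("blob deleted:", event["data"]["url"])
    return {}
```

When you use Azure Functions or Logic Apps as the handler, this plumbing is taken care of for you; the sketch is only relevant for a hand-rolled webhook.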

The preview of Blob storage events is available now for Blob storage accounts in the US West Central location with additional locations coming soon. To learn more, and to sign up for the preview, see Azure Blob storage events.

We would love to hear more about your experiences with the preview and get your feedback! Are there other storage events you would like to see made available? Drop us a line at azurestorageevents@microsoft.com and let us know.

Happy eventing!
Source: Azure

Integrate Azure Stack into your datacenter

Now that you’ve ordered Azure Stack, how do you integrate it into your datacenter? What are the integration touchpoints?

Introduction

Microsoft Azure Stack is an integrated system available from multiple OEMs. If you haven’t read our blog post about why we decided to go down this path, I encourage you to do so.

Azure Stack offers a tailored, hardened, and secured appliance-like experience with simplified administration. For emergency recovery, a privileged PowerShell endpoint is available, which is secured using Just Enough Administration. The administrative experience includes a full-featured update mechanism called Patch and Update (referred to as P&U) to ensure that customers can focus on the services they run on Azure Stack.

Of course, without being able to integrate into your datacenters, Azure Stack is just an island, consuming power and cooling resources. In this post, I’ll focus on how Azure Stack can integrate into your datacenter.

Network integration

Azure Stack is built using a Clos network topology and uses the Software Defined Networking (SDN) capabilities in Windows Server 2016. Azure Stack systems ship with top-of-rack switches and a baseboard management controller switch, which connects to each physical server's baseboard management controller.

All network traffic flowing from the top-of-rack switches to the customer border switches is layer three only. Network traffic is routed using Border Gateway Protocol or static routing, depending on customer requirements.

For deployment and integration of Azure Stack, you, as the customer, must provide various IPv4 subnet ranges. While some of them are used only internally by Azure Stack, others are required to be routable within your network and are potentially controlled by access control lists (ACLs).
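When planning these ranges, a quick sanity check that the proposed subnets don't overlap your existing networks can save a redeployment. A sketch using Python's standard ipaddress module; the ranges shown are placeholders, not the actual subnets Azure Stack requires:

```python
import ipaddress

# Placeholder ranges for illustration only; substitute your own networks and
# the subnet sizes from the Azure Stack deployment documentation.
existing_networks = [ipaddress.ip_network("10.0.0.0/16"),
                     ipaddress.ip_network("192.168.0.0/24")]

def overlaps_existing(candidate: str) -> bool:
    """True if a proposed Azure Stack subnet collides with a known network."""
    subnet = ipaddress.ip_network(candidate)
    return any(subnet.overlaps(net) for net in existing_networks)

for proposed in ["10.0.8.0/24", "172.16.128.0/24"]:
    print(proposed, "CONFLICT" if overlaps_existing(proposed) else "ok")
```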

The following table shows a list of the required IPv4 ranges.

Identity

At deployment, you must select either Azure Active Directory (Azure AD) or Active Directory Federation Services (ADFS) to provide identities for Azure Stack. This decision cannot be reversed after deployment.

Let’s address the most obvious scenario first: If you run Azure Stack in a completely disconnected scenario (on a boat, for example) then your only choice is ADFS. Also, when deploying with ADFS and Internet connectivity, you can use the consumption-based licensing model of Azure Stack, including marketplace syndication.

Automation in Azure Stack federates with an existing ADFS deployment to enable the use of existing on-premises identities.

This diagram illustrates the integration:

IT system management

As mentioned earlier, if you don’t integrate Azure Stack with your datacenter, full IT life cycle management is not possible. Azure Stack does not manage your datacenter, but rather it’s a component of your datacenter to be managed. To enable management, it’s key to integrate Azure Stack with your IT system management processes.

An Azure Stack system consists of three high-level areas: Azure Stack software, physical machines, and physical network switches.

Each area offers a different protocol to retrieve events. For more details see the diagram below:

To accelerate IT system management integration, Microsoft offers a management pack for System Center Operations Manager. Microsoft has also co-engineered a plug-in with Nagios to accelerate integration with open-source-based monitoring solutions.

For more information, please read the blog post titled, Management Pack for Microsoft Azure Stack.

Publishing

Azure Stack endpoints can be published in multiple ways, ranging from direct exposure to the Internet (with or without a firewall device) to a traditional DMZ architecture that includes network address translation.

This illustration shows how Azure Stack can be published directly to the internet with an existing firewall.

Azure Stack handles internal and external DNS zones, which can integrate with existing DNS server environments. Azure Stack includes automation to ensure that name resolution works for your datacenter resources. However, Azure Stack DNS zones are not automatically resolvable from your existing network; making them resolvable requires configuration (for example, conditional forwarding) based on Microsoft's recommendations.

SSL certificates, which can be issued by a publicly trusted root certificate authority or by an enterprise certification authority, must be provided at installation. Furthermore, the deployment accepts either wildcard certificates or multiple certificates, each with a unique name.

Usage

Azure Stack usage data is tracked under the Azure subscription you provided for registration. You can view your usage data as part of your Azure subscription. In addition, Azure Stack offers an API that allows you to retrieve the same usage data that is sent to Azure. This API can be used to integrate with billing systems or customer portals.

For more information, please see the Provider Resource Usage API page. 

Backup

Azure Stack Backup is divided into two scopes: infrastructure backup and tenant backup.

The Azure Stack infrastructure backup requires an SMB file share to operate. This SMB share must be on an existing storage solution (a Windows-based file server or a third-party device). This is considered an "off-stamp" backup, which enables restoration of infrastructure components or a full restoration in case of failure.

For tenant data, Azure Stack supports backup of Windows- and Linux-based applications deployed in IaaS Azure Resource Manager virtual machines. Backup products with access to the guest operating system (OS) can protect file, folder, OS state, and application data. You can use Microsoft products like Azure Backup and System Center Data Protection Manager to back up data on-premises, to a service provider, or directly to Azure. This approach gives you flexibility and choice in protecting your applications and data.

You can read more about backup and disaster recovery for Azure Stack in this blog post.

Security

If you haven’t had a chance to read the Azure Stack security blog post, I highly recommend you do so. We have designed and built Azure Stack with security in mind, from the ground up, including the ability to exchange audit logs from physical devices and Azure Stack internal components with existing security systems.

The Health Resource Provider API in Azure Stack allows you to retrieve all security logs and make them available to security solutions for processing. For physical devices, RADIUS and TACACS are supported for controlling device authentication and auditing. Azure Stack uses servers deployed in your datacenter to provide these services, including sending syslog messages to existing syslog servers.
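To sketch one end of that pipeline, forwarding an audit event to an existing syslog collector can be done with Python's standard library alone. The collector address below is a placeholder; point it at your own syslog server:

```python
import logging
import logging.handlers
import socket

def send_audit_event(message: str, host: str = "127.0.0.1", port: int = 514) -> None:
    """Forward one audit message to a syslog collector over UDP.

    The default address is a placeholder for illustration; substitute the
    host and port of your existing syslog server.
    """
    handler = logging.handlers.SysLogHandler(address=(host, port),
                                             socktype=socket.SOCK_DGRAM)
    logger = logging.getLogger("azurestack.audit")
    logger.addHandler(handler)
    try:
        logger.warning(message)
    finally:
        logger.removeHandler(handler)
        handler.close()
```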

Summary

You may have noticed I did not cover solution requirements in terms of power, cooling, rack units, and other typical datacenter requirements. Our OEM partners have those details available, including sizing tools to help you choose the right solution.

More information

Will you be attending Microsoft Ignite this year in Orlando? Check out these two sessions for information about Azure Stack Datacenter Integration:

Integrating Azure Stack into your Datacenter
Microsoft Azure Stack usage and billing

Lastly, the Azure Stack team is extremely customer focused, and we are always looking for new customers to talk to. If you are passionate about hybrid cloud and want to talk with the team building Azure Stack at Ignite, please sign up for our customer meetup.
Source: Azure

Preview: SQL Transparent Data Encryption (TDE) with Bring Your Own Key support

We’re glad to announce the preview of Transparent Data Encryption (TDE) with Bring Your Own Key (BYOK) support for Azure SQL Database and Azure SQL Data Warehouse! Now you can have control of the keys used for encryption at rest with TDE by storing these master keys in Azure Key Vault.

TDE with BYOK support gives you increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties.

When you use TDE, your data is encrypted at rest with a symmetric key (called the database encryption key) stored in the database or data warehouse distribution. To protect this data encryption key (DEK) in the past, you could only use a certificate that the Azure SQL Service managed. Now, with BYOK support for TDE, you can protect the DEK with an asymmetric key that is stored in Key Vault. Key Vault is a highly available and scalable cloud-based key store which offers central key management, leverages FIPS 140-2 Level 2 validated hardware security modules (HSMs), and allows separation of management of keys and data, for additional security.

All the features of Azure SQL Database and SQL Data Warehouse work with TDE with BYOK support, and you can start enabling TDE with a key from Key Vault today using the Azure portal, PowerShell, or the REST API.

In the Azure Portal, we’ve kept the experience simple. Let’s go over three common scenarios.

Enabling TDE

We’ve kept the same simple experience for enabling TDE on the database or data warehouse.

Setting a TDE Protector

On the server, you can now choose to use your own key as the TDE Protector for the databases and data warehouses on your server. Browse through your key vaults to select an existing key or create a new key in Key Vault.

 

Rotating Your Keys

You can rotate your TDE Protector through Key Vault, by adding a new version to the current key. You can also switch the TDE Protector to another key in Key Vault or back to a service-managed certificate at any time. The Azure SQL service will pick up these changes automatically. Rotating the TDE Protector is a fast online process: instead of re-encrypting all data, the rotation re-encrypts the DEK on each database and data warehouse distribution using the TDE Protector.
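The reason rotation is fast is envelope encryption: the data stays encrypted under the DEK, and rotating the TDE Protector only re-wraps the DEK. A toy illustration of the principle follows; the XOR "cipher" here is deliberately simplistic and not real cryptography (TDE uses proper symmetric and asymmetric algorithms), it only shows why the bulk data never needs re-encrypting:

```python
import hashlib
import secrets

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' with a SHA-256 counter keystream -- illustration only."""
    out = bytearray()
    for i, b in enumerate(data):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.append(b ^ block[i % 32])
    return bytes(out)

# Encrypt data once under a random DEK; wrap the DEK under the protector (KEK).
dek = secrets.token_bytes(32)
protector_v1 = secrets.token_bytes(32)
ciphertext = keystream_xor(b"sensitive row data", dek)
wrapped_dek = keystream_xor(dek, protector_v1)

# Rotation: unwrap with the old protector, re-wrap with the new version.
# The bulk ciphertext is never touched, which is why rotation is fast.
protector_v2 = secrets.token_bytes(32)
wrapped_dek = keystream_xor(keystream_xor(wrapped_dek, protector_v1), protector_v2)

# The decryption path after rotation still recovers the data.
assert keystream_xor(ciphertext, keystream_xor(wrapped_dek, protector_v2)) == b"sensitive row data"
```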

Integrating BYOK support for SQL TDE allows you to leverage the benefits of TDE as an encryption feature and Key Vault as an external key management service.

You can get started by visiting the Azure Portal or the how-to guide using PowerShell today. To learn more about the feature including best practices, watch our Channel 9 video or visit Transparent Data Encryption with Bring Your Own Key support.

Tell us what you think about TDE with BYOK by visiting the SQL Database and SQL Data Warehouse forums.
Source: Azure

Default compatibility level 140 for Azure SQL databases

As of today, the default compatibility level for new databases created in Azure SQL Database is 130. Very soon, we’ll be changing the Azure SQL Database default compatibility level for newly created databases to 140.

The alignment of SQL Server versions to default compatibility levels is as follows:

100: in SQL Server 2008 and Azure SQL Database
110: in SQL Server 2012 and Azure SQL Database
120: in SQL Server 2014 and Azure SQL Database
130: in SQL Server 2016 and Azure SQL Database
140: in SQL Server 2017 and Azure SQL Database

For details on what compatibility level 140 specifically enables, please see the blog post Public Preview of Compatibility Level 140 for Azure SQL Database. 

Once this new default goes into effect, if you still wish to use database compatibility level 130 or lower, please follow the documented instructions to view or change the compatibility level of a database. For example, you may wish to ensure that newly created databases use the same compatibility level as your existing databases in Azure SQL Database, to guarantee consistent query optimization behavior across development, QA, and production versions of your databases.

We recommend that database configuration scripts explicitly designate COMPATIBILITY_LEVEL rather than rely on the defaults, in order to ensure consistent application behavior.

For new databases supporting new applications, we recommend using the latest compatibility level, 140. For pre-existing databases running at lower compatibility levels, the recommended workflow for upgrading the query processor to a higher compatibility level is detailed in the article Change the Database Compatibility Mode and Use the Query Store. Note that this article refers to compatibility level 130 and SQL Server, but the same methodology applies for moves to 140 for SQL Server and Azure SQL DB.

To determine the current compatibility level of your database, execute the following Transact-SQL statement:

SELECT compatibility_level
FROM [sys].[databases]
WHERE [name] = 'Your Database Name';

For newly created databases, if you wish to use database compatibility level 130, or lower, instead of the new 140 default, execute ALTER DATABASE. For example:

ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = 130;

Databases created prior to the new compatibility level default change will not be affected and will maintain their current compatibility level. Also note that Azure SQL Database Point in Time Restore will preserve the compatibility level that was in effect when the full backup was performed. 
Source: Azure

Announcing Azure Data Lake Store Capture Provider for Event Hubs Capture

Event Hubs Capture reached general availability in June 2017. We are now adding Azure Data Lake Store as a new Capture provider for this feature. Yes, you can now choose your Azure Data Lake Store to capture events from Event Hubs.

Event Hubs Capture addresses key data-streaming scenarios such as long-term data retention and downstream micro-batch processing. With Capture enabled on your event hub, data is pulled directly from Event Hubs into your Azure Data Lake Store; Capture manages all the compute and downstream processing required to do this. Create your Azure Data Lake Store, set up the appropriate permissions for your Capture-enabled event hub, and you will see how easy it is to stream data into Azure.

How can I enable this provider?

Azure Data Lake Store provider can be enabled in one of the following ways:

In the Azure portal, by selecting Azure Data Lake Store as the Capture provider
Through Azure Resource Manager templates

Once Capture is enabled with Azure Data Lake Store as the provider, choose your time and size window, and you will see your events being captured in your chosen destination.
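For the Resource Manager route, the event hub resource carries a captureDescription block that holds the provider, the time and size windows, and the destination. A hedged sketch of what that fragment looks like for a Data Lake Store destination; the property names should be verified against the current template reference, and the subscription ID, account name, and folder path below are placeholders:

```json
"captureDescription": {
  "enabled": true,
  "encoding": "Avro",
  "intervalInSeconds": 300,
  "sizeLimitInBytes": 314572800,
  "destination": {
    "name": "EventHubArchive.AzureDataLakeStore",
    "properties": {
      "DataLakeSubscriptionId": "<subscription-id>",
      "DataLakeAccountName": "<datalake-account>",
      "DataLakeFolderPath": "/eventhub-capture",
      "ArchiveNameFormat": "{Namespace}/{EventHub}/{PartitionId}/{Year}/{Month}/{Day}/{Hour}/{Minute}/{Second}"
    }
  }
}
```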

Event Hubs Capture offers simple setup, reduced cost of ownership, and no configuration overhead, so you can focus on your apps while getting near-real-time batch analytics.

Unleash the power of Azure Data Lake Store for your big data requirements, whether for real-time or batch processing and visualization. With Event Hubs Capture streaming your data in, you can optimize your data analysis and visualization.

Enjoy this new provider and refer to this article for more details on enabling your Azure Data Lake Store to capture events from your event hub.

Happy eventing!

Next Steps

Get started with Event Hubs Capture

Learn more about Azure Data Lake Store
Source: Azure

Hortonworks extends IaaS offering on Azure with Cloudbreak

This blog post is co-authored by Peter Darvasi, Engineer, Hortonworks.

We are excited to announce the availability of Cloudbreak for Hortonworks Data Platform on Azure Marketplace. Hortonworks Data Platform (HDP) is an enterprise-ready, open source Apache Hadoop distribution. With Cloudbreak, you can easily provision, configure, and scale HDP clusters in Azure. Cloudbreak is designed for the following use cases:

Create clusters which you can fully control and customize to best fit your workload
Create on-demand clusters to run specific workloads, with data persisted in Azure Blob Storage or Azure Data Lake Store
Create, manage, and scale your clusters intuitively using Cloudbreak UI, or automate with Cloudbreak Shell or API
Automatically configure Kerberos and Apache Knox to secure your cluster

When you deploy Cloudbreak, it installs a “controller” VM which runs the Cloudbreak application. You can use the controller to launch and manage clusters. The following diagram illustrates the high-level architecture of Cloudbreak and HDP on Azure:

Cloudbreak lets you manage all your HDP clusters from a central location. You can configure your clusters with all the controls that Azure and HDP have to offer, and you can automate and repeat your deployments with:

Infrastructure templates for specifying compute, storage, and network resources in the cloud
Ambari blueprints for configuring Hadoop workloads
Custom scripts that you can run before or after cluster creation
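To illustrate the blueprint piece above: an Ambari blueprint is a JSON document that names an HDP stack and assigns Hadoop components to host groups, which Cloudbreak then maps onto cluster nodes. A minimal hedged sketch; the blueprint name, stack version, components, and cardinalities below are placeholder choices to adapt:

```json
{
  "Blueprints": {
    "blueprint_name": "hdp-small",
    "stack_name": "HDP",
    "stack_version": "2.6"
  },
  "host_groups": [
    {
      "name": "master",
      "cardinality": "1",
      "components": [ { "name": "NAMENODE" }, { "name": "RESOURCEMANAGER" } ]
    },
    {
      "name": "worker",
      "cardinality": "3",
      "components": [ { "name": "DATANODE" }, { "name": "NODEMANAGER" } ]
    }
  ]
}
```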

In addition, Cloudbreak on Azure features the following unique capabilities:

Easily install Cloudbreak by following a UI wizard on Azure Marketplace
Choose among Azure Blob Storage, Azure Data Lake Store, and Managed Disks attached to the cluster nodes to persist your data
Follow a simple Cloudbreak wizard to automate the creation of an Azure Active Directory Service Principal for Cloudbreak to manage your Azure resources
Enable high availability with Azure Availability Set
Deploy clusters in new or existing Azure VNet

Getting started

Go to Azure Marketplace and follow the wizard to install Cloudbreak. 
Once the deployment has succeeded, retrieve the public DNS name of the Cloudbreak VM. 

Open https://&lt;public DNS name&gt; in your browser, and you will see a browser warning. This is because, by default, no certificate is set for this HTTPS site. You can still continue to the Cloudbreak web UI and follow the wizard to provision clusters. In a production environment, we recommend that you set up a valid certificate and disable the public IP. 

Additional resources

For a step-by-step guide, visit Cloudbreak for Hortonworks Data Platform on Azure Marketplace documentation.
Get your questions answered at Hortonworks Community Connection.
To learn more about Cloudbreak and the Hortonworks Data Platform, visit www.hortonworks.com.

Source: Azure

Debug Spark Code Running in Azure HDInsight from Your Desktop

This month’s IntelliJ HDInsight Tools release delivers a robust remote debugging engine for Spark running in the Azure cloud. The Azure Toolkit for IntelliJ is available for users running Spark to perform interactive remote debugging directly against code running in HDInsight.

Debugging big data applications is a longstanding pain point. The data-intensive, distributed, scalable computing environment in which big data apps run is inherently difficult to troubleshoot, and this is no different for Spark developers. There is little tooling support for debugging such scenarios, leaving developers with manual, brute-force approaches that are cumbersome and come with limitations. Common approaches include local debugging against sample data, which limits data size; analysis of log files after the app has completed, which requires manual parsing of unwieldy logs; and use of a Spark shell for line-by-line execution, which does not support breakpoints.

Azure Toolkit for IntelliJ addresses these challenges by allowing the debugger to attach to Spark processes on HDInsight for direct remote debugging. Developers connect to the HDInsight cluster at any time, leverage IntelliJ built-in debug capabilities, and automatically collect log files. The steps for this interactive remote debugging are the same ones developers are familiar with from debugging one-box apps. Developers do not need to know the configurations of the cluster, nor understand the location of the logs.

To learn more, watch this demo of HDInsight Spark Remote Debugging.

Customer key benefits

Use IntelliJ to run and debug Spark application remotely on an HDInsight cluster anytime via “Run->Edit Configurations”.
Use IntelliJ built-in debugging capabilities, such as conditional breakpoints, to quickly identify data-related errors. Developers can inspect variables, watch intermediate data, step through code, and finally edit the app and resume execution – all against Azure HDInsight clusters with production data.
Set a breakpoint for both driver and executor code. Debugging executor code lets developers detect data-related errors by viewing RDD intermediate values, tracking distributed task operations, and stepping through execution units.
Set a breakpoint in Spark external libraries allowing developers to step into Spark code and debug in the Spark framework.
View both driver and executor code execution logs in the console panel (see the “Driver Tab” and “Executor Tab”).

How to start debugging

The initial configuration to connect to your HDInsight Spark cluster for remote debugging is as simple as a few clicks in the advanced configuration dialog. You can set up a breakpoint on the driver code and executor code in order to step through the code and view the execution logs. To learn more, read the user guide Spark Remote Debug through SSH.

How to install or update

You can get the latest bits from the IntelliJ plugin repository by searching for "Azure Toolkit." IntelliJ will also prompt you to update if you have already installed the plugin.

For more information, visit the following resources:

User Guide: Use HDInsight Tools in Azure Toolkit for IntelliJ to create Spark applications for HDInsight Spark Linux cluster
Documentation: Spark Remote Debug through SSH
Documentation: Spark Remote Debug through VPN
Spark Local Run: Use HDInsight Tools for IntelliJ with Hortonworks Sandbox
Create Scala Project (Video): Create Spark Scala Applications
Remote Debug (Video): Use Azure Toolkit for IntelliJ to debug Spark applications remotely on HDInsight Cluster

Learn more about today’s announcements on the Azure blog and Big Data blog. Discover more Azure service updates.   

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.
Source: Azure

Architecting Distributed Cloud Applications (free video course)

Learn how to architect distributed cloud applications with the correct developer mindset using the right technologies and the best cloud patterns. This technology-agnostic course begins by explaining the benefits of distributed cloud applications with an emphasis on maintaining high-availability and scalability in a cost-effective way, while also dealing with inevitable hardware and software failures.

Topics include:

orchestrators
transactions
auto-scaling
backup and restore
CDNs
containers
eventual consistency
Saga pattern
service API contracts
replicas
configuration
load balancers
messaging
versioning (code, APIs, and data schemas)
DNS
leader election
data caching
microservices
object and file services
SLAs
partitioning
12-factor apps
event sourcing
relational and non-relational databases
CQRS
data consistency
concurrency control
network
optimistic concurrency
proxies

This course is for anyone considering or actively working on a distributed cloud application. It is designed to provide you with a thorough understanding of these concepts, the various pros and cons of specific technologies, and the resilient patterns that are heavily used by distributed cloud applications.

Where to find the free 6.5 hour course:

YouTube
edX.org (with supplemental reading materials, review questions, and hands-on labs). You can also get a verified certificate (to show employers) for $99.

About the instructor

Jeffrey Richter is a Software Architect on Microsoft’s Azure team. He is also a co-founder of Wintellect, a software consulting and training company. He has authored many videos available on WintellectNOW, has spoken at many industry conferences, and is the author of several best-selling Windows and .NET Framework programming books.
Source: Azure

Azure Monitor now available in Azure Government

Getting ahead of issues before they impact end users is a key goal of any IT organization. One important tool in this process is the use of monitoring and analytics services, which help ensure that you get up-to-date information on the overall health of your cloud environment. We are happy to announce that we have expanded the portfolio of management services with the general availability of Azure Monitor in Azure Government.

With Azure Monitor, you can now consume monitoring metrics and logs in near real time, within the portal and via APIs, to gain more visibility into the state and performance of your resources. You can configure alert rules to get notified, or to take automated actions, on issues impacting your resources. Azure Monitor enables analytics, troubleshooting, and a unified dashboarding experience within the portal, in addition to a wide range of product integrations via APIs and data-export options. All of this is now enabled for Azure Government.

With this release, we are also providing new alerting and notification options, including custom email and webhooks. These allow you to receive notifications for issues with specific Azure services, as well as service health notifications.

Azure Monitor is not just useful for the administration of your Azure resources. The centralized logging and alerting helps achieve compliance with many NIST SP 800-53 security controls that support CJIS, FedRAMP, and the DoD compliance requirements. The data from Azure Monitor can be queried, archived, or analyzed to provide an audit trail and meet key monitoring controls.
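To sketch what querying that audit trail looks like over REST: the Activity Log is exposed under the microsoft.insights provider of the Azure Resource Manager endpoint, and Azure Government uses a different management host than public Azure. The host, api-version, and filter syntax below are assumptions to verify against the Azure Monitor REST reference:

```python
import urllib.parse

def activity_log_url(subscription_id: str, start_iso: str, end_iso: str,
                     base: str = "https://management.usgovcloudapi.net") -> str:
    """Build an Activity Log REST query URL for the given time window."""
    path = (f"{base}/subscriptions/{subscription_id}"
            "/providers/microsoft.insights/eventtypes/management/values")
    flt = f"eventTimestamp ge '{start_iso}' and eventTimestamp le '{end_iso}'"
    query = urllib.parse.urlencode({"api-version": "2015-04-01", "$filter": flt})
    return f"{path}?{query}"

print(activity_log_url("00000000-0000-0000-0000-000000000000",
                       "2017-09-01T00:00:00Z", "2017-09-02T00:00:00Z"))
```

The resulting URL can then be called with a bearer token obtained from the Azure Government Azure AD endpoint.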

Learn more about Azure Monitor by visiting the documentation page. For a detailed list of Azure Monitor features available in the different Azure Government datacenter regions, visit the Azure Government Monitoring + Management page.
Source: Azure

Security and Compliance in Azure Stack

Security posture and compliance validation roadmap for Azure Stack

Security considerations and compliance regulations are important drivers for people that choose to control their infrastructure using private/hybrid clouds while using IaaS and PaaS technologies to modernize their applications. Azure Stack was designed for these scenarios, and as a result, security and compliance are areas of major investment for Azure Stack.

Before we started implementing, we asked our customers what they expect from security in a solution like Azure Stack. Not surprisingly, most of the people we talked to said they would strongly favor a solution that comes already hardened, with security features specifically designed and validated for it. People also said that compliance paperwork is a top frustration.

We listened.

This post will walk you through the security posture of the Azure Stack infrastructure and how it addresses the feedback we received. It also describes the work that we’ve done to accelerate the compliance certification process and reduce paperwork overhead.

Security posture

The security posture of Azure Stack is designed based on two principles:

Assume breach.
Hardened by default.

Assume breach is the modern approach to security, where the focus extends from not only trying to prevent an intrusion, but to also detect and contain a breach. In other words, we not only included measures to prevent a breach, but we also focused on solid detection capabilities. We built the system so that if one component gets compromised, it does not directly result in the entire system getting taken over.

Because of the elevated set of permissions associated with it, the administrator role is typically the most often attacked. Following the assume breach principle, we created a predefined, constrained administration experience so that if an admin credential is compromised, the attacker can only perform actions for which the system is designed, instead of having unrestricted access to every component in the infrastructure. Doubling down on the same concept, we removed any customer-facing domain admin accounts in Azure Stack. If you want to further break down the admin role, Azure Stack offers very fine-grained, role-based access control (RBAC), allowing for complete control of the capabilities exposed to each role.

The Azure Stack infrastructure is completely sealed, both from a permission point of view (there is no account to log into it[1]) and from a network point of view, making it very hard for unauthorized users to get in. Network access control lists (ACLs) are applied at multiple levels of the infrastructure, blocking all unauthorized incoming traffic and all unnecessary communication between infrastructure components. The ACL policy is very simple: block everything unless it is necessary.

To boost detection capabilities, we enabled security and audit logs for each infrastructure component and store them centrally in a storage account. These logs offer great visibility into the infrastructure, and security solutions (for example, Security Information and Event Management systems) can be attached to monitor for intrusion.

Since we define the hardware and software configuration of Azure Stack, we were able to harden the infrastructure by design, the hardened-by-default principle. In addition to following industry best practices like the Microsoft SDL, Azure Stack comes with encryption at-rest for both infrastructure and tenant data, and encryption in-transit with TLS 1.2 for infrastructure network, Kerberos-based authentication of infrastructure components, military-level OS security baseline (based on the DISA STIG), automated rotation of internal secrets, disabled legacy protocols (such as NTLM, SMBv1), Secure Boot, UEFI, and TPM 2.0. Additionally, we enabled several Windows Server 2016 security features like Credential Guard (credential protection against Pass-the-Hash attacks), Device Guard (software whitelisting), and Windows Defender (antimalware).

There is no security posture without a solid, continuous servicing process. For this reason, in Azure Stack, we strongly invested in an orchestration engine that can apply patches and updates seamlessly across the entire infrastructure.

Thanks to the tight partnership with the Azure Stack OEM partners, we were also able to extend the same security posture to the OEM-specific components, like the Hardware Lifecycle Host and the software running on top of it. This ensures that Azure Stack has a uniform, solid security posture across the entire infrastructure, on top of which customers can build and secure their application workloads.

To add an additional layer of due diligence, we brought in two external vendors to perform extended penetration testing of the entire integrated system, including the OEM-specific components.

We also understand that our customers will want to protect their Azure Stack deployments with additional third-party security software. We are working with some of the major players in the industry to make sure that their software can easily interoperate with Azure Stack. If there is specific security software that you want Azure Stack to work with, please fill out this survey.

Regulatory compliance

Customers told us that compliance paperwork is a major frustration. To alleviate that, Azure Stack is going through a formal assessment with a 3PAO (Third-Party Assessment Organization). The outcome of this effort will be documentation on how the Azure Stack infrastructure meets the applicable controls from several major compliance standards. Customers will be able to use this documentation to jump-start their certification process. For the first round of assessments, we are targeting the following standards: PCI DSS and the CSA Cloud Controls Matrix. The former addresses the payment card industry, while the latter gives comprehensive mapping across multiple standards.

Concurrently, we have also started the process to certify Azure Stack for Common Criteria. Given the length of the process, this certification will be completed sometime early next year.

It is important to clarify that Microsoft will not certify Azure Stack for those standards, except for Common Criteria, because several controls within those standards are the customer’s responsibility, that is, people- and process-related controls. Microsoft is formally validating that Azure Stack meets the applicable controls. As a result of this validation, Microsoft, via the 3PAO, will produce pre-compiled documentation that explains how Azure Stack meets the applicable controls.

In the coming months, Azure Stack will continue to expand the portfolio of standards to validate against. The decision about which standard to prioritize will be based on customer demand. To express your preference about which standard Azure Stack should prioritize, please fill out this survey.

More information

Are you attending Microsoft Ignite this year in Orlando? Do not miss the session on Azure Stack Security and Compliance with our chief architect Jeffrey Snover and myself:

BRK3089 – Microsoft Azure Stack security and compliance

Lastly, the Azure Stack team is extremely customer focused, and we are always looking for new customers to talk to. If you are passionate about hybrid cloud and want to talk with the team building Azure Stack at Ignite, please sign up for our customer meetup.

[1] Except for some predefined privileged operations exposed via a PowerShell JEA endpoint.
Source: Azure