Fileless attack detection for Linux in preview

This blog post was co-authored by Aditya Joshi, Senior Software Engineer, Enterprise Protection and Detection.

Attackers are increasingly employing stealthier methods to avoid detection. Fileless attacks exploit software vulnerabilities, inject malicious payloads into benign system processes, and hide in memory. These techniques minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions.

To counter this threat, Azure Security Center released fileless attack detection for Windows in October 2018. Our blog post from 2018 explains how Security Center can detect shellcode, code injection, payload obfuscation techniques, and other fileless attack behaviors on Windows. Our research indicates the rise of fileless attacks on Linux workloads as well.

Today, Azure Security Center is happy to announce a preview for detecting fileless attacks on Linux.  In this post, we will describe a real-world fileless attack on Linux, introduce our fileless attack detection capabilities, and provide instructions for onboarding to the preview. 

Real-world fileless attack on Linux

One common pattern we see is attackers injecting payloads from packed malware on disk into memory and deleting the original malicious file from the disk. Here is a recent example:

An attacker infects a Hadoop cluster by identifying the service running on a well-known port (8088) and uses Hadoop YARN unauthenticated remote command execution support to achieve runtime access on the machine. Note: the owner of the subscription could have mitigated this stage of the attack by configuring Security Center just-in-time (JIT) VM access.
The attacker copies a file containing packed malware into a temp directory and launches it.
The malicious process unpacks the file using shellcode to allocate a new dynamic executable region of memory in the process’s own memory space and injects an executable payload into the new memory region.
The malware then transfers execution to the injected ELF entry point.
The malicious process deletes the original packed malware from disk to cover its tracks. 
The injected ELF payload contains shellcode that listens for incoming TCP connections that transmit the attacker’s instructions.
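The deletion step in this sequence leaves a telltale sign that memory-oriented detection can key on: a running process whose on-disk executable no longer exists. The sketch below is a minimal illustration of that one indicator (it is not Security Center's implementation) using the Linux procfs convention that `/proc/<pid>/exe` points at a path suffixed with " (deleted)" when the backing file has been unlinked:

```python
import os

def find_deleted_executables(proc_root="/proc"):
    """Return (pid, exe_path) pairs for processes whose on-disk
    executable has been unlinked -- a common fileless-malware indicator."""
    suspects = []
    if not os.path.isdir(proc_root):
        return suspects  # no procfs on this system
    for pid in filter(str.isdigit, os.listdir(proc_root)):
        try:
            exe = os.readlink(os.path.join(proc_root, pid, "exe"))
        except OSError:
            continue  # kernel thread, permission denied, or process already exited
        if exe.endswith(" (deleted)"):
            suspects.append((int(pid), exe))
    return suspects

if __name__ == "__main__":
    for pid, exe in find_deleted_executables():
        print(f"PID {pid} is running from a deleted file: {exe}")
```

A real scanner also has to inspect anonymous executable memory regions, which this simple check does not cover.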

This attack is difficult for scanners to detect. The payload is hidden behind layers of obfuscation and only present on disk for a short time.  With the fileless attack detection preview, Security Center can now identify these kinds of payloads in memory and inform users of the payload’s capabilities.

Fileless attack detection capabilities

Like fileless attack detection for Windows, this feature scans the memory of all processes for evidence of fileless toolkits, techniques and behaviors. Over the course of the preview, we will be enabling and refining our analytics to detect the following behaviors of userland malware:

Well-known toolkits and crypto mining software. 
Shellcode, injected ELF executables, and malicious code in executable regions of process memory.
LD_PRELOAD-based rootkits that preload malicious libraries.
Elevation of privilege of a process from non-root to root.
Remote control of another process using ptrace.
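Some of these behaviors can be approximated with simple heuristics. For instance, LD_PRELOAD-based rootkits rely on the dynamic linker's preload mechanism, so one coarse indicator is any library configured for preloading at all, via either the `LD_PRELOAD` variable or `/etc/ld.so.preload`. A hypothetical sketch of such a check (real detection logic is far more involved and must also validate legitimate preloads):

```python
import os

def preload_indicators(environ=None, preload_file="/etc/ld.so.preload"):
    """List libraries configured for preloading via LD_PRELOAD or
    /etc/ld.so.preload -- a common vector for userland rootkits."""
    environ = os.environ if environ is None else environ
    libs = []
    # LD_PRELOAD holds a colon- or space-separated list of shared objects
    libs.extend(environ.get("LD_PRELOAD", "").replace(":", " ").split())
    # The system-wide preload file lists one library per line; '#' starts a comment
    if os.path.isfile(preload_file):
        with open(preload_file) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()
                if line:
                    libs.append(line)
    return libs

if __name__ == "__main__":
    # Example with a crafted environment rather than the live one
    print(preload_indicators({"LD_PRELOAD": "/tmp/evil.so"}, preload_file="/nonexistent"))
```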

In the event of a detection, you receive an alert in the Security alerts page. Alerts contain supplemental information such as the kind of techniques used, process metadata, and network activity. This enables analysts to have a greater understanding of the nature of the malware, differentiate between different attacks, and make more informed decisions when choosing remediation steps.


The scan is non-invasive and does not affect the other processes on the system.  The vast majority of scans run in less than five seconds. The privacy of your data is protected throughout this procedure as all memory analysis is performed on the host itself. Scan results contain only security-relevant metadata and details of suspicious payloads.

Getting started

To sign up for this specific preview, or our ongoing preview program, indicate your interest in the "Fileless attack detection preview."

Once you choose to onboard, this feature is automatically deployed to your Linux machines as an extension to the Log Analytics agent for Linux (also known as the OMS agent), which supports the Linux OS distributions described in this documentation. This solution supports Azure, cross-cloud, and on-premises environments. Participants must be enrolled in the Standard or Standard Trial pricing tier to benefit from this feature.

To learn more about Azure Security Center, visit the Azure Security Center page.
Source: Azure

Burst 4K encoding on Azure Kubernetes Service

Burst encoding in the cloud with Azure and Media Excel HERO platform.

Content creation has never been as in demand as it is today. Both professional and user-generated content have increased exponentially over the past years. This puts a lot of stress on media encoding and transcoding platforms. Add the upcoming 4K and even 8K formats to the mix, and you need a platform that can scale with these variables. Azure cloud compute offers a flexible way to grow with your needs. Microsoft offers various tools and products to fully support on-premises, hybrid, or native cloud workloads. Azure Stack supports hybrid scenarios for your computing needs, and Azure Arc helps you manage hybrid setups.

Finding a solution

Generally, 4K/UHD live encoding is done on dedicated hardware encoder units, which cannot be hosted in a public cloud like Azure. With such dedicated hardware units hosted on-premises and pushing 4K into an Azure datacenter, the immediate problem we face is the need for a high-bandwidth network connection between the on-premises encoder unit and the Azure datacenter. Moreover, it is generally a best practice to ingest into multiple regions, which further increases the load on the network connection between the encoder and the Azure datacenter.

How do we ingest 4K content reliably into the public cloud?

Alternatively, we can encode the content in the cloud. If we can run 4K/UHD live encoding in Azure, its output can be ingested into Azure Media Services over the intra-Azure network backbone which provides sufficient bandwidth and reliability.

How can we reliably run and scale 4K/UHD live encoding on the Azure cloud as a containerized solution? Let's explore below. 

Azure Kubernetes Service

With Azure Kubernetes Service (AKS), Microsoft offers customers a managed Kubernetes platform: a hosted Kubernetes environment without the configuration burden of creating a cluster yourself, such as networking, cluster masters, and OS patching of the cluster nodes. It also comes with pre-configured monitoring that integrates seamlessly with Azure Monitor and Log Analytics, while still offering the flexibility to integrate your own tools. Furthermore, it is still plain vanilla Kubernetes and as such is fully compatible with any existing tooling you might have running on any other standard Kubernetes platform.

Media Excel encoding

Media Excel is an encoding and transcoding vendor offering physical appliance and software-based encoding solutions. Media Excel has been partnering with Microsoft for many years and engaging in Azure media customer projects. They are also listed as a recommended and tested contribution encoder for Azure Media Services for fMP4. There has also been work done by both Media Excel and Microsoft to integrate SCTE-35 timed metadata from the Media Excel encoder into an Azure Media Services origin, supporting Server-Side Ad Insertion (SSAI) workflows.

Networking challenge

With increasing picture quality like 4K and 8K, the burden on both compute and networking becomes a significant architecting challenge. In a recent engagement, we needed to architect a 4K live streaming platform for a customer with limited bandwidth capacity from their premises to one of our Azure datacenters. We worked with Media Excel to build a scalable containerized encoding platform on AKS, utilizing cloud compute and minimizing network latency between the encoder and the Azure Media Services packager. Multiple bitrates of the same source, with a top bitrate of up to 4Kp60 at 20 Mbps, are generated in the cloud and ingested into the Azure Media Services platform for further processing, including dynamic encryption and packaging. This setup enables the following benefits:

Instant scale to multiple AKS nodes
Eliminate network constraints between customer and Azure Datacenter
Automated workflow for containers and easy separation of concern with container technology
Increased level of security of high-quality generated content to distribution
Highly redundant capability
Flexibility to provide various types of Node pools for optimized media workloads

In this particular test, we proved that the intra-Azure network is extremely capable of shipping high-bandwidth, latency-sensitive 4K packets from a containerized encoder instance running in West Europe to both the East US and Hong Kong datacenter regions. This allows the customer to place an origin closer to them for further content conditioning.

Workflow:

An Azure Pipelines run is triggered to deploy onto the AKS cluster. The deployment YAML file (which you can find on GitHub) references the Media Excel container in Azure Container Registry.
AKS starts the deployment and pulls the container from Azure Container Registry.
During container start, a custom PHP script is loaded and the container is added to the HMS (Hero Management Service) and placed into the correct device pool and job.
The encoder loads the source and (in this case) pushes a 4K livestream into Azure Media Services.
Media Services packages the livestream into multiple formats and applies DRM (digital rights management).
Azure Content Delivery Network scales the livestream.
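Step 3 above, the lifecycle hook that registers a freshly started container with the Hero Management Service, is the glue of this workflow. The real implementation is a PHP script; the sketch below is a hypothetical Python equivalent, and the endpoint path, payload fields, and environment variable names are all assumptions for illustration only:

```python
import json
import os
from urllib import request

def build_registration(hostname, pool, job):
    """Assemble the (hypothetical) registration payload sent to HMS
    so the new encoder container lands in the right device pool and job."""
    return {"host": hostname, "device_pool": pool, "job": job}

def register_with_hms(payload, hms_url):
    """POST the registration payload to the management service on container start."""
    req = request.Request(
        hms_url + "/api/devices",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

if __name__ == "__main__":
    payload = build_registration(
        os.environ.get("HOSTNAME", "encoder-0"), pool="4k-live", job="channel-1"
    )
    print(payload)
```

A matching pre-stop hook would remove the container from HMS again, keeping the management portal in sync with the cluster.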

Scale through Azure Container Instances

With Azure Kubernetes Service you get the power of Azure Container Instances out of the box. Azure Container Instances are a way to instantly scale to pre-provisioned compute power at your disposal. When deploying Media Excel encoding instances to AKS, you can specify where these instances will be created. This offers you the flexibility to work with variables like increased density on cheaper nodes for low-cost, low-priority encoding jobs or more expensive nodes for high-throughput, high-priority jobs. With Azure Container Instances you can instantly move workloads to standby compute power without provisioning time. You only pay for the compute time, offering full flexibility for customer demand and future changes in platform needs. With Media Excel’s flexible live/file-based encoding roles you can easily move workloads across the different compute power offered by AKS and Azure Container Instances.

Azure DevOps pipeline to bring it all together

All the general benefits that come with containerized workloads apply in the following case. For this particular proof of concept, we created an automated deployment pipeline in Azure DevOps for easy testing and deployment. With a deployment YAML and a pipeline YAML we can easily automate deployment, provisioning, and scaling of a Media Excel encoding container. Once DevOps pushes the deployment job onto AKS, a container image is pulled from Azure Container Registry. Although container images can be bulky, by utilizing node-side caching of layers any subsequent container pull is greatly improved, down to seconds. With the help of Media Excel, we created a YAML file containing pre- and post-container lifecycle logic that adds and removes a container from Media Excel's management portal. This offers easy single-pane-of-glass management of multiple instances across multiple node types, clusters, and regions.

This deployment pipeline offers full flexibility to provision certain multi-tenant customers or job priorities on specific node types. This unlocks the possibility of provisioning encoding jobs on GPU-enabled nodes for maximum throughput or using cheaper generic nodes for low-priority jobs.

Azure Media Services and Azure Content Delivery Network

Finally, we push the 4K stream into Azure Media Services. Azure Media Services is a cloud-based platform that enables you to build solutions that achieve broadcast-quality video streaming, enhance accessibility and distribution, analyze content, and much more. Whether you're an app developer, a call center, a government agency, or an entertainment company, Media Services helps you create apps that deliver media experiences of outstanding quality to large audiences on today’s most popular mobile devices and browsers.

Azure Media Services is seamlessly integrated with Azure Content Delivery Network. With Azure Content Delivery Network we offer a true multi CDN with choices of Azure Content Delivery Network from Microsoft, Azure Content Delivery Network from Verizon, and Azure Content Delivery Network from Akamai. All of this through a single Azure Content Delivery Network API for easy provisioning and management. As an added benefit, all CDN traffic between Azure Media Services Origin and CDN edge is free of charge.

With this setup, we’ve demonstrated that Cloud encoding is ready to handle real-time 4K encoding across multiple clusters. Thanks to Azure services like AKS, Container Registry, Azure DevOps, Media Services, and Azure Content Delivery Network, we demonstrated how easy it is to create an architecture that is capable of meeting high throughput time-sensitive constraints.
Source: Azure

A secure foundation for IoT, Azure Sphere now generally available

Today Microsoft Azure Sphere is generally available. Our mission is to empower every organization on the planet to connect and create secured and trustworthy IoT devices. General availability is an important milestone for our team and for our customers, demonstrating that we are ready to fulfill our promise at scale. For Azure Sphere, this marks a few specific points in our development. First, our software and hardware have completed rigorous quality and security reviews. Second, our security service is ready to support organizations of any size. And third, our operations and security processes are in place and ready for scale. General availability means that we are ready to put the full power of Microsoft behind securing every Azure Sphere device.

The opportunity to release a brand-new product that addresses crucial and unmet needs is rare. Azure Sphere is truly unique: our product brings a new technology category to the Microsoft family, to the IoT market, and to the security landscape.

IoT innovation requires security

The International Data Corporation (IDC) estimates that by 2025 there will be 41.6 billion connected IoT devices. Put in perspective, that’s more than five times the number of people on earth today. When we consider why IoT is growing so rapidly, the astounding pace is being driven by industries and companies that are investing in IoT to pursue long-term, real-world impact. They’re looking to harness the power of the intelligent edge to make daily life effortless, to transform businesses, to create safer working and living conditions, and to address some of the world’s most pressing challenges.

Innovation, no matter how valuable, is not durable without a foundation of security. If the devices and experiences that promise to reshape the world around us are not built on a foundation of security, they cannot last. But when innovation is built on a secure foundation, you can be confident in its ability to endure and deliver value long into the future. Durable innovation requires future-proofing IoT investments by planning and investing in security upfront.

IoT security is complex and the threat landscape is dynamic. You have to operate under the assumption that attacks will happen; it's not a matter of if but when. With this in mind, we built Azure Sphere with multiple layers of protection and with continually improving security, so that it’s possible to limit the reach of an attack and to renew and enhance the security of a device over time. Azure Sphere delivers foundational security for durable innovation.

Security is complex, but it doesn’t have to be complicated

Many of the customers we talk to struggle to define the specific IoT security measures necessary for success. We’ve leveraged our deep Microsoft experience in security to develop a very clear view of what IoT security requires. We found that there are seven properties that every IoT device must have in order to be secured. These properties clearly outline the requirements for an IoT device with multiple layers of protection and continually improving security.

Any organization can use the seven properties as a roadmap for device security, but Azure Sphere is designed to give our customers a fast track to secured IoT deployments by having all seven properties built-in. It makes achieving layered, renewable security for connected devices an easy, affordable, no-compromise decision.

Azure Sphere is a fully realized security system that protects devices over time. It includes four components, three of which are powered by technology: the Azure Sphere-certified chips that go into every device, the Azure Sphere operating system (OS) that runs on the chips, and the cloud-based Azure Sphere Security Service.

Every Azure Sphere chip includes built-in Microsoft security technology to provide a dependable hardware root of trust and advanced security measures to guard against attacks. The Azure Sphere OS is designed to limit the potential reach of an attack and to make it possible to restore the health of the device if it’s ever compromised. We continually update our OS, proactively adding new and emerging protections. The Azure Sphere Security Service reaches out and guards every Azure Sphere device. It brokers trust for device-to-cloud and device-to-device communication, monitors the Azure Sphere ecosystem to detect emerging threats, and provides a pipe for delivering application and OS updates to each device. Altogether, these layers of security prevent any single point of failure that could leave a device vulnerable.

The fourth component may be the most important: our Azure Sphere team. These are some of the brightest minds in security and they’re dedicated to the security of every single Azure Sphere device. Our team is at work identifying emerging security threats, creating new countermeasures, and deploying them to every Azure Sphere device. We are fighting the security battle, so our customers don’t have to.

Security obsessed, customer-driven

The challenges of IoT device security that keep us up at night lead to the features and capabilities that give our customers peace of mind. It’s ambitious and demanding work. To realize the defense-in-depth approach we had to integrate multiple distinct technologies and their related engineering disciplines. Our team can’t think about any component in isolation. Instead, we work from a unified view of interoperability and dependencies that brings together our silicon, operating system, SDK, security services, and developer experience. Having a clear mission gives us a shared focus to strategize and collaborate across teams and technologies. By eliminating boundaries among technologies or engineering teams, we’ve been able to create a product far greater than the sum of its parts.

We also made a point to collaborate with our early customers. We’ve used public preview to learn and improve how we deliver security in a way that supports customer and partner needs. Working closely with a wide range of customers has helped shape our investments in hardware, features, capabilities, and services. To support customers across the breadth of their IoT journeys, we’ve built strong partnerships with leading silicon and hardware manufacturers. This gives customers more choice, more implementation options, and new offerings that can speed time to market. Right now, customers are using Azure Sphere to securely connect everything from espresso machines to datacenters. Between those examples, there’s a whole range of use cases for home and commercial appliances, industrial manufacturing equipment, smart energy solutions, and so much more.

Our customers across a wide array of industries are putting their trust in Azure Sphere as they connect and secure equipment, drive improvements, reduce costs, and mitigate the real risks that cyberattacks present.

The Azure Sphere commitment

“Culture eats strategy for breakfast.” Only when we ground everything we do in our culture, can we support what’s necessary to execute a brilliant strategy. What we’ve set out to achieve with Azure Sphere is ambitious and Microsoft is deeply invested in a culture that can support the most ambitious ideas. We apply a growth mindset to everything we do and always strive to learn more about our customers. We actively seek diversity and practice inclusion as we work together toward the ultimate pursuit of making a difference in the world. Guided by our belief that a strong culture is an essential foundation for bringing our vision to life, we’ve focused on culture from the beginning.

Bringing together the right technology and tactics as a single, end-to-end solution at scale is an amazing amount of work that requires true teamwork. We’ve built a team with a broad variety of backgrounds, experience, and expertise across multiple disciplines to work together on Azure Sphere. To support collaboration and creativity, we have nurtured the Microsoft cultural values by practicing fearlessness, trustworthiness, and kindness. Without a strong and positive culture, the work we do would be much harder and far less fun. Our culture gives us the confidence to tackle seemingly impossible challenges and the freedom to take bold steps forward.

Azure Sphere general availability is a culmination of the focus, commitment, and investment we make as a team to realize our shared vision. I’m incredibly proud of the Azure Sphere team and what we’ve built together. And I’m grateful to share this accomplishment with all of the teammates, partners, and customers who have been a part of our journey to general availability. We’re ready to be our customers’ trusted partner in device security so that they can focus on unleashing innovation in their products and in their businesses.

If you are interested in learning more about how Azure Sphere can help you securely fast track your next IoT innovation:

Visit the Azure Sphere website to learn more. 
See how customers like Starbucks are using Azure Sphere to drive efficiency and consistency in their retail operations.
Get started.

Source: Azure

Preview of Active Directory authentication support on Azure Files

We are excited to announce the preview of Azure Files Active Directory (AD) authentication. You can now mount your Azure Files using AD credentials with the exact same access control experience as on-premises. You may leverage an Active Directory domain service either hosted on-premises or on Azure for authenticating user access to Azure Files for both premium and standard tiers. Managing file permissions is also simple. As long as your Active Directory identities are synced to Azure AD, you can continue to manage the share level permission through standard role-based access control (RBAC). For directory and file level permission, you simply configure Windows ACLs (NTFS DACLs) using Windows File Explorer just like any regular file share. Most of you may have already synced on-premises Active Directory to Azure AD as part of Office 365 or Azure adoption and are ready to take advantage of this new capability today.

When you consider migrating file servers to the cloud, many may decide to keep the existing Active Directory infrastructure and move the data first. With this preview release, we made it seamless for Azure Files to work with your existing Active Directory, with no change in the client environment. You can log into an Active Directory domain-joined machine and access an Azure file share with a single sign-on experience. In addition, you can carry over all existing NTFS DACLs that have been configured on the directories and files over the years and have them continue to be enforced as before. Simply migrate your files with ACLs using common tools like robust file copy (robocopy), or orchestrate tiering from on-premises Windows file servers to Azure Files with Azure File Sync.

With AD authentication, Azure Files can better serve as the storage solution for Virtual Desktop Infrastructure (VDI) user profiles. Most commonly, you have set up the VDI environment with Windows Virtual Desktop as an extension of your on-premises workspace while continuing to use Active Directory to manage the hosting environment. By using Azure Files as the user profile storage, when a user logs into the virtual session, only the profile of the authenticated user is loaded from Azure Files. You don’t need to set up a separate domain service for managing storage access control for your VDI environment. Azure Files provides you the most scalable, cost-efficient, and serverless file storage solution for hosting user profile data. To learn more about using Azure Files for Windows Virtual Desktop scenarios, refer to this article.

What’s new?

Below is a summary of the key capabilities introduced in the preview:

Enable Active Directory Domain Services (AD DS) authentication for server message block (SMB) access. You can mount Azure file shares from Active Directory domain-joined machines either on-premises or on Azure using Active Directory credentials. Azure Files supports using Active Directory as the directory service for an identity-based access control experience for both premium and standard tiers. You can enable Active Directory authentication on self-managed or Azure File Sync-managed file shares.
Enforce share level and directory or file level permission. The existing access control experience continues to be enforced for file shares enabled for Active Directory authentication. You can leverage RBAC for share-level permission management, then persist or configure directory or file level NTFS DACLs using Windows File Explorer and icacls tools.
Support file migration from on-premises with ACL persistence over Azure File Sync. Azure File Sync now supports persisting ACLs on Azure Files in the native NTFS DACL format. You can choose to use Azure File Sync for seamless migration from on-premises Windows file servers to Azure Files. Existing files and directories tiered to Azure Files through Azure File Sync have their ACLs persisted in the native format.

Get started and share your experiences

You can create a file share in the preview supported regions and enable authentication with your Active Directory environment running on-premises or on Azure. Here are the documentation links to detailed guidance on the feature capabilities and step-by-step enablement.

As always, you can share your feedback and experience over email at azurefiles@microsoft.com. Post your ideas and suggestions about Azure Storage on our feedback forum.
Source: Azure

Azure Security Center for IoT RSA 2020 announcements

We announced the general availability of Azure Security Center for IoT in July 2019. Since then, we have seen a lot of interest from both our customers and partners. Our team has been working on enhancing the capabilities we offer our customers to secure their IoT solutions. As our team gets ready to attend the RSA conference next week, we are sharing the new capabilities we have in Azure Security Center for IoT.

As organizations pursue digital transformation by connecting vital equipment or creating new connected products, IoT deployments will get bigger and more common. In fact, the International Data Corporation (IDC) forecasts that IoT will continue to grow at double-digit rates until IoT spending surpasses $1 trillion in 2022. As these IoT deployments come online, newly connected devices will expand the attack surface available to attackers, creating opportunities to target the valuable data generated by IoT. Organizations are challenged with securing their IoT deployments end-to-end, from devices to applications and data, including the connections between them.

Why Azure Security Center for IoT?

Azure Security Center for IoT provides threat protection and security posture management designed for securing entire IoT deployments, including Microsoft and 3rd party devices. Azure Security Center for IoT is the first IoT security service from a major cloud provider that enables organizations to prevent, detect, and help remediate potential attacks on all the different components that make up an IoT deployment—from small sensors, to edge computing devices and gateways, to Azure IoT Hub, and on to the compute, storage, databases, and AI or machine learning workloads that organizations connect to their IoT deployments. This end-to-end protection is vital to secure IoT deployments.

Added support for Azure RTOS operating system

Azure RTOS is a comprehensive suite of real-time operating systems (RTOS) and libraries for developing embedded real-time IoT applications on microcontroller unit (MCU) devices. It includes Azure RTOS ThreadX, a leading RTOS with off-the-shelf support for most leading chip architectures and embedded development tools. Azure Security Center for IoT extends support to the Azure RTOS operating system, in addition to the Linux (Ubuntu, Debian) and Windows 10 IoT Core operating systems. Azure RTOS will ship with a built-in security module that covers common threats on real-time operating system devices. The offering includes detection of malicious network activities, device behavior baselining based on custom alerts, and recommendations that help improve the security hygiene of the device.

New Azure Sentinel connector

As information technology, operational technology, and the Internet of Things converge, customers are faced with rising threats.

Azure Security Center for IoT announces the availability of an Azure Sentinel connector that onboards IoT data workloads into Sentinel from Azure IoT Hub-managed deployments. This integration provides investigation capabilities on IoT assets from Azure Sentinel, allowing security pros to combine IoT security data with data from across the organization for artificial intelligence or advanced analysis. With the Azure Sentinel connector you can now monitor alerts across all your IoT Hub deployments, act upon potential risks, inspect and triage your IoT incidents, and run investigations to track an attacker's lateral movement within your network.

With this new announcement, Azure Sentinel is the first security information and event management (SIEM) solution with native IoT support, allowing SecOps teams and analysts to identify threats in complex converged networks.

Microsoft Intelligent Security Association partnership program for IoT security vendors

Through partnering with members of the Microsoft Intelligent Security Association, Microsoft is able to leverage a vast knowledge pool to defend against a world of increasing IoT threats in enterprise, healthcare, manufacturing, energy, building management systems, transportation, smart cities, smart homes, and more. Azure Security Center for IoT's simple onboarding flow connects solutions, like Attivo Networks, CyberMDX, CyberX, Firedome, and SecuriThings—enabling you to protect your managed and unmanaged IoT devices, view all security alerts, reduce your attack surface with security posture recommendations, and run unified reports in a single pane of glass.

For more information on the Microsoft Intelligent Security Association partnership program for IoT security vendors check out our tech community blog.

Availability on government regions

Starting on March 1, 2020, Azure Security Center for IoT will be available on USGov Virginia and USGov Arizona regions.

Organizations can monitor their entire IoT solution, stay ahead of evolving threats, and fix configuration issues before they become threats. When combined with Microsoft’s secure-by-design devices, services, and the expertise we share with you and your partners, Azure Security Center for IoT provides an important way to reduce the risk of IoT while achieving your business goals.

To learn more about Azure Security Center for IoT, please visit our documentation page. To learn more about our new partnerships, please visit the Microsoft Intelligent Security Association page. Upgrade to Azure Security Center Standard to benefit from IoT security.
Source: Azure

New Azure Firewall certification and features in Q1 CY2020

This post was co-authored by Suren Jamiyanaa, Program Manager, Azure Networking

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are excited to share several new Azure Firewall capabilities based on your top feedback items:

ICSA Labs Corporate Firewall Certification.
Forced tunneling support now in preview.
IP Groups now in preview.
Customer configured SNAT private IP address ranges now generally available.
High ports restriction relaxation now generally available.

Azure Firewall is a cloud native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

ICSA Labs Corporate Firewall Certification

ICSA Labs is a leader in third-party testing and certification of security and health IT products, as well as network-connected devices. They measure product compliance, reliability, and performance for most of the world’s top technology vendors.

Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification. For the Azure Firewall certification report, see information here. For more information, see the ICSA Labs Firewall Certification program page.

Figure one – Azure Firewall now ICSA Labs certified.

Forced tunneling support now in preview

Forced tunneling lets you redirect all internet bound traffic from Azure Firewall to your on-premises firewall or a nearby Network Virtual Appliance (NVA) for additional inspection. By default, forced tunneling isn't allowed on Azure Firewall to ensure all its outbound Azure dependencies are met.

To support forced tunneling, service management traffic is separated from customer traffic. An additional dedicated subnet named AzureFirewallManagementSubnet is required with its own associated public IP address. The only route allowed on this subnet is a default route to the internet, and BGP route propagation must be disabled.

Within this configuration, the AzureFirewallSubnet can now include routes to any on-premises firewall or NVA to process traffic before it's passed to the Internet. You can also publish these routes via BGP to AzureFirewallSubnet if BGP route propagation is enabled on this subnet. For more information, see the Azure Firewall forced tunneling documentation.

Figure two – Creating a firewall with forced tunneling enabled.
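As a rough illustration of those constraints, the sketch below validates a route table against the rule that the management subnet may only carry a default route straight to the internet, while AzureFirewallSubnet may point its default route at an on-premises NVA. The data structures and function are hypothetical, not part of any Azure SDK.

```python
# Illustrative check of the forced-tunneling constraints described above.
# This is a config sanity-check sketch, not an Azure API.

def valid_management_routes(routes):
    """AzureFirewallManagementSubnet may only hold one route:
    a default route (0.0.0.0/0) with next hop Internet."""
    return routes == [{"prefix": "0.0.0.0/0", "next_hop": "Internet"}]

mgmt_routes = [{"prefix": "0.0.0.0/0", "next_hop": "Internet"}]
fw_routes = [{"prefix": "0.0.0.0/0", "next_hop": "10.100.0.4"}]  # on-prem NVA

print(valid_management_routes(mgmt_routes))  # True
print(valid_management_routes(fw_routes))    # False
```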

IP Groups now in preview

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can give your IP Group a name and create one by entering IP addresses or uploading a file. IP Groups ease management and reduce the time spent on IP addresses by letting you reuse a group in a single firewall or across multiple firewalls. For more information, see the IP Groups in Azure Firewall documentation.

Figure three – Azure Firewall application rules utilize an IP group.
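Conceptually, an IP Group behaves like a named list that firewall rules reference, so updating the group updates every rule that uses it. The sketch below models that idea in Python; the `ipgroup:` prefix and the data structures are invented for illustration and are not the Azure resource model.

```python
# Hypothetical model: rules reference an IP Group by name instead of
# repeating the address list in every rule.

ip_groups = {
    "branch-offices": ["10.10.0.0/16", "10.20.0.0/16"],
}

rule = {"name": "allow-web", "source": "ipgroup:branch-offices", "port": 443}

def resolve_sources(rule, ip_groups):
    """Expand an ipgroup: reference into its member prefixes."""
    src = rule["source"]
    if src.startswith("ipgroup:"):
        return ip_groups[src.split(":", 1)[1]]
    return [src]

print(resolve_sources(rule, ip_groups))
# ['10.10.0.0/16', '10.20.0.0/16']
```

Editing the `branch-offices` entry would change the effective sources of every rule that references it, which is the management win the feature provides.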

Customer configured SNAT private IP address ranges

Azure Firewall provides automatic Source Network Address Translation (SNAT) for all outbound traffic to public IP addresses. Azure Firewall doesn’t SNAT when the destination IP address is in a private IP address range per IANA RFC 1918. If your organization uses a public IP address range for private networks, or opts to force tunnel Azure Firewall internet traffic via an on-premises firewall, you can configure Azure Firewall to not SNAT additional custom IP address ranges. For more information, see Azure Firewall SNAT private IP address ranges.

Figure four – Azure Firewall with custom private IP address ranges.
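The SNAT behavior described above can be modeled in a few lines of Python with the standard `ipaddress` module; the function name and custom-range handling are illustrative assumptions, not Azure APIs.

```python
import ipaddress

# RFC 1918 private ranges, which Azure Firewall does not SNAT by default.
RFC_1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def should_snat(destination: str, custom_ranges=()):
    """Return True if traffic to `destination` would be SNATed.

    `custom_ranges` models the additional customer-configured
    private IP address ranges that are also exempt from SNAT.
    """
    dest = ipaddress.ip_address(destination)
    exempt = RFC_1918 + [ipaddress.ip_network(r) for r in custom_ranges]
    return not any(dest in net for net in exempt)

print(should_snat("10.1.2.3"))                       # private -> False
print(should_snat("100.64.1.1"))                     # public range -> True
print(should_snat("100.64.1.1", ["100.64.0.0/10"]))  # custom exemption -> False
```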

High ports restriction relaxation now generally available

Since its initial preview release, Azure Firewall had a limitation that prevented network and application rules from including source or destination ports above 64,000. This default behavior blocked RPC-based scenarios, most notably Active Directory synchronization. With this update, customers can use any port in the 1-65535 range in network and application rules.

Next steps

For more information on everything we covered above please see the following blogs, documentation, and videos.

Azure Firewall documentation.
Azure Firewall July 2019 blog: What’s new in Azure Firewall.
Azure Firewall Manager documentation.
Azure Firewall Manager blog: Azure Firewall Manager now supports virtual networks.

Azure Firewall central management partners:

AlgoSec CloudFlow.
Barracuda Cloud Security Guardian, now generally available in Azure Marketplace.
Tufin SecureCloud.

Source: Azure

Azure Firewall Manager now supports virtual networks

This post was co-authored by Yair Tor, Principal Program Manager, Azure Networking.

Last November we introduced Microsoft Azure Firewall Manager preview for Azure Firewall policy and route management in secured virtual hubs. This also included integration with key Security as a Service partners, Zscaler, iboss, and soon Check Point. These partners support branch to internet and virtual network to internet scenarios.

Today, we are extending Azure Firewall Manager preview to include automatic deployment and central security policy management for Azure Firewall in hub virtual networks.

Azure Firewall Manager preview is a network security management service that provides central security policy and route management for cloud-based security perimeters. It makes it easy for enterprise IT teams to centrally define network- and application-level rules for traffic filtering across multiple Azure Firewall instances that span different Azure regions and subscriptions in hub-and-spoke architectures, enabling traffic governance and protection. In addition, it empowers DevOps teams with better agility through derived local firewall security policies that are implemented across organizations.

For more information see Azure Firewall Manager documentation.

Figure one – Azure Firewall Manager Getting Started page

 

Hub virtual networks and secured virtual hubs

Azure Firewall Manager can provide security management for two network architecture types:

Secured virtual hub—An Azure Virtual WAN hub is a Microsoft-managed resource that lets you easily create hub-and-spoke architectures. When security and routing policies are associated with such a hub, it is referred to as a secured virtual hub.

Hub virtual network—This is a standard Azure virtual network that you create and manage yourself. When security policies are associated with such a hub, it is referred to as a hub virtual network. At this time, only Azure Firewall Policy is supported. You can peer spoke virtual networks that contain your workload servers and services. It is also possible to manage firewalls in standalone virtual networks that are not peered to any spoke.

Whether to use a hub virtual network or a secured virtual hub depends on your scenario:

Hub virtual network—Hub virtual networks are probably the right choice if your network architecture is based on virtual networks only, requires multiple hubs per region, or doesn’t use hub-and-spoke at all.

Secured virtual hub—Secured virtual hubs might address your needs better if you need to manage routing and security policies across many globally distributed secured hubs. Secured virtual hubs offer high-scale VPN connectivity, SDWAN support, and third-party Security as a Service integration. You can use Azure to secure your internet edge for both on-premises and cloud resources.

The following comparison table in Figure 2 can assist in making an informed decision:

 

 
| | Hub virtual network | Secured virtual hub |
| --- | --- | --- |
| Underlying resource | Virtual network | Virtual WAN hub |
| Hub-and-spoke | Using virtual network peering | Automated using hub virtual network connection |
| On-prem connectivity | VPN Gateway up to 10 Gbps and 30 S2S connections; ExpressRoute | More scalable VPN Gateway up to 20 Gbps and 1,000 S2S connections; ExpressRoute |
| Automated branch connectivity using SDWAN | Not supported | Supported |
| Hubs per region | Multiple virtual networks per region | Single virtual hub per region; multiple hubs possible with multiple Virtual WANs |
| Azure Firewall – multiple public IP addresses | Customer provided | Auto-generated (to be available by general availability) |
| Azure Firewall Availability Zones | Supported | Not available in preview; to be available by general availability |
| Advanced internet security with third-party Security as a Service partners | Customer-established and managed VPN connectivity to partner service of choice | Automated via Trusted Security Partner flow and partner management experience |
| Centralized route management to attract traffic to the hub | Customer-managed UDR; roadmap: UDR default route automation for spokes | Supported using BGP |
| Web Application Firewall on Application Gateway | Supported in virtual network | Roadmap: can be used in spoke |
| Network Virtual Appliance | Supported in virtual network | Roadmap: can be used in spoke |

Figure 2 – Hub virtual network vs. secured virtual hub

Firewall policy

Firewall policy is an Azure resource that contains network address translation (NAT), network, and application rule collections, as well as threat intelligence settings. It's a global resource that can be used across multiple Azure Firewall instances in secured virtual hubs and hub virtual networks. New policies can be created from scratch or inherited from existing policies. Inheritance allows DevOps teams to create local firewall policies on top of an organization-mandated base policy. Policies work across regions and subscriptions.
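To make the inheritance idea concrete, here is a minimal Python sketch of a local (child) policy layered on an organization-wide base policy; the field names and merge rules are hypothetical and do not reflect the actual Firewall Policy schema.

```python
# Illustrative model of Firewall Policy inheritance: a local policy
# adds its own rule collections on top of a mandated base policy.
# Names and structure are hypothetical, not the Azure API.

def inherit_policy(base: dict, child: dict) -> dict:
    return {
        # The child may override settings, otherwise the base applies.
        "threat_intel_mode": child.get("threat_intel_mode",
                                       base["threat_intel_mode"]),
        # Base rule collections are evaluated before local ones.
        "rule_collections": base["rule_collections"]
                            + child["rule_collections"],
    }

base = {
    "threat_intel_mode": "Deny",
    "rule_collections": [{"name": "org-deny-malware", "priority": 100}],
}
local = {
    "rule_collections": [{"name": "app-team-allow-web", "priority": 200}],
}

effective = inherit_policy(base, local)
print([c["name"] for c in effective["rule_collections"]])
# ['org-deny-malware', 'app-team-allow-web']
```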

Azure Firewall Manager orchestrates Firewall policy creation and association. However, a policy can also be created and managed via REST API, templates, Azure PowerShell, and CLI.

Once a policy is created, it can be associated with a firewall in a Virtual WAN Hub (aka secured virtual hub) or a firewall in a virtual network (aka hub virtual network).

Firewall Policies are billed based on firewall associations. A policy with zero or one firewall association is free of charge. A policy with multiple firewall associations is billed at a fixed rate.

For more information, see Azure Firewall Manager pricing.

The following table compares the new firewall policies with the existing firewall rules:

 

| | Policy | Rules |
| --- | --- | --- |
| Contains | NAT, network, and application rules, and threat intelligence settings | NAT, network, and application rules |
| Protects | Virtual hubs and virtual networks | Virtual networks only |
| Portal experience | Central management using Firewall Manager | Standalone firewall experience |
| Multiple firewall support | Firewall Policy is a separate resource that can be used across firewalls | Manually export and import rules, or use third-party management solutions |
| Pricing | Billed based on firewall associations (see pricing) | Free |
| Supported deployment mechanisms | Portal, REST API, templates, PowerShell, and CLI | Portal, REST API, templates, PowerShell, and CLI |
| Release status | Preview | General availability |

Figure 3 – Firewall Policy vs. Firewall Rules

Next steps

For more information on topics covered here, see the following blogs, documentation, and videos:

 Azure Firewall Manager documentation
 Azure Firewall Manager Pricing

Azure Firewall central management partners:

AlgoSec CloudFlow
Barracuda Cloud Security Guardian, now generally available in Azure Marketplace
Tufin SecureCloud

Source: Azure

Azure Offline Backup with Azure Data Box now in preview

An ever-increasing number of enterprises, even as they adopt a hybrid IT strategy, continue to retain mission-critical data on-premises and look to the public cloud as an effective offsite for their backups. Azure Backup, Azure’s built-in data-protection solution, provides a simple, secure, and cost-effective mechanism to back up these data assets over the network to Azure, while eliminating on-premises backup infrastructure. After the initial full backup of data, Azure Backup transfers only incremental changes in the data, thereby delivering continued savings on both network and storage.

With the exponential growth in critical enterprise data, the initial full backups are reaching terabyte scale. Transferring these large full-backups over the network, especially in high-latency network environments or remote offices, may take weeks or even months. Our customers are looking for more efficient ways beyond fast networks to transfer these large initial backups to Azure. Microsoft Azure Data Box solves the problem of transferring large data sets to Azure by enabling the “offline” transfer of data using secure, portable, and easy-to-get Microsoft appliances.

Announcing the preview of Azure Offline Backup with Azure Data Box

Today, we are thrilled to add the power of Azure Data Box to Azure Backup, and announce the preview program for offline initial backup of large datasets using Azure Data Box! With this preview, customers will be able to use Azure Data Box with Azure Backup to seed large initial backups (up to 80 TB per server) offline to an Azure Recovery Services Vault. Subsequent backups will take place over the network.

This preview is currently available to customers of the Microsoft Azure Recovery Services agent and is a much-awaited addition to the existing support for offline backup using the Azure Import/Export service.

Key benefits

The Azure Data Box addition to Azure Backup delivers core benefits of the Azure Data Box service while offering key advantages over the Azure Import/Export based offline backup.

Simple—No need to procure your own Azure-compatible disks or connectors as with the Azure Import-based offline backup. Simply order and receive one or more Data Box appliances from your Azure subscription, plug them in, fill them with backup data, return them to Azure, and track all of it in the Azure portal.
Built-in—The Azure Data Box based offline backup experience is built-into the Recovery Services agent, so you can easily discover and detect your received Azure Data Box appliances, transfer backup data, and track the completion of the initial backup directly from the agent.
Secure—Azure Data Box is a tamper-resistant appliance that comes with ruggedized casing to handle bumps and bruises during transport and supports 256-bit AES encryption on your data.
Efficient—Get freedom from provisioning temporary storage (staging locations) or use of additional tools to prepare disks and copy data, as in the Azure Import based offline backup. Azure Backup directly copies backup data to Azure Data Box, delivering savings on storage and time, and eliminating additional copy tools.

Getting started

Seeding your large initial backups using Azure Backup and Azure Data Box involves the following high-level steps. 

Order and receive your Azure Data Box based on the amount of data you want to back up from a server. Order an Azure Data Box Disk if you want to back up less than 7.2 TB of data. Order an Azure Data Box to back up as much as 80 TB of data.
Install and register the latest Recovery Services agent to an Azure Recovery Services Vault.
Select the “Transfer using Microsoft Azure Data Box disks” option for offline backup as part of scheduling your backups with the Recovery Services agent.

Trigger Backup to Azure Data Box from the Recovery Services Agent.
Return Azure Data Box to Azure.

Azure Data Box and Azure Backup will automatically upload the data to the Azure Recovery Services Vault. Refer to this article for a detailed overview of pre-requisites and steps to take advantage of Azure Data Box when seeding your initial backup offline with Azure Backup.

Offline backup with Azure Data Box on Data Protection Manager and Azure Backup Server

If you are using System Center Data Protection Manager or Microsoft Azure Backup Server and are interested in seeding large initial backups using Azure Data Box, drop us a line at systemcenterfeedback@microsoft.com for access to early previews.

Related links and additional content

Jump right into using Offline Backup with Azure Data Box.
Learn more about Offline backup options with Azure Backup.
New to Azure Backup? Sign up for a free Azure trial subscription.
Review whether you need to use online or offline mechanisms to send backup data to Azure.
Need help? Reach out to Azure Backup forum for support or browse Azure Backup documentation.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

SQL Server runs best on Azure. Here’s why.

SQL Server customers migrating their databases to the cloud have multiple choices for their cloud destination. To thoroughly assess which cloud is best for SQL Server workloads, two key factors to consider are:

Innovations that the cloud provider can uniquely provide.

Independent benchmark results.

What innovations can the cloud provider bring to your SQL Server workloads?

As you consider your options for running SQL Server in the cloud, it's important to understand what the cloud provider can offer both today and tomorrow. Can they provide you with the capabilities to maximize the performance of your modern applications? Can they automatically protect you against vulnerabilities and ensure availability for your mission-critical workloads?

SQL Server customers benefit from our continued expertise developed over the past 25 years, delivering performance, security, and innovation. This includes deploying SQL Server on Azure, where we provide customers with innovations that aren’t available anywhere else. One great example of this is Azure BlobCache, which provides fast, free reads for customers. This feature alone provides tremendous value to our customers that is simply unmatched in the market today.

Additionally, we offer preconfigured, built-in security and management capabilities that automate tasks like patching, high availability, and backups. Azure also offers advanced data security that enables both vulnerability assessments and advanced threat protection. Customers benefit from all of these capabilities both when using our Azure Marketplace images and when self-installing SQL Server on Azure virtual machines.

Only Azure offers these innovations.

What are their performance results on independent, industry-standard benchmarks?

Benchmarks can often be useful tools for assessing your cloud options. It's important, though, to ask if those benchmarks were conducted by independent third parties and whether they used today’s industry-standard methods.

The images above show performance and price-performance comparisons from the February 2020 GigaOm performance benchmark blog post. 

In December, an independent study by GigaOm compared SQL Server on Azure Virtual Machines to AWS EC2 using a field test derived from the industry standard TPC-E benchmark. GigaOm found Azure was up to 3.4x faster and 87 percent cheaper than AWS.1 Today, we are pleased to announce that in GigaOm’s second benchmark analysis, using the latest virtual machine comparisons and disk striping, Azure was up to 3.6x faster and 84 percent cheaper than AWS.2 These results continue to demonstrate that SQL Server runs best on Azure.

Get started today

Learn more about how you can start taking advantage of these benefits today with SQL Server on Azure.

 

1Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in October 2019. The study compared price performance between SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in Azure E64s_v3 instance type with 4x P30 1TB Storage Pool data (Read-Only Cache) + 1x P20 0.5TB log (No Cache) and the SQL Server 2017 Enterprise Edition on Windows Server 2016 Datacenter edition in AWS EC2 r4.16xlarge instance type with 1x 4TB gp2 data + 1x 1TB gp2 log. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of October 2019. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server on AWS, excluding Software Assurance costs. Actual results and prices may vary based on configuration and region.

2Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in February 2020. The study compared price performance between SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in Azure E32as_v4 instance type with P30 Premium SSD Disks and the SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in AWS EC2 r5a.8xlarge instance type with General Purpose (gp2) volumes. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of January 2020. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs. Actual results and prices may vary based on configuration and region.
Source: Azure

Announcing the preview of Azure Shared Disks for clustered applications

Today, we are announcing the limited preview of Azure Shared Disks, the industry’s first shared cloud block storage. Azure Shared Disks enables the next wave of block storage workloads migrating to the cloud including the most demanding enterprise applications, currently running on-premises on Storage Area Networks (SANs). These include clustered databases, parallel file systems, persistent containers, and machine learning applications. This unique capability enables customers to run latency-sensitive workloads, without compromising on well-known deployment patterns for fast failover and high availability. This includes applications built for Windows or Linux-based clustered filesystems like Global File System 2 (GFS2).

With Azure Shared Disks, customers now have the flexibility to migrate clustered environments running on Windows Server, including Windows Server 2008 (which has reached End-of-Support), to Azure. This capability is designed to support SQL Server Failover Cluster Instances (FCI), Scale-out File Servers (SoFS), Remote Desktop Servers (RDS), and SAP ASCS/SCS running on Windows Server.

We encourage you to get started and request access by filling out this form.

Leveraging Azure Shared Disks

Azure Shared Disks provides a consistent experience for applications running on clustered environments today. This means that any application that currently leverages SCSI Persistent Reservations (PR) can use this well-known set of commands to register nodes in the cluster to the disk. The application can then choose from a range of supported access modes for one or more nodes to read or write to the disk. These applications can deploy in highly available configurations while also leveraging Azure Disk durability guarantees.

The below diagram illustrates a sample two-node clustered database application orchestrating failover from one node to the other.
  
The flow is as follows:

The clustered application running on both Azure VM 1 and  Azure VM 2 registers the intent to read or write to the disk.
The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk.
This reservation is enforced on Azure Disk and the database can now exclusively write to the disk. Any writes from the application instance on Azure VM 2 will not succeed.
If the application instance on Azure VM 1 goes down, the instance on Azure VM 2 can now initiate a database failover and takeover of the disk.
This reservation is now enforced on the Azure Disk, and it will no longer accept writes from the application on Azure VM 1. It will now only accept writes from the application on Azure VM 2.
The clustered application can complete the database failover and serve requests from Azure VM 2.
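The failover steps above can be sketched as a toy Python model loosely following SCSI Persistent Reservation semantics (register, then take a write-exclusive reservation). This is an illustration only, not how the Azure Disk service is implemented.

```python
# Toy model of the two-node failover flow: both nodes register,
# one holds a write-exclusive reservation, and a surviving node
# preempts that reservation during failover.

class SharedDisk:
    def __init__(self):
        self.registered = set()
        self.holder = None  # node holding the write-exclusive reservation

    def register(self, node):
        self.registered.add(node)

    def reserve(self, node):
        # A preempting reserve models the takeover during failover.
        if node not in self.registered:
            raise PermissionError(f"{node} is not registered")
        self.holder = node

    def write(self, node):
        # Only the reservation holder's writes succeed.
        return self.holder == node

disk = SharedDisk()
disk.register("VM1")
disk.register("VM2")

disk.reserve("VM1")
print(disk.write("VM1"), disk.write("VM2"))  # True False

# VM1 goes down; VM2 preempts the reservation and takes over.
disk.reserve("VM2")
print(disk.write("VM1"), disk.write("VM2"))  # False True
```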

The diagram below illustrates another common workload, in which multiple nodes read data from the disk to run parallel jobs, for example, training machine learning models.
  
The flow is as follows:

The application registers all virtual machines to the disk.
The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk while opening up reads from other Virtual Machines.
This reservation is enforced on Azure Disk.
All nodes in the cluster can now read from the disk. Only one node writes results back to the disk on behalf of all the nodes in the cluster.

Disk types, sizes, and pricing

Azure Shared Disks are available on Premium SSDs and support disk sizes P15 (256 GB) and greater. Support for Azure Ultra Disk will be available soon. Azure Shared Disks can be enabled as data disks only (not OS disks). Each additional mount of an Azure Shared Disk (Premium SSD) is charged based on disk size. Please refer to the Azure Disks pricing page for details on limited preview pricing.

Azure Shared Disks vs Azure Files

Azure Shared Disks provides shared access to block storage, which can be leveraged by multiple virtual machines. You will need to use a cluster manager, such as Windows Server Failover Clustering (WSFC), Pacemaker, or Corosync, for node-to-node communication and to enable write locking. If you are looking for a fully managed file service on Azure that can be accessed using the Server Message Block (SMB) or Network File System (NFS) protocol, check out Azure Premium Files or Azure NetApp Files.

Getting started

You can create Azure Shared Disks using Azure Resource Manager templates. For details on how to get started and use Azure Shared Disks in preview, please refer to the documentation page. For updates on regional availability and Ultra Disk availability, please refer to the Azure Disks FAQ. Here is a video of Mark Russinovich from Microsoft Ignite 2019 covering Azure Shared Disks.

In the coming weeks, we will be enabling Portal and SDK support. Support for Azure Backup and  Azure Site Recovery is currently not available. Refer to the Managed Disks documentation for detailed instructions on all disk operations.

If you are interested in participating in the preview, you can now get started by requesting access.
Quelle: Azure