Azure Firewall Manager now supports virtual networks

This post was co-authored by Yair Tor, Principal Program Manager, Azure Networking.

Last November we introduced Microsoft Azure Firewall Manager preview for Azure Firewall policy and route management in secured virtual hubs. This also included integration with key Security as a Service partners: Zscaler, iboss, and, soon, Check Point. These partners support branch-to-internet and virtual-network-to-internet scenarios.

Today, we are extending Azure Firewall Manager preview to include automatic deployment and central security policy management for Azure Firewall in hub virtual networks.

Azure Firewall Manager preview is a network security management service that provides central security policy and route management for cloud-based security perimeters. It makes it easy for enterprise IT teams to centrally define network- and application-level rules for traffic filtering across multiple Azure Firewall instances that span different Azure regions and subscriptions in hub-and-spoke architectures for traffic governance and protection. In addition, it empowers DevOps teams with better agility through derived local firewall security policies that are implemented across organizations.

For more information, see the Azure Firewall Manager documentation.

Figure 1 – Azure Firewall Manager Getting Started page

 

Hub virtual networks and secured virtual hubs

Azure Firewall Manager can provide security management for two network architecture types:

 Secured virtual hub—An Azure Virtual WAN Hub is a Microsoft-managed resource that lets you easily create hub-and-spoke architectures. When security and routing policies are associated with such a hub, it is referred to as a secured virtual hub.

 Hub virtual network—This is a standard Azure Virtual Network that you create and manage yourself. When security policies are associated with such a hub, it is referred to as a hub virtual network. At this time, only Azure Firewall Policy is supported. You can peer spoke virtual networks that contain your workload servers and services. It is also possible to manage firewalls in standalone virtual networks that are not peered to any spoke.

Whether to use a hub virtual network or a secured virtual hub depends on your scenario:

 Hub virtual network—Hub virtual networks are probably the right choice if your network architecture is based on virtual networks only, requires multiple hubs per region, or doesn’t use hub-and-spoke at all.

 Secured virtual hubs—Secured virtual hubs might address your needs better if you need to manage routing and security policies across many globally distributed secured hubs. Secured virtual hubs offer high-scale VPN connectivity, SD-WAN support, and third-party Security as a Service integration. You can use Azure to secure your internet edge for both on-premises and cloud resources.

The following comparison table in Figure 2 can assist in making an informed decision:

 

 
Underlying resource
Hub virtual network: Virtual network
Secured virtual hub: Virtual WAN hub

Hub-and-spoke
Hub virtual network: Using virtual network peering
Secured virtual hub: Automated using hub virtual network connection

On-premises connectivity
Hub virtual network: VPN Gateway up to 10 Gbps and 30 S2S connections; ExpressRoute
Secured virtual hub: More scalable VPN Gateway up to 20 Gbps and 1000 S2S connections; ExpressRoute

Automated branch connectivity using SD-WAN
Hub virtual network: Not supported
Secured virtual hub: Supported

Hubs per region
Hub virtual network: Multiple virtual networks per region
Secured virtual hub: Single virtual hub per region; multiple hubs possible with multiple Virtual WANs

Azure Firewall – multiple public IP addresses
Hub virtual network: Customer provided
Secured virtual hub: Auto-generated (to be available by general availability)

Azure Firewall Availability Zones
Hub virtual network: Supported
Secured virtual hub: Not available in preview; to be available by general availability

Advanced internet security with third-party Security as a Service partners
Hub virtual network: Customer-established and managed VPN connectivity to the partner service of choice
Secured virtual hub: Automated via the Trusted Security Partner flow and partner management experience

Centralized route management to attract traffic to the hub
Hub virtual network: Customer-managed UDR; roadmap: UDR default route automation for spokes
Secured virtual hub: Supported using BGP

Web Application Firewall on Application Gateway
Hub virtual network: Supported in the virtual network
Secured virtual hub: Roadmap: can be used in a spoke

Network Virtual Appliance
Hub virtual network: Supported in the virtual network
Secured virtual hub: Roadmap: can be used in a spoke
Figure 2 – Hub virtual network vs. secured virtual hub

Firewall policy

Firewall policy is an Azure resource that contains network address translation (NAT), network, and application rule collections, as well as threat intelligence settings. It's a global resource that can be used across multiple Azure Firewall instances in secured virtual hubs and hub virtual networks. New policies can be created from scratch or inherited from existing policies. Inheritance allows DevOps teams to create local firewall policies on top of an organization-mandated base policy. Policies work across regions and subscriptions.

Azure Firewall Manager orchestrates Firewall policy creation and association. However, a policy can also be created and managed via REST API, templates, Azure PowerShell, and CLI.

Once a policy is created, it can be associated with a firewall in a Virtual WAN Hub (aka secured virtual hub) or a firewall in a virtual network (aka hub virtual network).

Firewall Policies are billed based on firewall associations. A policy with zero or one firewall association is free of charge. A policy with multiple firewall associations is billed at a fixed rate.
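The billing rule above can be sketched as a small helper. This is an illustrative sketch only; the function name and the fixed rate are our own placeholders, and the actual rate is published on the Azure Firewall Manager pricing page:

```python
def policy_monthly_charge(firewall_associations: int, fixed_rate: float) -> float:
    """Sketch of the stated billing rule for a Firewall Policy:
    a policy with zero or one firewall association is free of charge;
    a policy with multiple associations is billed at a fixed rate."""
    if firewall_associations <= 1:
        return 0.0
    return fixed_rate

# Example: a policy attached to three firewalls incurs the fixed rate once,
# regardless of how many firewalls are associated.
print(policy_monthly_charge(3, fixed_rate=100.0))  # 100.0
```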

For more information, see Azure Firewall Manager pricing.

The following table compares the new firewall policies with the existing firewall rules:

 

Contains
Policy: NAT, network, and application rules, and threat intelligence settings
Rules: NAT, network, and application rules

Protects
Policy: Virtual hubs and virtual networks
Rules: Virtual networks only

Portal experience
Policy: Central management using Firewall Manager
Rules: Standalone firewall experience

Multiple firewall support
Policy: Firewall Policy is a separate resource that can be used across firewalls
Rules: Manually export and import rules, or use third-party management solutions

Pricing
Policy: Billed based on firewall associations; see Azure Firewall Manager pricing
Rules: Free

Supported deployment mechanisms
Policy: Portal, REST API, templates, PowerShell, and CLI
Rules: Portal, REST API, templates, PowerShell, and CLI

Release status
Policy: Preview
Rules: General availability
Figure 3 – Firewall Policy vs. Firewall Rules

Next steps

For more information on topics covered here, see the following blogs, documentation, and videos:

 Azure Firewall Manager documentation
 Azure Firewall Manager Pricing

Azure Firewall central management partners:

AlgoSec CloudFlow
Barracuda Cloud Security Guardian, now generally available in Azure Marketplace
Tufin SecureCloud

Source: Azure

Introducing the Cloud Monitoring dashboards API

Using dashboards in Cloud Monitoring makes it easy to track critical metrics across time. Dashboards can, for example, provide visualizations to help debug high latency in your application or track key metrics for your applications. Creating dashboards by hand in the Monitoring UI can be a time-consuming process that may require many iterations. Once dashboards are created, you can save time by using them in multiple Workspaces within your organization.

Today, we’re pleased to announce that the Cloud Monitoring dashboards API is generally available from Google Cloud. The dashboards API lets you read the configuration of existing dashboards, create new dashboards, update existing dashboards, and delete dashboards that you no longer use. These methods follow REST and gRPC semantics and are consistent with other Google Cloud APIs.

A common use case for the dashboards API is to deploy a dashboard developed in one Monitoring Workspace into one or more additional Workspaces. For example, you may have separate Workspaces for your development, QA, and production environments (learn more about selecting Workspace structures). In one of the environments, you may have developed a standard operational dashboard that you’d like to use across all your Workspaces. By first reading the dashboard configuration via the projects.dashboards.get method, you can save the dashboard configuration and then use the projects.dashboards.create method to create the same dashboard in the other environments.

How the dashboard API works

When creating a dashboard, you specify the layout and the widgets that go inside that layout. A dashboard must use one of three layout types: GridLayout, RowLayout, or ColumnLayout.

GridLayout divides the available space into vertical columns of equal width and arranges a set of widgets using a row-first strategy.
RowLayout divides the available space into rows and arranges a set of widgets horizontally in each row.
ColumnLayout divides the available space into vertical columns and arranges a set of widgets vertically in each column.

The widgets available to place inside the layouts include the XyChart, Scorecard, and Text objects.

XyChart: displays data using X and Y axes. Charts created through the Google Cloud Console are instances of this widget.
Scorecard: displays the latest value of a metric and how that value relates to one or more thresholds.
Text: displays textual content, either as raw text or a markdown string.

For an example of a JSON dashboard configuration that specifies a GridLayout with a single XyChart widget, see our sample dashboards and layouts documentation.

Dashboard configuration as a template

A simple approach to building a dashboard configuration is to first create a dashboard in the Cloud Monitoring console, then use the dashboards API projects.dashboards.get method to export the JSON configuration. You can then share that configuration as a template, either via source control or however you normally share files with your colleagues.

You can try out the dashboard API in the Try this API section of the API documentation, and learn more about managing dashboards by reading the Managing Dashboards documentation. We’re working on features to make the API even more useful, including support in the gcloud command line. Contributors are also discussing and planning a Terraform module for the Monitoring Dashboard API on GitHub.

A special thanks to our colleagues David Batelu, Technical Lead, and Joy Wang, Product Manager, Cloud Monitoring, for their contributions to this post.
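The get-then-create flow for copying a dashboard across Workspaces can be sketched in Python. This is an illustrative sketch, not official client code: we assume that a projects.dashboards.get response carries the server-assigned fields name and etag, which should be dropped before passing the configuration to projects.dashboards.create in another project (check the Dashboard resource reference for your API version):

```python
import copy

def prepare_dashboard_for_create(dashboard: dict) -> dict:
    """Strip the server-assigned fields from a dashboard configuration
    fetched with projects.dashboards.get, so the remainder can be posted
    to projects.dashboards.create in another project/Workspace.

    Assumption: 'name' and 'etag' are the only server-assigned fields.
    """
    template = copy.deepcopy(dashboard)  # leave the fetched copy untouched
    template.pop("name", None)   # e.g. "projects/123/dashboards/abc", project-specific
    template.pop("etag", None)   # concurrency-control token tied to the source resource
    return template

# Hypothetical response from projects.dashboards.get in a dev Workspace:
fetched = {
    "name": "projects/dev-project/dashboards/1234",
    "etag": "f7b0aa12",
    "displayName": "Standard ops dashboard",
    "gridLayout": {"widgets": []},
}
template = prepare_dashboard_for_create(fetched)
print(sorted(template))  # ['displayName', 'gridLayout']
```

The resulting template dict is what you would submit, once per target project, as the request body of projects.dashboards.create.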
Source: Google Cloud Platform

Azure Offline Backup with Azure Data Box now in preview

An ever-increasing number of enterprises, even as they adopt a hybrid IT strategy, continue to retain mission-critical data on-premises and look to the public cloud as an effective offsite location for their backups. Azure Backup, Azure’s built-in data-protection solution, provides a simple, secure, and cost-effective mechanism to back up these data assets over the network to Azure, while eliminating on-premises backup infrastructure. After the initial full backup of the data, Azure Backup transfers only incremental changes, delivering continued savings on both network and storage.

With the exponential growth in critical enterprise data, the initial full backups are reaching terabyte scale. Transferring these large full-backups over the network, especially in high-latency network environments or remote offices, may take weeks or even months. Our customers are looking for more efficient ways beyond fast networks to transfer these large initial backups to Azure. Microsoft Azure Data Box solves the problem of transferring large data sets to Azure by enabling the “offline” transfer of data using secure, portable, and easy-to-get Microsoft appliances.

Announcing the preview of Azure Offline Backup with Azure Data Box

Today, we are thrilled to add the power of Azure Data Box to Azure Backup, and announce the preview program for offline initial backup of large datasets using Azure Data Box! With this preview, customers will be able to use Azure Data Box with Azure Backup to seed large initial backups (up to 80 TB per server) offline to an Azure Recovery Services Vault. Subsequent backups will take place over the network.

This preview is currently available to customers of the Microsoft Azure Recovery Services agent and is a much-awaited addition to the existing support for offline backup using the Azure Import/Export service.

Key benefits

The Azure Data Box addition to Azure Backup delivers core benefits of the Azure Data Box service while offering key advantages over the Azure Import/Export based offline backup.

Simple—No need to procure your own Azure-compatible disks or connectors as with the Azure Import based offline backup. Simply order and receive one or more Data Box appliances from your Azure subscription, plug them in, fill them with backup data, return them to Azure, and track all of it in the Azure portal.
Built-in—The Azure Data Box based offline backup experience is built into the Recovery Services agent, so you can easily discover and detect your received Azure Data Box appliances, transfer backup data, and track the completion of the initial backup directly from the agent.
Secure—Azure Data Box is a tamper-resistant appliance that comes with ruggedized casing to handle bumps and bruises during transport and supports 256-bit AES encryption on your data.
Efficient—Get freedom from provisioning temporary storage (staging locations) or use of additional tools to prepare disks and copy data, as in the Azure Import based offline backup. Azure Backup directly copies backup data to Azure Data Box, delivering savings on storage and time, and eliminating additional copy tools.

Getting started

Seeding your large initial backups using Azure Backup and Azure Data Box involves the following high-level steps. 

Order and receive your Azure Data Box based on the amount of data you want to back up from a server. Order an Azure Data Box Disk if you want to back up less than 7.2 TB of data. Order an Azure Data Box to back up up to 80 TB of data.
Install and register the latest Recovery Services agent to an Azure Recovery Services Vault.
Select the “Transfer using Microsoft Azure Data Box disks” option for offline backup as part of scheduling your backups with the Recovery Services agent.

Trigger Backup to Azure Data Box from the Recovery Services Agent.
Return Azure Data Box to Azure.

Azure Data Box and Azure Backup will automatically upload the data to the Azure Recovery Services Vault. Refer to this article for a detailed overview of the prerequisites and steps to take advantage of Azure Data Box when seeding your initial backup offline with Azure Backup.
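The sizing decision in step 1 can be sketched as a small helper using the capacity figures quoted in this post (under 7.2 TB for Data Box Disk, up to 80 TB per server for Data Box). The function name is our own illustration, not part of any Azure SDK:

```python
def pick_offline_backup_appliance(backup_size_tb: float) -> str:
    """Choose an appliance for seeding an initial backup offline,
    using the capacity thresholds quoted in this announcement."""
    if backup_size_tb <= 0:
        raise ValueError("backup size must be positive")
    if backup_size_tb < 7.2:
        return "Azure Data Box Disk"
    if backup_size_tb <= 80:
        return "Azure Data Box"
    raise ValueError("initial backups above 80 TB per server are outside this preview")

print(pick_offline_backup_appliance(5))   # Azure Data Box Disk
print(pick_offline_backup_appliance(40))  # Azure Data Box
```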

Offline backup with Azure Data Box on Data Protection Manager and Azure Backup Server

If you are using System Center Data Protection Manager or Microsoft Azure Backup Server and are interested in seeding large initial backups using Azure Data Box, drop us a line at systemcenterfeedback@microsoft.com for access to early previews.

Related links and additional content

Jump right into using Offline Backup with Azure Data Box.
Learn more about Offline backup options with Azure Backup.
New to Azure Backup? Sign up for a free Azure trial subscription.
Review whether you need to use online or offline mechanisms to send backup data to Azure.
Need help? Reach out to Azure Backup forum for support or browse Azure Backup documentation.
Follow us on Twitter @AzureBackup for the latest news and updates.

Source: Azure

OpenShift 4.3: Console Customization: YAML Samples

In Red Hat OpenShift 4.2 we introduced OpenShift Console customization via CRDs. Now, for OpenShift 4.3, we have extended the customization abilities to allow users to add their own YAML samples to Kubernetes resources. These YAML samples will appear in a sidebar on the creation page for any Kube resource:

Out of the box, OpenShift 4 provides a few examples for users. With this new extension mechanism, users can now add their own YAML samples for all users on the cluster. Let us look at how we can manually add a YAML example to the cluster. First we need to navigate to the Custom Resource Definitions navigation item and search for YAML:

Next we select the ConsoleYAMLSample CRD and navigate to the instances tab:

In this example we are going to create a YAML sample for the “Job” Kube resource:

Let us take a closer look at the YAML:
description: An example Job YAML sample
targetResource:
  apiVersion: batch/v1
  kind: Job            # the Kube resource this sample is assigned to
title: Example Job     # display text in the sidebar
snippet: false         # if "true", the YAML is injected rather than replacing the editor content
yaml: |                # the sample YAML content
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: countdown
  spec:
    template:
      metadata:
        name: countdown
      spec:
        containers:
        - name: counter
          image: centos:7
          command:
          - "bin/bash"
          - "-c"
          - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
        restartPolicy: Never

Before we create the ConsoleYAMLSample resource, we can navigate to the Jobs menu item and attempt to create a new Job. You will see that no Samples section appears on the Job creation page; only the Schema tab appears in the sidebar:

After we create the ConsoleYAMLSample resource, we can see our sample now shows up under the Samples section:

In addition to creating samples manually, this can be done programmatically, since the extension mechanism is built using CRDs. An important use case: when a new Operator is installed and adds new Kube resources (CRDs) to the cluster, the Operator can now add more YAML samples than just the default one.
Snippets
If the “snippet” flag is set to true, then the sample will show up as a snippet, and will be injected into the existing YAML at the location of the cursor in the YAML editor. Snippets will not replace the existing YAML. The Snippets section will show up as a tab in the sidebar:
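As an illustration, a snippet-style sample could look like the following sketch. It mirrors the fields of the Job sample shown earlier with snippet set to true; the target resource and the values in the yaml body are hypothetical:

```yaml
description: Inject a resource-limits stanza at the cursor
targetResource:
  apiVersion: apps/v1
  kind: Deployment
title: Resource limits snippet
snippet: true    # injected at the cursor instead of replacing the editor content
yaml: |
  resources:
    limits:
      cpu: "500m"
      memory: 128Mi
```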

If you’d like to learn more about what the OpenShift team is up to or provide feedback on any of the new 4.3 features, please take this brief 3-minute survey.
The post OpenShift 4.3: Console Customization: YAML Samples appeared first on Red Hat OpenShift Blog.
Source: OpenShift