Celebrating Pride Month: Perspectives on Identity, Diversity, Communication, and Change

Throughout June, we’ve published a series of Q&As at WordPress Discover featuring members of the Automattic team. These conversations explore personal journeys; reflections on identity; and diversity and inclusion in tech, design, and the workplace. Here are highlights from these interviews.

“In a World That Wants You to Apologize or Minimize Who You Are, Don’t.”

Gina Gowins is an HR operations magician on the Human League, our global human resources team. In this interview, Gina examines identity and language; communication and trust-building in a distributed, mostly text-based environment; and how her life experiences have informed her work.

I am particularly attached to the term queer as a repurposing of a word that was once used to isolate and disempower people — it was used to call people out as problematically different and other. From my perspective, there is no normal and no other; instead, we are all individual and unique. Identifying as queer allows me to take pride in my own individuality.

Language changes over time, and how we use language shapes our values and thinking. In a culture that is aggressively governed by heteronormative values and where it can still be dangerous and lonely to be LGBTQIA+ — such as the United States, where I live — defining myself as queer is also my small act of defiance. It is a reminder of the consistent fight for acceptance, inclusion, and justice that so many people face, and our inherent value and validity as humans.

Read Gina’s interview

“Reflect What Is Given, and In So Doing Change It a Little”

Echo Gregor is a software engineer on Jetpack’s Voyager team, working on new features that “expand Jetpack’s frontiers.” In this conversation, Echo talks about gender identity, pronouns, and names; and how xer identity and experiences have impacted xer approach to development and work in general.

Earlier in my transition, I called myself “E” sort of as a placeholder while I pondered name things. One late night, on the way home from a party, I had a friend ask if they could call me Echo, as it was the callsign equivalent for “E.” I immediately fell in love with the name, and gradually started using it more and more, until I made it my legal name.

I like that it’s simple and doesn’t have many gendered connotations in the modern world. I also appreciate its mythological origin! In the myth, Echo was a mountain nymph cursed by the goddess Hera — to be unable to speak, and only repeat the last words said to her.

I think there’s a lot of parallels in our world to that idea. We’re part of systems that are so much bigger than us that it’s rare any one of us can be loud enough to bring meaningful change, to speak new words. But echoes don’t perfectly repeat things. They reflect what is given, and in so doing change it a little. I like to try and live up to that by bringing a bit of change to the world, not by being the loudest, but by reflecting things back in my own way.

Read Echo’s interview

“Living My Life Freely and Authentically”

Mel Choyce-Dwan is a product designer on the theme team. In this Q&A, Mel tells us how she got involved with the WordPress community through a previous WordCamp, about her observations of tech events as a queer designer, and about the importance of inclusive design.

Show a lot of different kinds of people in your writing and your imagery, and don’t make assumptions. Talk to people from the communities you’re representing if you can, or read about their own experiences from their perspectives. Don’t assume you know better than someone else’s lived experience. When in doubt, talk to people.

And don’t just talk to people about how your product should work — talk about how it shouldn’t work. Talk about how people think others could hurt them using your product. People of marginalized identities often have stories of being harassed, stalked, or abused on the web. We need to think about how our products can be used for harm before — not after — the harassment.

Read Mel’s interview

“Every Person and Voice Has the Opportunity to Be Heard”

Niesha Sweet, a people experience wrangler on the Human League, says she feels like she was destined to work at Automattic. In this final interview, Niesha reflects on her Pride Month traditions and what she finds most rewarding about her HR work.

I would say that we all have to apply an additional level of empathy, understanding, and openness when working together. Just with communication alone — English is not the first language for some Automatticians, and some cultures’ communication style is direct. Assuming positive intent and having an additional level of empathy for one another allows us to effectively communicate with each other, while also appreciating our differences. The reward that comes with our diverse workforce is that every person and voice has the opportunity to be heard. Impostor syndrome is real, so some Automatticians may not feel as though they can share their ideas with anyone at the company, but we truly can. Our level of diversity is truly outside of what the typical company is aiming to achieve. That’s not to say we’re not looking to hire more diverse Automatticians, or increase our workforce with non-US hires, but we’re not limited by age, sexual orientation, race, and gender identity. Diversity has a different meaning in a lot of the countries where we have Automatticians, and that alone is rewarding. 

Read Niesha’s interview

Learn more about diversity and inclusion at Automattic. We’re currently hiring — apply to work with us!
Source: RedHat Stack

Reinforcing our commitment to privacy with accredited ISO/IEC 27701 certification

For decades, there has been a growing focus on privacy in technology, with laws such as the EU’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act, and the Australian Privacy Principles providing guidance on how to protect and maintain user privacy. Privacy has always been a priority at Google, and we’re continuously evolving to help our customers directly address global privacy and data protection requirements. Today, we’re pleased to announce that Google Cloud is the first major cloud provider to receive an accredited ISO/IEC 27701 certification as a data processor.

Published in 2019, ISO/IEC 27701 is a global standard designed to help organizations align with international privacy frameworks and laws. It provides guidance for implementing, maintaining, and continuously improving a Privacy Information Management System (PIMS), and can be used by both data controllers and processors—a key consideration for organizations that must align with the GDPR. ISO/IEC 27701 is an extension of the security industry best practices codified in ISO/IEC 27001, which outlines the requirements for an information security management system (ISMS).

Unlocking the benefits of ISO 27701

Coalfire ISO, an independent third party, issued an accredited certificate of registration for ISO/IEC 27701 to Google Cloud Platform (GCP). This accredited certificate shows that Google’s PIMS for GCP (as shown in the certificate’s scope) conforms to the ISO/IEC 27701 requirements, and that the body conducting the audit and issuing the certificate did so in accordance with the International Accreditation Forum (IAF)/ANSI National Accreditation Board (ANAB) requirements. This means that the certificate will be recognized by other IAF-accredited audit and certification bodies under the IAF Multilateral Recognition Agreement (MLA).
Our accredited certification demonstrates Google Cloud’s long-standing commitment to privacy and to providing the most trusted experience for our customers. Because Google Cloud meets the rigorous standards outlined in ISO/IEC 27701, our customers can leverage the many benefits of our certification, including:

A universal set of privacy controls, verified by a trusted third party in accordance with the requirements of their accreditation body, that can serve as a solid foundation for the implementation of a privacy program.
The ability to rely on Google Cloud Platform’s accredited ISO/IEC 27701 certification in your own compliance efforts.
Reduced time and expense for both internal and third-party auditors, who can now demonstrate compliance with several privacy objectives within a single audit cycle.
Greater clarity on privacy-related roles and responsibilities, which can facilitate efforts to comply with privacy regulations such as the GDPR.

Our commitment to customers

Certifications provide independent validation of our ongoing commitment to world-class security and privacy, while also helping customers with their own compliance efforts. You can find more information on Google Cloud’s compliance efforts and our commitment to privacy in our compliance resource center.
Source: Google Cloud Platform

Dataproc Metastore: Fully managed Hive metastore now available for alpha testing

Google Cloud is announcing a new data lake building block for our smart analytics platform: Dataproc Metastore, a fully managed, highly available, auto-healing, open source Apache Hive metastore service that simplifies technical metadata management for customers building data lakes on Google Cloud. With Dataproc Metastore, you now have a completely serverless option for several use cases:

A centralized metadata repository that can be shared among various ephemeral Dataproc clusters running different open source engines, such as Apache Spark, Apache Hive, and Presto.
A metadata bridge between open source tables and code-free ETL/ELT with Data Fusion.
A unified view of your open source tables across Google Cloud, providing interoperability between cloud-native services like Dataproc and various other open source-based partner offerings on Google Cloud.

To get started with Dataproc Metastore today, join our alpha program by reaching out by email: join-dataproc-metastore-alpha@google.com.

Why Hive Metastore?

A core benefit of Dataproc is that it lets you create a fully configured, autoscaling Hadoop and Spark cluster in around 90 seconds. This rapid creation and flexible compute platform make it possible to treat cluster creation and job processing as a single entity. When the job completes, the cluster can terminate and you pay only for the Dataproc resources required to run your jobs. However, information about tables—the metadata—that was created during those jobs is not always something that you want to be thrown out with the cluster. You often want to keep that table information between jobs or make the metadata available to other clusters and other processing engines. If you use open source technologies in your data lakes, you likely already use the Hive Metastore as the trusted metastore for big data processing. Hive Metastore has achieved standardization as the mechanism that open source data systems use to share data structures.
The diagram below demonstrates just some of the ecosystem that is already built around Hive Metastore’s capabilities.

However, this same Hive Metastore can be a friction point for customers who need to run their data lakes on Google Cloud. Today, Dataproc customers will often use Cloud SQL to persist Hive metadata off-cluster. But we’ve heard about some challenges with this approach:

You must self-manage and troubleshoot the RDBMS Cloud SQL instance.
Hive servers are managed independently of the RDBMS, which can create both scalability issues for incoming connections and locking issues in the database.
The Cloud SQL instance is a single point of failure that requires a maintenance window with downtime, making it impossible to use with data lakes that need always-on processing.
This architecture requires that direct JDBC access be provided to each cluster, which can introduce security risks when used with sensitive data.

In order to trust that the Hive Metastore can serve in the critical path for all your data processing jobs, your other option is to move beyond the Cloud SQL workaround and spend significant time architecting a highly available IaaS layer that includes load balancing, autoscaling, installations and updates, testing, and backups. However, the Dataproc Metastore abstracts all of this toil and provides these capabilities as features of a managed service.

Enterprise customers have told us they want a managed Hive Metastore that they can rely on for running business-critical data workloads in Google Cloud data lakes. In addition, customers have expressed a desire for the full, open source-based Hive metastore catalog that maintains their integration points with numerous applications, can provide table statistics for query optimization, and supports Kerberos authentication so that existing security models based on tools like Apache Ranger and Apache Atlas continue to function.
We also hear that customers want to avoid a new client library that would require a rewrite of existing software, or a “compatible” API that offers only limited Hive metastore functionality. Enterprise customers want to use the full features of the open source Hive metastore. The Dataproc Metastore team has accepted this challenge, and now provides a fully serverless Hive metastore service.

The Dataproc Metastore complements Google Cloud Data Catalog, a fully managed and highly scalable data discovery and metadata management service. Data Catalog empowers organizations to quickly discover, understand, and manage all their data with simple, easy-to-use search interfaces, while the Dataproc Metastore offers technical metadata interoperability among open source big data processing engines.

Common use cases for Dataproc Metastore

Flexible analysis of your data lake with a centralized metadata repository

When German wholesale giant METRO moved their ecommerce data lake to Google Cloud, they were able to match daily events to compute processing and reduce infrastructure costs by 30% to 50%. The key to these types of gains when it comes to data lakes is severing the ties between storage and compute. By disconnecting the storage layer from compute clusters, your data lake gains flexibility. Not only can clusters come up and down as needed, but cluster specifications like vCPUs, GPUs, and RAM can be tailored to the specific needs of the jobs at hand. Dataproc already offers several features that help you achieve this flexibility:

Cloud Storage Connector lets you take data off your cluster by providing Cloud Storage as a Hadoop Compatible File System (HCFS). Jobs based on data in the Hadoop Distributed File System (HDFS) can typically be converted to Cloud Storage with a simple file prefix change (more on HDFS vs. Cloud Storage here).
Workflow Templates provides an easy-to-use mechanism for managing and executing workflows. You can specify a set of jobs to run on a managed cluster that gets created on demand and deleted when the jobs are finished.
Dataproc Hub makes it easy to give data scientists, analysts, and engineers preconfigured Spark working environments in JupyterLab that automatically spawn and destroy Dataproc clusters without an administrator.

Now, with Dataproc Metastore, achieving flexible clusters is even easier for those clusters that want to share tables and schemas. Clusters of various shapes, sizes, and processing engines can safely and efficiently share the same tables and metadata simply by pointing a Dataproc cluster to a serverless Dataproc Metastore endpoint.

Serverless and code-free ETL/ELT with Dataproc Metastore and Data Fusion

We’ve heard from customers that they’re able to use real-time data to improve customer service, network optimization, and more to save time and reach customers effectively. Companies building data pipelines can use Data Fusion, our fully managed, code-free, cloud-native data integration service that lets you easily ingest and integrate data from various sources. Data Fusion is built with an open source core (CDAP), which offers a Hive source plugin. With this plugin, data scientists and other users of the data lake can share the structured results of their analysis using Dataproc Metastore, offering a shared repository that ETL/ELT developers can use to manage and productionize pipelines in the data lake.

Below is one example of a workflow using Dataproc Metastore with Data Fusion to manage data pipelines, so you can go from unstructured raw data to a structured data warehouse without having to worry about running servers:

Data scientists, data analysts, and data engineers log in to Dataproc Hub, which they use to spawn a personalized Dataproc cluster running a JupyterLab interface backed by Apache Spark processing.
Unstructured raw data on Cloud Storage is analyzed, interpreted, and structured. Metadata about how to interpret Cloud Storage objects as structured tables is stored in Dataproc Metastore, allowing the personalized Dataproc cluster to be terminated without losing the metadata.
Data Fusion’s Hive connector uses the table created in the notebook as a data source via the thrift URL provided by Dataproc Metastore.
Data Fusion reads the Cloud Storage data according to the structure provided by Dataproc Metastore. The data is harmonized with other data sources into a data warehouse table.
The refined data table is written to BigQuery, Google Cloud’s serverless data warehouse.
BigQuery tables are made available to Apache Spark on Jupyter notebooks for further data lake queries and analysis with the Apache Spark BigQuery Connector.

Partner ecosystem accelerates Dataproc Metastore deployments across multi-cloud and hybrid data lakes

At Google, we believe in an open cloud, and Dataproc Metastore is built with our leading open source-centric partners in mind. Because Dataproc Metastore provides compatibility with open source Apache Hive Metastore, you can integrate Google Cloud partner services into your hybrid data lake architectures without having to give up metadata interoperability. Google Cloud-native services and open source applications can work in tandem.

Collibra provides hybrid data lake visibility with Dataproc Metastore

Integrating Dataproc Metastore with Collibra Data Catalog provides enterprises with enterprise-wide visibility across on-prem and cloud data lakes. Since Dataproc Metastore was built on top of Hive metastore, Collibra could quickly integrate into the solution without having to worry about proprietary data formats or APIs.
“Dataproc Metastore provides a fully managed Hive metastore, and Collibra layers on data set discovery and governance, which is critical for any business looking to meet the strictest internal and external compliance standards,” says Chandra Papudesu, VP of product management, Catalog and Lineage, at Collibra.

Qubole provides a single view of metadata across data lakes

Qubole’s open data lake platform provides end-to-end data lake services, such as continuous data engineering, financial governance, analytics, and machine learning with near-zero administration on any cloud. As enterprises continue to execute a multi-cloud strategy with Qubole, it’s critical to have one centralized view of your metadata for data discovery and governance.

“Qubole’s co-founders led the Apache Hive project, which has spawned many impactful projects and contributors globally,” said Anita Thomas, director of product management at Qubole. “Qubole’s platform has used a Hive metastore since its inception, and now with Google’s launch of an open metastore service, our joint customers have multiple options to deploy a fully managed, central metadata catalog for their machine learning, ad-hoc, or streaming analytics applications.”

Pricing

During the alpha phase, you will not be charged for testing this service. However, under NDA, you can be provided a tentative price list to evaluate the value of Dataproc Metastore against the proposed fees. Sign up for the alpha testing program for Dataproc Metastore now.
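To make the two recurring migration moves in this article concrete — converting HDFS paths to Cloud Storage with a simple prefix change, and pointing a processing engine at a shared Hive metastore thrift endpoint — here is a minimal sketch. The helper names, the bucket, and the host are hypothetical; `hive.metastore.uris` and the thrift:// scheme are standard Hive settings, and 9083 is the conventional Hive metastore port.

```python
from urllib.parse import urlparse

def hdfs_to_gcs(path: str, bucket: str) -> str:
    """Rewrite an hdfs:// path to a gs:// path (the 'simple file prefix change').

    Hypothetical helper for illustration: keeps the object path and swaps
    the scheme and cluster authority for a Cloud Storage bucket.
    """
    parsed = urlparse(path)
    if parsed.scheme != "hdfs":
        raise ValueError(f"expected an hdfs:// path, got {path!r}")
    return f"gs://{bucket}{parsed.path}"

def metastore_spark_conf(metastore_host: str, port: int = 9083) -> dict:
    """Spark properties for pointing a cluster at a shared Hive metastore.

    The host is a placeholder for a Dataproc Metastore endpoint; in practice
    you would pass these properties at cluster or session creation time.
    """
    return {
        "spark.sql.catalogImplementation": "hive",
        "hive.metastore.uris": f"thrift://{metastore_host}:{port}",
    }

print(hdfs_to_gcs("hdfs://my-cluster/data/events/part-0000.parquet", "my-data-lake"))
print(metastore_spark_conf("10.128.0.42"))
```

Because every cluster built with the same properties resolves the same thrift endpoint, ephemeral clusters can come and go while the table definitions persist in the metastore.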
Source: Google Cloud Platform

Google Cloud VMware Engine is now generally available

Let’s face it: bringing workloads to the public cloud isn’t always easy. And if you want to take full advantage of the elasticity, economics, and innovation of the cloud, you usually have to write a new application. But that isn’t always an option, especially for existing applications, which may be from a third party or written years ago. Compounding the challenge of rewriting those applications for the cloud is how you manage the application after you rebuild it—how you protect it from failures, monitor it, secure it, and so on. For many existing applications, this is done on a platform such as VMware®. So, the question becomes: how can these critical applications take advantage of the cloud when you don’t have a clear path to rearchitecting them outright?

Google Cloud VMware Engine now generally available

Today, we’re happy to announce that Google Cloud VMware Engine is generally available, enabling you to seamlessly migrate your existing VMware-based applications to Google Cloud without refactoring or rewriting them. You can run the service in the us-east4 (Ashburn, Northern Virginia) and us-west2 (Los Angeles, California) regions, and we will expand into other Google Cloud regions around the world in the second half of the year.

Google Cloud VMware Engine provides everything you need to run your VMware environment natively in Google Cloud. The service delivers a fully managed VMware Cloud Foundation hybrid cloud platform, including the VMware technologies vSphere, vCenter, vSAN, NSX-T, and HCX, in a dedicated environment on Google Cloud’s high-performance, reliable infrastructure, to support your enterprise production workloads.

With this service, you can extend or bring your on-premises workloads to Google Cloud in minutes—and without changes—by connecting to a dedicated VMware environment.
Google Cloud VMware Engine is a first-party offering, fully owned, operated, and supported by Google Cloud, that lets you seamlessly migrate to the cloud without the cost or complexity of refactoring applications, and manage workloads consistently with your on-prem environment. You reduce your operational burden by moving to an on-demand, self-service model, maintain continuity with your existing tools, processes, and skill sets, and can take advantage of Google Cloud services to supercharge your VMware environment.

Google Cloud VMware Engine is a unique solution for running VMware environments in the cloud, with four areas that provide a differentiated experience: a) user experience, b) enterprise-grade infrastructure, c) integrated networking, and d) a rich services ecosystem. Let’s take a closer look.

A simple user experience

Launching a fully functional instance of Google Cloud VMware Engine is easy—all it takes is four clicks from the Google Cloud Console. Within a few minutes, you get a new environment, ready to consume. Compare that to the days and weeks it takes to stand up a new on-prem data center: designing it, ordering hardware and software, racking, stacking, cabling, and configuring infrastructure. Not only that, but once the environment is live, you can expand or shrink it at the click of a button.

To further simplify the experience, you can provision VMware environments using your existing Google Cloud identities. You also receive integrated support from Google Cloud—a one-stop shop for all support issues, whether in VMware or the rest of Google Cloud. The service is fully VMware certified and verified, and VMware’s support is fully integrated with Google Cloud support for a seamless experience. Consumption associated with the service is available in the standard billing views in the Google Cloud Console.
And when you need to use native VMware tools, simply log in to the familiar vCenter interface and manage and monitor your VMware environment as you normally would.

Dedicated, enterprise-grade infrastructure

Google Cloud VMware Engine is built on high-performance, reliable, high-capacity infrastructure, giving you a fast and highly available VMware experience at a low cost. The environment includes:

Fully redundant and dedicated 100 Gbps networking, providing 99.99% availability, low latency, and high throughput to meet the needs of your most demanding enterprise workloads.
Hyperconverged storage via the VMware vSAN stack on high-end, all-flash NVMe devices. This enables blazing-fast performance with the scale, availability, reliability, and redundancy of a distributed storage system.
Recent-generation CPUs (2nd Generation Intel Xeon Scalable Processors), delivering very high (2.6 GHz base, 3.9 GHz burst) compute performance for your workloads, with 768 GB of RAM and 19.2 TB of raw data capacity per node. Since VMware allows compute over-provisioning, many workloads in existing environments are often memory- or storage-constrained. The larger memory and storage capacity in Google Cloud VMware Engine nodes enables more workload VMs to be deployed per node, lowering your overall cost.

The compute and storage infrastructure is single tenant—not shared by any other customer. The networking bandwidth to other hosts in a VMware vSphere cluster is also dedicated. This means that you get not only the privacy and security of a dedicated environment, but also highly predictable levels of performance.

Integrated cloud networking

VMware environments in Google Cloud VMware Engine are configured directly on VPC subnets. This means you can use standard mechanisms such as Cloud Interconnect and Cloud VPN to connect to the service, as you would to any other service in Google Cloud.
This eliminates the need to establish additional, expensive, bandwidth-limited connectivity. You also get direct, private, layer 3 networking access to workloads and services running on Google Cloud. You can connect between workloads in VMware and other services in Google Cloud with high-speed, low-latency connections, using private addresses. This provides faster access and higher levels of security for a wide variety of use cases such as hybrid applications, backup, and centralized performance management. By eliminating a lot of networking complexity, you get a seamless, secure experience that is integrated with Google Cloud.

A rich services ecosystem

In addition to its native capabilities, VMware users value the platform for its rich third-party ecosystem for disaster recovery, backup, monitoring, security—or any other imaginable IT need. Since the service provides a native VMware platform, you can continue to use those tools, with no changes.

In Google Cloud VMware Engine, we have built unique capabilities to enable ecosystem tools. By elevating system privileges, you can install and configure third-party tools as you would on-prem. Third parties such as Zerto are taking advantage of this integration for mission-critical use cases such as disaster recovery.

You can also benefit from native Google Cloud services and our ecosystem partners alongside your VMware-based applications. For instance, you can use Cloud Storage with a third-party data protection tool offered by companies such as Veeam, Dell, Cohesity, and Actifio to get a variety of availability and cost options for your backups. You can run third-party KMS tools externally and independently in your Compute Engine VMs to encrypt at-rest storage, making your environment even more secure. And then there are the native Google Cloud services.
With your VMware-based databases and applications running inside Google Cloud VMware Engine, you can now manage them alongside your cloud-native workloads with our Operations family of products (formerly Stackdriver). You can interoperate VMware workloads with services such as Google Kubernetes Engine and Cloud Functions. You can use third-party solutions such as NetApp Cloud Volumes for extended VMware storage needs. And you can take advantage of the privacy and performance of Google Cloud VMware Engine to run cloud-native workloads directly next to your VMware workloads, with the help of Anthos deployed directly inside the service. Or supercharge analytics of your VMware data sources with BigQuery, and make them more intelligent with AI and machine learning services.

Moving to the cloud doesn’t have to be hard. By migrating your VMware platform to Google Cloud, you can keep what you like about your on-prem application environment and tap into next-generation hardware and application services. To learn more about Google Cloud VMware Engine, check out our Getting Started guide, and be sure to watch our upcoming Google Cloud Next ’20: OnAir session, Introducing Google Cloud VMware Engine, during the week of July 27.
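To make the node-capacity point above concrete, here is a back-of-the-envelope sketch. The per-node figures (768 GB of RAM, 19.2 TB of raw storage) come from the specs above; the VM size and over-commit ratio are purely illustrative assumptions, and the estimate ignores vSphere/vSAN overhead.

```python
NODE_RAM_GB = 768           # per-node RAM, from the specs above
NODE_RAW_STORAGE_TB = 19.2  # per-node raw vSAN capacity, from the specs above

def vms_per_node(vm_ram_gb: float, ram_overcommit: float = 1.0) -> int:
    """Rough upper bound on how many VMs of a given size fit on one node.

    ram_overcommit > 1.0 models VMware's memory over-provisioning;
    real packing also depends on CPU, storage, and hypervisor overhead.
    """
    return int(NODE_RAM_GB * ram_overcommit // vm_ram_gb)

# e.g., hypothetical 16 GB VMs, with and without a modest 1.5x over-commit
print(vms_per_node(16))                      # 48
print(vms_per_node(16, ram_overcommit=1.5))  # 72
```

The larger RAM footprint per node is what lets memory-constrained environments pack more VMs per host, which is the cost argument the paragraph above makes.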
Source: Google Cloud Platform

New IT Cost Assessment program: Unlock value to reinvest for growth

If you’re in IT, chances are you’re under pressure to prioritize investments and optimize costs in response to the current economic climate. According to a recent survey of our customers [1], that situation describes 84% of IT decision makers. Likewise, Forrester Research has said CIOs could face a minimum of 5% budget cuts in 2020 [2], and IDC is forecasting a 5.1% decline in worldwide IT spending [3]. These are sobering numbers.

Here at Google Cloud, we understand the need for clear, actionable ways to optimize your IT costs—and the flexibility to dynamically adjust your IT spend toward the most critical areas. To help, we developed a new IT Cost Assessment program that lets you understand how your company’s IT spend compares to your industry peers’, so you can quickly identify key areas of opportunity to unlock value to reinvest for growth. Google Cloud has a proven, structured approach to validating these IT cost reduction opportunities. Every business is unique, but knowing where you stand relative to your industry peers is an invaluable piece of insight when strategizing how to survive in this new economic reality.

The first thing we do with our IT Cost Assessment is analyze your individual IT spend and compare it to industry benchmark data derived from our extensive experience working with clients and trusted third-party research firms, providing you a view of cost optimization opportunities. Then, in a second phase, we propose the Google Cloud solutions best aligned to helping you reap the benefits of IT cost reductions, reduce physical infrastructure complexity, leverage a hybrid-cloud strategy, and enhance security, compliance, and flexibility. In addition, our differentiated capabilities across AI/ML and big data can help you identify opportunities to optimize processes and drive additional operational efficiencies.
Once you have this baseline of your performance, we deliver a detailed TCO analysis, ROI projections, and an implementation plan, with Google Cloud solutions that will help you migrate and modernize your legacy environment and deliver a positive impact to your bottom line.

We have partnered with leading enterprise companies in the manufacturing, financial services, healthcare and life sciences, and insurance sectors, among others, and delivered cost savings across their IT environments. In the aforementioned customer survey, three out of four respondents reported savings of up to 30% in the first six months of becoming a Google Cloud customer. And presented with the statement, “Google Cloud helped me increase our operational efficiency and optimize IT spend,” nine in ten agreed.

Click here to learn more about the IT Cost Assessment program and to request an engagement. We look forward to helping you navigate—and thrive—through these challenging times.

[1] TechValidate survey of 122 Google Cloud customers.
[2] Where To Adjust Tech Budgets In The Pandemic Recession, Forrester, May 19, 2020.
[3] International Data Corp., https://www.idc.com/getdoc.jsp?containerId=prUS46268520
Source: Google Cloud Platform

New Azure Firewall features in Q2 CY2020

We are pleased to announce several new Azure Firewall features that allow your organization to improve security, customize deployments, and manage rules more easily. These new capabilities were added based on your top feedback:

Custom DNS support now in preview.
DNS Proxy support now in preview.
FQDN filtering in network rules now in preview.
IP Groups now generally available.
AKS FQDN tag now generally available.
Azure Firewall is now HIPAA compliant. 

In addition, in early June 2020, we announced Azure Firewall forced tunneling and SQL FQDN filtering are now generally available.

Azure Firewall is a cloud-native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network-level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

Custom DNS support now in preview

Since its launch in September 2018, Azure Firewall has been hardcoded to use Azure DNS to ensure the service can reliably resolve its outbound dependencies. Custom DNS support provides separation between customer and service name resolution: you can configure Azure Firewall to use your own DNS server, while the firewall's outbound dependencies are still resolved with Azure DNS. You may configure a single DNS server or multiple servers in the Azure Firewall and Firewall Policy DNS settings.
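The separation described above can be sketched as simple routing logic: queries for the firewall's own service dependencies always go to Azure DNS, while customer workload queries go to the configured custom servers. The dependency suffix and server addresses below are hypothetical illustrations (Azure's well-known DNS IP, 168.63.129.16, is real):

```python
# Hypothetical sketch of the name-resolution separation: the firewall's
# own service dependencies always use Azure DNS, while customer queries
# use the custom DNS servers configured on the firewall (if any).

AZURE_DNS = "168.63.129.16"  # Azure's well-known virtual DNS IP

# Illustrative placeholder; real service dependencies are internal to Azure.
SERVICE_DEPENDENCY_SUFFIXES = (".core.windows.net",)

def pick_resolver(query_name, custom_dns_servers):
    """Return the DNS server list a query should be sent to."""
    if query_name.endswith(SERVICE_DEPENDENCY_SUFFIXES):
        return [AZURE_DNS]                      # service name resolution
    return custom_dns_servers or [AZURE_DNS]    # customer name resolution

servers = pick_resolver("intranet.contoso.local", ["10.0.0.4", "10.0.0.5"])
```

The takeaway is that configuring custom DNS changes only the customer path; the service path stays pinned to Azure DNS so the firewall itself keeps working.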

Azure Firewall is also capable of name resolution using Azure Private DNS, as long as your private DNS zone is linked to the firewall virtual network.

DNS Proxy now in preview

With DNS proxy enabled, outbound DNS queries are processed by Azure Firewall, which initiates a new DNS resolution query to your custom DNS server or Azure DNS. This is crucial for reliable FQDN filtering in network rules. You may configure DNS proxy in the Azure Firewall and Firewall Policy DNS settings.

DNS proxy configuration requires three steps:

Enable DNS proxy in Azure Firewall DNS settings.
Optionally configure your custom DNS server or use the provided default.
Finally, configure the Azure Firewall's private IP address as a custom DNS server in your virtual network DNS server settings, so that DNS traffic is directed to Azure Firewall.
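The three steps above produce a query path in which workloads send DNS queries to the firewall, and the firewall forwards them upstream, so it sees the same FQDN-to-IP answers its clients do. A minimal sketch of that flow, with hypothetical addresses and records:

```python
# Hypothetical sketch of the DNS proxy flow after the steps above: the
# vnet's DNS server setting points at the firewall's private IP, so every
# workload query reaches the firewall, which forwards it upstream and can
# observe the FQDN-to-IP answers it returns to clients.

FIREWALL_PRIVATE_IP = "10.0.1.4"   # step 3: vnet DNS server setting
UPSTREAM_DNS = ["10.0.0.4"]        # step 2: custom DNS server (optional)

# Illustrative upstream zone data standing in for a real DNS server.
UPSTREAM_RECORDS = {"app.contoso.com": "10.3.0.9"}

def firewall_dns_proxy(query_name, observed_cache):
    """Forward the query upstream and record the answer; observing the
    answers keeps FQDN filtering consistent with what clients resolve."""
    answer = UPSTREAM_RECORDS.get(query_name)
    if answer is not None:
        observed_cache[query_name] = answer
    return answer

cache = {}
ip = firewall_dns_proxy("app.contoso.com", cache)
```

This is why DNS proxy matters for FQDN filtering in network rules: without it, clients and the firewall may resolve the same name to different addresses.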

 
Figure 1. Custom DNS and DNS Proxy settings on Azure Firewall.

FQDN filtering in network rules now in preview

You can now use fully qualified domain names (FQDNs) in network rules based on DNS resolution in Azure Firewall and Firewall Policy. The FQDNs specified in your rule collections are translated to IP addresses based on your firewall DNS settings, which allows you to filter outbound traffic using FQDNs with any TCP/UDP protocol (including NTP, SSH, RDP, and more). Because this capability is based on DNS resolution, it is highly recommended that you enable DNS proxy so that name resolution stays consistent between your protected virtual machines and the firewall.

FQDN filtering in application rules for HTTP/S and MSSQL is based on an application-level transparent proxy. As such, it can discern between two FQDNs that resolve to the same IP address. This is not the case with FQDN filtering in network rules, so it is recommended that you use application rules whenever possible.
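The network-rule behavior described above can be sketched as a resolve-then-match check, which also shows why two FQDNs sharing an IP address are indistinguishable at the network layer. The hostnames, addresses, and DNS table are hypothetical illustrations:

```python
# Hypothetical sketch of FQDN filtering in a network rule: each FQDN is
# translated to IP addresses via DNS, and the packet's destination IP is
# matched against that set. Any TCP/UDP protocol can be filtered this way.

# Illustrative DNS table standing in for the firewall's DNS resolution.
DNS_TABLE = {
    "time.contoso.com": {"203.0.113.10"},
    "allowed.example.com": {"198.51.100.7"},
    "blocked.example.com": {"198.51.100.7"},  # same IP as the allowed FQDN
}

def network_rule_allows(rule_fqdns, dest_ip):
    """Allow the packet if its destination IP matches any IP that one of
    the rule's FQDNs currently resolves to."""
    resolved = set()
    for fqdn in rule_fqdns:
        resolved |= DNS_TABLE.get(fqdn, set())
    return dest_ip in resolved

# Matching is purely IP-based: because blocked.example.com shares an IP
# with allowed.example.com, traffic to it is allowed too.
print(network_rule_allows(["allowed.example.com"], "198.51.100.7"))  # True
```

An application rule, by contrast, inspects the requested hostname (for example, via SNI or the HTTP Host header) and can distinguish the two names, which is why application rules are preferred when the protocol allows them.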

 
Figure 2. FQDN filtering in network rules.

IP Groups now generally available

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can name an IP group and create one by entering IP addresses or uploading a file. IP Groups ease management and reduce the time spent handling IP addresses by letting you reuse a group in a single firewall or across multiple firewalls. IP Groups is now generally available and supported within a standalone Azure Firewall configuration or as part of Azure Firewall Policy. For more information, see the IP Groups in Azure Firewall documentation.
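The reuse pattern above can be sketched with Python's standard `ipaddress` module: define a named group of addresses once and reference it from multiple rules. The group contents and rule names are hypothetical illustrations, not the Azure resource model:

```python
import ipaddress

# Hypothetical sketch of the IP Groups idea: a named set of addresses and
# ranges, defined once and referenced by multiple firewall rules.

class IPGroup:
    def __init__(self, name, cidrs):
        self.name = name
        self.networks = [ipaddress.ip_network(c) for c in cidrs]

    def contains(self, ip):
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in self.networks)

# Define the group once...
branch_offices = IPGroup("branch-offices", ["10.1.0.0/16", "10.2.0.0/16"])

# ...and reference it from several rules instead of repeating addresses.
rules = [
    {"name": "allow-dns", "source": branch_offices, "port": 53},
    {"name": "allow-https", "source": branch_offices, "port": 443},
]

match = branch_offices.contains("10.1.4.7")  # True: inside 10.1.0.0/16
```

Updating the group in one place updates every rule that references it, which is the management win the feature provides.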

Figure 3. Creating a new IP Group.

AKS FQDN tag now generally available

An Azure Kubernetes Service (AKS) FQDN tag can now be used in Azure Firewall application rules to simplify your firewall configuration for AKS protection. AKS offers a managed Kubernetes cluster on Azure, reducing the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure.

For management and operational purposes, nodes in an AKS cluster need to access certain ports and FQDNs. For more guidance on how to add protection for Azure Kubernetes cluster using Azure Firewall, see Use Azure Firewall to protect Azure Kubernetes Service (AKS) Deployments. 

Figure 4. Configuring an application rule with the AKS FQDN tag.

Next steps

For more information on everything we covered here, see these additional resources:

Azure Firewall documentation.
Azure Firewall Forced Tunneling and SQL FQDN filtering now generally available.
Azure Firewall IP Groups.
Azure Firewall Custom DNS, DNS Proxy (preview).
Azure Firewall FQDN filtering in network rules (preview).
Use Azure Firewall to protect Azure Kubernetes Service (AKS) Deployments. 

Source: Azure

Azure Cost Management + Billing updates – June 2020

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management + Billing can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

More flexibility for budget notifications.
Subscribe to active cost recommendations with Advisor digests.
Automate subscription creation in Azure Government.
Subscription ownership transfer improvements.
New ways to save money with Azure.
New videos and learning opportunities.
Documentation updates.

Let's dig into the details.

 

More flexibility for budget notifications

You already know Azure Cost Management budgets keep you informed as your costs increase over time. We're introducing two changes to make it easier than ever to tune your budgets to suit your specific needs.

You can now specify a custom start month for your budget, allowing you to create a budget that starts in the future. This lets you plan ahead and pre-configure budgets to account for seasonal changes in usage patterns or to prepare for the upcoming fiscal year, to name a couple of examples.

You can also add alert thresholds above 100 percent for even greater awareness of how far over budget you are. Not only can you send a separate email to a broader audience when you've hit, say, 110 percent of your budget, you can also trigger more critical actions if costs continue to rise above 100 percent. This can be especially useful for organizations tracking internal margins.
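The threshold behavior above can be sketched as a simple evaluation: given a budget and current spend, determine which configured thresholds, including those above 100 percent, have been crossed. The budget amount and threshold values are hypothetical illustrations:

```python
# Hypothetical sketch of budget alert evaluation: report every configured
# threshold (as a percentage of budget) that current spend has crossed,
# including thresholds above 100 percent.

def crossed_thresholds(budget, spend, thresholds_pct):
    """Return the thresholds (in percent) that spend has reached or passed."""
    used_pct = spend / budget * 100
    return [t for t in sorted(thresholds_pct) if used_pct >= t]

# Illustrative budget of 10,000 with alerts at 80%, 100%, and 110%.
alerts = crossed_thresholds(10_000, 11_500, [80, 100, 110])  # [80, 100, 110]
```

Each crossed threshold can then drive a different action, such as a wider email audience at 110 percent or an automated response via an action group.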

We hope these changes help you plan ahead and respond to overages more effectively. How will you use start dates and alert thresholds to better monitor and optimize costs?

 

Subscribe to active cost recommendations with Advisor digests

Running a truly optimized environment requires diligence. As your environment grows and usage patterns change, it's critical to stay on top of new opportunities to optimize costs. This is where Azure Advisor recommendation digests come in.

Recommendation digests provide an easy, proactive way to stay on top of your active recommendations. You can receive periodic notifications via email, SMS, or other channels by using action groups. Each digest includes a summary of your active recommendations and complements Advisor alerts to give you a more complete picture of your cost optimization opportunities.

Advisor alerts notify you about new recommendations as they become available, while recommendation digests summarize all available recommendations that you haven’t yet acted on. Together, Advisor recommendation digests and alerts help you stay current with Azure best practices.

Learn more about Advisor recommendation digests.

 

Automate subscription creation in Azure Government

Managing subscriptions efficiently at scale requires automation. Now, organizations with Azure Government accounts can automate the creation of subscriptions with the Microsoft.Subscription/createSubscription API. This expands on previous subscription management capabilities and brings API parity between Azure Global and Azure Government. What would you like to see next?

 

Subscription ownership transfer improvements

Whether you're restructuring your environment or simply expanding your scope, you may run into situations where you need to transfer ownership of your Azure subscriptions to another person or organization. And now, you can do that directly from within the Azure portal for even more subscription types. In addition to existing support for Pay-As-You-Go (PAYG) subscriptions, you can now transfer any of the following subscription types to a new owner from the Azure portal:

Microsoft Customer Agreement.
Visual Studio Enterprise.
Microsoft Partner Network (MPN).
Microsoft Azure Sponsorship.

The portal will also explain why certain subscriptions cannot be transferred, and surface any potential issues and reservation warnings, helping you transfer with ease and avoid unintended consequences.

Learn more about subscription ownership transfers and let us know how we can improve your ownership transfer experience.

 

New ways to save money with Azure

We're always looking for ways to help you optimize costs. Here's what's new this month:

Save up to 70 percent on spiky and unpredictable workloads with Cosmos DB autoscale.
Save up to 61 percent on Azure Spring Cloud with the new Basic tier.
Azure Dedicated Hosts supports additional virtual machine sizes offering more opportunities to save.
Azure SQL database serverless auto-scaling limits increased from 16 to 40 vCores.
Azure DevTest Labs environments are now available in Azure Government.
Azure DevTest Labs is now available in Switzerland regions.

 

New videos and learning opportunities

For those visual learners out there, here's one new video you might be interested in:

Evaluate and optimize your costs using the Microsoft Azure Well-Architected Framework (29 minutes).

Follow the Azure Cost Management + Billing YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

Want a more guided experience? Start with Control Azure spending and manage bills with Azure Cost Management + Billing.

 

Documentation updates

Here are a couple documentation updates you might be interested in:

Moved group and filter options into their own document (with a video).
Updated the analyze and manage costs section of the Cost Management best practices.
Added note about using the Monitoring Reader role to analyze resource usage for RBAC scopes.
Clarified what subscriptions are supported by Cost Management within management groups.
Documented Invoices API support for Enterprise Agreement billing accounts.
Documented more Azure Advisor cost optimization recommendations.

Want to keep an eye on all of the documentation updates? Check out the Cost Management + Billing doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

 

What's next?

These are just a few of the big updates from last month. Don't forget to check out the previous Azure Cost Management + Billing updates. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

We know these are trying times for everyone. Best wishes from the Azure Cost Management team. Stay safe and stay healthy!
Source: Azure