Hierarchical Firewall Policy Automation with Terraform

Firewall rules are an essential component of network security in Google Cloud. Firewalls in Google Cloud fall broadly into two types: Network Firewall Policies and Hierarchical Firewall Policies. While network firewall rules are associated directly with a VPC to allow or deny traffic, hierarchical firewall policies act as a policy engine that uses the Resource Hierarchy to create and enforce policies across the organization. Hierarchical policies can be enforced at the organization level or at the folder level. Like network firewall rules, hierarchical firewall policy rules can allow or deny traffic, and they can also delegate evaluation to lower-level policies or to the network firewall rules themselves (with a go_next action). Lower-level rules cannot override a rule from a higher place in the resource hierarchy. This lets organization-wide admins manage critical firewall rules in one place.

Now let's consider a few scenarios where hierarchical firewall policies are useful.

1. Reduce the number of network firewall rules

Example: say xyz.com has six Shared VPCs based on its business segments, and it is a security policy to refuse SSH access to any VMs in the company, i.e., deny TCP port 22 traffic. With network firewalls, this rule needs to be enforced in six places (one per Shared VPC). A growing number of granular network firewall rules for each network segment means more touch points, which means more chances of drift and accidents. Security admins get busy with hand-holding and almost always become a bottleneck for even simple firewall changes. With hierarchical firewall policies, security admins can create a single, common policy to deny TCP port 22 traffic and enforce it on the xyz.com organization, or explicitly target one or many Shared VPCs from the policy. This way a single policy can define the broader traffic control posture.

2. Manage critical firewall rules using centralized policies and safely delegate non-critical controls at the VPC level

Example: at xyz.com, SSH to GCE instances is strictly prohibited and non-negotiable; auditors require this. Whether TCP traffic to port 443 is allowed, however, depends on which Shared VPC the traffic is going to. In this case, security admins can create a policy to deny TCP port 22 traffic and enforce it on the xyz.com organization. Another rule is created for TCP port 443 traffic that says go_next, deferring the decision to the next lower level. Then, a network firewall rule allows or denies port 443 traffic at the Shared VPC level. This way the security admin has broad control at a higher level to enforce traffic control policies and delegates where possible. The ability to manage the most critical firewall rules in one place also frees project-level administrators (e.g., project owners, editors, or security admins) from having to keep up with changing organization-wide policies. With hierarchical firewall policies, security admins can centrally enforce, manage, and observe traffic control patterns.

Create, Configure and Enforce Hierarchical Firewall Policies

There are three major components of hierarchical firewall policies: rules, policies, and associations. Broadly speaking, a rule is a decision-making construct that declares whether traffic should be allowed, denied, or delegated to the next level for a decision. A policy is a collection of rules, i.e., one or more rules can be associated with a policy. An association determines the enforcement point of the policy in the Google Cloud resource hierarchy. These concepts are explained extensively on the product page.

Infrastructure as Code (Terraform) for Hierarchical Firewall Policies

There are three Terraform resources that need to be stitched together to build and enforce hierarchical firewall policies.
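The evaluation order described above (higher-level policies first, with go_next delegating downward) can be sketched as a small model. This is illustrative only: the `evaluate` function and the rule tuples are not part of any Google Cloud API; they just mimic the documented semantics.

```python
# Illustrative model of hierarchical firewall rule evaluation.
# Each level holds (priority, matcher, action) rules; action is
# "allow", "deny", or "go_next" (delegate to the next level down).

def evaluate(levels, packet):
    """levels: list of rule lists, ordered org -> folder -> VPC."""
    for rules in levels:
        for priority, matcher, action in sorted(rules, key=lambda r: r[0]):
            if matcher(packet):
                if action in ("allow", "deny"):
                    return action  # a higher level decided; lower levels cannot override
                break              # go_next: delegate to the next level
    return "allow"  # simplified default when nothing matched

# Org-level policy: deny SSH everywhere, delegate HTTPS decisions downward.
org_rules = [
    (9000, lambda p: p["port"] == 22, "deny"),
    (9100, lambda p: p["port"] == 443, "go_next"),
]
# Network firewall rules on one Shared VPC: allow HTTPS.
vpc_rules = [
    (1000, lambda p: p["port"] == 443, "allow"),
]

print(evaluate([org_rules, vpc_rules], {"port": 22}))   # deny (org rule wins)
print(evaluate([org_rules, vpc_rules], {"port": 443}))  # allow (delegated to the VPC)
```

The key property the sketch captures is that a deny at the organization level can never be undone by a VPC-level rule, while go_next explicitly hands the decision down.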
#1 Policy Terraform Resource – google_compute_firewall_policy

The most important parameter in this resource is the parent parameter. Hierarchical firewall policies, like projects, are parented by a folder or organization resource. Remember, this is NOT the folder where the policy is enforced or associated; it is just the folder that owns the policy(s) you are creating. Using a folder to own the hierarchical firewall policies also simplifies the IAM needed to manage who can create or modify these policies, i.e., just assign the IAM roles on this folder. For a scaled environment it is recommended to create a separate "firewall-policy" folder to host all of your hierarchical firewall policies.

Sample

```hcl
/*
  Create a Policy
*/
resource "google_compute_firewall_policy" "base-fw-policy" {
  parent      = "folders/<folder-id>"
  short_name  = "base-fw-policy"
  description = "A Firewall Policy Example"
}
```

You can get the folder ID of the "firewall-policy" folder using the command below:

```shell
gcloud resource-manager folders list --organization=<your organization ID> --filter='<name of the folder>'
```

For example, if your firewall policy folder is called 'firewall-policy', then use:

```shell
gcloud resource-manager folders list --organization=<your organization ID> --filter='firewall-policy'
```

#2 Rules Terraform Resource – google_compute_firewall_policy_rule

Most of the parameters in this resource definition are self-explanatory, but a couple of them need special consideration.

disabled – Denotes whether the firewall policy rule is disabled. When set to true, the firewall policy rule is not enforced and traffic behaves as if it did not exist. If unspecified, the firewall policy rule is enabled.

enable_logging – Enabling firewall logging is highly recommended for many future operational advantages.
To enable it, pass true to this parameter.

target_resources – This parameter comes in handy when you want to target certain Shared VPC(s) with the rule. You need to pass the URI path of the Shared VPC. To get the URI of the VPC, use these commands:

```shell
gcloud config set project <Host Project ID>
gcloud compute networks list --uri
```

Sample

Here is some sample Terraform code to create a firewall policy rule with priority 9000 that denies TCP port 22 traffic from the 35.235.240.0/20 CIDR block (used for Identity-Aware Proxy):

```hcl
/*
  Create a Firewall rule #1
*/
resource "google_compute_firewall_policy_rule" "base-fw-rule-1" {
  firewall_policy = google_compute_firewall_policy.base-fw-policy.id
  description     = "Firewall Rule #1 in base firewall policy"
  priority        = 9000
  enable_logging  = true
  action          = "deny"
  direction       = "INGRESS"
  disabled        = false
  match {
    layer4_configs {
      ip_protocol = "tcp"
      ports       = [22]
    }
    src_ip_ranges = ["35.235.240.0/20"]
  }
  target_resources = ["https://www.googleapis.com/compute/v1/projects/<PROJECT-ID>/global/networks/<VPC-NAME>"]
}
```

#3 Association Terraform Resource – google_compute_firewall_policy_association

In the attachment_target, pass the folder ID where you want to enforce this policy, i.e., everything under this folder (all projects) will get this policy. In the case of Shared VPCs, the target folder should be the parent of your host project.
Sample

```hcl
/*
  Associate the policy
*/
resource "google_compute_firewall_policy_association" "associate-base-fw-policy" {
  firewall_policy   = google_compute_firewall_policy.base-fw-policy.id
  attachment_target = "folders/<Folder ID>"
  name              = "Associate Base Firewall Policy with dummy-folder"
}
```

Once the policy is enforced, you can see it in the console under "VPC Network -> Firewall". The created hierarchical firewall policy will show up in the firewall policy folder. Remember, there are four default firewall rules that come with each policy, so even when you create a single rule in your policy, the rule count will be five. Go into the policy to see the rules you created and the association of the policy.

Summary

Hierarchical firewall policies simplify the complex process of enforcing consistent traffic control policies across your Google Cloud environment. With the Terraform modules and automation shown in this article, they give security admins the ability to build guardrails using a policy engine and a familiar Infrastructure as Code platform. Check out the Hierarchical Firewall Policy documentation to learn more about how to use them.
Source: Google Cloud Platform

Accelerate integrated Salesforce insights with Google Cloud Cortex Framework

Enterprises across the globe rely on a number of strategic independent software vendors like Salesforce, SAP and others to help them run their operations and business processes. Now more than ever, the need to sense and respond to new and changing business demands has increased, and the availability of data from these platforms is integral to business decision making. Many companies today are looking for accelerated ways to link their enterprise data with surrounding data sets and sources to gain more meaningful insights and business outcomes. Getting there faster, given the complexity and scale of managing and tying this data together, can be an expensive and challenging proposition.

To embark on this journey, many companies choose Google's Data Cloud to integrate, accelerate and augment business insights through a cloud-first data platform approach, with BigQuery to power data-driven innovation at scale. Next, they take advantage of best practices and accelerator content delivered with Google Cloud Cortex Framework to establish an open, scalable data foundation that can enable connected insights across a variety of use cases. Today, we are excited to announce the next set of accelerators, which expand Cortex Data Foundation to include new packaged analytics solution templates and content for Salesforce.

New analytics content for Salesforce

Salesforce provides a powerful Customer Relationship Management (CRM) solution that is widely recognized and adopted across many industries and enterprises. With increased focus on engaging customers better and improving insights on relationships, this data is highly valuable and relevant, as it spans many business activities and processes including sales, marketing, and customer service. With Cortex Framework, Salesforce data can now be more easily integrated into a single, scalable data foundation in BigQuery to unlock new insights and value.
With this release, we take the guesswork out of the time, effort, and cost of establishing a Salesforce data foundation in BigQuery. You can deploy Cortex Framework for Salesforce content to kickstart customer-centric data analytics and gain broader insights across key areas including accounts, contacts, leads, opportunities, and cases. Take advantage of the predefined data models for Salesforce along with analytics examples in Looker for immediate customer-relationship-focused insights, or easily join Salesforce data with other delivered data sets, such as Google Trends, Weather, or SAP, to enable richer, connected insights. The choice is yours, and the sky's the limit with the flexibility of Cortex to enable your specific use cases.

By bringing Salesforce data together with other public, community, and private data sources, Google Cloud Cortex Framework helps accelerate your ability to optimize and innovate your business with connected insights.

What's next

This release extends prior content releases for SAP and other data sources to further enhance the value of Cortex Data Foundation across private, public, and community data sources. Google Cloud Cortex Framework continues to expand content to help better meet the needs of customers on data analytics transformation journeys. Stay tuned for more announcements coming soon.

To learn more about Google Cloud Cortex Framework, visit our solution page, and try out Cortex Data Foundation today to discover what's possible.
Source: Google Cloud Platform

CISO Survival Guide: Vital questions to help guide transformation success

Part of being a security leader whose organization is taking on a digital transformation is preparing for hard questions – and complex answers – on how to implement a transformation strategy. In our previous CISO Survival Guide blog, we discussed how financial services organizations can more securely move to the cloud. We examined how to organize and think about the digital transformation challenges facing the highly regulated financial services industry, including the benefits of the Organization, Operation, and Technology (OOT) approach, as well as embracing new processes like continuous delivery and the required cultural shifts.

As part of Google Cloud's commitment to shared fate, today we offer tips on how to ask the right questions that can help create the conversations that lead to better transformation outcomes for your organization. While there is often more than one right answer, a thoughtful, methodical approach to asking targeted questions, and maintaining an open mind about the answers you hear back, can help achieve your desired result. These questions are designed to help you figure out where to start and where to end your organization's security transformation.
By asking the following questions, CISOs and business leaders can develop a constructive, focused dialogue which can help determine the proper balance between implementing security controls and fine-tuning the risk tolerance set by executive management and the board of directors.

To start the conversation, begin by asking:

- What defines our organization's culture?
- How can we best integrate the culture with our security goals?

CISOs should ask business leaders:

- What makes a successful transformation?
- What are the key goals of the transformation?
- What data is (most) valuable?
- What data can be retired, reclassified, or migrated?
- What losses can we afford to take and still function?
- What is the real risk that the organization is willing to accept?

Business leaders should ask CISOs and the security team:

- What are the best practices for protecting our valuable data?
- What is the business impact of implementing those controls?
- What are the top threats that we need to address?

CISOs and business leaders should ask:

- Which threats are no longer as important?
- Where could we potentially use spending for more cost-effective controls such as firewalls and antivirus software?
- What benefits do we get from refactoring our applications?
- Are we really transforming, or lifting and shifting?
- How should we perform identity and access management to meet our business objectives?
- What are the core controls needed to ensure enterprise-level performance for the first workloads?

CISOs and risk teams should ask:

- How can we use the restructuring of an existing body of code to streamline security functions?
- How should we monitor our security posture to ensure we are aligned with our risk appetite?

Business and technical teams should ask:

- What's our backup plan? What do we do if that fails?

Practical advice and the realities of operational transformation

Some organizations have been working in the cloud for more than a decade and have already addressed many operational procedures, sometimes with painful lessons learned along the way. If you've been operating in the cloud securely for that long, we recognize that there's a lot to be gained from understanding your approaches to culture, operational expertise, and technology. However, there are still many organizations that have not thought through how they will operate in a cloud environment until it's almost ready – and at that point, it might be too late. If you can't detail how a cloud environment will operate before its launch, how will you know who should be responsible for maintaining it? Who are the critical stakeholders, along with those responsible for engineering and maintaining specific systems, who should be identified at the start of the transformation? There are likely several groups of stakeholders, such as those aligned with operations for the transformation, and those focused on control design for the cloud.
If you don't have the operators involved in the design phase, you're destined to create clever security controls with very little practical value, because those tasked with day-to-day maintenance most likely won't have the expertise or training to operate these controls effectively. This is complicated by the fact that many organizations struggle to recruit and retain people with the right skills to operate in the cloud. We believe that training current employees to learn new cloud skills, and giving them time away from other responsibilities to do so, can help build skilled, diverse cloud security teams.

If your organization continually experiences high turnover in security leadership and skilled staff, it's up to you to navigate your culture to ensure greater consistency. You can, of course, choose to supplement internal knowledge with trusted partners; however, that's an expensive strategy for ongoing operational cost.

We met recently with a security organization that turns over skilled staff and leadership every two to three years. This rate of churn results in a continual resetting of security goals. This particular team joked that it's like "Groundhog Day" as they constantly re-evaluate their best security approaches yet make no meaningful progress. This is not a model to emulate.

Many security controls fail not because they are improperly engineered, but because the people who use them – your security team – are improperly trained and insufficiently motivated. This is especially true for teams with high turnover rates and other organizational misalignments. A security control that blocks 100% of attacks might be engineered correctly, but if you can't efficiently operate it, the effectiveness of the control will plummet to zero over time.
Worse, it then becomes a liability because you incorrectly assume you have a functioning control.

In our next blog, we will highlight several proven approaches that we believe can help guide your security team through your organization's digital transformation. To learn more now, check out:

- Previous blog
- Podcast: CISO walks into the cloud: Frustrations, successes, lessons… and does the risk change?
- Report: CISO's Guide to Cloud Security Transformation
Source: Google Cloud Platform

Announcing the GA of BigQuery multi-statement transactions

Transactions are mission critical for modern enterprises supporting payments, logistics, and a multitude of business operations. In today's analytics-first, data-driven era, the need for reliable processing of complex transactions extends beyond the traditional OLTP database; businesses also have to trust that their analytics environments process transactional data in an atomic, consistent, isolated, and durable (ACID) manner. So BigQuery set out to support DML statements spanning large numbers of tables in a single transaction, committing the associated changes atomically (all at once) if successful or rolling them back atomically upon failure. Today, we'd like to highlight the recent general availability launch of multi-statement transactions within BigQuery and the new business capabilities it unlocks.

While in preview, BigQuery multi-statement transactions were tremendously effective for customer use cases such as keeping BigQuery synchronized with data stored in OLTP environments, complex post-processing of events pre-ingested into BigQuery, and complying with GDPR's right to be forgotten. One of our customers, PLAID, leverages these multi-statement transactions within their customer experience platform KARTE to analyze the behavior and emotions of website visitors and application users, enabling businesses to deliver relevant communications in real time and further PLAID's mission to Maximize the Value of People with the Power of Data.

"We see multi-statement transactions as a valuable feature for achieving expressive and fast analytics capabilities. For developers, it keeps queries simple and less hassle in error handling, and for users, it always gives reliable results." – Takuya Ogawa, Lead Product Engineer

The general availability of multi-statement transactions not only provides customers with a production-ready means of handling their business-critical transactions comprehensively within a single transaction, but also provides far greater scalability compared to what was offered during the preview. At GA, multi-statement transactions support mutating up to 100,000 table partitions and modifying up to 100 tables per transaction. This 10x scale in the number of table partitions and 2x scale in the number of tables was made possible by a careful redesign of our transaction commit protocol, which optimizes the size of the transactionally committed metadata.

The GA of multi-statement transactions also introduces full compatibility with BigQuery sessions and procedural language scripting. Sessions are useful because they store state and enable the use of temporary tables and variables, which can then be used across multiple queries when combined with multi-statement transactions. Procedural language scripting provides users the ability to run multiple statements in a sequence with shared state and with complex logic using programming constructs such as IF … THEN and WHILE loops.

For instance, let's say we wanted to enhance the current multi-statement transaction example, which uses transactions to atomically manage the existing inventory and supply of new arrivals at a retail company. Since we're a retailer monitoring our current inventory on hand, we would now also like to add functionality to automatically suggest to our sales team which items we should promote with sales offers when our inventory becomes too large.
To do this, it would be useful to include a simple procedural IF statement, which monitors the current inventory and supply of new arrivals and modifies a new PromotionalSales table based on total inventory levels. And let's validate the results ourselves before committing them as one single transaction by using sessions. Let's see how we'd do this via SQL.

First, we'll create our tables using DDL statements:

```sql
CREATE OR REPLACE TABLE my_dataset.Inventory
(product string,
 quantity int64,
 supply_constrained bool);

CREATE OR REPLACE TABLE my_dataset.NewArrivals
(product string,
 quantity int64,
 warehouse string);

CREATE OR REPLACE TABLE my_dataset.PromotionalSales
(product string,
 inventory_on_hand int64,
 excess_inventory int64);
```

Then, we'll insert some values into our Inventory and NewArrivals tables:

```sql
INSERT my_dataset.Inventory (product, quantity)
VALUES('top load washer', 10),
      ('front load washer', 20),
      ('dryer', 30),
      ('refrigerator', 10),
      ('microwave', 20),
      ('dishwasher', 30);

INSERT my_dataset.NewArrivals (product, quantity, warehouse)
VALUES('top load washer', 100, 'warehouse #1'),
      ('dryer', 200, 'warehouse #2'),
      ('oven', 300, 'warehouse #1');
```

Now, we'll use a multi-statement transaction and procedural language scripting to atomically merge our NewArrivals table with the Inventory table while taking excess inventory into account to build out our PromotionalSales table.
We'll also create this within a session, which will allow us to validate the tables ourselves before committing the statement to everyone else.

```sql
DECLARE average_product_quantity FLOAT64;

BEGIN TRANSACTION;

CREATE TEMP TABLE tmp AS SELECT * FROM my_dataset.NewArrivals WHERE warehouse = 'warehouse #1';
DELETE my_dataset.NewArrivals WHERE warehouse = 'warehouse #1';

#Calculates the average of all product inventories.
SET average_product_quantity = (SELECT AVG(quantity) FROM my_dataset.Inventory);

MERGE my_dataset.Inventory I
USING tmp T
ON I.product = T.product
WHEN NOT MATCHED THEN
  INSERT(product, quantity, supply_constrained)
  VALUES(product, quantity, false)
WHEN MATCHED THEN
  UPDATE SET quantity = I.quantity + T.quantity;

#The procedural script below uses a very simple approach to determine excess_inventory,
#based on current inventory exceeding 120% of the average inventory across all products.
IF EXISTS(SELECT * FROM my_dataset.Inventory
          WHERE quantity > (1.2 * average_product_quantity)) THEN
  INSERT my_dataset.PromotionalSales (product, inventory_on_hand, excess_inventory)
  SELECT
    product,
    quantity AS inventory_on_hand,
    quantity - CAST(ROUND((1.2 * average_product_quantity), 0) AS INT64) AS excess_inventory
  FROM my_dataset.Inventory
  WHERE quantity > (1.2 * average_product_quantity);
END IF;

SELECT * FROM my_dataset.NewArrivals;
SELECT * FROM my_dataset.Inventory ORDER BY product;
SELECT * FROM my_dataset.PromotionalSales ORDER BY excess_inventory DESC;
#Note the multi-statement SQL temporarily stops here within the session.
#This runs successfully if you've set your SQL to run within a session.
```

From the results of the SELECT statements, we can see the warehouse #1 arrivals were successfully added to our inventory and the PromotionalSales table correctly reflects our excess inventory. It looks like these transactions are ready to be committed.

However, just in case there were issues with our expected results: if others were to query the tables outside the session we created, the changes wouldn't have taken effect. Thus, we have the ability to validate our results and could roll them back if needed without impacting others.

```sql
#Run in a different tab outside the current session. Results displayed will be
#consistent with the tables before running the multi-statement transaction.
SELECT * FROM my_dataset.NewArrivals;
SELECT * FROM my_dataset.Inventory ORDER BY product;
SELECT * FROM my_dataset.PromotionalSales ORDER BY excess_inventory DESC;
```

Going back to our configured session: since we've validated that our Inventory, NewArrivals, and PromotionalSales tables are correct, we can commit the multi-statement transaction within the session, which will propagate the changes outside the session too.

```sql
#Now commit the transaction within the same session configured earlier.
#Be sure to delete or comment out the rest of the SQL text run earlier.
COMMIT TRANSACTION;
```

And now that the PromotionalSales table has been updated for all users, our sales team has some ideas of what products they should promote due to our excess inventory.

```sql
#Results now propagated for all users.
SELECT * FROM my_dataset.PromotionalSales ORDER BY excess_inventory DESC;
```

As you can tell, multi-statement transactions are simple, scalable, and quite powerful, especially when combined with other BigQuery features. Give them a try yourself and see what's possible.
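As a cross-check on the arithmetic in the IF block above (excess inventory relative to 120% of the pre-merge average), here is a plain-Python sketch that mirrors the SQL logic on the same sample data. It is not BigQuery code; it just reproduces what PromotionalSales should contain.

```python
# Plain-Python cross-check of the transaction's excess-inventory logic.
inventory = {"top load washer": 10, "front load washer": 20, "dryer": 30,
             "refrigerator": 10, "microwave": 20, "dishwasher": 30}
new_arrivals = [("top load washer", 100, "warehouse #1"),
                ("dryer", 200, "warehouse #2"),
                ("oven", 300, "warehouse #1")]

# The average is taken BEFORE merging the warehouse #1 arrivals, as in the script.
avg = sum(inventory.values()) / len(inventory)   # 20.0
threshold = 1.2 * avg                            # 24.0

# MERGE: only warehouse #1 rows are moved into Inventory.
for product, qty, warehouse in new_arrivals:
    if warehouse == "warehouse #1":
        inventory[product] = inventory.get(product, 0) + qty

# excess_inventory = quantity - ROUND(1.2 * avg), for products over the threshold.
promotional = {p: q - round(threshold) for p, q in inventory.items() if q > threshold}
print(sorted(promotional.items(), key=lambda kv: -kv[1]))
# [('oven', 276), ('top load washer', 86), ('dryer', 6), ('dishwasher', 6)]
```

Note that the dryer arrivals sit in warehouse #2, so the dryer's inventory stays at 30; its small excess of 6 comes purely from the pre-existing stock exceeding the 24-unit threshold.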
Source: Google Cloud Platform