How to think about threat detection in the cloud

As your organization transitions from on-premises to hybrid cloud or pure cloud, how you think about threat detection must evolve as well—especially when confronting threats across many cloud environments. A new foundational framework for thinking about threat detection in public cloud computing is needed to better secure digital transformations.

Because these terms have had different meanings over time, here’s what we mean by threat detection and detection and response. A balanced security strategy covers all three elements of a security triad: prevention, detection, and response. Prevention can improve, but never becomes perfect. Despite preventative controls, we still need to be on the lookout for threats that penetrate our defenses. Finding and confirming malicious activities, and automatically responding to them or presenting them to the security team, constitutes detection and response.

The transition from the traditional environment to the cloud involves vital changes in three key areas:

- Threat landscapes
- IT environment
- Detection methods

First, threat landscapes change: new threats evolve, old threats disappear, and the importance of many threats shifts. If you perform a threat assessment on your environment and then migrate the entire environment to the public cloud, even using the lift-and-shift approach, the threat assessment will look very different. MITRE ATT&CK Cloud can help us understand how some threat activities apply to public cloud computing.

Second, the entire technology environment around you changes. This applies to the types of systems and applications you as a defender would encounter, but also to technologies and operational practices. Essentially, cloud as a realm where you have to detect threats is different—this applies both to the assets being threatened and to the technologies doing the detecting. Sometimes cloud looks to traditional “blue teams” like an alien landscape that holds only challenges. In reality, cloud brings a lot of new opportunities for detection. The main theme here is change, some for the worse and some for the better. After all, cloud is:

- Usually distributed—running over many regions and data centers
- Often immutable—utilizes systems that are replaced, rather than updated
- Ephemeral—uses workloads often created for the task and then removed
- API driven—enabled by pervasive APIs
- Centered on the identity layer—mostly uses identities, and not just the network perimeter, to separate workloads
- Automatically scalable—able to expand with the increasing workload
- Shared with the provider

The combination of the Distributed, Immutable, and Ephemeral cloud properties is sometimes called the DIE triad. All of these affect detection for the cloud environment.

Third, telemetry sources and detection methods also change. While this may seem derived from the previous point, that’s not entirely true. For some cloud services, and definitely for SaaS, the popular approach of using an agent such as EDR would not work. However, new and rich sources of telemetry may be available—Cloud Audit Logs are a great example (see the sketch below). Similarly, the expectation that you can sniff traffic on the perimeter, or that you will even have a perimeter, may not be entirely correct. Pervasive encryption hampers Layer 7 traffic analysis, while public APIs rewrite the rules on what a perimeter is.
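To ground the Cloud Audit Logs example, here is a minimal sketch that scans Admin Activity audit logs for IAM policy changes using the google-cloud-logging Python client. The project ID, the "trusted domain" rule, and the serviceData field path are assumptions for illustration; in production, managed services such as Security Command Center perform this kind of analysis for you.

# Minimal sketch: surface IAM policy changes from Cloud Audit Logs.
# Assumes the google-cloud-logging Python client; the project ID and
# the "trusted domain" rule below are hypothetical placeholders.
from google.cloud import logging

client = logging.Client(project="my-project")  # hypothetical project

# Admin Activity audit logs record every SetIamPolicy call.
log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"'
    ' AND protoPayload.methodName="SetIamPolicy"'
)

for entry in client.list_entries(filter_=log_filter):
    service_data = entry.payload.get("serviceData", {})
    for delta in service_data.get("policyDelta", {}).get("bindingDeltas", []):
        member = delta.get("member", "")
        # Hypothetical rule: flag grants to members outside our own domain.
        if delta.get("action") == "ADD" and not member.endswith("@example.com"):
            print(f"Possible outsider grant: {member} -> {delta.get('role')}")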
Finally, detection sources and methods are also inherently shared with the cloud provider: some are under cloud service provider control, while others are under cloud user control. This leads to several domains where we can and should detect threats in the cloud.

Let’s review a few cloud threat detection scenarios.

Everybody highlights the role of identity in cloud security. Naturally, it matters in threat detection as well—and it matters a lot. While we don’t want to repeat the cliche that in a public cloud you are one IAM mistake away from a data breach, we know that cloud security missteps can be costly. To help protect organizations, Google Cloud offers services that automatically and in real time analyze every IAM grant to detect outsiders being added—even indirectly.

Detecting threats inside compute instances such as virtual machines (VMs) using agents may seem like a thing of the past. After all, VMs are just servers, right? However, this is an area where cloud brings new opportunities. For example, VM Threat Detection allows security teams to do completely agentless YARA rule execution against their entire compute fleet.

Finally, products like BigQuery require new ways of thinking about detecting data exfiltration. Security Command Center Premium detects queries and backups in BigQuery that would copy data to different Google Cloud organizations.

Naturally, some things stay the same in the cloud. These include broad threat categories such as insiders or outsiders; steps in the cyber exploit chain, such as coarse-grained stages of an attack; and MITRE ATT&CK tactics, which are largely unchanged. It is also likely that broad detection use cases stay the same.

What does that mean for the defenders?

- When you move to the cloud, your threats and your IT change—and change a lot.
- This means that using on-premises detection technology and approaches as a foundation for future development may not work well.
- This also means that merely copying all your on-premises detection tools and their threat detection content is not optimal.
- Instead, moving to Google Cloud is an opportunity to transform how you achieve your continued goals of confidentiality, integrity, and availability with the new opportunities created by the technology and processes of cloud.

Call to action:

- Listen to “Threat Models and Cloud Security” (ep12)
- Listen to “What Does Good Detection and Response Look Like in the Cloud? Insights from Expel MDR” (ep72)
- Listen to “Cloud Threats and How to Observe Them” (ep69) and read the related blog “How to think about cloud threats today”
- Review how to test cloud detections
- Read the guidance on cloud threat investigation with SCC and Chronicle
Source: Google Cloud Platform

Making AI more accessible for every business

Alphabet CEO Sundar Pichai has compared the potential impact of artificial intelligence (AI) to the impact of electricity—so it may be no surprise that at Google Cloud, we expect to see increased AI and machine learning (ML) momentum across the spectrum of users and use cases.

Some of the momentum is more foundational, such as the hundreds of academic citations that Google AI researchers earn each year, or products like Google Cloud Vertex AI accelerating ML development and experimentation by 5x, with 80% fewer lines of code required. Some is more concrete, like mortgage servicer Mr. Cooper using Google Cloud Document AI to process documents 75% faster with 40% cost savings; Ford leveraging Google Cloud AI services for predictive maintenance and other manufacturing modernizations; and customers across a wide range of industries deploying ML platforms atop Google Cloud.

Together, these proof points reflect our belief that AI is for everyone, and that it should be easy to harness in workflows of all kinds and for people of all levels of technical expertise. We see our customers’ accomplishments as validation of this philosophy and a sign that we are taking away the right things from our conversations with business leaders. Likewise, we see validation in recognition from analysts, which recently includes Google being named a Leader by:

- Gartner® in the 2022 Magic Quadrant™ for Cloud AI Developer Services report
- Forrester in The Forrester Wave™: AI Infrastructure, Q4 2021 report; The Forrester Wave™: Document-Oriented Text Analytics Platforms, Q2 2022 report; and The Forrester Wave™: People-Oriented Text Analytics Platforms, Q2 2022 report

In June, we talked about four pillars that guide our approach to creating products for MLOps and to accelerate development of ML models and their deployment into production. In this article, we’ll look more broadly at our AI and ML philosophy, and what it means to create “AI for everyone.”

AI should be for everyone

One of the pillars we discussed in June was “meeting users where they are,” and this idea extends far beyond products for data scientists. Technical expertise should not be a barrier to implementing AI—otherwise, use cases where AI can help will languish without modernization, and enterprises without well-developed AI practices will risk falling behind their competitors. To this end, we focus on creating AI and ML services for all kinds of users, for example:

- Document AI, Contact Center AI, and other solutions that inject AI and ML into business workflows without imposing heavy technical requirements or retraining on users
- Pre-trained APIs, ranging from Speech to Fleet Optimization, that let developers leverage pre-trained ML models and free them from having to develop core AI technologies from scratch
- BigQuery ML to unite data analysis tasks with ML (see the sketch after this list)
- AutoML for abstracted and low-code ML production without requiring ML expertise
- Vertex AI to speed up ML experimentation and deployment, with every tool you need to build, deploy, and manage the lifecycle of ML projects
- AI Infrastructure options for training deep learning and machine learning models cost-effectively, including Deep Learning VMs optimized for data science and machine learning tasks, and AI accelerators for every use case, from low-cost inference to high-performance training

It’s important to provide not only leading tools for advanced AI practitioners, but also leading AI services for users of all kinds.
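As one concrete illustration of the list above, BigQuery ML lets an analyst train and apply a model with SQL alone. The sketch below drives that SQL through the BigQuery Python client; the dataset, table, and column names are hypothetical.

# Minimal sketch: an analyst trains and scores a model with SQL alone,
# here driven through the BigQuery Python client. Dataset, table, and
# column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# Train a logistic regression model directly over a table.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.churn_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT tenure_months, monthly_spend, support_tickets, churned
    FROM `my_dataset.customers`
""").result()

# Score new rows with ML.PREDICT; no model-serving infrastructure needed.
rows = client.query("""
    SELECT customer_id, predicted_churned
    FROM ML.PREDICT(MODEL `my_dataset.churn_model`,
                    (SELECT * FROM `my_dataset.new_customers`))
""").result()
for row in rows:
    print(row.customer_id, row.predicted_churned)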
Part of this work involves abstracting or automating parts of the ML workflow to meet the needs of the job and the technical aptitude of the user. Some of it involves integrating our AI and ML services with our broader range of enterprise products, whether that means smarter language models invisibly integrated into Google Docs or BigQuery making ML easily accessible to data analysts. Regardless of any particular angle, AI is turning into a multi-faceted, pervasive technology for businesses and users the world over, so we feel technology providers should reflect this by building platforms that help users harness the power of AI by meeting them wherever they are.

How we’re powering the next generation of AI

Creating products that help bring AI to everyone requires large research investments, including in areas where the path to productization may not be clear for years. We feel a foundation in research combines with our focus on business needs and users to inform sustainable AI products that are in keeping with our AI principles and encourage responsible use of AI.

Many of our recent updates to our AI and ML platforms began as Google research projects. Just consider how DeepMind’s breakthrough AlphaFold project has led to the ability to run protein prediction models in Vertex AI, or how research into neural networks helped create Vertex AI NAS, which lets data science teams train models more accurately with lower latency and power requirements.

Research is crucial, but also only one way of validating an AI strategy. Products have to speak for themselves when they reach customers, and customers need to see their feedback reflected as products are iterated and updated. This reinforces the importance of seeing customer adoption and success across a range of industries, use cases, and user types. In this regard, we feel very fortunate to work with so many great customers, and very proud of the work we help them accomplish. I’ve already mentioned Ford and Mr. Cooper, but those are just a small sampling. For example, Vodafone Commercial’s “AI Booster” platform uses the latest Google technology to enable cutting-edge AI use cases such as optimizing customer experiences, customer loyalty, and product recommendations. Our conversational AI technologies are used by companies ranging from Embodied, whose Moxie robot helps children overcome developmental challenges, to HubSpot, which connects meeting notes to CRM data. Across our products and across industries around the world, customer stories grow by the day.

We also see validation in our partner network. As we noted in the pillars discussed in June, partners like NVIDIA help us ensure customers have freedom of choice when building their AI stacks, and partners like Neo4j help our customers expand our services into areas like graph structures. Partners support our mission to bring AI to everyone, helping more customers use our services for new and expanded use cases.

Accelerating the momentum

Overall, to create products that reflect AI’s potential and likely future ubiquity, we have to take all of the preceding factors, from research to customer and analyst conversations to working with partners, and turn them into products and product updates. We’ve been very active over the last year, from the launch of Contact Center AI Platform in March, to the new Speech model we released in May, to a range of announcements at the Google Cloud Applied ML Summit in June.
We have much more planned in coming months, and we’re excited to work with customers not just to maintain the pace of AI momentum, but to accelerate it. To learn more about Google Cloud’s AI and ML services, visit this link or browse recent AI and ML articles on the Google Cloud Blog.

GARTNER and MAGIC QUADRANT are registered trademarks and service marks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Source: Google Cloud Platform

Quantifying portfolio climate risk for sustainable investing with geospatial analytics

Financial services institutions are increasingly aware of the significant role they can play in addressing climate change. As allocators of capital through their lending and investment portfolios, they direct financial resources for corporate development and operations in the wider economy. This capital allocation responsibility balances growth opportunities with risk assessments to optimize risk-adjusted returns. Identifying, analyzing, reporting, and monitoring climate risks associated with physical hazards, such as wildfires and water scarcity, is becoming an essential element of portfolio risk management.

Implementing a cloud-native portfolio climate risk analytics system

To help quantify these climate risks, this design pattern includes cloud-native building blocks that financial services institutions can use to implement a portfolio climate risk analytics system in their own environment. This pattern includes a sample dataset from RS Metrics and leverages several Google Cloud products, such as BigQuery, Data Studio, Vertex AI Workbench, and Cloud Run. The technical architecture is shown below.

Technical architecture for cloud-native portfolio climate risk analytics.

Please refer to the source code repository for this pattern to get started, and read through the rest of this post to dig deeper into the underlying geospatial technology and business use cases in portfolio management. You can use the Terraform code provided in the repository to deploy the sample datasets and application components in your selected Google Cloud project. The README has step-by-step instructions.

After deploying the technical assets, we recommend performing the following steps to get more familiar with the pattern’s technical capabilities:

- Review the example Data Studio dashboard to get familiar with the dataset and portfolio risk analytics (see screenshot below)
- Explore the included R Shiny app, deployed with Cloud Run, for more in-depth analytics
- Visit Vertex AI Workbench and walk through the exploratory data analysis provided in the included Python-based Jupyter notebook
- Drop into BigQuery to directly query the sample data for this pattern

Portfolio climate risk analytics Data Studio dashboard. This dashboard visualizes sample climate risk data stored in BigQuery, and dynamically displays aggregate fire and water stress risk scores based on your selections and filters.

The importance of granular, objective data

Assessing exposure to climate risks under various climate change scenarios can involve combining geospatial layers, expertise in climate models, and information about company operations. Depending on where they are located, companies’ physical assets, like their manufacturing facilities or office buildings, can be susceptible to varying types of climate risk. A facility located in a desert will likely experience greater water stress, and a plant located near sea level will have a larger risk of coastal flooding.

Asset-level physical climate risk analysis

Google Cloud partner RS Metrics offers two data products that cover a broad set of investable public equities: ESGSignals® and AssetTracker®. These products include 50 transition and physical climate risk metrics such as biodiversity, greenhouse gas (GHG) emissions, water stress, land usage, and physical climate risks.
As an introduction to these concepts, we’ll first describe two key physical risks: water stress risk and fire risk.

Water Stress Risk

Water stress occurs when an asset’s demand for water exceeds the amount of water available to that asset, resulting in higher water costs or, in extreme cases, complete loss of water supply. This can negatively impact the unit economics of the asset, or even result in the asset being shut down. According to a 2020 report from CDP, 357 surveyed companies disclosed a combined $301 billion in potential financial impact of water risks.

When investors don’t have asset location data, they use industry-average water intensity and basin-level water risk to estimate water stress risk, as described in a 2020 report by Ceres. However, ESGSignals® allows a more granular approach, integrating meteorological and hydrological variables at the basin and sub-basin levels, drought severity, evapotranspiration, and surface water availability for millions of individual assets.

Left: Watershed map of North America showing 2-digit hydrologic units. Source: usgs.gov
Right: Water cycle of the Earth’s surface, showing evapotranspiration, composed of transpiration and evaporation. Source: Wikipedia

As an example, let’s look at mining, a very water-intensive industry. One mining asset, the Cerro Colorado copper mine in Chile, produced 71,700 metric tons of copper in 2019, according to an open dataset published by Chile’s Ministry of Mining. ESGSignals® identifies this mining asset as having significant water stress, resulting in a water risk score of 75 out of 100. For assets like these, reducing water consumption via efficiency improvements and the use of desalinated seawater will not only save precious water resources for nearby communities, but also reduce operating costs over time.

A map illustrating asset-level overall risk score calculated from ESGSignals® fire risk and water stress risk scores (range: 0-100). The pop-up in the middle shows asset information and scores relevant to BHP Group’s Cerro Colorado copper mine. Source: RS Metrics portfolio climate risk Shiny app

Fire Risk

Wildfires have caused significant damage in recent years. For example, economists estimated that the 2019-2020 Australian bushfire season caused approximately A$103 billion in property damage and economic losses. Such wildfires pose safety and operational risk for all kinds of commercial operations located in Australia.

The ESGSignals® fire risk score is calculated by combining historical fire events, proximity, and intensity of fire with company asset locations (AssetTracker®). Based on ESGSignals® assessments, the majority of mining assets located in Australia have medium to high exposure to fire risk.

Google Earth Engine animation of wildfires occurring within 100km of two mills owned by the same company during 2021. Asset (a) is considered a high fire risk asset while asset (b) has comparatively lower fire risk. Fire data source: NASA FIRMS.

Incorporating asset-level climate risk analytics into portfolio management

Now that we have an understanding of the mechanics of asset-level climate risk, let’s focus on how portfolio managers could incorporate these analytics into their portfolio management processes, including portfolio selection, portfolio monitoring, and company engagement.

Portfolio selection

Portfolio selection can involve various investment tools. In screening, the portfolio manager sets up filtering criteria to select companies for inclusion in, or exclusion from, the portfolio.
Asset-level climate risk scores can be included in these screening criteria, along with other financial or non-financial factors. For example, a portfolio manager could search for companies whose average asset-level water stress score is less than 30. This would result in an investment portfolio that has an overall lower risk from water stress than a given benchmark index (see figure below).

Portfolio climate risk analytics Data Studio dashboard showing portfolio selection via screening for companies whose average asset-level water stress score is less than 30. In this case, overall score is defined as the mean of the water stress risk score and the fire risk score.

Portfolio monitoring

For portfolio monitoring, it’s important to first establish a baseline of physical climate risk for existing holdings within the portfolio. A periodic reporting process that looks for changes in water stress, wildfire, or other physical climate risk metrics can then be created. Any material changes in risk scores would trigger a more detailed analysis to determine the next best action, such as rebalancing the portfolio to meet the target risk profile.

Monitoring fire risk score from 2018 to 2021 for three corporate assets with low, low-medium, and medium-high fire risk scores. For more time series analysis, see the source code repository.

Portfolio engagement

Some portfolio managers engage with companies held in their portfolios, either through shareholder initiatives or by meeting with corporate investor relations teams. For these investors, it’s important to clearly identify the assets with significant exposure to climate risks. To focus on the locations with the highest opportunity for impact, a portfolio manager could sort the millions of AssetTracker® locations by water stress or fire risk score, and engage with companies near the top of these ranked lists. Highlighting mitigation opportunities for these most at-risk assets would be an effective engagement prioritization strategy.

Portfolio climate risk analytics Data Studio dashboard as a tool for portfolio engagement. Companies with high-risk assets based on fire risk score are shown at the top of the list.

Expanding beyond portfolio management

Applying an asset-level approach to physical climate risk analytics can be helpful beyond the use cases in portfolio management presented above. For example, risk managers in commercial banking could use this methodology to quantify lending risk during underwriting and ongoing loan valuation. Insurance companies could also use these techniques to improve risk assessment and pricing decisions for both new and existing policyholders.

To enable further insights, additional geospatial datasets can be blended with those used in this pattern via BigQuery’s geospatial analytics capabilities. Location information in these datasets, such as points or polygons encoded in a GEOGRAPHY data type, allows them to be combined with spatial JOINs. For example, a risk analyst could join AssetTracker® data with BigQuery public data, such as population information for states, counties, congressional districts, or zip codes available in the Census Bureau US Boundaries dataset; a sketch of such a query appears below.
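To illustrate, here is a minimal sketch of such a spatial JOIN run through the BigQuery Python client. The asset table and its columns are hypothetical stand-ins for the pattern’s data; the boundaries table is a real BigQuery public dataset.

# Minimal sketch: aggregate assets per US state with a spatial JOIN.
# The `my_dataset.asset_locations` table and its columns are hypothetical;
# the boundaries table comes from BigQuery's public datasets.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT
      states.state_name,
      COUNT(*) AS asset_count,
      AVG(assets.water_stress_score) AS avg_water_stress
    FROM `my_dataset.asset_locations` AS assets
    JOIN `bigquery-public-data.geo_us_boundaries.states` AS states
      ON ST_WITHIN(ST_GEOGPOINT(assets.longitude, assets.latitude),
                   states.state_geom)
    GROUP BY states.state_name
    ORDER BY avg_water_stress DESC
"""

for row in client.query(query).result():
    print(row.state_name, row.asset_count, row.avg_water_stress)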
A cloud-based data environment can help enterprises manage these and other sustainability analytics workflows. Infosys, a Google Cloud partner, provides blueprints and digital data intelligence assets that accelerate the realization of sustainability goals. Its secure data collaboration space connects, collects, and correlates information assets, such as RS Metrics geospatial data, enterprise data, and digital data, to activate ESG intelligence within and across the financial value chain.

Curious to learn more?

- To learn more from RS Metrics about analyzing granular asset-level risk metrics with ESGSignals®, you can review their recent and upcoming webinars, or connect directly with them here.
- To learn more about sustainability services from Infosys, reach out to the Infosys Sustainability team here. If you’d like a demo of the Infosys ESG Intelligence Cloud solution for Google Cloud, contact the Infosys Data, Analytics & AI team here.
- To learn more about the latest strategies and tools that can help solve the tough challenges of climate change across industries, view the sessions on demand from our recent Google Cloud Sustainability Summit.

Special thanks to contributors

The authors would like to thank these Infosys collaborators: Manojkumar Nagdev, Rushiraj Pradeep Jaiswal, Padmaja Vaidyanathan, Anandakumar Kayamboo, Vinod Menon, and Rajan Padmanabhan. We would also like to thank Rashmi Bomiriya, Desi Stoeva, Connie Yaneva, and Randhika H from RS Metrics, and Arun Santhanagopalan, Shane Glass, and David Sabater Dinter from Google.

Disclaimer

The information contained on this website is meant for the purposes of information only and is not intended to be investment, legal, tax or other advice, nor is it intended to be relied upon in making an investment or other decision. All content is provided with the understanding that the authors and publishers are not providing advice on legal, economic, investment or other professional issues and services.
Source: Google Cloud Platform

Prepare for Google Cloud certification with top tips and no-cost learning

Becoming Google Cloud certified has been shown to improve individuals’ visibility in the job market and to demonstrate their ability to drive meaningful change and transformation within organizations.

- 1 in 4 Google Cloud certified individuals take on more responsibility or leadership roles at work, and 87% of Google Cloud certified users feel more confident in their cloud skills.1
- 75% of IT decision-makers are in need of technologically skilled personnel to meet their organizational goals and close skill gaps.2
- 94% of those decision-makers agree that certified employees provide added value above and beyond the cost of certification.3

Prepare for certification with a no-cost learning opportunity

That’s powerful stuff, right? That’s why we’ve teamed up with Coursera to support your journey to becoming Google Cloud certified.

As a new learner, get one month of no-cost access to your selected Google Cloud Professional Certificate on Coursera to help you prepare for the relevant Google Cloud certification exam. Choose from Professional Certificates in data engineering, cloud engineering, cloud architecture, security, networking, machine learning, DevOps and, for business professionals, the Cloud Digital Leader.

Become Google Cloud certified

To help you on your way to becoming Google Cloud certified, you can earn a discount voucher on the cost of the Google Cloud certification exam by completing the Professional Certificate on Coursera by August 31, 2022. Simply visit our page on Coursera and start your one-month no-cost learning journey today.

Top tips to prepare for your Google Cloud certification exam

Get hands-on with Google Cloud

For those of you in a technical job role, we recommend leveraging the Google Cloud projects to build your hands-on experience with the Google Cloud console. With 500+ Google Cloud projects now available on Coursera, you can gain hands-on experience working in the real Google Cloud console, with no download or configuration required.

Review the exam guide

Exam guides provide the blueprint for developing exam questions and offer guidance to candidates studying for the exam. We’d encourage you to be prepared to answer questions on any topic in the exam guide, but it’s not guaranteed that every topic within an exam guide will be assessed.

Explore the sample questions

Taking a look at the sample questions on each certification page will help to familiarize you with the format of exam questions and example content that may be covered.

Start your certification preparation journey today with a one-month no-cost learning opportunity on Coursera. Want to know more about the value of Google Cloud certification? Find out why IT leaders choose Google Cloud certification for their teams.

1. Google Cloud, Google Cloud certification impact report, 2020
2. Skillsoft Global Knowledge, IT Skills and Salary Report, 2021
3. Skillsoft Global Knowledge, IT Skills and Salary Report, 2021
Source: Google Cloud Platform

How Ocado Technology delivers smart, secure online grocery shopping with Security Command Center

Grocery shopping has changed for good and Ocado Group has played a major role in this transformation. We started as an online supermarket, applying technology and automation to revolutionise the online grocery space. Today, after two decades of innovation, we are a global technology company providing state-of-the-art software, robotics, and AI solutions for online grocery. We created the Ocado Smart Platform, which powers the online operations of some of the world’s most forward-thinking grocery retailers, from Kroger in the U.S. to Coles in Australia.

With the global penetration of the Ocado Smart Platform and the increasing complexity of our operations, we’re paying close attention to our security estate. To proactively identify and tackle any security vulnerabilities, we decided to introduce Google Cloud’s Security Command Center (SCC) Premium as our centralized vulnerability and threat reporting service.

Gaining consolidated visibility into Ocado’s cloud assets

From the start, we were impressed with the speed of deployment and the security findings surfaced with SCC. Where setup would take several weeks in the past with other software vendors, we were able to quickly set up SCC in our environment and could immediately start identifying our most vulnerable assets.

Today, we use SCC to detect misconfigurations and vulnerabilities across hundreds of projects throughout our organization, and we use it to get an aggregated view of our security health findings. We filter the findings and then use Pub/Sub or Cloud Functions to send alerts directly to the tools each division is working with, such as Splunk or JIRA (a minimal sketch of this forwarding pattern appears at the end of this post). This way, each of our teams can discover and respond to the security findings in their own environment, with SCC acting as the single source of truth for our security-related issues.

Driving autonomy by delegating security findings

Autonomy fuels innovation at Ocado Technology, which is why we want to make our teams as self-sufficient as possible. SCC helps to make our divisions more autonomous from the central organization. It delivers all the security insights technology teams need to make smart decisions on their own and at pace. Here’s where SCC’s delegation features, providing folder- and project-level access control, come in. The platform’s fine-grained access control capabilities enable us to delegate SCC findings to specific teams, without having to give them a view of the entire Ocado Technology organization. Business units no longer need to contact us in the security team to track down vulnerabilities; they can do it themselves in a compliant and secure manner. It makes our work more efficient and autonomous, allowing everyone to focus on their own areas of expertise and environments.

Identifying and remediating multiple medium and high vulnerabilities

SCC’s findings are very rich and don’t end with the identification of potential misconfigurations and vulnerabilities.
It goes beyond this, recommending solutions to resolve any issues and providing clear guidelines on next steps. That’s why the feedback from our users across the organization has been so good.

SCC delivers on both quality and quantity. Since implementation, it has helped us identify and remove hundreds of medium and high vulnerabilities from our Google Cloud estate. The number of security-related findings has also gone down each quarter, indicating real and tangible improvements in our security posture. SCC is so useful in maintaining our security posture because once we know where the issues are, tackling them is easy.

From 8-hour security scans to instant insights

One particular issue we’ve been able to handle well with SCC is the set of vulnerabilities targeting the Apache Log4j logging library. SCC informed us about attempted compromises, active compromises, and the vulnerability exposure of our Dataproc images. During the Log4j response, all of these would otherwise have been very hard to track down, especially with limited resources. With SCC, we were able to leverage the security expertise of Google Cloud to identify the latest vulnerabilities, based on the most up-to-date security trends, and act on them quickly.

Obviously, speed is of the essence when it comes to threat mitigation, and SCC has enabled us to fix issues faster, making us less exposed to outside threats. In the past, just scanning everything once could take up to eight hours. SCC sped things up from the start, and findings have been nearly instantaneous since it rolled out real-time Security Health Analytics.

Strengthening compliance and demonstrating standards to stakeholders

SCC helps us to achieve better compliance standards, and to demonstrate these standards to our stakeholders. We recently ran an internal audit exercise across the Ocado Technology organization, for example, where we identified the projects with the most numerous and severe security-related findings. Without the reports from SCC, this would have been extremely hard or even impossible.

We also use the Security Health Analytics information from SCC to visualize the data per project, creating a kind of heat map of security across the organization. This helps us assign our resources to the right projects and prioritize our efforts accordingly, informing our strategic decisions.

From top-down to developer-led security

There’s been a paradigm shift in security operations, and things are moving from a top-down approach to a more developer-led and autonomous process. SCC helps drive that change at Ocado Technology. It enables us to place the responsibility for security-related issues closer to the resource owners. By making sure that the teams most impacted by a potential problem are the ones who get to fix it, we empower teams to resolve issues proactively and efficiently.

Looking forward, we can’t wait to see SCC evolve further. One of the features we’re most excited about is the ability to create custom findings (currently in preview), along with additional integration capabilities that enable automation. We’re still not using everything SCC has to offer, but it is already a vital tool for our security team.

At Ocado Technology, we’re pioneering the future of online grocery shopping, and this future needs a strong security foundation.
SCC helps us to strengthen and maintain that foundation, making profitable, scalable, and secure online grocery shopping possible for even more businesses around the world.
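For readers who want to experiment with the forwarding pattern described earlier, here is a minimal sketch of a Pub/Sub-triggered Cloud Function that relays SCC finding notifications to a webhook. The endpoint URL and the severity filter are hypothetical, and this is only one of several ways to wire it up.

# Minimal sketch: a Pub/Sub-triggered Cloud Function (1st gen) that
# forwards Security Command Center finding notifications to a webhook,
# such as a Slack or JIRA integration endpoint. The URL and severity
# filter below are hypothetical placeholders.
import base64
import json

import requests

WEBHOOK_URL = "https://example.com/alerts"  # hypothetical endpoint


def forward_finding(event, context):
    # SCC publishes each notification as a base64-encoded JSON message.
    message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    finding = message.get("finding", {})

    # Hypothetical filter: only forward medium severity and above.
    if finding.get("severity") in ("MEDIUM", "HIGH", "CRITICAL"):
        requests.post(WEBHOOK_URL, json={
            "text": (
                f"SCC finding {finding.get('category')} "
                f"({finding.get('severity')}) on {finding.get('resourceName')}"
            )
        }, timeout=10)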
Source: Google Cloud Platform

Invest early, save later: Why shifting security left helps your bottom line

Shifting left on security with Google Cloud infrastructure

The concept of “shifting left” has been widely promoted in the software development lifecycle. The idea is that introducing security earlier, or leftwards, in the development process will lead to fewer software-related security defects later, or rightwards, in production.

Shifting cloud security left can help identify potential misconfigurations earlier in the development cycle, which, if unresolved, can lead to security defects. Catching those misconfigurations early can improve the security posture of production deployments.

Why shifting security left matters

Google’s DevOps Research and Assessment (DORA) team highlighted the importance of integrating security into DevOps in the 2016 State of DevOps Report. The report discussed the placement of security testing in the software development lifecycle. The survey found that most security testing and tool usage happened after the development of a release, rather than continuously throughout the development lifecycle. This led to increased costs and friction, because remediating problems found in testing may involve big architectural changes and additional integration testing, as shown in Figure 1. For example, security defects in production can lead to GDPR violations, which can carry fines of up to 4% of global annual revenue.

Figure 1: Traditional Testing Pattern

By inserting security testing into the development phase, we can identify security defects earlier and perform the appropriate remediation sooner. This results in fewer defects post-production and reduces remediation efforts and architectural changes. Figure 2 shows us that integrating security earlier in the SDLC results in overall decreases in security defects and associated remediation costs.

Figure 2: Security Landscape After Shifting Left

The 2021 State of DevOps Report expands the work of the 2016 report and advocates for integrating automated testing throughout the software development lifecycle. Automated testing is useful for continuously testing development code without the need for additional skills or intervention by the developer. Developers can continue to iterate quickly while other stakeholders can be confident that common defects are being identified and remediated.

From code to cloud

The DORA findings with regard to code security can also be applied to cloud infrastructure security. As more organizations deploy their workloads to the cloud, it’s important to test the security and configurations of cloud infrastructure. Misconfigurations in cloud resources can lead to security incidents, including data theft. Examples of such misconfigurations include overly permissive firewall rules, public IP addresses for VMs, or excessive Identity and Access Management (IAM) permissions on service accounts and storage buckets. We can and should leverage different Google Cloud services to identify these misconfigurations early in the development process and prevent such errors from emerging in production, reducing the costs of future remediation, potential legal fines, and compromised customer trust.

The key tools in our toolshed are Security Command Center and Cloud Build. Security Command Center provides visibility into misconfigurations, vulnerabilities, and threats within a Google Cloud organization.
This information is critical when protecting your cloud infrastructure (such as virtual machines, containers, and web applications) against threats, or when identifying potential gaps against compliance frameworks (such as CIS Benchmarks, PCI-DSS, NIST 800-53, or ISO 27001). Security Command Center further supports shifting security left by allowing visibility of security findings at the cloud project level for individual developers, while still allowing global visibility for security operations. Cloud Build provides for the creation of cloud-native CI/CD pipelines. You can insert custom health checks into a pipeline to evaluate certain conditions (such as security metrics) and fail the pipeline when irregularities are detected. We will now explore two use cases that take advantage of these tools.

Security Health Checker

Security Health Checker continuously monitors the security health of a Google Cloud project and promptly notifies project members of security findings. Figure 3 shows developers interacting with a Google Cloud environment with network, compute, and database components. Security Command Center is configured to monitor the health of the project.

When Security Command Center identifies findings, it sends them to a Cloud Pub/Sub topic. A Cloud Function then takes the findings published to that topic and sends them to a Slack channel monitored by infrastructure developers. Just like a spell checker providing quick feedback on misspellings, Security Health Checker provides prompt feedback on security misconfigurations in a Google Cloud project that could lead to deployment failures or post-production compromises. No additional effort is required on the part of developers.

Figure 3: Security Command Center in a Google Cloud Environment

Security Pipeline Checker

In addition to using Security Command Center for timely notification of security concerns during the development process, we can also integrate security checks into the CI/CD pipeline by using Security Command Center along with Cloud Build, as shown in Figure 4.

Figure 4: Security Pipeline Checker Architecture

The pipeline begins with a developer checking code into a git repository. This repository is mirrored to Cloud Source Repositories. A build trigger will begin the build process. The build pipeline includes a short waiting period of a few minutes to give Security Command Center a chance to identify security vulnerabilities. A brief delay may appear undesirable at first, but the analysis that takes place during that interval can result in the reduction of security defects post-production. At the end of the waiting period, a Cloud Function serving as a validator will evaluate the findings from Security Command Center (Connector 1 in Figure 4). If the validator determines that unacceptable security findings exist, it will inject a failure indication into the pipeline to terminate the build process (Connector 2 in Figure 4); a sketch of such a validation step follows below. Developers have visibility into the failure triggers and can remediate them before successfully deploying code to production.
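As a rough illustration, here is a minimal sketch of what such a validator could look like, assuming the google-cloud-securitycenter Python client. The organization ID and severity threshold are placeholders, and failing the build via a non-zero exit code is an illustrative choice, not the reference implementation.

# Minimal sketch: fail a build step if Security Command Center reports
# active high-severity findings. The organization ID and severity
# threshold are hypothetical; a non-zero exit fails the Cloud Build step.
import sys

from google.cloud import securitycenter

ORG_NAME = "organizations/123456789"  # hypothetical organization ID

client = securitycenter.SecurityCenterClient()

# "sources/-" aggregates findings across all sources in the organization.
all_sources = f"{ORG_NAME}/sources/-"
finding_filter = 'state="ACTIVE" AND severity="HIGH"'

findings = list(client.list_findings(
    request={"parent": all_sources, "filter": finding_filter}
))

if findings:
    for result in findings:
        f = result.finding
        print(f"Blocking finding: {f.category} on {f.resource_name}")
    sys.exit(1)  # non-zero exit terminates the Cloud Build pipeline
print("No blocking findings; proceeding with build.")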
This early termination of insecure builds stands in contrast to the findings in the 2016 State of DevOps Report, wherein organizations that didn’t integrate security into their DevOps processes spent 50% more time remediating security issues than those who “shifted left” on security.

Closing thoughts

DORA’s 2016 State of DevOps Report called out the need for “shifting left” on security: introducing security earlier in the development process to identify security vulnerabilities early and reduce mitigation efforts post-production. The report also advocated for automated testing throughout the software development lifecycle. We looked at two ways of achieving these objectives in Google Cloud. The Security Health Checker provides feedback to developers using Security Command Center and Slack to notify developers of security findings as they pursue their development activities. The Security Pipeline Checker uses Security Command Center as part of a Cloud Build pipeline to terminate a build pipeline if vulnerabilities are identified during the build process.

To implement the Security Health Checker and the Security Pipeline Checker, check out the GitHub repository. We hope these examples will help you to “shift left” using Google Cloud services. Happy coding!

This article was co-authored with Jason Bisson, Bakh Inamov, Jeff Levne, Lanre Ogunmola, Luis Urena, and Holly Willey, Security & Compliance Specialists at Google Cloud.
Source: Google Cloud Platform

IN, NOT_IN and NOT EQUAL query operators for Firestore in Datastore Mode

We’re very pleased to announce that Firestore in Datastore mode now supports the IN, NOT IN, and not-equal operators.

IN Operator

Firestore in Datastore mode now supports the IN operator. With IN, you can query a specific field for multiple values (up to 10). You do this by passing in a list of all the values you want to query for, and Firestore in Datastore mode will match any entity whose field equals one of those values.

For example, if you had a database with entities of kind Orders and you wanted to find which orders had a “delivered” or “shipped” status, you can now do something like this:

SELECT * FROM Orders WHERE status IN ARRAY(“delivered”, “shipped”)

Let’s look at another example: say Orders has a field Category that contains a list of categories to which the products in the order may belong. You can now run an IN query on the categories that you are looking for:

SELECT * FROM Orders WHERE Category IN ARRAY(“Home Decor”, “Home Improvements”)

In this case, each entity would only be returned once by the query, even if it matches both of the categories in the query.

You are now also able to use ORDER BY on both IN and equality filters. The query planner originally ignored ordering on an equality, but with the introduction of IN, ORDER BY queries on multiple-valued properties become valuable. Please make sure to check out the official documentation for additional details. You can also use the new Query Builder in the UI to use the IN operator.

Not IN & Not Equal Operators

You can now query using Not IN, which will allow you to find all entities where a field is not in a list of values. For example, entities of kind Orders where the status field is not in [“shipped”, “ready to ship”]:

SELECT * FROM Orders WHERE status NOT IN ARRAY(“shipped”, “ready to ship”)

Using Not IN via Query Builder in the UI

With Not Equal you can now query for entities where a field is not equal to some given value. For example, entities of kind Orders where the status field is not equal to the value “pending”:

SELECT * FROM Orders WHERE status != “pending”

Using Not Equal via Query Builder in the UI

Note that with Datastore mode’s multi-value behavior, Not IN and Not Equal require only one element of a multi-valued property to match the given predicate. For instance, Category NOT IN ARRAY(“Home Decor”, “Home Improvements”) would still return both e1 and e2, since they also contain the categories “Kitchen” and “Living Room”.

We hope these new additions enhance your development experience, and we look forward to learning how you’ve taken advantage of these new features. Thank you! Please visit the official documentation to learn more.
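The same queries can also be expressed through the client libraries. Here is a minimal sketch using the google-cloud-datastore Python client; the kind and property names mirror the GQL examples above.

# Minimal sketch: the new operators via the google-cloud-datastore
# Python client. Kind and property names mirror the GQL examples above.
from google.cloud import datastore

client = datastore.Client()

# IN: match orders whose status equals any of the listed values.
in_query = client.query(kind="Orders")
in_query.add_filter("status", "IN", ["delivered", "shipped"])

# NOT_IN: match orders whose status is outside the listed values.
not_in_query = client.query(kind="Orders")
not_in_query.add_filter("status", "NOT_IN", ["shipped", "ready to ship"])

# Not equal: match orders whose status is anything but "pending".
ne_query = client.query(kind="Orders")
ne_query.add_filter("status", "!=", "pending")

for order in in_query.fetch():
    print(order.key, order["status"])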
Source: Google Cloud Platform

Investing in Differentiation brings great customer experiences and repeatable business

“Customer success is the cornerstone of our partner ecosystem and ensures our joint customers experience the innovation, faster time to value, and top-notch skills from Google and Google Cloud Partners.” —Nina Harding, Global Chief, Partner Advantage Program

Our ecosystem is a strong, validated ally to help you drive business growth and solve complex challenges. Differentiation achievements help you select a partner with confidence, knowing that Google Cloud has verified their skills and customer success across our products, horizontal solutions, and key industries. In all cases, our partners have demonstrated their commitment to learning and ongoing training through earned certifications, Specializations, and Expertise.

To further refine the process of helping customers find the best partner fast, we recently introduced Net Promoter Score© within Partner Advantage. This industry-standard rating tool allows customers to provide feedback and insights on their successes with partners quickly and easily. We encourage you to work with your partners to share your success and provide feedback using Net Promoter Score.

To find the most highly qualified, experienced partners, the Google Cloud Partner Directory puts you in the driver’s seat. This purpose-built tool helps customers like you leverage partner Differentiation achievements to move forward with confidence as you start your next project. The new “How to find the right Google Cloud Partner” video shows you how to create a shortlist of potential partners by region, based on 14 different strategic solution categories or 100+ Expertise designations.

To find a partner that meets your specific needs, or complements your capable team, look no further than Partner Advantage’s Differentiation framework, and share in our congratulations to some partners that have achieved Specialization the past few quarters.
Source: Google Cloud Platform

REWE Group accommodates growth spikes and enhances hybrid architecture with Google Cloud

Significant growth in our business partnerships at REWE Group in Austria has led to an unprecedented increase in traffic across our applications. As one of Europe’s largest retail and tourism groups, our burgeoning user base continues to emanate from a variety of sources, including new retail partners, affiliate stores, and online customers from desktop and mobile applications. We serve millions of customers in the retail and tourism sectors worldwide, and we onboarded Google Cloud services when our applications needed more flexibility and scalability. We needed to efficiently accommodate the dramatic seasonal and even weekly fluctuations we experienced as the pandemic increased our online shopping traffic.

As traffic to our applications increased, our team began hosting our traffic-heavy data on a cluster in Google Kubernetes Engine (GKE), successfully leveraging the data management and storage of Cloud Spanner. As a fully managed relational database, Spanner provides unlimited scale, strong consistency, and up to 99.999% availability. By choosing this approach to deployment, we didn’t need to migrate our end-user data and maintained a highly flexible cloud environment, with an estimated 70 percent hosted in Google Cloud and 30 percent remaining on-premises.

Cloud Spanner optimizes speed and performance for online customers

Given that some of the data we migrated was tied to the customer shopping experience on our applications, it was important that the solution we chose be highly secure and reliable. Google Cloud is known for offering the highest levels of availability, reliability, global scale, and security, enabling us to deliver the best possible experiences for our customers.

While accessing Spanner through a Kubernetes cluster on Google Cloud, our team developed a ledger for each end user. As the single point of truth for all transactions across the company, the ledger contained two tables: in one, we input a variety of currencies, and in the other, we maintained real-time records of the balance of each user in the currency of their purchase (a hedged sketch of such a schema appears below). We leveraged the industry-leading 99.999 percent availability SLA of Spanner to optimize the performance of our applications. Spanner also helped us improve the customer experience by providing consistent performance and accelerating the speed of applications and API calls during the purchase process.

Spanner provided transactional consistency and accuracy for REWE’s several million users, automatically updating their data in real time as transactions took place. We were able to seamlessly scale to process almost double the number of transactions per day. Since the platform went live, more than 500 million successful transactions have been executed. The native integrations of Google Cloud made it easy to unify our data lifecycle, ensuring the highest performance of our infrastructure at every phase of our development.

Query latency is always a critical thing for us, because we are deeply integrated into the point-of-sale applications in our stores. If applications are too slow, it compromises the customer experience. However, thanks to Spanner, we are able to complete API calls extremely fast.

Fully managed Google services increase team productivity and champion sustainability

As a fully managed service, Spanner gave us the freedom to focus on differentiating activities, while operating seamlessly on-premises and in the cloud. Our developers were empowered to iterate and deploy quickly, driving new opportunities for growth and cost reductions.
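To make the ledger idea concrete, here is a hedged sketch of what a two-table ledger and a balance-updating read-write transaction could look like on Spanner, using the google-cloud-spanner Python client. The schema, names, and amounts are illustrative assumptions, not REWE’s actual design.

# Minimal sketch of a two-table ledger on Cloud Spanner. The schema and
# names are hypothetical illustrations, not REWE's actual design.
#
# CREATE TABLE Currencies (
#   currency_code STRING(3) NOT NULL,
# ) PRIMARY KEY (currency_code);
#
# CREATE TABLE Balances (
#   user_id       STRING(64) NOT NULL,
#   currency_code STRING(3) NOT NULL,
#   balance       NUMERIC NOT NULL,
# ) PRIMARY KEY (user_id, currency_code);

from decimal import Decimal

from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("ledger-db")


def record_purchase(transaction, user_id, currency_code, amount):
    # Read and update the balance atomically; Spanner's read-write
    # transactions keep the ledger strongly consistent.
    row = transaction.execute_sql(
        "SELECT balance FROM Balances "
        "WHERE user_id = @user AND currency_code = @ccy",
        params={"user": user_id, "ccy": currency_code},
        param_types={"user": spanner.param_types.STRING,
                     "ccy": spanner.param_types.STRING},
    ).one()
    transaction.update(
        "Balances",
        columns=("user_id", "currency_code", "balance"),
        values=[(user_id, currency_code, row[0] - amount)],
    )


database.run_in_transaction(record_purchase, "user-123", "EUR", Decimal("19.99"))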
As a company with a 90-year history and international impact, REWE has upheld a continued commitment to environmental efficiency and sustainability across the world. This mission aligns with Google’s goal of running fully carbon-free data centers by 2030. By leveraging Google’s carbon neutrality and sustainability services, including waste diversion, use of renewable energy, and enhanced efficiency, we are continuing to optimize our business operations as we champion sustainability.

Learn more about how your organization can get started with Spanner today.
Source: Google Cloud Platform

Show off your cloud skills by completing the #GoogleClout weekly challenge

Who’s up for a challenge? It’s time to show off your #GoogleClout!

Starting today, check in every Wednesday to unlock a new cloud puzzle that will test your cloud skills against participants worldwide. Stephanie Wong’s previous record is 5 minutes; can you complete the new challenge in 4?

#GoogleClout Challenge

The #GoogleClout challenge is a no-cost, weekly, 20-minute hands-on challenge. Every Wednesday for the next 10 weeks, a new challenge will be posted on our website. Participants race against the clock to see how quickly they can complete the challenge. Attempt the 20-minute challenge as many times as you want. The faster you go, the higher your score!

How it works

To participate, follow these four simple steps:

- Enroll – Go to our website, click the link to the weekly challenge, and enroll in the quest using your Google Cloud Skills Boost account.
- Play – Attempt the challenge as many times as you want. Remember, the faster you are, the higher your score!
- Share – Share your score card on Twitter/LinkedIn using #GoogleClout
- Win – Complete all 10 weekly challenges to earn exclusive #GoogleClout badges

Ready to get started?

Take the #GoogleClout challenge today!
Source: Google Cloud Platform