SevenRooms serves up personalized hospitality, food, and beverage services with Google Cloud

Finding ways to increase customer loyalty and profitability in a post-COVID world is top of mind for hotels, bars, and restaurants. Unfortunately, many food and beverage service providers struggle to deliver the personalized experiences that keep guests coming back for more. The reality is that most traditional hospitality apps offer only limited insight into guest activities, preferences, underlying operating costs, and other essential details. We built SevenRooms to help food and beverage operators create truly memorable experiences by managing and personalizing every step of the guest journey. With SevenRooms, restaurants in more than 250 cities globally leverage technology and data to provide unforgettable hospitality experiences with a human touch. That includes seating guests at their favorite table, pouring complimentary glasses of wine from a preferred vintage, offering special menu options, and creating personalized experiences for special occasions.

Scaling guest experience and retention on Google Cloud

When developing SevenRooms, we needed a technology partner that would enable our small team to securely scale services, automate manual tasks, and accelerate time to market while reducing IT costs. That's why we started working with Google App Engine, and later became more involved with the Google for Startups Cloud Program. We soon realized that many traditional apps lacked the integrations and capabilities needed to respond to the unique challenges facing food and beverage operators, and that the Google for Startups Cloud Program provides exactly those capabilities. With guidance from experts on the Google Startups Success team, we quickly transformed SevenRooms from a beta restaurant and nightclub reservation and guest management point solution into a full-fledged guest experience and retention platform that analyzes actionable data to automatically build and continuously update detailed guest profiles. The combination of Google App Engine and other Google Cloud solutions makes everything easier to build and scale. We're seeing our total cost of ownership (TCO) decline by 10-15%, so we can shift additional resources to R&D to help customers create one-of-a-kind interactions with their guests. And importantly, we can bring new products to market 200-300% faster than competitors.

Because SevenRooms handles a lot of sensitive guest data, stringent security protocols are vital. We store all data on Google Cloud, taking advantage of its highly secure-by-design infrastructure and built-in support for international data privacy laws such as GDPR. We use BigQuery and Looker for data analysis and reporting, and power our NoSQL database with Firestore. We also scale workloads and run Elasticsearch on Google Compute Engine (GCE), and seamlessly integrate our reservation booking engine with Google Maps and Google Search.

In the future, we're looking to further market actionable guest data with the help of advanced machine learning (ML) models built with TensorFlow and Google Cloud Tensor Processing Units (TPUs). Cloud TPUs and Google Cloud data and analytics services are fully integrated with other Google Cloud offerings, including Google Kubernetes Engine (GKE). By running ML workloads on Cloud TPUs, SevenRooms will benefit from Google Cloud's leading networking and data analytics technologies such as BigQuery.
We're also exploring additional Google Cloud solutions such as Anthos, to unify the management of infrastructure and applications across on-premises, edge, and multiple public clouds, as well as Google Cloud Run, to deploy scalable containerized applications on a fully managed serverless platform. These solutions will enable us to continue to quickly expand our services and offer customers a variety of new benefits.

Building a profitable, sustainable future in food and beverage

Our work with the Google Startups Success team has been instrumental in helping us get where we are today. Their responsiveness is incredible and stands out compared to services from other technology providers. Google Cloud gives us a highly secure infrastructure and next-level training to evolve our infrastructure using solutions such as BigQuery and Backup and Disaster Recovery. We also work with Google Cloud partner DoiT International to further scale and optimize our operations. In particular, DoiT provided expert scripts to shortcut lengthy processes, while actively troubleshooting any issues or questions that came up. DoiT continues to share guidance in key areas for future products and features, providing expertise in architecture, infrastructure, and cost management. Moving forward, we're excited to work with Google Cloud and DoiT to handle the growing surge in users we anticipate in 2023 and beyond.

With Google Cloud, SevenRooms is revitalizing food and beverage service delivery by enabling businesses to cultivate and maintain direct guest relationships, deliver exceptional experiences, and encourage repeat visits. We've compiled many case studies that demonstrate how our customers see great results by personalizing their interactions with guests, from $1.5M in cost savings, to $400K of additional revenue, to a 68% jump in email open rates. Demand for our guest experience and retention platform keeps growing as we help our customers take a people-first approach by delivering unique and tailored dining experiences. We can't wait to see what we accomplish next as we expand our team and reach new markets worldwide.

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Source: Google Cloud Platform

From NASA to Google Cloud, Ivan Ramirez helps top gaming companies reach new levels

Editor's note: Ivan Ramirez, Gaming Team Lead, works in one of Google Cloud's most dynamic, and least understood, businesses, supporting some of the world's biggest online gaming companies. He's in a world of extreme potential, heart-stopping challenges, big teamwork, and managing scarce resources for the maximum outcome. And that's before we get to the gaming.

I assume most people in gaming are lifelong gamers. True?

Not in my case, but I did start out playing the world's best video game. I graduated from Georgia Tech with a degree in Aerospace Engineering and went to NASA. I trained to work on the electrical and thermal control systems of the International Space Station, and simulated things like an explosion or a medical emergency for 12 hours at a time.

What was it like moving over to the gaming industry?

I had a lot of jobs before I started at Google in 2016. Now, as a Gaming Team Lead, I'm working with customers in many different aspects of the technology relationship, from working hands on keyboard alongside engineers to giving industry vision presentations to executives and everything in between. The great thing about this industry is that at every level, gaming wants to be at the bleeding edge of technology. They want to be using the best graphics chips, have the most players online at once, or the fastest networking. They want lots of analytics, for things like placing ads in real time, or detecting cheaters while a game is going on. Look at something like Niantic's Pokémon GO Fest this year, where players caught over one billion Pokémon, spun over 750 million PokéStops, and collectively explored over 100 million kilometers. We've got big scale numbers like that with a few customers.

How does that affect the rest of Google Cloud?

When they push us to go faster and deliver more, it helps us invent the future. Gaming companies also value the freedom to innovate, and have a real passion for their customers, which is something Google Cloud shares in our culture, as well as our leadership in scale, data analytics, and more.

You say you did a lot of different jobs, but you've been here six years. Why?

I grew up in Lima, Peru. When I was 10, my dad got an offer to relocate to Miami. It was tough for him, but it was an opportunity he couldn't pass up. Later, I wanted to go to Georgia Tech because they were strong for Aerospace, even though in Peru you traditionally stay close to family. I think I learned early on that you have to get out of your comfort zone to rise up. I've had a great time here at Google because it enables me to continue to grow. Over the six years it's always stayed interesting. Being at Google pushes me to try new things.

Do you think gaming has affected you personally, too?

Maybe it affects the way I think about work and people. Some of my proudest moments are helping people, connecting them with others. I try to teach them some of the things I've learned, including taking care of yourself. We are people who want to say "yes" to everything, who feel like there's always something more that we can do, or another project we can improve. You have to find limits and ways to care for yourself and your family too, or you won't be able to last over the long haul, or even be a good partner and teammate.
Source: Google Cloud Platform

How to help ensure smooth shift handoffs in security operations

Editor's note: This blog was originally published by Siemplify on Oct. 29, 2019.

Much the same way that nursing teams need to share patient healthcare updates when their shift ends, security operations centers (SOCs) need smooth shift-handoff procedures in place to ensure that continuous monitoring of their networks and systems is maintained. Without proper planning, knowledge gaps can arise during the shift-change process. These include:

- Incomplete details: Updates, such as the work that has been done to address active incidents and the proposed duties to continue these efforts, are not thoroughly shared.
- Incorrect assumptions: Operating with fragmented information, teams engage in repetitive tasks, or worse, specific investigations are skipped entirely because it is assumed they were completed by another shift.
- Dropped tasks: From one shift to the next, some tasks can fall entirely through the cracks and are never reported to the incoming personnel.

Because of these gaps, security analysts tend to spend too much time following up with each other to ensure items are completed. With major incidents, this may mean keeping personnel from the previous shift on for a partial or even full second shift until the incident is closed out. Ramifications of being overworked can include physical and mental fatigue and even burnout. Fortunately, these gaps are not inevitable.

Getting a process in place

Decide on the basics

Before you can succeed with shift handoffs, you need to decide how you will design your shifts. Will shifts be staggered? Will they be covered from geographically different regions (i.e., a "follow the sun" model)? If so, handovers may be challenged by language and cultural differences. Do you allow people to swap shifts (i.e., work the early shift one week and the graveyard shift the next)? If shifts are fixed, then you can create shift teams. If shifts rotate, you need to ensure analysts work each shift for a set period of time to adapt to the specific types of cases and ancillary work that each shift is responsible for. Rotating shifts also can bring a fresh set of eyes to processes or problems. It also may help retain talent, as working consistently irregular hours can have a negative impact on one's health.

Properly share communication

When you work in a SOC, you don't punch in and out like in the old factory days. Active cases may require you or someone from the team to arrive early to receive a debriefing, or stay late to deliver your own to arriving colleagues (as well as complete any pending paperwork). Streamlining the transfer process is critical but simple: create a standard handoff log template that each shift uses to clearly communicate tasks and action items, and be prepared for questions.

Log activities

Security orchestration, automation, and response (SOAR) technology can help in the collaboration process. In addition, SOAR gives managers the ability to automatically assign cases to the appropriate analyst. Through playbooks, escalations can be defined and automated based on the processes that are unique to your organization.
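As a purely illustrative aid, not part of the original post, here is one possible shape for the standard handoff log template recommended above, sketched as a small Python data structure; the field names are assumptions and should be adapted to your SOC's workflow and SOAR tooling.

    # Illustrative sketch of a shift-handoff log entry; adapt the fields as needed.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class HandoffLogEntry:
        shift_end: datetime                  # when the outgoing shift wrapped up
        outgoing_analyst: str                # who is handing off
        incoming_analyst: str                # who is taking over
        active_incidents: List[str] = field(default_factory=list)   # case IDs still open
        completed_actions: List[str] = field(default_factory=list)  # work finished this shift
        pending_actions: List[str] = field(default_factory=list)    # tasks for the next shift
        notes: str = ""                      # context, assumptions, open questions

    entry = HandoffLogEntry(
        shift_end=datetime(2019, 10, 29, 7, 0),
        outgoing_analyst="analyst_a",
        incoming_analyst="analyst_b",
        active_incidents=["CASE-1042"],
        pending_actions=["Confirm containment on CASE-1042 with the network team"],
    )

Whatever the exact fields, the point is that every shift fills in the same structure, so nothing relies on memory or hallway conversations.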
Source: Google Cloud Platform

Easily connect SaaS platforms to Google Cloud with Eventarc

Last year, we launched Eventarc, a unified eventing platform with 90+ sources of events from Google Cloud, helping make it a more programmable cloud. We recognize that most Google Cloud customers use a myriad of platforms to run their business, from internal IT systems to hosted vendor software and SaaS services. Creating and maintaining integrations between these platforms is time-consuming and complex. With third-party sources in Eventarc, adding integrations between supported SaaS platforms and your applications in Google Cloud is easier than ever. Today we are happy to announce the Public Preview of third-party sources in Eventarc, with the first cohort of sources provided by ecosystem partners.

Here are some highlights of this exciting new platform:

- Simple discovery and setup: Configure an integration in two easy steps.
- Fully managed event infrastructure: With Eventarc, there is nothing to maintain or manage, so connecting your SaaS ecosystem to Google Cloud couldn't be simpler.
- Consistency: Third-party sources are consistent with the rest of Eventarc, including a consistent trigger configuration and invocations in CloudEvents format.
- Trigger multiple workloads: All supported Eventarc destinations are available to target with third-party source triggers (Cloud Functions Gen2, Cloud Run, GKE, and Cloud Workflows).
- Built-in filtering: Filter on most CloudEvent attributes to allow for robust and easy filtering in the Eventarc trigger.

Today, we're happy to introduce our first cohort of third-party sources. These partners help to improve the value of the connected cloud and open exciting new use cases for our customers.

- The Datadog source is available today in public preview (codelab, setup instructions).
- Available in public preview today (setup instructions).
- The Lacework source is available in private preview. Sign up today.
- The Check Point CloudGuard source is available in private preview. Sign up today.

Next steps

To learn more about third-party providers offering an Eventarc source, to run through the quickstart, or to provide feedback, please see the links below.

- Learn more about third-party sources in Eventarc
- Learn about third-party providers currently offering an Eventarc source
- Try out the Datadog source codelab
- Interested in becoming a third-party source of events on Google Cloud? Contact us at eventarc-integrations@google.com
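To make the consumption side concrete, here is a minimal, hypothetical sketch, not taken from the announcement, of a Cloud Run service that receives an Eventarc delivery in CloudEvents format. The Flask and cloudevents packages are choices made for this sketch, and the actual payload fields depend on the partner source.

    # Hypothetical sketch of a Cloud Run service receiving Eventarc deliveries.
    from cloudevents.http import from_http
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/", methods=["POST"])
    def receive_event():
        # Parse the CloudEvent that Eventarc delivers over HTTP.
        event = from_http(request.headers, request.get_data())
        # Standard CloudEvents attributes are present on every delivery.
        print(f"Received id={event['id']} type={event['type']} source={event['source']}")
        # event.data holds the provider-specific payload (for example, a Datadog alert).
        return ("", 204)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

Because every source emits the same envelope, the same handler pattern works whether the trigger points at a Google Cloud source or a third-party one; only the payload parsing changes.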
Source: Google Cloud Platform

Introducing Cloud Analytics by MITRE Engenuity Center in collaboration with Google Cloud

The cybersecurity industry is faced with the tremendous challenge of analyzing growing volumes of security data in a dynamic threat landscape with evolving adversary behaviors. Today's security data is heterogeneous, including logs and alerts, and often comes from more than one cloud platform. To better analyze that data, we're excited to announce the release of the Cloud Analytics project, built by the MITRE Engenuity Center for Threat-Informed Defense and sponsored by Google Cloud and several other industry collaborators.

Since 2021, Google Cloud has partnered with the Center to help level the playing field for everyone in the cybersecurity community by developing open-source security analytics. Earlier this year, we introduced Community Security Analytics (CSA) in collaboration with the Center to provide pre-built and customizable queries to help detect threats to your workloads and to audit your cloud usage. The Cloud Analytics project is designed to complement CSA. It includes a foundational set of detection analytics for key tactics, techniques, and procedures (TTPs) implemented as vendor-agnostic Sigma rules, along with adversary emulation plans implemented with the CALDERA framework. Here's an overview of the Cloud Analytics project, how it complements Google Cloud's CSA to benefit threat hunters, and how they both embrace Autonomic Security Operations principles like automation and toil reduction (adopted from SRE) to advance the state of threat detection development and continuous detection and response (CD/CR).

Both CSA and the Cloud Analytics project are community-driven security analytics resources. You can customize and extend the provided queries, but they take a more do-it-yourself approach—you're expected to regularly evaluate and tune them to fit your own requirements in terms of threat detection sensitivity and accuracy. For managed threat detection and prevention, check out Security Command Center Premium's real-time and continuously updated threat detection services, including Event Threat Detection, Container Threat Detection, and Virtual Machine Threat Detection. Security Command Center Premium also provides managed misconfiguration and vulnerability detection with Security Health Analytics and Web Security Scanner.

Google Cloud Security Foundation: Analytics Tools & Content

Cloud Analytics vs. Community Security Analytics

Similar to CSA, Cloud Analytics can help lower the barrier for threat hunters and detection engineers to create cloud-specific security analytics. Security analytics is complex because it requires:

- Deep knowledge of diverse security signals (logs, alerts) from different cloud providers, along with their specific schemas;
- Familiarity with adversary behaviors in cloud environments;
- The ability to emulate such adversarial activity on cloud platforms;
- High accuracy in threat detection with low false positives, to avoid alert fatigue and overwhelming your SOC team.

The following table summarizes the key differences between Cloud Analytics and CSA:

Target platforms and language support by CSA & Cloud Analytics project

Together, CSA and Cloud Analytics can help you maximize your coverage of the MITRE ATT&CK® framework, while giving you the choice of detection language and analytics engine to use. Given the mapping to TTPs, some of the rules in CSA and Cloud Analytics overlap.
However, Cloud Analytics queries are implemented as Sigma rules, which can be translated to vendor-specific queries for engines such as Chronicle, Elasticsearch, or Splunk using the Sigma CLI or the third-party-supported uncoder.io, which offers a user interface for query conversion. On the other hand, CSA queries are implemented as YARA-L rules (for Chronicle) and SQL queries (for BigQuery and now Log Analytics). The latter can be manually adapted to specific analytics engines due to the universal nature of SQL.

Getting started with Cloud Analytics

To get started with the Cloud Analytics project, head over to the GitHub repo to view the latest set of Sigma rules, the associated adversary emulation plan to automatically trigger these rules, and a development blueprint on how to create new Sigma rules based on lessons learned from this project. The repo also lists the Google Cloud-specific Sigma rules (and their associated TTPs) provided in this initial release; use these as examples to author new ones covering more TTPs.

Sigma rule example

Consider the canonical use case of detecting when a storage bucket is modified to be publicly accessible. The example Sigma rule for this case (redacted for brevity in the original post) specifies the log source (gcp.audit), the log criteria (the storage.googleapis.com service and the storage.setIamPermissions method), and the keywords to look for (allUsers, ADD), signaling that a role was granted to all users over a given bucket. To learn more about Sigma syntax, refer to the public Sigma docs.

However, there could still be false positives, such as a Cloud Storage bucket made public for a legitimate reason like publishing static assets for a public website. To avoid alert fatigue and reduce toil on your SOC team, you could build more sophisticated detections based on multiple individual Sigma rules using Sigma Correlations. Using our example, let's refine the accuracy of this detection by correlating it with another pre-built Sigma rule that detects when a new user identity is added to a privileged group. Such privilege escalation likely occurred before the adversary gained permission to modify access to the Cloud Storage bucket. Cloud Analytics provides an example of such a correlation Sigma rule chaining these two separate events.

What's next

The Cloud Analytics project aims to make cloud-based threat detection development easier while also consolidating collective findings from real-world deployments.
To scale the development of high-quality threat detections with minimal false positives, CSA and Cloud Analytics promote an agile development approach for building these analytics, where rules are expected to be continuously tuned and evaluated. We look forward to wider industry collaboration and community contributions (from rule consumers, designers, builders, and testers) to refine existing rules and develop new ones, along with associated adversary emulations, in order to raise the bar for minimum self-service security visibility and analytics for everyone.

Acknowledgements

We'd like to thank our industry partners and acknowledge several individuals across both Google Cloud and the Center for Threat-Informed Defense for making this research project possible:

- Desiree Beck, Principal Cyber Operations Engineer, MITRE
- Michael Butt, Lead Offensive Security Engineer, MITRE
- Iman Ghanizada, Head of Autonomic Security Operations, Google Cloud
- Anton Chuvakin, Senior Staff, Office of the CISO, Google Cloud
Source: Google Cloud Platform

Keeping track of shipments minute by minute: How Mercado Libre uses real-time analytics for on-time delivery

Iteration and innovation fuel the data-driven culture at Mercado Libre. In our first post, we presented our continuous intelligence approach, which leverages BigQuery and Looker to create a data ecosystem on which people can build their own models and processes. Using this framework, the Shipping Operations team was able to build a new solution that provided near real-time data monitoring and analytics for our transportation network and enabled data analysts to create, embed, and deliver valuable insights.

The challenge

Shipping operations are critical to success in e-commerce, and Mercado Libre's process is very complex since our organization spans multiple countries, time zones, and warehouses, and includes both internal and external carriers. In addition, the onset of the pandemic drove exponential order growth, which increased pressure on our shipping team to deliver more while still meeting the 48-hour delivery timelines that customers have come to expect. This increased demand led to the expansion of fulfillment centers and cross-docking centers, doubling and tripling the nodes of our network (a.k.a. meli-net) in the leading countries where we operate. We also now have the largest electric vehicle fleet in Latin America and operate domestic flights in Brazil and Mexico.

We previously worked with data coming in from multiple sources, and we used APIs to bring it into different platforms based on the use case. For real-time data consumption and monitoring, we had Kibana, while historical data for business analysis was piped into Teradata. Consequently, the real-time Kibana data and the historical data in Teradata were growing in parallel, without working together. On one hand, we had the operations team using real-time streams of data for monitoring, while on the other, business analysts were building visualizations based on the historical data in our data warehouse. This approach resulted in a number of problems:

- The operations team lacked visibility and required support to build their visualizations. Specialized BI teams became bottlenecks.
- Maintenance was needed, which led to system downtime.
- Parallel solutions were ungoverned (the ops team used an Elastic database to store and work with attributes and metrics), with unfriendly backups and data retained only for a bounded period of time.
- We couldn't relate data entities as we do with SQL.

Striking a balance: real-time vs. historical data

We needed to be able to seamlessly navigate between real-time and historical data. To address this need, we decided to migrate the data to BigQuery, knowing we would leverage many use cases at once with Google Cloud. Once we had our real-time and historical data consolidated within BigQuery, we had the power to make choices about which datasets needed to be made available in near real-time and which didn't. We evaluated the use of analytics with different time-window tables built from the data streams instead of the real-time logs visualization approach. This enabled us to serve near real-time and historical data from the same origin. We then modeled the data using LookML, Looker's reusable modeling language based on SQL, and consumed the data through Looker dashboards and Explores. Because Looker queries the database directly, our reporting mirrored the near real-time data stored in BigQuery.
Finally, in order to balance near real-time availability with overall consumption costs, we analyzed key use cases on a case-by-case basis to optimize our resource usage. This solution prevented us from having to maintain two different tools and featured a more scalable architecture. Thanks to the services of GCP and the use of BigQuery, we were able to design a robust data architecture that ensures the availability of data in near real-time.

Streaming data with our own Data Producer Model: from APIs to BigQuery

To make new data streams available, we designed a process we call the "Data Producer Model" ("Modelo Productor de Datos" or MPD), where functional business teams serve as data creators in charge of generating data streams and publishing them as related information assets we call "data domains". Using this process, the new data comes in as JSON, which is streamed into BigQuery. We then use a three-tiered transformation process to convert that JSON into a partitioned, columnar structure. To make these new datasets available in Looker for exploration, we developed a Java utility app to accelerate the development of LookML and make it even more fun for developers to create pipelines.

The end-to-end architecture of our Data Producer Model.

The complete MPD solution results in different entities being created in BigQuery with minimal manual intervention. Using this process, we have been able to automate the following:

- The creation of partitioned, columnar tables in BigQuery from JSON samples
- The creation of authorized views in a different GCP BigQuery project (for governance purposes)
- LookML code generation for Looker views
- Job orchestration in a chosen time window

By using this code-based incremental approach with LookML, we were able to incorporate techniques that are traditionally used in DevOps for software development, such as using LAMS to validate LookML syntax as part of the CI process and testing all our definitions and data with Spectacles before they hit production. Applying these principles to our data and business intelligence pipelines has strengthened our continuous intelligence ecosystem. Enabling exploration of that data through Looker and empowering users to easily build their own visualizations has helped us to better engage with stakeholders across the business.

The new data architecture and processes that we have implemented have enabled us to keep up with the growing and ever-changing data from our continuously expanding shipping operations. We have been able to empower a variety of teams to seamlessly develop solutions and manage third-party technologies, ensuring that we always know what's happening – and more critically – enabling us to react in a timely manner when needed.

Outcomes from improving shipping operations

Today, data is being used to support decision-making in key processes, including:

- Carrier Capacity Optimization
- Outbound Monitoring
- Air Capacity Monitoring

This data-driven approach helps us to better serve you – and everyone – who expects to receive their packages on time according to our delivery promise. We can proudly say that we have improved both our coverage and speed, delivering 79% of our shipments in less than 48 hours in the first quarter of 2022. Here is a sneak peek into the data assets that we use to support our day-to-day decision making:

a. Carrier Capacity: Allows us to monitor the percentage of network capacity utilized across every delivery zone and identify where delivery targets are at risk in almost real time.
b. Outbound Places Monitoring: Consolidates the number of shipments that are destined for a place (the physical points where a seller picks up a package), enabling us to both identify places with lower delivery efficiency and drill into the status of individual shipments.
c. Air Capacity Monitoring: Provides capacity usage monitoring for the aircraft running each of our shipping routes.

Costs into the equation

The combination of BigQuery and Looker also showed us something we hadn't seen before: the overall cost and performance of the system. Traditionally, developers maintained focus on metrics like reliability and uptime without factoring in associated costs. By using BigQuery's information schema, Looker Blocks, and the export of BigQuery logs, we have been able to closely track data consumption, quickly detect underperforming SQL and errors, and make adjustments to optimize our usage and spend. Based on that, we know the Looker Shipping Ops dashboards generate a concurrency of more than 150 queries, which we have been able to optimize by taking advantage of BigQuery and Looker caching policies.

The challenges ahead

Using BigQuery and Looker has enabled us to solve numerous data availability and data governance challenges: single-point access to near real-time data and to historical information, self-service analytics and exploration for operations and stakeholders across different countries and time zones, horizontal scalability (with no maintenance), and guaranteed reliability and uptime (while accounting for costs), among other benefits. However, in addition to having the right technology stack and processes in place, we also need to enable every user to make decisions using this governed, trusted data. To continue achieving our business goals, we need to democratize access not just to the data but also to the definitions that give the data meaning. This means incorporating our data definitions into our internal data catalog and serving our LookML definitions to other data visualization tools like Data Studio, Tableau, or even Google Sheets and Slides, so that users can work with this data through whatever tools they feel most comfortable using.

If you would like a more in-depth look at how we make new data streams available with the "Data Producer Model" ("Modelo Productor de Datos" or MPD), register to attend our webcast on August 31. While learning and adopting new technologies can be a challenge, we are excited to tackle this next phase, and we expect our users will be too, thanks to a curious and entrepreneurial culture. Are our teams ready to face new changes? Are they able to roll out new processes and designs? We'll go deep on this in our next post.
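As a purely illustrative companion to the Data Producer Model described above, and not Mercado Libre's actual tooling, here is a minimal Python sketch of the first step: streaming JSON events into a date-partitioned BigQuery table with the google-cloud-bigquery client. The project, dataset, table, and field names are assumptions.

    # Hypothetical sketch: land JSON shipping events in a date-partitioned BigQuery table.
    from google.cloud import bigquery

    client = bigquery.Client()
    table_id = "my-project.shipping_raw.shipment_events"  # assumed name

    # Create the partitioned, columnar landing table if it does not already exist.
    schema = [
        bigquery.SchemaField("shipment_id", "STRING"),
        bigquery.SchemaField("status", "STRING"),
        bigquery.SchemaField("event_ts", "TIMESTAMP"),
    ]
    table = bigquery.Table(table_id, schema=schema)
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY, field="event_ts"
    )
    client.create_table(table, exists_ok=True)

    # Stream a small batch of JSON events into the table.
    rows = [
        {"shipment_id": "A123", "status": "out_for_delivery",
         "event_ts": "2022-08-01T12:00:00Z"},
    ]
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        raise RuntimeError(f"BigQuery streaming insert failed: {errors}")

In a real MPD-style pipeline, downstream transformation jobs and generated LookML views would then build on top of a landing table like this one.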
Source: Google Cloud Platform

How Google Cloud can help stop credential stuffing attacks

Google has more than 20 years of experience protecting its core services from Distributed Denial of Service (DDoS) attacks and from the most advanced web application attacks. With Cloud Armor, we have enabled our customers to benefit from our extensive experience of protecting our globally distributed products such as Google Search, Gmail, and YouTube. In our research, we have noticed that new and more sophisticated techniques are increasingly able to bypass and override most commercial anti-DDoS systems and Web Application Firewalls (WAFs). Credential stuffing is one of these techniques.

Credential stuffing is one of the hardest attacks to detect because it's more like the tortoise and less like the hare. In a slow but steady manner, the attacker exploits a list of usernames and passwords, often first available illicitly after a data breach, and uses automated techniques to force these compromised credentials to give them unauthorized access to a web service. While password reuse habits and the ever-growing number of stolen credential collections are making it easier for organizations to uncover and report this type of "brute force" technique to law enforcement and technology providers, today's credential stuffing attacks often leverage bots or compromised IoT devices to reach a level of scale and automation that earns the attackers far better results than the type of brute-force attacks deployed even a few years ago.

Nevertheless, a defense-in-depth approach to cloud security can help stop even advanced credential stuffing attacks. One technique is to secure user accounts with multi-factor authentication (MFA). In case of a breach, the extra layer of protection that MFA creates can keep a password exposure from resulting in a successful malicious login. Unfortunately, we know that imposing such a requirement isn't always appropriate or possible. In case of MFA failure or implementation challenges, additional controls can be deployed to protect the websites that expose login forms against credential stuffing attacks. We outline below how Google Cloud can help reduce the likelihood of a successful credential stuffing attack by building a layered security strategy that leverages native Google technologies such as Google Cloud Armor and reCAPTCHA Enterprise.

Google Cloud Armor overview

Google Cloud Armor can help customers who use Google Cloud or on-premises deployments to mitigate and address multiple threats, including DDoS attacks and application attacks like cross-site scripting (XSS) and SQL injection (SQLi). Google Cloud Armor's DDoS protection is always-on and inline, scaling to the capacity of Google's global network. It is able to instantly detect and mitigate network attacks in order to allow only well-formed requests through the load balancing proxies. The product provides not only anti-DDoS capabilities, but also a set of preconfigured rules to protect web applications and services from common internet attacks and help mitigate the OWASP Top 10 vulnerabilities. One of the most interesting features of Cloud Armor, especially for credential stuffing protection, is the ability to apply rate-based rules that help customers protect their applications from a large volume of requests that flood instances and block access for legitimate users. Google Cloud Armor has two types of rate-based rules:

- Throttle: You can enforce a maximum request limit per client or across all clients by throttling individual clients to a user-configured threshold. This rule enforces the threshold to limit traffic from each client that satisfies the match conditions in the rule. The threshold is configured as a specified number of requests in a specified time interval.
- Rate-based ban: You can rate limit requests that match a rule on a per-client basis and then temporarily ban those clients for a specified time if they exceed a user-configured threshold.

Google Cloud Armor security policies enable you to allow or deny access to your external HTTP(S) load balancer at the Google Cloud edge, as close as possible to the source of incoming traffic. This prevents unwelcome traffic from consuming resources or entering your Virtual Private Cloud (VPC) networks. The following diagram illustrates the location of the external HTTP(S) load balancers, the Google network, and Google data centers.

Figure 1.

A defense-in-depth approach to credential stuffing protection

It is important to design security controls in a layered approach without relying on a single defense mechanism. This strategy is known as defense-in-depth and, if correctly applied, achieves a reasonable degree of security. In the following sections we discuss the layers that can be implemented using Google Cloud Armor to protect against credential stuffing attacks.

Layer 1 – Geo-blocking and IP-blocking

Unsophisticated credential stuffing attacks are likely to use a limited number of IP addresses, often traceable to nation states. A good starting point for the defense-in-depth approach is to identify the regions where the website to be protected should be available. For example, if the web application is expected to be used only by U.S. users, it is possible to set a deny rule using an expression like the following:

    origin.region_code != 'US'

Likewise, it is possible to apply a deny rule to block traffic originating from a list of regions where the application shouldn't be available. For example, to block traffic from the United States and Italy, use the following expression:

    origin.region_code == 'US' || origin.region_code == 'IT'

Additionally, it is possible to react to ongoing attacks by creating a denylist of IP addresses or CIDR ranges, with a limit of 10 IP addresses or ranges per rule. An example would be:

    inIpRange(origin.ip, '9.9.9.0/24')

While geo-blocking and IP-blocking are good mechanisms to stop trivial attacks or to limit an attack, there is more that can be done to block attackers. Most sophisticated credential stuffing attack tools can be configured to use proxies or compromised IoT devices to bypass IP-based controls.

Layer 2 – HTTP headers

Another way to improve defensive configurations is to add additional checks over the HTTP headers of the requests coming to the application. One of the main examples is the user-agent. The user-agent is a request header that helps the application identify which operating system and browser are being used, usually to improve the user experience.
Attackers do not usually care about helping the application better serve the user; in an attack scenario the HTTP headers are often either completely missing or malformed. Below is an example rule to check the user-agent's presence and correctness:

    has(request.headers['user-agent']) && request.headers['user-agent'].matches('Chrome')

HTTP headers can help further reduce the attack surface, but they also have their limits. They are still controlled on the client side, which means an attacker can spoof them. To get the most out of HTTP header checks, it is necessary to understand the HTTP headers that the application expects to encounter and to configure the Google Cloud Armor rule accordingly.

Layer 3 – Rate limiting

As we've noted, the nature of credential stuffing attacks makes them difficult to identify. They are also often associated with password spraying techniques that target not only breached username and password pairs, but also widely used, known weak passwords (such as "123456"). Rate limiting protection mechanisms work well in these scenarios to add an additional defensive layer. When dealing with rate limiting, it's important to identify the standard request rate of a legitimate user and the threshold beyond which requests will be blocked. Finding the right balance between security and user experience is often challenging. To help fine-tune rate limiting so that legitimate users are not blocked, Google Cloud Armor's preview mode allows security teams to test rate limiting without any real enforcement. To minimize user impact, we strongly recommend proceeding this way and then analyzing the test results.

Once the preliminary analyses have been completed, it is possible to use Google Cloud Armor to implement rate limiting rules. An example of a rule that applies a ban (which the user sees as a 404 error) of 5 minutes after 50 connections in less than 1 minute from the same IP address would be:

    gcloud compute security-policies rules create 100 \
        --security-policy=sec-policy \
        --action=rate-based-ban \
        --rate-limit-threshold-count=50 \
        --rate-limit-threshold-interval-sec=60 \
        --ban-duration-sec=300 \
        --conform-action=allow \
        --exceed-action=deny-404 \
        --enforce-on-key=IP

When it comes to rate limiting, client identification is fundamental. The IP address could be the first option, but there are cases where it isn't enough. For example, many Internet service providers use NAT techniques to reduce the public IP address space they need. The probability of an IP clash is low, but it should be taken into account when designing rate limiting thresholds and strategy. Cloud Armor can identify individual clients in many ways, such as by IP address, HTTP headers, HTTP cookies, and XFF-IPs. For example, it is common for mobile apps to use custom headers with unique values to identify each client reliably. In this case, it would be appropriate to enforce client identification based on this custom header rather than the IP address.
Below is an example rule based on the custom header 'client-random-id':

    gcloud compute security-policies rules create 100 \
        --security-policy=sec-policy \
        --action=rate-based-ban \
        --rate-limit-threshold-count=50 \
        --rate-limit-threshold-interval-sec=60 \
        --ban-duration-sec=300 \
        --conform-action=allow \
        --exceed-action=deny-404 \
        --enforce-on-key=HTTP-HEADER \
        --enforce-on-key-name='client-random-id'

Layer 4 – reCAPTCHA Enterprise and Google Cloud Armor integration

An additional level of protection, combined with the previously mentioned techniques, is the native integration of Google Cloud Armor with reCAPTCHA Enterprise. The integration can be built with a rate limiting rule similar to the one described above: instead of returning a 404 error, the rule is configured to redirect the connection to a reCAPTCHA Enterprise challenge at the WAF layer. At this stage, the following events take place:

- Cloud Armor evaluates the rate limiting criteria and, if the threshold is exceeded, redirects the connection to reCAPTCHA Enterprise.
- reCAPTCHA Enterprise performs an assessment of the client interaction and, if necessary, challenges the user with a CAPTCHA.
- If the user fails the assessment, an error message is returned. If the assessment is passed, reCAPTCHA issues a temporary exemption cookie.
- Cloud Armor verifies the exemption cookie's validity and grants access to the site.

The following diagram shows the event sequence:

Figure 2.

Conclusions

Credential stuffing is a non-trivial attack and should be mitigated first with multi-factor authentication mechanisms and user education. On top of that, technical measures can be implemented to apply a defense-in-depth model. Google Cloud Armor should be used to implement security mechanisms such as:

- Geo-blocking
- HTTP header verification
- Rate limiting

And, as an additional security layer:

- A combination of reCAPTCHA Enterprise and Google Cloud Armor

These controls can achieve a reasonable degree of protection not only against credential stuffing attacks, but also against brute-force and general bot-driven attacks.
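As a complement to the WAF-layer integration above, some teams also score sensitive actions such as logins directly in application code with the reCAPTCHA Enterprise API. The following minimal Python sketch is illustrative only and is not part of the Cloud Armor configuration itself; the project ID, site key, and token handling are assumptions.

    # Hypothetical sketch: score a login attempt with reCAPTCHA Enterprise.
    from google.cloud import recaptchaenterprise_v1

    def assess_login(project_id: str, site_key: str, token: str) -> float:
        client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()
        event = recaptchaenterprise_v1.Event(token=token, site_key=site_key)
        assessment = recaptchaenterprise_v1.Assessment(event=event)
        response = client.create_assessment(
            parent=f"projects/{project_id}", assessment=assessment
        )
        if not response.token_properties.valid:
            # The token was expired, malformed, or already used.
            return 0.0
        # Scores closer to 1.0 indicate likely legitimate traffic.
        return response.risk_analysis.score

    # Example usage (token comes from the reCAPTCHA JavaScript on the login page):
    # score = assess_login("my-project", "my-site-key", token_from_frontend)

An application could then require MFA or block the login outright when the score falls below a threshold it chooses.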
Source: Google Cloud Platform

Data Intensive Applications with GKE and MariaDB SkySQL

With Google Kubernetes Engine (GKE), customers get a fully managed environment to automatically deploy, manage, and scale containerized applications on Google Cloud. Kubernetes has become a preferred choice for running not only stateless workloads (e.g., web services) but also stateful applications (e.g., databases). According to the Data on Kubernetes report, over 70% of Kubernetes users run stateful applications in production. Stateful application support within Kubernetes has improved rapidly, and GKE offers extensive support for high-performing and resilient persistent storage and built-in features like Backup for GKE. With stateful applications, customers can choose to adopt a "do it yourself" (DIY) model and deploy directly on GKE, or simply use a fully managed database-as-a-service (DBaaS) offering such as Cloud SQL or MariaDB SkySQL. Whatever operating model customers choose, they expect a reliable, consistent experience from applications, which means data must be continuously available.

MariaDB SkySQL is a DBaaS for applications that demand scalability, availability, and elasticity. It's for customers looking for a cloud-native database that enables them to leverage the openness, resilience, extensibility, functionality, and performance of MariaDB's relational database on public cloud infrastructure. SkySQL delivers flexibility and scalability in a cloud database that keeps up with customers' changing needs — all while reducing legacy database costs. Together, customers get the best of both worlds for modern applications — fully managed compute with GKE for stateless applications and a highly reliable MariaDB SkySQL DBaaS for storing state. Virgin Media O2 serves more than 30 million users via Google Cloud and MariaDB SkySQL databases running all transactions for O2's network, customer authentication, venue deployment, and internal operations, including reporting and analytics.

"We need to make informed business decisions because we can easily see and understand what is happening in our environment. We now have a 24×7 platform that's more efficient, faster and cheaper. Cost was the last thing we looked at, but we're happy to see the savings. Both OpEx and CapEx were massively reduced by moving everything we did from on-prem into SkySQL, and that savings will continue, on an ongoing basis. We can now always work within our budget and scale as we go." – Paul Greaves, Head of Engineering, O2 Enterprise and Wifi, Virgin Media UK Limited

MariaDB SkySQL is built on GKE

Under the hood, MariaDB SkySQL is built on GKE. DBaaS offerings are increasingly running on GKE to benefit from built-in features such as Backup for GKE, cost optimization features to measure unit economics, and the portability and openness of Kubernetes. Additionally, to help with ongoing operations, commonly referred to as "day 2 operations", which have been a source of toil with stateful applications, customers get safe deployment strategies like blue-green upgrades and observability. All this means running on GKE brings business agility that makes MariaDB SkySQL easy to deploy and easy to scale as the business grows.

"Using GKE has really streamlined the process of operating SkySQL in the cloud," says Kevin Farley, Global Director of Cloud Partners, MariaDB Corporation. "SkySQL databases deployed on GKE regional clusters using a Kubernetes operator provide enterprise customers with maximum security and high availability."

Customers can choose to run databases of all types and sizes directly on GKE, or select managed DBaaS offerings like SkySQL. Increasingly, DBaaS products are being built on GKE to deliver as-a-service offerings, so either way, customers get the power of GKE supporting mission-critical applications. Try SkySQL on Google Cloud.
Source: Google Cloud Platform

Managing the Looker ecosystem at scale with SRE and DevOps practices

Many organizations struggle to create data-driven cultures where each employee is empowered to make decisions based on data. This is especially true for enterprises with a variety of systems and tools in use across different teams. If you are a leader, manager, or executive focused on how your team can leverage Google's SRE practices or wider DevOps practices, you are definitely in the right place!

What do today's enterprises or mature start-ups look like?

Today, large organizations are often segmented into hundreds of small teams, each working around data on the order of several petabytes and in a wide variety of raw forms. "Working around data" could mean any of the following: generating, facilitating, consuming, processing, visualizing, or feeding data back into the system. Given this wide variety of responsibilities, skill sets also vary to a large extent. Numerous people and teams work with data, with jobs that span the entire data ecosystem:

- Centralizing data from raw sources and systems
- Maintaining and transforming data in a warehouse
- Managing access controls and permissions for the data
- Modeling data
- Doing ad-hoc data analysis and exploration
- Building visualizations and reports

Nevertheless, a common goal across all these teams is keeping services running and downstream customers happy. In other words, the organization might be divided internally, but every team shares the mission to leverage the data to make better business decisions. Hence, despite silos and different subgoals, the destiny of all these teams is intertwined for the organization to thrive. To support such a diverse set of data sources and the teams supporting them, Looker supports over 60 dialects (input from a data source) and over 35 destinations (output to a new data source). Below is a simplified* picture of how the Looker ecosystem is central to a data-rich organization.

Simplified* Looker ecosystem in a data-rich environment

*The picture hides the complexity of the team(s) accountable for each data source. It also hides how a data source may have dependencies on other sources. Looker Marketplace can also play an important role in your ecosystem.

What role can DevOps and SRE practices play?

In the most ideal state, all these teams are in harmony as a single-threaded organization, with internal processes so smooth that everyone is empowered to experiment (i.e., fail, learn, iterate, and repeat all the time). With increasing organizational complexity, it is incredibly challenging to achieve such a state because there will be overhead and misaligned priorities. This is where the guiding principles of DevOps and SRE practices come in. In case you are not familiar with Google SRE practices, here is a starting point. At the core of DevOps and SRE practices are mature communication and collaboration practices. Let's focus on the best practices that could help with our Looker ecosystem.

- Have joint goals. There should be some goals which are a shared responsibility across two or more teams. This helps establish a culture of psychological safety and transparency across teams.
- Visualize how the data flows across the organization. This enables an understanding of how each team plays their role and how to work with them better.
- Agree on the Golden Signals (aka core metrics). These could mean data freshness, data accuracy, latency on centralized dashboards, etc. These signals allow teams to set their error budgets and SLIs.
- Agree on communication and collaboration methods that work across teams.
- Use regular, bidirectional communication modes – have shared Google Chat spaces or Slack channels. Focus on artifacts such as jointly owned documentation pages, shared roadmap items, reusable tooling, etc. For example, System Activity dashboards could be made available to all the relevant stakeholders and supplemented with notes tailored to your organization.
- Set up regular forums where commonly discussed agenda items include major changes, expected downtime, and postmortems around the core metrics. Among other agenda items, you could define and refine a common set of standards, for example centrally defined labels, group_labels, descriptions, etc. in the LookML, to ensure there is a single terminology across the board.
- Promote informal sharing opportunities such as lessons learned, TGIFs, brown bag sessions, and shadowing opportunities. Learning and teaching have an immense impact on how teams evolve. Teams often become closer with side projects that are slightly outside of their usual day-to-day duties.
- Have mutually agreed upon change management practices. Each team has dependencies, so making changes may have an impact on other teams. Why not plan those changes systematically? For example, agree on common standards for Advanced deploy mode.
- Promote continuous improvements. Keep looking for better, faster, cost-optimized versions of something important to the teams.
- Revisit your data flow. After every major reorganization, ensure that organizational change has not broken the established mechanisms.

Are you over-engineering?

There is a possibility that, in the process of maturing the ecosystem, we end up with an overly engineered system – we may unintentionally add toil to the environment. These are examples of toil that often stem from communication gaps:

- Meetings with no outcomes or action plans – This is among the most common forms of toil, where the original intention of a meeting is no longer valid but the forum has not taken the effort to revisit its decision.
- Unnecessary approvals – Being a single-threaded team can often create unnecessary dependencies, and your teams may lose the ability to make changes.
- Unaligned maintenance windows – Changes across multiple teams may not be mutually exclusive, so misalignment may create unforeseen impacts on the end user.
- Fancy, but unnecessary tooling – Side projects, if not governed, may create unnecessary tooling which is not being used by the business. Collaborations are great when they solve real business problems, so it is also necessary to check that priorities are set right.
- Gray areas – When you have a shared responsibility model, you may also end up with gray areas, which are often gaps with no owner. This can lead to increased complexity in the long run. For example, having the flexibility to schedule content delivery still requires collaboration to reduce jobs with failures, because failed jobs can impact the performance of your Looker instance.
- Contradicting metrics – You may want to pay special attention to how teams are rewarded for internal metrics. For example, if one team focuses on data accuracy and another on freshness, then at scale they may not align with one another.

Conclusion

To summarize, we learned how data is handled in large organizations, with Looker at the heart unifying a universal semantic model.
To handle large amounts of diverse data, teams need to start with aligned goals and commit to strong collaboration. We also learned how DevOps and SRE practices can guide us through these complexities. Lastly, we looked at some side effects of excessively structured systems. To go forward from here, it is highly recommended to start with an analysis of how data flows under your scope and how mature the collaboration is across multiple teams.

Further reading and resources

- Getting to know Looker – common use cases
- Enterprise DevOps Guidebook
- Know thy enemy: how to prioritize and communicate risks—CRE life lessons
- How to get started with site reliability engineering (SRE)
- Bring governance and trust to everyone with Looker's universal semantic model

Related articles

- How SREs analyze risks to evaluate SLOs | Google Cloud Blog
- Best Practice: Create a Positive Experience for Looker Users
- Best Practice: LookML Dos and Don'ts
Source: Google Cloud Platform

Top 5 Takeaways from Google Cloud’s Data Engineer Spotlight

In the past decade, we have experienced unprecedented growth in the volume of data that can be captured, recorded, and stored. In addition, the data comes in all shapes and forms, speeds, and sources. This makes data accessibility, data accuracy, data compatibility, and data quality more complex than ever before. That's why, this year at our Data Engineer Spotlight, we wanted to bring together the data engineer community to share important learning sessions and the newest innovations in Google Cloud. Did you miss out on the live sessions? Not to worry – all the content is available on demand. Interested in running a proof of concept using your own data? Sign up here for hands-on workshop opportunities.

Here are the five biggest areas to catch up on from Data Engineer Spotlight, with the first four takeaways written by a loyal member of our data community: Francisco Garcia, Founder of Direcly, a Google Cloud Partner.

#1: The next generation of Dataflow was announced, including Dataflow Go (allowing engineers to write core Beam pipelines in Go, data scientists to contribute with Python transforms, and data engineers to import standard Java I/O connectors; the best part is that it all works together in a single pipeline), Dataflow ML (deploy ML models built with PyTorch, TensorFlow, or scikit-learn to an application in real time), and Dataflow Prime (removes the complexities of sizing and tuning so you don't have to worry about machine types, enabling developers to be more productive). A minimal Beam pipeline sketch appears at the end of this article.
Read on the Google Cloud Blog: The next generation of Dataflow: Dataflow Prime, Dataflow Go, and Dataflow ML
Watch on Google Cloud YouTube: Build unified batch and streaming pipelines on popular ML frameworks

#2: Dataform Preview was announced (Q3 2022), which helps build and operationalize scalable SQL pipelines in BigQuery. My personal favorite part is that it follows software engineering best practices (version control, testing, and documentation) when managing SQL. Also, no skills beyond SQL are required. Dataform is now in private preview. Join the waitlist.
Watch on Google Cloud YouTube: Manage complex SQL workflows in BigQuery using Dataform CLI

#3: Data Catalog is now part of Dataplex, centralizing security and unifying data governance across distributed data for intelligent data management, which can help with governance at scale. Another great feature is built-in AI-driven intelligence with data classification, quality, lineage, and lifecycle management.
Read on the Google Cloud Blog: Streamline data management and governance with the unification of Data Catalog and Dataplex
Watch on Google Cloud YouTube: Manage and govern distributed data with Dataplex

#4: A how-to on BigQuery Migration Service was covered, which offers end-to-end migrations to BigQuery, simplifying the process of moving data into the cloud and providing tools to help with key decisions. Organizations are now able to break down their data silos. One great feature is the ability to accelerate migrations with intelligent, automated SQL translation.
Read on the Google Cloud Blog: How to migrate an on-premises data warehouse to BigQuery on Google Cloud
Watch on Google Cloud YouTube: Data Warehouse migrations to BigQuery made easy with BigQuery Migration Service

#5: The Google Cloud Hero Game was a gamified, three-hour Google Cloud training experience using hands-on labs to gain skills through interactive learning in a fun and educational environment.
During the Data Engineer Spotlight, 50+ participants joined a live Google Meet call to play the Cloud Hero BigQuery Skills game, with the top 10 winners earning a copy of Visualizing Google Cloud by Priyanka Vergadia. If you missed the Cloud Hero game but still want to accelerate your data engineering career, get started toward becoming a Google Cloud certified Data Engineer with 30 days of free learning on Google Cloud Skills Boost. We also asked a few of the winners about their experience.

What was your biggest learning/takeaway from playing this Cloud Hero game?

"It was brilliantly organized by the Cloud Analytics team at Google. The game day started off with the introduction and then from there we were introduced to the skills game. It takes a lot more than hands on to understand the concepts of BigQuery/SQL engine and I understood a lot more by doing labs multiple times. Top 10 winners receiving the Visualizing Google Cloud book was a bonus." – Shirish Kamath

"Copy and pasting snippets of codes wins you competition. Just kidding. My biggest takeaway is that I get to explore capabilities of BigQuery that I may have not thought about before." – Ivan Yudhi

Would you recommend this game to your friends? If so, who would you recommend it to and why would you recommend it?

"Definitely, there is so much need for learning and awareness of such events and games around the world, as the need for Data Analysis through the cloud is increasing. A lot of my friends want to upskill themselves and these kinds of games can bring a lot of new opportunities for them." – Karan Kukreja

What was your favorite part about the Cloud Hero BigQuery Skills game? How did winning the Cloud Hero BigQuery Skills game make you feel?

"The favorite part was working on BigQuery Labs enthusiastically to reach the expected results and meet the goals. Each lab of the game has different tasks and learning, so each next lab was giving me confidence for the next challenge. To finish at the top of the leaderboard in this game makes me feel very fortunate. It was like one of the biggest milestones I have achieved in 2022." – Sneha Kukreja
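To close with something hands-on, and tying back to takeaway #1, here is a minimal Apache Beam pipeline in Python. It is a generic sketch rather than a Dataflow Prime or Dataflow ML feature demo; it runs locally with the DirectRunner and can be submitted to Dataflow by adding the usual pipeline options.

    # Minimal Apache Beam pipeline sketch (batch here, but the same model covers streaming).
    import apache_beam as beam

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Create events" >> beam.Create([
                {"user": "a", "clicks": 3},
                {"user": "b", "clicks": 5},
                {"user": "a", "clicks": 2},
            ])
            | "Key by user" >> beam.Map(lambda e: (e["user"], e["clicks"]))
            | "Sum clicks per user" >> beam.CombinePerKey(sum)
            | "Print results" >> beam.Map(print)
        )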
Source: Google Cloud Platform