Bazaarvoice uses Recommendations AI to improve CTR by 60%

Not long ago, building AI into recommendation engines was a daunting, expensive task that could take years to get off the ground. But as Bazaarvoice has shown, with the help of cloud services, the time from AI investment to business outcomes is shorter than ever.

Bazaarvoice is the leading provider of product reviews and user-generated content (UGC) solutions that help brands and retailers understand and better serve customers. Its 2019 acquisition of Influenster.com, a community of consumer reviewers 6.5 million strong, expanded the Bazaarvoice portfolio with a platform where consumers can share their candid opinions — and share they have, over 54 million times. After the acquisition, Bazaarvoice expanded the site’s product diversity by 53%, to more than 5.4 million unique products.

To keep user engagement high, Influenster must be seen as both a source of trusted, transparent reviews and a place for customers to discover useful, relevant products for the first time. By introducing shoppers to new products, Influenster not only provides value to customers but also helps brands collect consumer insights.

Influenster started out as a place where people gathered to share their honest thoughts on beauty products, but quickly expanded to nearly every category, from Art to Wearables. With that much smaller initial scope, the site started and flourished under a rules-based recommendation engine. However, as Influenster expanded under Bazaarvoice, a more robust recommendation system became necessary. In its earliest days, Influenster was successful because of the human perspective it offered: for every product there was a litany of reviews and images that made users feel as if they were getting an endorsement from a friend. The Bazaarvoice engineering team asked themselves how they could keep that same feeling of personalization with an ever-growing catalog of items and categories.
They needed recommendations that could scale with the site, rather than requiring more rules to be constructed each time a new product category was introduced. They also needed to ensure the Influenster experience would remain performant even for unknown members. Bazaarvoice tested several recommendation engines, benchmarking each against its current rules-based system. In the end, the team decided on Google Cloud’s Recommendations AI because of its transparent billing, its ease of integration and setup, and, naturally, its proven results.

Transparent Billing

“Part of what the engineers loved was they knew exactly what it was going to cost as it scaled,” says Nick Shiftan, SVP, Content Acquisition Services Product Unit for Influenster. The goal was to build once and innovate, rather than leave a wake of technical debt to be tackled only when costs grew unexpectedly out of control. Google Cloud’s straightforward, pay-as-you-go billing allowed them to anticipate how costs would grow as user interactions did, and to plan accordingly.

Ease of Integration

“I’m positively surprised how Google packed such a complex system in a very easy-to-use API,” remarks Eralp Bayraktar, the Software Engineering team lead overseeing the project. Because the original team consisted of just one full-time engineer, ease of integration became an even more critical feature. Not only does Recommendations AI draw on years of recommendation expertise from Google Search and YouTube, but in combination with Google Merchant Center, it also creates a streamlined process for importing product metadata. From there, creating a model becomes a matter of picking the preferred recommendation type and then the business objective to optimize for. Once the model is created and the API integrated into the website, the code is already deployed at global scale: there are no further architectural considerations to ensure recommendations are available to users worldwide.
For Bazaarvoice, this meant going from ideation to production in one month.

Proven Results

“We have used it for product recommendations and off-loaded our DB-tiring business logic to Recommendations AI, which resulted in overall faster response times and much better recommendations as proven by our A/B tests,” Eralp continues. Bazaarvoice began by A/B testing Recommendations AI against its rules-based system. Early in the experimental phase, the team noticed a clear and consistent 60% increase in click-through rate over the original recommendation system.

Even more impressive was the performance on unknown members. For every person who signs up for an account on Influenster.com, there are many other visitors who come to the website and leave without registering. This is typically referred to as the “cold start” problem in the industry: how do you figure out what to recommend to people without their history, behavior, or preferences? Recommendations AI gives you the option to input and train on unknown users, and by providing metadata on products, it can serve high-quality suggestions to registered members and first-time visitors alike.

With a mind to the future, Eralp concludes his thoughts on Bazaarvoice’s experience: “It enables discovery by adding an adjustable percentage of cross-category products [for] healthier [traffic distribution] across all our catalog. We are investing in data science and having the Recommendations AI as the baseline is a good challenge for us to thrive.”

To learn more about Recommendations AI and how it can help your organization thrive, check out our recently published four-part guide, which kicks off with an overview, “How to get better retail recommendations with Recommendations AI.” The series also covers data ingestion, modeling, serving predictions, and evaluating Recommendations AI.
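For illustration, the relative uplift reported from an A/B test like the one described above is straightforward to compute from per-arm impressions and clicks. This is a minimal sketch with synthetic numbers, not Bazaarvoice’s actual data:

```python
# Hedged sketch: computing a relative CTR uplift from A/B test counts.
# The impression and click counts below are synthetic, for illustration only.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: fraction of impressions that led to a click."""
    if impressions == 0:
        return 0.0
    return clicks / impressions

def relative_uplift(control_ctr: float, variant_ctr: float) -> float:
    """Relative improvement of the variant over the control, e.g. 0.60 = +60%."""
    return (variant_ctr - control_ctr) / control_ctr

# Control arm: rules-based recommendations; variant arm: the new engine.
control = ctr(clicks=500, impressions=100_000)   # 0.5% CTR
variant = ctr(clicks=800, impressions=100_000)   # 0.8% CTR
print(f"uplift: {relative_uplift(control, variant):+.0%}")  # prints "uplift: +60%"
```

In practice the interesting work is in the experiment design (randomization, sample size, significance testing); the arithmetic itself is this simple.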
You can also easily get started with our Quickstart Guide.
Source: Google Cloud Platform

Innovating together to accelerate Germany’s digital transformation

At Google Cloud, we are committed to supporting the next wave of growth for Europe’s businesses and organizations. Germany is one of the largest and most connected global economies, and it is undergoing a digital transformation enabled by the use of cloud services. To further support that transformation, we announced plans to invest approximately EUR 1 billion in cloud infrastructure and green energy in Germany by 2030.

Organizations in Germany and across Europe need solutions that meet their requirements for security, privacy, and digital sovereignty, without compromising on functionality or innovation. To help meet these requirements, we launched ‘Cloud. On Europe’s Terms’ in September, and as part of that initiative, we entered into a strategic, long-term partnership with T-Systems to build a Sovereign Cloud offering in Germany for private and public sector organizations.

Building on our investments and partnership, we want to share the next steps in our plan to support sustainable digital transformation in Germany. Together with T-Systems, we will embark on an ambitious co-innovation program focused on developing new sovereign cloud and digital transformation solutions that promote the innovation and competitiveness of local cloud customers.

Munich – home to digital innovation for users and customers globally

Munich is at the center of our plans. Google and Google Cloud have a long history in the city. Since the initial opening of our Munich office in 2006, it has grown to become one of Google’s main European engineering hubs. More than 1,500 Googlers are based in Munich-Arnulfpark, our largest office in Germany. It is home to our global Google Safety Engineering Center (GSEC), where robust privacy and security solutions for billions of Google users are being built.
Munich was a natural choice for Google to locate this center, given the security and privacy expertise in the region. The Free State of Bavaria and its capital are among the most exciting places globally where the next decade of digitization is being shaped: the region is home to a rich ecosystem of established and emerging industry leaders, a vibrant technology sector, and academic and research excellence. This environment is the ideal place to establish our first European Google Cloud Co-Innovation Center, located in our offices on the Westhof side of the developing Arnulfpost Quarter. The Co-Innovation Center will open in the coming months and serve as a digital innovation stronghold in the heart of the Bavarian capital.

Dr. Markus Söder, Minister-President of Bavaria, commented: “Isar Valley meets Silicon Valley: the future lies in digitisation. Google Cloud and T-Systems launch a global partnership for cloud computing in Bavaria. Munich thus continues to grow as one of the leading IT locations worldwide. With the HighTech Agenda, Bavaria is promoting technology, digitalisation and Super-Tech with a total of 3.5 billion euros. We are creating 13,000 new university places and 1,000 professorships, 100 of which are for AI alone.”

Photo caption: Dr. Markus Söder, Minister-President of Bavaria, and Judith Gerlach, Bavarian State Minister for Digital Affairs, in conversation with Thomas Kurian, CEO of Google Cloud (on screen bottom right); Adel Al-Saleh, CEO of T-Systems, and Dr. Maximilian Ahrens, CTO of T-Systems (on screen bottom left); and Dr. Wieland Holfelder, Vice President, Engineering, Google Cloud (on screen top right).

Co-Innovation on a sovereign foundation

Together with T-Systems, a company that has also championed innovation in Munich for a long time, we’ll leverage the new Co-Innovation Center to:

- Serve our joint customers’ needs by collaborating on new solutions aligned with their sustainability and transformation goals
- Create a space for attracting and developing top cloud engineering talent and expertise in and for Germany
- Build the foundation for future sovereign solutions at scale that will strengthen competitiveness and overall digital transformation efforts in Germany.

The Co-Innovation Center will also offer programs that support the development of expertise among cloud customers and partners to further accelerate digital transformation efforts across the ecosystem.

Dr. Maximilian Ahrens, Chief Technology Officer at T-Systems, said: “We are very excited to see co-innovation as a central aspect of our partnership with Google Cloud come to life in Munich. Trust is a core strength for T-Systems and so is innovation for Google Cloud. Co-innovating along these values uniquely positions us to work together with our joint customers in Germany to address their most critical sovereign needs.” As part of this effort, T-Systems and Google Cloud will create a number of highly qualified software engineering roles in Munich.

Outlook

The official opening of our Co-Innovation Center will take place in mid-2022. Dr. Wieland Holfelder – Google Cloud’s long-standing Vice President, Engineering, and site lead for Google Munich – will lead the efforts for a new generation of sovereign cloud solutions. Given the challenges and opportunities of our time, we believe it is more useful to build more bridges and fewer walls, and to team up to innovate and develop better technology for Germany, Europe and beyond.
We look forward to welcoming our customers and partners to the new Center as we work to make Google Cloud the best possible place for sustainable, digital transformation.

Cloud CISO Perspectives: November 2021

We’re coming up on the end of the year, yet many of the most pressing security themes from 2021 remain the same, from securing open source software to enabling zero trust architectures and more. I’ll recap the latest updates from the Google Cybersecurity Action Team and industry progress on important security efforts in this month’s post.

Thoughts from around the industry

Securing open source software: Google’s Open Source Software team recently announced ClusterFuzzLite, a continuous fuzzing solution that can run as part of CI/CD workflows to find vulnerabilities. With just a few lines of code, GitHub users can integrate ClusterFuzzLite into their workflow and fuzz pull requests to catch bugs before they are committed. Implementing security checks as early as possible in developer workflows is paramount for improving supply chain security, and NIST’s guidelines for software verification specify fuzzing among the minimum standard requirements for code verification.

Runtime cloud-native security: Google Cloud’s Eric Brewer and I discussed the latest trends and the role of cloud providers and startups with InfoWorld in the ‘Race to Secure Kubernetes at Runtime’. Our work in this space goes back many years, to when we outlined our approach to cloud-native security through our BeyondProd framework, which details one of the core design principles of cloud-native security architectures: protections must extend to how code is changed and how user data in microservices is accessed.

The risks and opportunities of the transition to cloud computing: Office of the CISO Director Nick Godfrey and I sat down with Robert Sales of the Global Association of Risk Professionals to discuss the digital risk management landscape.
Our discussion covers timely themes: how the safe adoption of cloud computing is becoming an increasing priority, reflecting the benefits an organization can accrue from a digital transformation in terms of agility, the quality of products and services provided to customers, and relevance in the marketplace; and how cloud-driven transformation can actually mitigate existing security, control, and resilience risks. Check out the full webinar here.

Open source DDR controller framework for mitigating Rowhammer: Google and Antmicro developed a new Rowhammer Tester platform that gives memory security researchers and manufacturers a flexible platform for experimenting with new types of attacks and finding better Rowhammer mitigation techniques. This important work demonstrates how open source, vendor-neutral IP, tools and hardware can produce better platforms for more effective research and product development.

Ethical AI best practices: Many of you are likely engaged in your organizations on controls around AI, including an ethical framework for its use. Take a look at SEED (Security, Ethics, Explainability and Data) in this great summary from Maribel Lopez, Founder, Analyst & Author, Lopez Research, on the importance of controls in AI.

Google Cybersecurity Action Team Highlights

Here’s a snapshot of the latest updates, new services and resources across our Google Cybersecurity Action Team and Google Cloud Security products since our last post.

Security

Reducing risk and increasing sustainability: Veolia, the global leader in optimized resource management, is using Google Cloud’s Security Command Center (SCC) Premium as the core product for protecting the company’s technology environments.
In a recent blog post, Thomas Meriadec, Technical Lead and Product Manager for Veolia’s Google Cloud implementation, discusses how SCC Premium serves as the company’s risk management platform and enables Veolia to streamline the process of security management.

Compliance

Google Cybersecurity Action Team’s Risk and Compliance as Code (RCaC) solution helps organizations prevent security misconfigurations and automate cloud compliance. The solution enables compliance and security control automation through a combination of Google Cloud products, blueprints, partner integrations, workshops and services to simplify and accelerate time to value.

We announced new public sector authorizations, including the Impact Level 4 (IL4) designation for Google Cloud services and FedRAMP High for Google Workspace. These authorizations are part of our ongoing commitment to help the US federal government modernize its security with cloud-native services at scale. For Google Workspace, this means that federal agencies now have an alternative choice in the marketplace for productivity and collaboration tools that are completely cloud-native. The IL4 authorization for select GCP services demonstrates the efficacy of our security controls at scale across our public cloud infrastructure.

Controls

We released new security capabilities for Traffic Director, Google Cloud’s enterprise-ready control plane, which now provides fully managed workload credentials for Google Kubernetes Engine (GKE) via our managed CA Service, as well as policy enforcement to govern workload communications.
The fully managed credentials provide the foundation for expressing workload identities and securing connections between workloads using mutual TLS (mTLS), while following zero trust principles. Review our timely guidance on how to create and safeguard admin accounts in GCP, including links to more in-depth guidance in our resource guides.

Threat Intelligence

Google’s Cybersecurity Action Team released the first issue of the new Threat Horizons report, which is based on cybersecurity threat intelligence observations from Google’s internal security teams. Part of offering a secure cloud computing platform is providing cloud users with cybersecurity threat intelligence so they can configure their environments and defenses in the ways most specific to their needs. This new report provides actionable intelligence that enables organizations to ensure their cloud environments are best protected against ever-evolving threats. Future reports will continue to provide threat horizon scanning, trend tracking, and Early Warning announcements about emerging threats requiring immediate action. Learn more in our blog post or click here to download the executive summary.

Must-listen podcasts

Our Cloud Security Podcast has some must-listen episodes this month. Hear from MK Palmore, a new director in Google Cloud’s Office of the CISO and member of the Cybersecurity Action Team, on how missing diversity hurts your security; why email phishing still isn’t solved, with Ryan Noon, CEO at Material Security; and the difference between cloud misconfigurations and on-premises infrastructure misconfigurations, with the GSK team. Finally, the latest episode covers an interview with a Chronicle customer about their SIEM experience.

Upcoming Q4 Security Talks: all things Zero Trust

Our Google Cloud Security Talks event for Q4 will focus on a topic that we’ve emphasized continuously in our Cloud CISO Perspectives: Zero Trust.
Join us on December 15 to hear from leaders across Google, as well as leading-edge customers, on the many facets of an enterprise zero trust journey. Click here to reserve your spot and we’ll see you there (virtually).

If you’d like to have this Cloud CISO Perspectives post delivered to your inbox every month, click here to sign up. We’ll be back next month for our final Cloud CISO Perspectives blog of 2021.

Achieving Autonomic Security Operations: Reducing toil

Almost two decades of Site Reliability Engineering (SRE) has proved the value of incorporating software engineering practices into traditional infrastructure and operations management. In a parallel world, we’re finding that similar principles can radically improve outcomes for the Security Operations Center (SOC), a domain plagued with infrastructure and operational challenges. As more organizations go through digital transformation, building a highly effective threat management function rises to the top of their priorities. In our paper, “Autonomic Security Operations — 10X Transformation of the Security Operations Center”, we’ve outlined our approach to modernizing security operations.

One of the core elements of the security operations modernization journey is a relentless focus on eliminating “toil.” Toil is an SRE term, defined in the SRE book as “the kind of work tied to running a production service that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows.” If you’re a security analyst, you may realize that sifting through toil is one of the most significant and burdensome elements of your role. For some analysts, their entire workload fits the SRE definition of toil.

Another example from the same source states: “If your service remains in the same state after you have finished a task, the task was probably toil.” Sound familiar? Some would say that most SOC work is inherently like this: attackers come, alerts trigger, triage and investigate, adjust, tune, respond, rinse, and repeat. If our infrastructure remains in the same state afterwards, that may be the desired outcome, but we are still left with all of the operational challenges that make the analyst’s work cumbersome.

So, let’s talk about how you can make your SOC behave more like good SRE teams do. First, where is that 10X improvement mentioned in the paper likely to come from?
If you face an increase in attacks, in assets under protection, or in the complexity of your environment, a “toil-based” SOC will need to grow at least linearly with those changes. To handle 2X the attacks or 2X the scope (such as adding cloud to your SOC coverage), you will need 2X the people, and sometimes 2X the budget for tools. However, if you transform the SOC based on the principles we discuss in the ASO paper, an increase in data and complexity may not require doubling your team and budget (two things that are quite an uphill battle for many security leaders!).

The evolution of security operations in general, and SOC effectiveness in particular, is heavily dependent on driving an engineering-first mindset when operating secure systems at modern scale. You can’t “ops” your way to a modern SOC, but you can “dev” your way there! Using modern tools like Chronicle for detection and investigation can also help you reach that goal.

So, how can we put these and other SRE lessons to work in your SOC?

- First, educate your team on how SRE philosophies can be implemented in the SOC. Find opportunities to do team-building exercises and empower your team to define the cultural transformation. Driving a cultural shift requires an inspired, motivated, and disciplined team.
- Invest in learning programs to upskill your analysts and develop more engineering skills. Investing in your team’s careers will lead to more positive sentiment, a more motivated workforce, and a more solution-oriented team than a traditional operations team.
- Aim to cap your ops time at 50%; try spending the remaining 50% on improving systems and detections with an “automate-first” mindset. By the way, engineering is not the same as writing code: “Engineering work is novel and intrinsically requires human judgment. It produces a permanent improvement in your service, and is guided by a strategy.”
- “Commit to eliminate a bit of toil each week with some good engineering” in your SOC. Some SOC examples: tweak the rule that produces non-actionable alerts, write a SOAR playbook to auto-close some alerts using context data, script a test that log collection is running optimally, and so on.
- Consider hiring security automation engineers who have operations experience, or the ability to ramp up quickly. The right person can set the tone for leading your whole team through the evolution to an “SRE-inspired” 10X SOC.

We here at the Google Cybersecurity Action Team look forward to helping organizations of all sizes and capabilities achieve Autonomic Security Operations. While the challenges that plague the SOC can at times seem insurmountable, incremental engineering improvements can drive exponential outcomes. As you develop your roadmap for modernizing your threat management capabilities, we’re here to partner with you along the journey.

Here are some additional resources that provide perspectives on the transition to more autonomic security operations:

- “Modernizing SOC … Introducing Autonomic Security Operations”
- “Autonomic Security Operations — 10X Transformation of the Security Operations Center”
- “SOC in a Large, Complex and Evolving Organization” (Google Cloud Security Podcast ep. 26) and “The Mysteries of Detection Engineering: Revealed!” (ep. 27)
- “A SOC Tried To Detect Threats in the Cloud … You Won’t Believe What Happened Next”
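To make the “auto-close alerts using context data” playbook idea above concrete, here is a minimal, hypothetical sketch. The alert fields and the “approved scanner” context source are invented for illustration; real SOAR platforms expose their own APIs and enrichment sources:

```python
# Hedged sketch of a toil-reducing triage step: auto-close alerts that
# context data marks as expected noise, leaving analysts only the rest.
# The alert schema and scanner allowlist here are hypothetical.

def auto_triage(alerts, known_scanners):
    """Split alerts into (still open, auto-closed) using context data."""
    open_alerts, closed = [], []
    for alert in alerts:
        # Context enrichment: traffic from approved vulnerability scanners
        # routinely trips detection rules and can be closed automatically.
        if alert["source_ip"] in known_scanners:
            closed.append({**alert, "status": "closed",
                           "reason": "approved internal scanner"})
        else:
            open_alerts.append(alert)
    return open_alerts, closed

alerts = [
    {"id": 1, "rule": "port-scan", "source_ip": "10.0.0.5"},
    {"id": 2, "rule": "port-scan", "source_ip": "203.0.113.9"},
]
remaining, closed = auto_triage(alerts, known_scanners={"10.0.0.5"})
print(len(remaining), len(closed))  # prints "1 1"
```

Each such automation permanently removes a slice of repetitive triage work, which is exactly the SRE definition of engineering over toil.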

Want to supercharge your DevOps practice? Research says try SRE

Reliability matters. When users can’t access your application, when it’s slow to respond, or when it behaves unexpectedly, they don’t get the value you intend to provide. That’s why at Google we like to say that reliability is the most important feature of any system. Its impact can be seen all the way to the bottom line, as downtime comes with steep costs—to revenue, to reputation, and to user loyalty.

From the beginning of the DevOps Research and Assessment (DORA) project, we’ve recognized the importance of delivering a consistent experience to users. We measure this with the Four Key metrics: two that track the velocity of deploying new releases, balanced against two that capture the initial stability of those releases. A team that rates well on all four metrics is not only good at shipping code; it’s shipping code that’s good. However, these four signals, which focus on the path to a deployment and its immediate effects, are less diagnostic of subsequent success throughout the lifespan of a release.

In 2018, DORA began to study the ongoing stability of software delivered as a service (as typified by web applications), which we captured in an additional metric for availability, to explore the impact of technical operations on organizational performance. This year, we expanded our inquiry into this area, starting by renaming availability to reliability. Reliability (sometimes abbreviated as r9y) is a more general term that encompasses dimensions such as response latency and content validity, as well as availability.

In the 2021 State of DevOps Report’s cluster analysis, teams were segmented into four groups based on the Four Key metrics of software delivery. At first glance, we found that the application of reliability practices is not directly correlated with software delivery performance: teams that score well on delivery metrics may not be the same ones that consistently practice modern operations.
However, in combination, software delivery performance and reliability engineering exert a powerful influence on organizational outcomes: elite software delivery teams that also meet their reliability goals are 1.8 times more likely to report better business outcomes.

How Google achieves reliability: SRE

In Google’s early days, we took a traditional approach to technical operations; the bulk of the work involved manual interventions in reaction to discrete problems. However, as our products began to rapidly acquire users across the globe, we realized that this approach wasn’t sustainable. It couldn’t scale to match the increasing size and complexity of our systems, and even attempting to keep up would require an untenable investment in our operations workforce. So, for the past 15+ years, we’ve been practicing and iterating on an approach called Site Reliability Engineering (SRE). SRE provides a framework for measurement, prioritization, and information sharing to help teams balance the velocity of feature releases against the predictable behavior of deployed services. It emphasizes the use of automation to reduce risk and to free up engineering capacity for strategic work.

This may sound a lot like a description of DevOps; indeed, these disciplines have many shared values. That similarity meant that when Google published the first book on Site Reliability Engineering in 2016, it made waves in the DevOps community as practitioners recognized a like-minded movement. It also caused some confusion: some have framed DevOps and SRE as being in conflict or competition with each other. Our view is that, having arisen from similar challenges and espousing similar objectives, DevOps and SRE can be mutually compatible. We posited that, metaphorically, “class SRE implements DevOps”: SRE provides a way to realize DevOps objectives. Inspired by these communities’ continued growth and ongoing exchange of ideas, we sought to investigate their relationship further.
This year, we expanded the scope of data collection to assess the extent of SRE adoption across the industry, and to learn how such modern operational practices interact with DORA’s model of software delivery performance. Starting from the published literature on SRE, we added the key elements of the framework as items in our survey of practitioners. We took care to avoid jargon as much as possible, preferring plain language to describe how modern operations teams go about their work. Respondents reported on practices such as: defining reliability in terms of user-visible behavior; using automation to allow engineers to focus on strategic work; and having well-defined, well-practiced protocols for incident response.

Along the way, we found that using SRE to implement DevOps is much more widely practiced than we thought. SRE, and related disciplines like Facebook’s Production Engineering, have a reputation for being niche, practiced only by a handful of tech giants. To the contrary, we found that SRE is used in some capacity by a majority of the teams in the DORA survey, with 52% of respondents reporting the use of one or more SRE practices.

SRE is a force multiplier for software delivery excellence

Analyzing the results, we found compelling evidence that SRE is an effective approach to modern operations across the spectrum of organizations. In addition to driving better business outcomes, SRE helps focus efforts: teams that achieve their reliability goals report that they are able to spend more time coding, as they’re less consumed by reacting to incidents.
These findings are consistent with the observation that reliable services can directly impact revenue, and that they give engineers greater flexibility to spend their time improving their systems rather than simply repairing them. But while SRE is widely used and has demonstrable benefits, few respondents indicated that their teams have fully implemented every SRE technique we examined. Increased application of SRE has benefits at all levels: within every cluster of software delivery performance, teams that also meet their reliability goals outperform the other members of their cluster with regard to business outcomes.

On the SRE road to DevOps excellence

SRE is more than a toolset; it’s also a cultural mindset about the role of operations staff. SRE is a learning discipline, aimed at understanding information and continuously iterating in response. Accordingly, adopting SRE takes time, and success requires starting small and applying an iterative approach to SRE itself.

Here are some ways to get started:

- Find free books and articles at sre.google
- Join a conversation with fellow practitioners, at all different stages of SRE implementation, at bit.ly/reliability-discuss
- Speak to your GCP account manager about our professional service offerings
- Apply to the DevOps awards to show how your organization is implementing award-winning SRE practices along with the DORA principles!
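For readers newer to DORA, two of the Four Key metrics discussed above can be derived from a simple deployment log. This is a minimal sketch with synthetic data; the log format is invented for illustration:

```python
# Hedged sketch: computing deployment frequency and change failure rate
# (two of DORA's Four Key metrics) from a hypothetical deployment log.
from datetime import date

deployments = [
    {"day": date(2021, 11, 1), "failed": False},
    {"day": date(2021, 11, 3), "failed": True},   # required remediation
    {"day": date(2021, 11, 8), "failed": False},
    {"day": date(2021, 11, 10), "failed": False},
]

def deployment_frequency(deploys, days_in_window: int) -> float:
    """Average deployments per day over the observation window."""
    return len(deploys) / days_in_window

def change_failure_rate(deploys) -> float:
    """Share of deployments that degraded service and needed remediation."""
    return sum(d["failed"] for d in deploys) / len(deploys)

print(deployment_frequency(deployments, days_in_window=14))  # about 0.29/day
print(f"{change_failure_rate(deployments):.0%}")             # prints "25%"
```

The other two metrics (lead time for changes, time to restore service) need timestamps for commit, deploy, and incident resolution, but follow the same pattern of simple aggregation over delivery records.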
Source: Google Cloud Platform

Vodafone provides anonymized mobile phone signal data to help monitor the spread of COVID-19 and inform early warning systems for new outbreaks

Editor’s note: When Europe’s largest mobile communications company, Vodafone, was asked by the European Commission to help understand population movement across the European Union and the UK to help fight COVID-19, it was able to provide anonymized mobile network-based insights to answer the call. Here’s how Vodafone, with the support of Google Cloud, rapidly mobilized for the COVID-19 frontline, while respecting its customers’ privacy.

With the emergence of COVID-19 in early 2020, the European Commission—the executive branch of the European Union (EU)—knew that technology would be instrumental in its fight to control the pandemic. With various lockdowns imposed across its member states, the Commission was keen to predict and prevent the spread of COVID-19 and to manage the related social, political, and financial impacts.

Mobile network data helps track COVID-19 across the EU

Mobile networks produce location data, which can be turned into useful anonymous insights to understand population movement within a geographic area. The European Commission, working with mobile industry association GSMA (Groupe Speciale Mobile Association), asked Europe’s major mobile phone operators for help in producing insights to support the fight against COVID-19. As the largest mobile network operator within the EU, Vodafone saw this as a critical opportunity to participate. Vodafone had previous experience of using mobile network data to support pandemic research. For example, in 2019, Vodafone provided mobility pattern analysis to help track the spread of malaria in Mozambique. And, during the early stages of the COVID-19 pandemic (prior to working with the European Commission), Vodafone assisted the Italian and Spanish governments in understanding their citizens’ mobility patterns. Vodafone had also previously offered anonymized and aggregated population mobility insights to support public transport and tourism authorities and retail organizations in a number of countries.
Consequently, Vodafone was perfectly placed to play a greater role in supporting the European Commission’s response to the pandemic. When asked to assist, Vodafone first considered how it could safely share its data with the governing body without providing details on the individual movements of its customers. It realized it could achieve this through an elaborate set of anonymization and aggregation techniques: insights are aggregated from a minimum of 50 users, and Vodafone only shared these anonymous insights with the Commission, never the actual raw data. As specified by the EU, these insights are then mapped onto a large geographical region, typically a city or a county with thousands of people living in that area. These insights illustrate how people move, helping to determine how lockdowns and self-isolation measures were impacting behaviors.

Using Google Cloud to collate and store population mobility data

In April 2020, Vodafone began migrating its operations, including its mobile data, to Google Cloud on servers in Europe and the UK with elaborate security safeguards, including encryption, building on a previous partnership. With the data residing in EU and UK data centers and not the United States, Vodafone could retrieve anonymous insights from Google Cloud Storage instantaneously. Before supplying any information to the European Commission, however, Vodafone used Dataflow to validate the data and run a series of tests to ensure the database was accurate, before ingesting and archiving the relevant metrics. For instant access, the data was then made available to the European Commission using a Redis database on Google Kubernetes Engine. To ensure aggregate Vodafone customer data was always safe, secure, and anonymous, all entry points to the front end were protected behind Google Cloud Armor, where only specific IP addresses were allowed.
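The minimum-group-size rule described above is a standard aggregation safeguard: any group smaller than the threshold is suppressed entirely rather than reported. The following is a minimal, illustrative Python sketch of that idea only; the field names, threshold handling, and data shapes are assumptions, not Vodafone’s actual pipeline.

```python
K_ANONYMITY_THRESHOLD = 50  # minimum distinct users per reported group

def aggregate_mobility(events, threshold=K_ANONYMITY_THRESHOLD):
    """Count distinct users per region, suppressing small groups.

    `events` is an iterable of (user_id, region) pairs. Only region-level
    counts are ever returned, never individual records, and regions seen
    for fewer than `threshold` users are dropped from the output.
    """
    users_per_region = {}
    for user_id, region in events:
        users_per_region.setdefault(region, set()).add(user_id)
    return {region: len(users)
            for region, users in users_per_region.items()
            if len(users) >= threshold}

# Example: region "A" has 60 distinct users, region "B" only 3,
# so "B" is suppressed and never appears in the shared insights.
events = ([(f"u{i}", "A") for i in range(60)] +
          [(f"u{i}", "B") for i in range(3)])
print(aggregate_mobility(events))  # → {'A': 60}
```

The key design point is that suppression happens before anything leaves the aggregation step, so a downstream consumer can never distinguish "small group" from "no data".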
Using these tools, seamless data pipelines fed in predefined key performance indicators from each specified European market, while data quality measures ensured the definitions for metrics across markets were consistent and could be accurately compared. The architecture (pictured below) shows how Vodafone integrated and anonymized its data on Google Cloud.

Live interactive dashboard shows population mobility in real time

With its data integrated on Google Cloud, Vodafone created a live, interactive dashboard to track mobility patterns and share relevant information with the European Commission in real time. The European Commission Joint Research Center (JRC) was able to gather valuable information from these insights, which enabled it to see where population mobility was aiding the spread of the disease when cross-referenced with health data. It could also assess the implications of lockdowns on different populations and forecast cross-country spreading.

Mobile data aids disease modeling for multiple stakeholders

The Vodafone data became instrumental in modeling the likely course of the disease, too. For example, the University of Southampton in the UK used it to predict the outcome of different coordinated COVID-19 exit strategies across Europe. This research was published in Science Magazine in September 2020. The Vodafone data dashboard continues to be used by individual governments, NGOs, and organizations to further investigate the impacts of the pandemic and to measure the effectiveness of response strategies alongside the rollout of vaccination programs. The project also helped Vodafone win a DataIQ award for most effective stakeholder engagement. Using the learnings from this project, Vodafone has been able to adapt its own B2B solution, called Vodafone Analytics, by adapting and migrating the code to work on Google Cloud Platform.
This solution has been rolled out across Germany, Greece, Portugal, and South Africa, and new countries are being onboarded every day. Vodafone Analytics already has more than 100 customers leveraging it for a variety of use cases: Italian fashion retailer OVS uses it for its smart retail operation, while global real estate company JLL uses it to understand the footfall passing through its properties. Working together, Vodafone and Google Cloud continue to help a range of organizations, governments, and NGOs navigate through the ongoing pandemic, optimize their operations, and help the greater good, without infringing individuals’ fundamental rights to privacy. To learn more about Google Cloud and Vodafone, watch our full interview here.

Related Article: COVID-19 public datasets: supporting organizations in their pandemic response. See how organizations have used the BigQuery COVID-19 public dataset for research, healthcare, and more.
Source: Google Cloud Platform

Security Command Center – Increasing operational efficiency with new mute findings capability

Security Command Center (SCC) is Google Cloud’s security and risk management platform that helps manage and improve your cloud security and risk posture. It is used by organizations globally to protect their environments, providing visibility into cloud assets, discovering misconfigurations and vulnerabilities, detecting threats, and helping to maintain compliance with industry standards and benchmarks. SCC is constantly evolving, adding new capabilities to make your security operations and management processes more efficient. To help, we’re excited to announce a new “Mute Findings” capability in SCC that helps you more effectively manage findings based on your organization’s policies and requirements.

SCC presents potential security risks in your cloud environment as “findings,” inclusive of misconfigurations, vulnerabilities, and threats. A high volume of findings can make it difficult for your security teams to effectively identify, triage, and remediate the most critical risks to your organization. In these cases, you may wish to tune the incoming volume of findings, as some findings may not be relevant for a given project or organization based on your company’s policies or risk appetite. The mute findings capability enables organizations to make Security Command Center findings more reflective of their particular risk model and prioritization.

Enabling operational efficiencies for your security

With the launch of the mute findings capability, you gain a way to reduce findings volume and focus on the security issues that are highly relevant to you and your organization, by suppressing findings that fit certain criteria. It saves you time from reviewing or responding to findings that you identify as acceptable risks within your environment.
For example, alerts for assets that are isolated or fall within acceptable business parameters may not need to be responded to immediately, or remediated at all. Once muted, findings continue to be logged for audit and compliance purposes, and muted findings are still available for review at any time. However, they are hidden by default in the SCC dashboard and can be configured to avoid creating Pub/Sub notifications, allowing your teams to focus on addressing issues highlighted by non-muted findings.

Sample use cases for muting findings

The following are a few sample scenarios in which the new mute findings capability can be helpful:

Assets within non-production environments where stricter requirements may not be applicable.
Recommendations to use customer-managed encryption keys in projects that don’t contain critical data.
Findings about a datastore that is intentionally open to the public in order to disseminate public information.
Findings not relevant to your organization based on your company’s security policies.

How to mute findings in SCC

With this release, SCC findings now have one of the following three states:

Muted – Findings that have been either manually muted by a user or automatically muted by a mute rule
Unmuted – Findings that have been unmuted by a user
Undefined – Findings that have never been muted or unmuted

You can quickly set this up for your Google Cloud environment and take advantage of this capability:

1. Automatically mute findings using mute rules
Mute rules enable you to scale and streamline your security operations process by automatically muting findings. You can create mute rules in SCC to silence findings based on criteria you specify. Any new, updated, or existing findings are automatically muted if they match the mute rule conditions.

2. Manual option to mute findings
The manual option enables you to review and silence individual findings.
You can select one or more findings in your findings view and manually mute them.

3. Unmuting findings
As your organization’s policy changes, there may be scenarios where you want to unmute findings that have been silenced in the past. For findings that were muted earlier, either by a mute rule or manually, but are now important for your environment, you can simply unmute them in the findings view. Once unmuted, they remain in that state and will not be automatically muted again by any mute rule. However, you can use the manual option to mute them again.

4. Auditing mute operations
Two additional attributes, ‘mute initiator’ and ‘mute update time’, are available on findings. These attributes record which mute rule or user took the mute or unmute action, along with a timestamp of when the action was taken, providing visibility for future auditing and investigation.

5. Findings view
The findings view in SCC provides a consolidated view of findings across threats, misconfigurations, and vulnerabilities. Muted findings are hidden in the default view; to view muted findings, you can quickly and easily click More Options > Include muted findings. If you wish to see only muted findings, simply add a filter for mute=MUTED.

Getting started with muting findings in SCC

Mute findings functionality is now available in SCC through the Google Cloud Platform console, the gcloud tool, and the API. You can get started with these new capabilities today using our product documentation. And you can learn more about using SCC to comprehensively manage security and risk across your GCP footprint in our Getting Started video series.

Related Article: How Veolia protects its cloud environment across 31 countries with Security Command Center. Security Command Center enables Veolia to manage security and risk for their cloud environment.
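Conceptually, a mute rule is a persistent filter applied to incoming findings: matching findings are flagged as muted but never deleted, and the default view simply excludes them. The sketch below is purely illustrative Python; the field names and matching logic are assumptions for demonstration, not the actual SCC API or its filter syntax.

```python
# Illustrative model of mute rules: a rule names attribute/value pairs,
# and any finding matching a rule is marked MUTED while remaining
# available for audit. Simplified sketch, not the real SCC filter language.

def matches(rule, finding):
    """A rule matches when every attribute it names has the given value."""
    return all(finding.get(attr) == value for attr, value in rule.items())

def apply_mute_rules(findings, rules):
    """Mark findings MUTED if any rule matches; nothing is deleted."""
    for finding in findings:
        if any(matches(rule, finding) for rule in rules):
            finding["mute"] = "MUTED"
    return findings

findings = [
    {"name": "finding-1", "category": "PUBLIC_BUCKET_ACL", "env": "dev"},
    {"name": "finding-2", "category": "OPEN_FIREWALL", "env": "prod"},
]
# Example policy: mute acceptable-risk findings in non-production projects.
rules = [{"env": "dev"}]
apply_mute_rules(findings, rules)

# The default view hides muted findings; a mute=MUTED filter shows only them.
default_view = [f for f in findings if f.get("mute") != "MUTED"]
muted_view = [f for f in findings if f.get("mute") == "MUTED"]
print([f["name"] for f in default_view])  # → ['finding-2']
```

The point of the model is that muting changes visibility, not existence: the muted finding stays in the store for audit and can be surfaced again at any time.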
Source: Google Cloud Platform

Edge computing—building enterprise edge applications with Google Cloud

As we discussed in part 2 of this blog series, if you design your edge computing realistically, your systems may not be connected to the network all the time. But there are a variety of tools you can use to manage those edge deployments effectively, and they can even tie the edge back into your main environment. In this third blog of the series, we’ll discuss the role of software in edge computing, and Google Cloud’s solutions to this end.

Google provides software

When it comes to edge environments, Google Cloud’s role is clear: we treat the edge as the domain of our customers and partners. We do not send remote servers for pickup or preconfigure boxes to sync. Instead, we provide software and tools to configure and maintain all clusters as part of the Anthos suite, combined with Google Kubernetes Engine (GKE) and open-source Kubernetes. An Anthos cluster at the edge may be a full GKE edge installation or a fleet of microk8s Raspberry Pi clusters. As long as the attached cluster is running Anthos Config Management and Policy Controller, the remote cluster may be managed over consistent or intermittent connectivity. In addition, Anthos Fleets facilitates organizing Kubernetes clusters into manageable groups, delineated by cross-service communication patterns, location, environment, and the administrator managing the block of clusters. This is a different approach from other cloud providers, who may provide a similar fully managed experience but with proprietary hardware that inevitably leads to a certain level of lock-in. By focusing on the software stack, Google sets the path for long-term, successful edge fleet management. (As an aside, if you are interested in a fully managed experience, Google partners with vendors who will take on the responsibility of managing the hardware and configuration of the Anthos edge clusters.)

Let’s look at the various tools that Google Cloud offers and how they fit into an edge deployment.

Kubernetes and GKE

Where does Kubernetes fit in?
In a nutshell, Kubernetes brings convention. The edge is unpredictable by nature. Kubernetes brings stability and consistency, and extends familiar control and data planes to the edge. It opens the door to immutable containerized deployments and predictable operations. Data centers and cloud service providers deliver predictable environments, but the broader reach of the edge introduces instability that platform managers are not accustomed to. In fact, platform managers have been working to avoid instability for the past two decades. Thankfully, Kubernetes thrives in this extended edge ecosystem. Often, in the enterprise we think of massive Kubernetes clusters running complex, interdependent microservice workloads. But at its core, Kubernetes is a lightweight, distributed system that also works well when deployed on the edge with just a few focused deployments. Kubernetes increases the level of stability, offers a standardized open-source control plane API, and can serve as a communications or consolidation hub at edge installations that are saturated with devices. Kubernetes brings a standard container host platform for software deployments. A simple redundant pair of NUCs or Raspberry Pi racks can improve edge availability and normalize the way our data centers communicate with our edge footprints.

Anthos

What about Anthos? Anthos brings order. Without a good strategy and tools, the edge can be daunting, if not impossible, to manage cost-effectively. While it’s common to have multiple data centers and cloud providers, edge surfaces can number in the hundreds or thousands! Anthos brings control, governance, and security at scale. With Anthos, you overlay a powerful framework of controls that extends from your core cloud and data center management systems to the farthest reaches of your edge deployments. Anthos allows central administration of remote GKE or attached Kubernetes clusters, running private services to support location-specific clients.
We see the Anthos edge story developing in all of these industries:

Warehouses
Retail Stores
Manufacturing and Factories
Telco and Cable Providers
Medical, Science and Research Labs

Anthos Config Management and Policy Controller

Configuration requirements have advanced in leaps and bounds. Anthos Config Management (ACM) and Policy Controller come to the rescue in these scenarios, enabling platform operations teams to manage large fleets of edge resources at scale. With ACM, operators create and enforce consistent configurations and security policies across edge installations. (Source: https://cloud.google.com/anthos-config-management/docs/overview)

For example, one Google Cloud customer and partner plans to deploy three bare-metal servers running either Anthos Bare Metal or attached clusters in an HA configuration (all three acting as both master and worker) at over 200 customer locations. The capacity of the cluster totals more than 75K vCPUs, and they plan to manage configuration, security, and policy at scale across this entire fleet using ACM.

Anthos Fleets

As more edge clusters are added to your Anthos dashboard, cluster configurations become increasingly fragmented and difficult to manage. In order to provide proper management and governance capabilities for these clusters, Google Cloud has the concept of fleets. Anthos Fleets negates the need for organizations to build their own tooling to get the level of control that enterprises typically desire, and provides an easy way to logically group and normalize clusters, helping to simplify their administration and management. Fleet-based management is applicable to both Anthos (edge included) and GKE clusters.

Anthos Service Mesh

The edge is fertile ground for microservices architectures. Smaller, lightweight services improve reliability, scalability, and fault tolerance. But they also bring complexity in traffic management, mesh telemetry, and security.
Anthos Service Mesh (ASM), based on open-source Istio, provides a consistent framework for reliable and efficient service management. It provides service operators with critical features like tracing, monitoring, and logging. It facilitates zero-trust security implementations and allows operators to control traffic flow between services. These are features we have been dreaming of for years. Virtualizing services decouples networking from applications, and further separates operations from development. ASM, together with ACM and Policy Controller, is a powerful set of tools to simplify service delivery and drive agile practices without compromising on security.

Pushing the edge to the edge

Even though edge computing has been around for a long time, we believe that enterprises are just beginning to wake up to the potential that this model provides. Throughout this series, we’ve demonstrated the incredible speed of change and the high potential that edge technology promises. Distributing asynchronous and intermittently connected fleets of customer-managed commodity hardware and dedicated devices to do the grunt work for our data centers and cloud VPCs opens up huge opportunities in distributed processing. For enterprises, the trick to taking advantage of the edge is to build edge installations that focus on the use of private services, and to design platforms that are tolerant of hardware and network failures. And the good news is that Google Cloud offers a full software stack, including Kubernetes, GKE, Anthos, Anthos Fleets, Anthos Service Mesh, Anthos Config Management, and Policy Controller, that enables platform operators to manage remote edge fleets in places far, far away!

Related Article: Edge computing—a paradigm shift that goes beyond hybrid cloud. Edge platforms are evolving at an incredible speed, opening up opportunities for enterprises.
Source: Google Cloud Platform

Empowering DevOps to foster customer loyalty in modern retail with MongoDB Atlas on Google Cloud

Consumer demands are becoming more complex, driven by high expectations for personalized experiences that strike the right chord at the perfect time. One study from McKinsey found that nearly three-quarters of consumers demand personalization when interacting with retailers. Retailers old and new, of any size, must embrace these challenges head on and learn to capture customer loyalty. While each business has a unique journey toward modernization, all of them share something in common: effective approaches to DevOps and data analytics underpin their success. Retailers sometimes struggle to change previous retail models into the much more intimate, personalized, and real-time retail experiences that consumers now want, whether shopping in-store or online. At the same time, retailers and many newcomers are jumping all in, devising exceptional experiences that transform shopping and elevate expectations even further. MongoDB and Google Cloud have been helping retailers of all sizes better address quickly changing market opportunities. As retailers continue to need more powerful systems of engagement and data analytics, the combination of MongoDB Atlas and Google Cloud solutions offers retailers such as 1-800-FLOWERS.COM, Inc. a solid mix of proven IT infrastructure and expertise.

Maximizing data value for developers

DevOps is increasingly tasked with creating experiences that will bring customers to a retail website and guide them through the purchasing process. Along the way, they need to build in steps that keep customers fully engaged in the buying process and discourage things like cart abandonment. A successful build depends a lot on how much quality data is available about customer shopping experiences and how easy it is for DevOps teams to derive insights from that information. Google Cloud is very much a developer’s cloud, and we at MongoDB are very much a developer’s database.
We like the breadth of Google Cloud services, which pair well with our products. Our collaboration with Google Cloud feels very natural, both in terms of the technology we develop and how we approach serving our clients’ needs. Together, we give DevOps teams at retailers a modern toolkit to maximize the value of their work. The cloud-based environment supported by Google Cloud and MongoDB Atlas increases the speed and success of experimentation, and ultimately delivers solutions with the greatest impact. With agile environments like ours, teams experiment much faster, leading to more innovative shopping experiences that differentiate a retailer from its competitors. Any cloud solution has to be usable for retailers of all sizes so that they can develop services according to their unique needs, expertise, and visions. The goal should be to empower a retailer’s DevOps team to be as self-sufficient as possible, and not have to rely on a third party every time changes need to be made. In an industry facing extremely tight margins—and where a two percent efficiency gain or a 2x acceleration of time to market can make or break the success of a project—gaining any edge is essential for retailers. Google Cloud and MongoDB provide that edge to retailers, as well as to other companies across industries.

Cultivating a vision at 1-800-FLOWERS.COM, Inc.

1-800-FLOWERS.COM, Inc. is an exceptional example of what can be achieved when going all in with modern data and DevOps solutions. Chief Technology Officer Abi Sachdeva has pursued emerging technologies to support its business teams with the latest tools to drive value for its customers. Abi has been laser-focused on delivering new personalized experiences by continually innovating customer-facing services. Driven by a commitment to foster engagement across its industry-leading brands through a centralized customer experience, 1-800-FLOWERS.COM, Inc.
built an e-commerce platform that is inclusive of both products and resources aimed at improving how people express themselves. To best manage all the associated e-commerce environments and ensure outstanding customer service, 1-800-FLOWERS.COM, Inc. worked with MongoDB and Google Cloud to revolutionize its DevOps.

“With the help of MongoDB and Google Cloud, we transformed people, processes, and our technology. It has been a very stable experience requiring little administrative work,” says Abi. “Traditional technologies weighed us down in the past. MongoDB and Google Cloud deliver data models and DevOps solutions that accelerate our development and deployment.”

MongoDB Atlas provides 1-800-FLOWERS.COM, Inc. with aggregation pipelines and a distributed system design that help it scale quickly, while Google Cloud made its new approach to agile and DevOps a reality. With the speed and agility that come with cloud services, companies like 1-800-FLOWERS.COM, Inc. can keep up with constantly changing customer preferences. With proven cloud solutions that at once increase overall IT effectiveness and decrease the burdens on IT teams, 1-800-FLOWERS.COM, Inc. is better positioned to constantly experiment, innovate, and deliver experiences that delight customers.

“The fully managed MongoDB Atlas database on Google Cloud has unlocked tremendous potential in our IT architecture,” says Abi. “From agility in scaling and improved resource management to seamless global clusters and premium monitoring, MongoDB and Google Cloud reduce complexity and allow our teams to stay lean and focused on innovation rather than infrastructure.”

Looking toward the holidays and beyond

The very same systems that encourage experimentation and innovation can position retailers and other companies to excel during and long after the holiday season.
Companies need elasticity, scalability, and agility to facilitate experimentation and to navigate the turbulent external factors across their marketplace. Every holiday season is challenging for retailers, but current supply chain concerns, combined with massive changes to how people shop as a result of the COVID-19 pandemic, will make 2021 a particularly important year for the industry. I believe that companies that have increased their backend elasticity and improved their DevOps culture will fare especially well amid the market upheaval. As organizations modernize IT, it will be increasingly important to pair the possibilities of software and infrastructure to enable smaller DevOps teams to act independently and quickly. This improves culture across the business, as people are more empowered and supported. In addition, I encourage DevOps professionals to place more importance on understanding customers and business values, and to be people-first in their approach to work. By combining this level of business experience with great coding skills, smaller teams can bolster a retailer’s performance this holiday season and beyond. We are proud to work with Google Cloud to develop and deliver new ways for DevOps in retail and other industries to experiment, innovate, and deploy groundbreaking experiences that transform how people achieve their goals. To learn more about the future of retail innovation, watch this video featuring members of the 1-800-FLOWERS.COM, Inc., MongoDB, and Google Cloud teams.

Related Article: Delivering smiles and sparking innovation at 1-800-FLOWERS.COM, Inc. See how the gift retailer migrated its customer touchpoints to cloud, including GKE and BigQuery.
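As an illustration of the aggregation pipelines mentioned above, here is a minimal MongoDB-style pipeline definition alongside a tiny pure-Python evaluation of what it computes. The collection and field names are hypothetical examples for this sketch, not any retailer’s actual schema.

```python
# A MongoDB-style aggregation pipeline: keep completed orders, total
# revenue per product category, then sort by revenue. Field names and the
# "orders" collection are hypothetical, chosen only for illustration.
pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {"_id": "$category", "revenue": {"$sum": "$amount"}}},
    {"$sort": {"revenue": -1}},
]
# Against a live cluster this would run roughly as:
#   results = db.orders.aggregate(pipeline)

# Pure-Python equivalent of the three stages, so the sketch is runnable:
def run_pipeline(orders):
    totals = {}
    for order in orders:
        if order["status"] == "completed":                # $match
            totals[order["category"]] = (                 # $group / $sum
                totals.get(order["category"], 0) + order["amount"])
    return sorted(totals.items(), key=lambda kv: -kv[1])  # $sort (desc)

orders = [
    {"status": "completed", "category": "flowers", "amount": 40},
    {"status": "completed", "category": "gifts", "amount": 25},
    {"status": "cancelled", "category": "flowers", "amount": 15},
    {"status": "completed", "category": "flowers", "amount": 30},
]
print(run_pipeline(orders))  # → [('flowers', 70), ('gifts', 25)]
```

Pushing this computation into the database as a pipeline, rather than pulling raw documents into application code, is what lets such workloads scale with the distributed design the article describes.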
Source: Google Cloud Platform

Illicit coin mining, ransomware, APTs target cloud users in first Google Cybersecurity Action Team Threat Horizons report

At Google we have an immense aperture into the global cybersecurity threat landscape, and the means to mitigate risks that stem from those threats. With our recently launched Google Cybersecurity Action Team, we are bringing more of our security capabilities and advisory services to our customers to strengthen their defenses. A big part of this is bridging our collective threat intelligence to yield specific insights, such as when malicious hackers exploit improperly secured cloud instances to download cryptocurrency mining software to the system—sometimes within 22 seconds of being compromised. This is one of several observations that we have published in the first issue of the Threat Horizons report (read the executive summary or the full report). The report highlights recent observations from the Google Threat Analysis Group (TAG), Google Cloud Security and Trust Center, Google Cloud Threat Intelligence for Chronicle, Trust and Safety, and other internal teams who collectively work to protect our customers and users. The report’s goal is to provide actionable intelligence that enables organizations to ensure their cloud environments are best protected against ever-evolving threats. In this and future threat intelligence reports, the Google Cybersecurity Action Team will provide threat horizon scanning, trend tracking, and Early Warning announcements about emerging threats requiring immediate action. While cloud customers continue to face a variety of threats across applications and infrastructure, many successful attacks are due to poor hygiene and a lack of basic control implementation. Most recently, our internal security teams have responded to cryptocurrency mining abuse, phishing campaigns, and ransomware.
Given these specific observations and general threats, organizations that put emphasis on secure implementation, monitoring, and ongoing assurance will be more successful in mitigating these threats, or at the very least in reducing their overall impact. The cloud threat landscape in 2021 was more complex than just rogue cryptocurrency miners, of course. Google researchers from TAG exposed a credential phishing attack by the Russian government-supported APT28/Fancy Bear at the end of September, which Google successfully blocked; identified a North Korean government-backed threat group that posed as Samsung recruiters to send malicious attachments to employees at several South Korean anti-malware cybersecurity companies; and detected customer installations infected with Black Matter ransomware (the successor to the DarkSide ransomware family). Across these instances of malicious activity, we see the impact of poorly secured customer installations. To stop them, we embrace a shared fate model with our customers, and provide trends and lessons learned from recent cybersecurity incidents and close calls. We suggest several concrete actions for customers that will help them manage the risks they face. Vulnerable GCP instances, spear-phishing attacks, patching software, and using public code repositories all come with risks. Following these recommendations can reduce the chance of unexpected financial losses and outcomes that may harm your business:

Audit published projects to ensure certs and credentials are not accidentally exposed. Certs and credentials are mistakenly included in projects published on GitHub and other repositories on a regular basis. Audits help avoid this mistake.

Authenticate downloaded code with hashing. The common practice of clients downloading updates and code from cloud resources raises the concern that unauthorized code may be downloaded in the process. Meddler in the Middle (MITM) attacks may cause unauthorized source code to be pulled into production.
Hashing and verifying all downloads preserves the integrity of the software supply chain and establishes an effective chain of custody.

Use multiple layers of defense to combat theft of credentials and authentication cookies. Cloud-hosted resources have the benefit of high availability and “anywhere, anytime” access. While this streamlines workforce operations, malicious actors try to take advantage of the ubiquitous nature of the cloud to compromise cloud resources. Despite the growing public attention to cybersecurity, spear-phishing and social engineering tactics are frequently successful, so defensive measures need to be robust and layered to protect ubiquitously accessible cloud resources. In addition to two-factor authentication, cloud administrators should strengthen their environment through Context-Aware Access and solutions such as BeyondCorp Enterprise and Work Safer.

The executive summary of the Threat Horizons report is available here, and the full report, which goes into greater detail on the current cloud threat landscape and the steps we recommend to reduce those risks, can be downloaded here.
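The download-hashing recommendation above can be sketched in a few lines of Python using the standard library. In practice the expected digest must come from a trusted channel (e.g. the distributor’s signed release notes), never alongside the file itself; the payload and digest here are illustrative only.

```python
import hashlib
import hmac

def verify_download(payload: bytes, expected_sha256: str) -> bool:
    """Accept a downloaded artifact only if its SHA-256 digest matches
    the digest published out-of-band by the distributor."""
    actual = hashlib.sha256(payload).hexdigest()
    # hmac.compare_digest avoids leaking information via comparison timing.
    return hmac.compare_digest(actual, expected_sha256.lower())

# Example: the "published" digest is computed here for demonstration;
# in a real pipeline it would come from a trusted, separate channel.
artifact = b"example update payload"
published = hashlib.sha256(artifact).hexdigest()

print(verify_download(artifact, published))             # intact download
print(verify_download(artifact + b"x", published))      # tampered download
```

Rejecting any artifact that fails this check before it enters the build or deployment pipeline is what establishes the chain of custody the report recommends.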
Source: Google Cloud Platform