Prepare for Google Cloud certification with top tips and no-cost learning

Becoming Google Cloud certified has proven to improve individuals’ visibility within the job market and to demonstrate their ability to drive meaningful change and transformation within organizations.

- 1 in 4 Google Cloud certified individuals take on more responsibility or leadership roles at work, and 87% of Google Cloud certified users feel more confident in their cloud skills.¹
- 75% of IT decision-makers need technologically skilled personnel to meet their organizational goals and close skill gaps.²
- 94% of those decision-makers agree that certified employees provide added value above and beyond the cost of certification.³

Prepare for certification with a no-cost learning opportunity

That’s powerful stuff, right? That’s why we’ve teamed up with Coursera to support your journey to becoming Google Cloud certified. As a new learner, you get one month of no-cost access to your selected Google Cloud Professional Certificate on Coursera to help you prepare for the relevant Google Cloud certification exam. Choose from Professional Certificates in data engineering, cloud engineering, cloud architecture, security, networking, machine learning, and DevOps, or, for business professionals, the Cloud Digital Leader.

Become Google Cloud certified

To help you on your way to becoming Google Cloud certified, you can earn a discount voucher toward the cost of the Google Cloud certification exam by completing the Professional Certificate on Coursera by August 31, 2022. Simply visit our page on Coursera and start your one-month no-cost learning journey today.

Top tips to prepare for your Google Cloud certification exam

Get hands-on with Google Cloud

For those of you in a technical job role, we recommend leveraging Google Cloud projects to build your hands-on experience with the Google Cloud console. With 500+ Google Cloud projects now available on Coursera, you can gain hands-on experience working in the real Google Cloud console, with no download or configuration required.

Review the exam guide

Exam guides provide the blueprint for developing exam questions and offer guidance to candidates studying for the exam. We’d encourage you to be prepared to answer questions on any topic in the exam guide, though it’s not guaranteed that every topic within an exam guide will be assessed.

Explore the sample questions

Taking a look at the sample questions on each certification page will help familiarize you with the format of exam questions and example content that may be covered.

Start your certification preparation journey today with a one-month no-cost learning opportunity on Coursera. Want to know more about the value of Google Cloud certification? Find out why IT leaders choose Google Cloud certification for their teams.

1. Google Cloud, Google Cloud certification impact report, 2020
2. Skillsoft Global Knowledge, IT Skills and Salary Report, 2021
3. Skillsoft Global Knowledge, IT Skills and Salary Report, 2021

Related article: Why IT leaders choose Google Cloud certification for their teams
Source: Google Cloud Platform

How Ocado Technology delivers smart, secure online grocery shopping with Security Command Center

Grocery shopping has changed for good, and Ocado Group has played a major role in this transformation. We started as an online supermarket, applying technology and automation to revolutionise the online grocery space. Today, after two decades of innovation, we are a global technology company providing state-of-the-art software, robotics, and AI solutions for online grocery. We created the Ocado Smart Platform, which powers the online operations of some of the world’s most forward-thinking grocery retailers, from Kroger in the U.S. to Coles in Australia.

With the global penetration of the Ocado Smart Platform and the increasing complexity of our operations, we’re paying close attention to our security estate. To proactively identify and tackle any security vulnerabilities, we decided to introduce Google Cloud’s Security Command Center (SCC) Premium as our centralized vulnerability and threat reporting service.

Gaining consolidated visibility into Ocado’s cloud assets

From the start, we were impressed with the speed of deployment and the security findings surfaced by SCC. Where setup would have taken several weeks in the past with other software vendors, we were able to quickly stand up SCC in our environment and immediately start identifying our most vulnerable assets.

Today, we use SCC to detect misconfigurations and vulnerabilities across hundreds of projects throughout our organization, and to get an aggregated view of our security health findings. We filter the findings and then use Pub/Sub or Cloud Functions to send alerts directly to the tools each division is working with, such as Splunk or JIRA. This way, each of our teams can discover and respond to security findings in its own environment, with SCC acting as the single source of truth for our security-related issues.

Driving autonomy by delegating security findings

Autonomy fuels innovation at Ocado Technology, which is why we want to make our teams as self-sufficient as possible. SCC helps make our divisions more autonomous from the central organization. It delivers all the security insights technology teams need to make smart decisions on their own and at pace. Here’s where SCC’s delegation features, providing folder- and project-level access control, come in. The platform’s fine-grained access control capabilities enable us to delegate SCC findings to specific teams without having to give them a view of the entire Ocado Technology organization. Business units no longer need to contact us in the security team to track down vulnerabilities; they can do it themselves in a compliant and secure manner. It makes our work more efficient and autonomous, allowing everyone to focus on their own areas of expertise and environments.

Identifying and remediating medium- and high-severity vulnerabilities

SCC’s findings are very rich and don’t end with the identification of potential misconfigurations and vulnerabilities. SCC goes beyond this, recommending solutions to resolve any issues and providing clear guidelines on next steps.
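The alert routing described above, where filtered SCC findings flow through Pub/Sub into a Cloud Function that forwards them to each division’s tools, can be sketched roughly as follows. This is a minimal illustration, not Ocado’s actual code: the severity-to-destination mapping and the routing targets are assumptions, while the "finding", "severity", "category", and "resourceName" fields follow the SCC Pub/Sub notification format.

```python
import base64
import json

# Illustrative (assumed) mapping from finding severity to a team tool.
ROUTES = {
    "CRITICAL": "pagerduty",
    "HIGH": "jira",
    "MEDIUM": "slack",
}

def route_finding(event, context=None):
    """Pub/Sub-triggered entry point: decode an SCC notification and
    decide where it should be forwarded."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    finding = payload.get("finding", {})
    severity = finding.get("severity", "LOW")
    destination = ROUTES.get(severity)  # LOW/unknown severities are dropped
    if destination is None:
        return None
    return {
        "destination": destination,
        "category": finding.get("category"),
        "resource": finding.get("resourceName"),
    }
```

A real deployment would replace the returned dict with calls to the Splunk or JIRA APIs and attach the function to the Pub/Sub topic that the SCC notification config publishes to.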
That’s why the feedback from our users across the organization has been so good. SCC delivers on both quality and quantity. Since implementation, it has helped us identify and remove hundreds of medium- and high-severity vulnerabilities from our Google Cloud estate. The number of security-related findings has also gone down each quarter, indicating real and tangible improvements in our security posture. SCC is invaluable in maintaining our security posture, because once we know where the issues are, tackling them is easy.

From 8-hour security scans to instant insights

One particular issue we’ve been able to handle well with SCC is the set of vulnerabilities targeting the Apache logging library Log4j. SCC informed us about attempted compromises, active compromises, and the vulnerability exposure of our Dataproc images. During the Log4j response, all of these would otherwise have been very hard to track down, especially with limited resources. With SCC, we were able to leverage the security expertise of Google Cloud to identify the latest vulnerabilities, based on the most up-to-date security trends, and act on them quickly.

Speed is of the essence when it comes to threat mitigation, and SCC has enabled us to fix issues faster, making us less exposed to outside threats. In the past, just scanning everything once could take up to eight hours. SCC sped things up from the start, and findings have been nearly instantaneous since it rolled out real-time Security Health Analytics.

Strengthening compliance and demonstrating standards to stakeholders

SCC helps us achieve better compliance standards and demonstrate those standards to our stakeholders. We recently ran an internal audit exercise across the Ocado Technology organization, for example, where we identified the projects with the most numerous and severe security-related findings. Without the reports from SCC, this would have been extremely hard or even impossible.

We also use the Security Health Analytics information from SCC to visualize the data per project, creating a kind of heat map of security across the organization. This helps us assign our resources to the right projects and prioritize our efforts accordingly, informing our strategic decisions.

From top-down to developer-led security

There’s been a paradigm shift in security operations: things are moving from a top-down approach to a more developer-led and autonomous process. SCC helps drive that change at Ocado Technology. It enables us to place responsibility for security-related issues closer to the resource owners. By making sure that the teams most impacted by a potential problem are the ones who get to fix it, we empower teams to resolve issues proactively and efficiently.

Looking forward, we can’t wait to see SCC evolve further. The features we’re most excited about are the ability to create custom findings (currently in preview) and additional integration capabilities that enable automation. We’re still not using everything SCC has to offer, but it is already a vital tool for our security team.

At Ocado Technology, we’re pioneering the future of online grocery shopping, and this future needs a strong security foundation. SCC helps us strengthen and maintain that foundation, making profitable, scalable, and secure online grocery shopping possible for even more businesses around the world.

Related article: Protecting customers against cryptomining threats with VM Threat Detection in Security Command Center

Invest early, save later: Why shifting security left helps your bottom line

Shifting left on security with Google Cloud infrastructure

The concept of “shifting left” has been widely promoted in the software development lifecycle: introducing security earlier, or leftwards, in the development process leads to fewer software-related security defects later, or rightwards, in production. Shifting cloud security left can help identify potential misconfigurations earlier in the development cycle, which, if unresolved, can lead to security defects. Catching those misconfigurations early improves the security posture of production deployments.

Why shifting security left matters

Google’s DevOps Research and Assessment (DORA) team highlighted the importance of integrating security into DevOps in the 2016 State of DevOps Report. The report discussed the placement of security testing in the software development lifecycle. The survey found that most security testing and tool usage happened after the development of a release, rather than continuously throughout the development lifecycle. This led to increased costs and friction, because remediating problems found in testing may involve big architectural changes and additional integration testing, as shown in Figure 1. For example, security defects in production can lead to GDPR violations, which can carry fines of up to 4% of global annual revenue.

Figure 1: Traditional Testing Pattern

By inserting security testing into the development phase, we can identify security defects earlier and perform the appropriate remediation sooner. This results in fewer defects post-production and reduces remediation efforts and architectural changes. Figure 2 shows that integrating security earlier in the SDLC decreases overall security defects and the associated remediation costs.

Figure 2: Security Landscape After Shifting Left

The 2021 State of DevOps Report expands on the 2016 report and advocates for integrating automated testing throughout the software development lifecycle. Automated testing is useful for continuously testing development code without the need for additional skills or intervention by the developer. Developers can continue to iterate quickly, while other stakeholders can be confident that common defects are being identified and remediated.

From code to cloud

The DORA findings on code security can also be applied to cloud infrastructure security. As more organizations deploy their workloads to the cloud, it’s important to test the security and configuration of cloud infrastructure. Misconfigurations in cloud resources can lead to security incidents, including data theft. Examples of such misconfigurations include overly permissive firewall rules, public IP addresses for VMs, or excessive Identity and Access Management (IAM) permissions on service accounts and storage buckets. We can and should leverage Google Cloud services to identify these misconfigurations early in the development process and prevent such errors from reaching production, reducing the costs of future remediation, potential legal fines, and lost customer trust.

The key tools in our toolshed are Security Command Center and Cloud Build. Security Command Center provides visibility into misconfigurations, vulnerabilities, and threats within a Google Cloud organization. This information is critical when protecting your cloud infrastructure (such as virtual machines, containers, and web applications) against threats, or when identifying potential gaps against compliance frameworks (such as CIS Benchmarks, PCI-DSS, NIST 800-53, or ISO 27001). Security Command Center further supports shifting security left by giving individual developers visibility into security findings at the cloud project level, while still allowing global visibility for security operations. Cloud Build provides for the creation of cloud-native CI/CD pipelines. You can insert custom health checks into a pipeline to evaluate certain conditions (such as security metrics) and fail the pipeline when irregularities are detected. We will now explore two use cases that take advantage of these tools.

Security Health Checker

Security Health Checker continuously monitors the security health of a Google Cloud project and promptly notifies project members of security findings. Figure 3 shows developers interacting with a Google Cloud environment with network, compute, and database components; Security Command Center is configured to monitor the health of the project. When Security Command Center identifies findings, it sends them to a Pub/Sub topic. A Cloud Function then takes the findings published to that topic and sends them to a Slack channel monitored by infrastructure developers. Just like a spell checker providing quick feedback on misspellings, Security Health Checker provides prompt feedback on security misconfigurations in a Google Cloud project that could lead to deployment failures or post-production compromises. No additional effort is required on the part of developers.

Figure 3: Security Command Center in a Google Cloud Environment

Security Pipeline Checker

In addition to using Security Command Center for timely notification of security concerns during the development process, we can also integrate security checks into the CI/CD pipeline by using Security Command Center along with Cloud Build, as shown in Figure 4.

Figure 4: Security Pipeline Checker Architecture

The pipeline begins with a developer checking code into a git repository. This repository is mirrored to Cloud Source Repositories, and a build trigger begins the build process. The build pipeline includes a short waiting period of a few minutes to give Security Command Center a chance to identify security vulnerabilities. A brief delay may appear undesirable at first, but the analysis that takes place during that interval can reduce security defects post-production. At the end of the waiting period, a Cloud Function serving as a validator evaluates the findings from Security Command Center (Connector 1 in Figure 4). If the validator determines that unacceptable security findings exist, it injects a failure indication into the pipeline to terminate the build process (Connector 2 in Figure 4). Developers have visibility into the failure triggers and can remediate them before successfully deploying code to production. This is in contrast to the findings in the 2016 State of DevOps Report, wherein organizations that didn’t integrate security into their DevOps processes spent 50% more time remediating security issues than those who “shifted left” on security.

Closing thoughts

DORA’s 2016 State of DevOps Report called out the need for “shifting left” on security: introducing security earlier in the development process to identify vulnerabilities early and reduce mitigation efforts post-production. The report also advocated for automated testing throughout the software development lifecycle. We looked at two ways of achieving these objectives in Google Cloud. The Security Health Checker uses Security Command Center and Slack to notify developers of security findings as they pursue their development activities. The Security Pipeline Checker uses Security Command Center as part of a Cloud Build pipeline to terminate a build if vulnerabilities are identified during the build process. To implement the Security Health Checker and the Security Pipeline Checker, check out the GitHub repository. We hope these examples will help you “shift left” using Google Cloud services.
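The validator step described above might be sketched like this. It is a hedged illustration, not the code from the GitHub repository: the blocking-severity set is an assumption, and fetching findings from Security Command Center (normally done with the securitycenter client library) is left out so the gating logic stands alone.

```python
# Sketch of a "Security Pipeline Checker" validation step. A Cloud Build
# step would fetch active SCC findings for the project, call validate(),
# and exit with its return value; Cloud Build treats a non-zero exit
# code as a failed step, which terminates the pipeline.

BLOCKING_SEVERITIES = {"HIGH", "CRITICAL"}  # assumed threshold for failing a build

def blocking_findings(findings, blocking=BLOCKING_SEVERITIES):
    """Return the active findings severe enough to terminate the pipeline."""
    return [f for f in findings
            if f.get("state") == "ACTIVE" and f.get("severity") in blocking]

def validate(findings):
    """Exit-code-style result: 0 lets the build continue, 1 fails the step."""
    blockers = blocking_findings(findings)
    for f in blockers:
        print(f"Blocking {f['severity']} finding: {f.get('category')}")
    return 1 if blockers else 0
```

In a real pipeline the script would end with `sys.exit(validate(findings))` after listing findings scoped to the project being built.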
Happy coding!

This article was co-authored with Jason Bisson, Bakh Inamov, Jeff Levne, Lanre Ogunmola, Luis Urena, and Holly Willey, Security & Compliance Specialists at Google Cloud.

Related article: Shift security left with on-demand vulnerability scanning

IN, NOT_IN and NOT EQUAL query operators for Firestore in Datastore Mode

We’re very pleased to announce that Firestore in Datastore mode now supports the IN, NOT_IN, and not-equal (!=) operators.

IN Operator

Firestore in Datastore mode now supports the IN operator. With IN, you can query a specific field for multiple values (up to 10). You do this by passing in a list of all the values you want to query for, and Firestore in Datastore mode will match any entity whose field equals one of those values.

For example, if you had a database with entities of kind Orders and you wanted to find which orders had a “delivered” or “shipped” status, you can now do something like this:

SELECT * FROM Orders WHERE status IN ARRAY("delivered", "shipped")

Let’s look at another example: say Orders has a field Category that contains a list of the categories the products in an order may belong to. You can now run an IN query on the categories you are looking for:

SELECT * FROM Orders WHERE Category IN ARRAY("Home Decor", "Home Improvements")

In this case, each entity is returned only once, even if it matches both of the categories in the query.

You can now also use ORDER BY with both IN and equality filters. The query planner originally ignored ordering on an equality, but with the introduction of IN, ORDER BY queries on multiple-valued properties become valuable. Please make sure to check out the official documentation for additional details. You can also use the new Query Builder in the UI to build an IN query.

NOT_IN & Not Equal Operators

You can now query using NOT_IN, which allows you to find all entities where a field is not in a list of values, for example, entities of kind Orders where the status field is NOT IN [“shipped”, “ready to ship”]:

SELECT * FROM Orders WHERE status NOT IN ARRAY("shipped", "ready to ship")

Using NOT_IN via Query Builder in the UI

With Not Equal you can now query for entities where a field is not equal to a given value, for example, entities of kind Orders where the status field is not equal to “pending”:

SELECT * FROM Orders WHERE status != "pending"

Using Not Equal via Query Builder in the UI

Note that with Datastore’s multi-value behavior, NOT_IN and Not Equal require only one element of a list property to match the given predicate. For instance, Category NOT IN [“Home Decor”, “Home Improvements”] would still return both e1 and e2, since they also contain the categories “Kitchen” and “Living Room”.

We hope these new additions enhance your development experience. We look forward to learning how you’ve taken advantage of these new features. Thank you! Please visit the official documentation to learn more.
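The multi-value matching behavior described above can be illustrated with a small Python simulation of the rules. This only mimics the semantics for clarity; real queries of course run server-side in Datastore mode:

```python
# Simulation of how Datastore-mode IN / NOT_IN / != evaluate against
# single-valued and multi-valued (list) properties.

def _as_list(value):
    return value if isinstance(value, list) else [value]

def matches_in(value, candidates):
    # IN: the property (or any element of a list property) equals one of
    # the candidates; a matching entity is returned only once.
    return any(v in candidates for v in _as_list(value))

def matches_not_equal(value, other):
    # !=: for a list property, only one element has to differ.
    return any(v != other for v in _as_list(value))

def matches_not_in(value, candidates):
    # NOT_IN: only one element has to fall outside the candidate list.
    return any(v not in candidates for v in _as_list(value))
```

For example, `matches_not_in(["Home Decor", "Kitchen"], ["Home Decor", "Home Improvements"])` is true, because the single element "Kitchen" falls outside the candidate list, even though "Home Decor" is in it.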

Investing in Differentiation brings great customer experiences and repeatable business

“Customer success is the cornerstone of our partner ecosystem and ensures our joint customers experience the innovation, faster time to value, and top-notch skills from Google and Google Cloud Partners.” - Nina Harding, Global Chief, Partner Advantage Program

Our ecosystem is a strong, validated ally to help you drive business growth and solve complex challenges. Differentiation achievements help you select a partner with confidence, knowing that Google Cloud has verified their skills and customer success across our products, horizontal solutions, and key industries. In all cases, our partners have shown their commitment to learning and ongoing training, demonstrated through earned certifications, Specialization, and Expertise.

To further refine the process of helping customers find the best partner fast, we recently introduced Net Promoter Score© within Partner Advantage. This industry-standard rating tool allows customers to provide feedback and insights on their successes with partners quickly and easily. We encourage you to work with your partners to share your success and provide feedback using Net Promoter Score.

To find the most highly qualified, experienced partners, the Google Cloud Partner Directory puts you in the driver’s seat. This purpose-built tool helps customers like you leverage partner Differentiation achievements to move forward with confidence as you start your next project. The new “How to find the right Google Cloud Partner” video shows you how to create a shortlist of potential partners by region, based on 14 strategic solution categories or 100+ Expertise designations.

To find a partner that meets your specific needs, or complements your capable team, look no further than Partner Advantage’s Differentiation framework, and join us in congratulating the partners that have achieved Specialization over the past few quarters.

Related article: Standing out to customers through the Partner Differentiation journey

REWE Group accommodates growth spikes and enhances hybrid architecture with Google Cloud

Significant growth in our business partnerships at REWE Group in Austria has led to an unprecedented increase in traffic across our applications. As one of Europe’s largest retail and tourism groups, our user base continues to grow from a variety of sources, including new retail partners, affiliate stores, and online customers on desktop and mobile applications. We serve millions of customers in the retail and tourism sectors worldwide, and we onboarded Google Cloud services when our applications needed more flexibility and scalability. We needed to efficiently accommodate the dramatic seasonal, and even weekly, fluctuations we experienced as the pandemic increased our online shopping traffic.

As traffic to our applications increased, our team began hosting our traffic-heavy data on a cluster in Google Kubernetes Engine (GKE), successfully leveraging the data management and storage of Cloud Spanner. As a fully managed relational database, Spanner provides virtually unlimited scale, strong consistency, and up to 99.999% availability. By choosing this approach to deployment, we didn’t need to migrate our end-user data, and we maintained a highly flexible cloud environment, with an estimated 70 percent hosted in Google Cloud and 30 percent remaining on-premises.

Cloud Spanner optimizes speed and performance for online customers

Given that some of the data we migrated was tied to the customer shopping experience on our applications, it was important that the solution we chose be highly secure and reliable. Google Cloud is known for offering the highest levels of availability, reliability, global scale, and security, enabling us to deliver the best possible experiences for our customers.

While accessing Spanner through a Kubernetes cluster on Google Cloud, our team developed a ledger for each end user. As the single point of truth for all transactions across the company, the ledger contains two tables: in one, we record a variety of currencies, and in the other, we maintain real-time records of each user’s balance in the currency of their purchase.

We leveraged the industry-leading 99.999 percent availability SLA of Spanner to optimize the performance of our applications. Spanner also helped us improve the customer experience by providing consistent performance and accelerating the speed of applications and API calls during the purchase process. Spanner provided transactional consistency and accuracy for REWE’s several million users, automatically updating their data in real time as transactions took place. We were able to seamlessly scale to almost double the number of transactions processed per day. Since the platform went live, more than 500 million successful transactions have been executed. The native integrations of Google Cloud made it easy to unify our data lifecycle, ensuring the highest performance of our infrastructure at every phase of our development.

Query latency is always critical for us, because we are deeply integrated into the point-of-sale applications in our stores. If applications are too slow, it compromises the customer experience. However, thanks to Spanner, we are able to complete API calls extremely fast.

Fully managed Google services increase team productivity and champion sustainability

As a fully managed service, Spanner gave us the freedom to focus on differentiating activities, while operating seamlessly on-premises and in the cloud. Our developers were empowered to iterate and deploy quickly, driving new opportunities for growth and cost reductions.

As a company with a 90-year history and international impact, REWE has upheld a continued commitment to environmental efficiency and sustainability across the world. This mission aligns with Google’s goal of running fully carbon-free data centers by 2030. By leveraging Google’s carbon neutrality and sustainability services, including waste diversion, use of renewable energy, and enhanced efficiency, we are continuing to optimize our business operations as we champion sustainability.

Learn more about how your organization can get started with Spanner today.

Related article: Change streams for Cloud Spanner: now generally available

Show off your cloud skills by completing the #GoogleClout weekly challenge

Who’s up for a challenge? It’s time to show off your #GoogleClout! Starting today, check in every Wednesday to unlock a new cloud puzzle that will test your cloud skills against participants worldwide. Stephanie Wong’s previous record is 5 minutes; can you complete the new challenge in 4?

#GoogleClout Challenge

The #GoogleClout challenge is a no-cost, weekly, 20-minute hands-on challenge. Every Wednesday for the next 10 weeks, a new challenge will be posted on our website. Participants race against the clock to see how quickly they can complete the challenge. Attempt the 20-minute challenge as many times as you want: the faster you go, the higher your score!

How it works

To participate, follow these four simple steps:

1. Enroll: Go to our website, click the link to the weekly challenge, and enroll in the quest using your Google Cloud Skills Boost account.
2. Play: Attempt the challenge as many times as you want. Remember, the faster you are, the higher your score!
3. Share: Share your score card on Twitter or LinkedIn using #GoogleClout.
4. Win: Complete all 10 weekly challenges to earn exclusive #GoogleClout badges.

Ready to get started? Take the #GoogleClout challenge today!

Related article: Earn Google Cloud swag when you complete the #LearnToEarn challenge

How Kitabisa re-structured its fundraising platform to drive "kindness at scale" on Google Cloud

The name Kitabisa means “we can” in Bahasa Indonesia, the official language of Indonesia, and captures our aspirational ethos as Indonesia’s most popular fundraising platform. Since 2013, Kitabisa has been collecting donations in times of crisis and natural disasters to help millions in need. Pursuing our mission of “channeling kindness at scale,” we deploy AI algorithms to foster Southeast Asia’s philanthropic spirit with simplicity and transparency.Unlike e-commerce platforms that can predict spikes in demand, such as during Black Friday, Kitabisa’s mission of raising funds when disasters like earthquakes strike is by definition unpredictable. This is why the ability to scale up and down seamlessly is critical to our social enterprise.In 2020, Indonesia’s COVID-19 outbreak coincided with Ramadan. Even in normal times, this is a peak period, as the holy month inspires charitable activity. But during the pandemic, the crush of donations pushed our system beyond the breaking point. Our platform went down for a few minutes just as Indonesia’s giving spirit was at its height, creating frustrations for users. A new cloud beginningThat’s when we realized we needed to embark on a new cloud journey, moving from our monolithic system to one based on microservices. This would enable us to scale up for surges in demand, but also scale down when a wave of giving subsides. We also needed a more flexible database that would allow us to ingest and process the vast amounts of data that flood into our system in times of crisis.These requirements led us to re-architect our entire platform on Google Cloud. Guided by a proactive Google Cloud team, we migrated to Google Kubernetes Engine (GKE) for our overall containerized computing infrastructure, and from Amazon RDS to Cloud SQL for MySQL and PostgreSQL, for our managed database services.The result surpassed our expectations. 
During the following year’s Ramadan season, we gained a 50% boost in computing resources to easily handle escalating crowdfunding demands on our system. This was thanks to both the seamless scaling of GKE and recommendations from the Google Cloud Partnership team on deploying and optimizing Cloud SQL instances with ProxySQL to optimize our managed database instances.A progressive journey to kindness at scale While Kitabisa’s mission has never wavered, our journey to optimized performance took us through several stages before we ultimately landed on our current architecture on Google Cloud.Origins on a monolithic provider Kitabisa was initially hosted on DigitalOcean, which only allowed us to run monolithic applications based on virtual machines (VMs) and a stateful managed database. This meant manually adding one VM at a time, which led to challenges in scaling up VMs and core memory when a disaster triggered a spike in donations. Conversely, when a fundraising cycle was complete, we could not scale down automatically from the high specs of manually provisioned VMs, which was a strain on manpower and budgetary resources.Transition to containersTo improve scalability, Kitabisa migrated from DigitalOcean to Amazon Web Services (AWS), where we hoped deploying load balancers would provide sufficient automated scaling to meet our network needs. However, we still found manual configurations to be too costly and labor-intensive. We then attempted to improve automation by switching to a microservices-based architecture. But on Amazon Elastic Container Service (Amazon ECS) we hit a new pain point: when launching applications, we needed to ensure that they were compatible with CloudFormation in deployment, which reduced the flexibility of our solution building due to vendor locking. We decided it was “never too late” to migrate to Kubernetes, which is a more agile containerized solution. 
Given that we were already using AWS, it seemed natural to move our microservices to Amazon Elastic Kubernetes Service (Amazon EKS). But we soon found that provisioning Kubernetes clusters with EKS was still a manual process that required a lot of configuration work for every deployment.

Unlocking automated scalability

At the height of the COVID-19 crisis, faced with mounting demands on our system, we decided it was time to give Google Kubernetes Engine (GKE) a try. Since Kubernetes was originally designed at Google, it seemed likeliest that GKE would provide the most flexible microservices deployment, alongside better access to new features. Through a direct comparison with AWS, we discovered that everything from provisioning Kubernetes clusters to deploying new applications became fully automated, with the latest upgrades and minimal manual setup. By switching to GKE, we can now absorb any unexpected surge in donations and add new services without expanding our engineering team. The transformative value of GKE became apparent when severe flooding hit Sumatra in November 2021, affecting 25,000 people. Our system easily handled the 30% spike in donations.

Moving to Cloud SQL and ProxySQL

Kitabisa was also held back by its monolithic database system, which was prone to crashing under heavy demand. We started to solve the problem by moving from a stateful DigitalOcean database to a stateless Redis one, which freed us from relying on a single server and gave us better agility and scale. But the strategy left a major pain point: it still required us to self-manage databases. In addition, we were experiencing high database egress costs due to the need to transfer data from a non-Google Cloud database into BigQuery. In December 2021, we migrated from Amazon RDS to Cloud SQL for MySQL, and immediately saved 10% in egress costs per month.
But one of the greatest benefits came when the Google Cloud team recommended using ProxySQL, an open source proxy for MySQL, to improve the scalability and stability of our data pipelines. Cloud SQL’s compatibility allowed us to use connection pooling tools such as ProxySQL to better load balance our application. Historically, a direct connection to a monolithic database was a single point of failure that could result in a crash. With Cloud SQL plus ProxySQL, we create a layer in front of our database instances. It serves as a load balancer that allows us to connect simultaneously to multiple database instances, by creating a primary and a read replica instance. Now, whenever we have a read query, we redirect it to our read replica instance instead of the primary. This configuration has transformed the stability of our database environment because we can run multiple database instances at the same time, with the load distributed across all of them. Since switching to Cloud SQL as our managed database, and using ProxySQL, we have experienced zero downtime on our fundraising platform, even when a major crisis hits.

We are also saving costs. Rather than having a separate database for each Kubernetes cluster, we’ve merged multiple database instances into one. We now group databases by business unit instead of per service, yielding database cost reductions of 30%.

Streamlining with Terraform deployment

There’s another key way in which Google Cloud managed services have allowed us to optimize our environment: using Terraform as an infrastructure-as-code tool to create new applications and upgrades to our platform. We also automated the deployment of Terraform code into Google Cloud with the help of Cloud Build, with no human intervention. That means our development team can focus on creative tasks, while Cloud Build deploys a continuous stream of new features to Kitabisa.
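The read/write split that ProxySQL performs in front of Cloud SQL can be illustrated with a small routing sketch. This is a simplified illustration, not ProxySQL itself: the instance names and the SELECT-based heuristic are assumptions made for the example.

```python
# Illustrative sketch of the read/write split a proxy layer such as
# ProxySQL provides: read-only queries are routed to a read replica,
# everything else to the primary instance. Instance names and the
# simple first-keyword heuristic are hypothetical.

def route_query(sql: str) -> str:
    """Return the name of the database instance a query should go to."""
    words = sql.strip().split(None, 1)
    first = words[0].upper() if words else ""
    # Reads can be served by the replica; writes must hit the primary.
    return "read-replica" if first == "SELECT" else "primary"

assert route_query("SELECT name FROM campaigns") == "read-replica"
assert route_query("INSERT INTO donations VALUES (1)") == "primary"
```

Because the routing layer, not the application, decides which instance serves each query, read traffic can be spread across replicas while the primary handles only writes, which is what removes the single point of failure described above.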
The combination of seamless scalability, resilient data pipelines, and creative freedom is enabling us to drive the future of our platform, expanding our mission of inspiring people to create a kinder world into other Asian regions. We believe that having Google Cloud as our infrastructure backbone will be a critical part of our future development, which will include adding exciting new insurtech features. Now firmly established on Google Cloud, we can go further in shaping the future of fundraising to overcome turbulent times.
Source: Google Cloud Platform

How Google is preparing for a post-quantum world

The National Institute of Standards and Technology (NIST) on Tuesday announced the completion of the third round of the Post-Quantum Cryptography (PQC) standardization process, and we are pleased to share that a submission (SPHINCS+) with Google’s involvement was selected for standardization. Two submissions (Classic McEliece, BIKE) are being considered for the next round. We want to congratulate the Googlers involved in the submissions (Stefan Kölbl, Rafael Misoczki, and Christiane Peters) and thank Sophie Schmieg for moving PQC efforts forward at Google. We would also like to congratulate all the participants and thank NIST for their dedication to advancing these important issues for the entire ecosystem.

This work is incredibly important as we continue to advance quantum computing. Large-scale quantum computers will be powerful enough to break most public-key cryptosystems currently in use and compromise digital communications on the Internet and elsewhere. The goal of PQC is to develop cryptographic systems that safeguard against these potential threats, and NIST’s announcement is a critical step toward that goal. Governments in particular are in a race to secure information, because foreign adversaries can harvest sensitive information now and decrypt it later.

At Google, our work on PQC is focused on four areas: 1) driving industry contributions to standards bodies; 2) moving the ecosystem beyond theory and into practice (primarily through testing PQC algorithms); 3) taking action to ensure that Google is PQC ready; and 4) helping customers manage the transition to PQC.

Driving industry contributions to a range of standards bodies

In addition to our work with NIST, we continue to drive industry contributions to international standards bodies to help advance PQC standards. This includes ISO 14888-4, where Googlers are the editors for a standard on stateful hash-based signatures.
More recently, we also contributed to the IETF proposal on data formats, which will define JSON and CBOR serialization formats for PQC digital signature schemes. Collectively, these standards will enable large organizations to build compatible PQC solutions and ease the transition globally.

Moving the ecosystem beyond theory and into practice: Testing PQC algorithms

We’ve been working with the security community for over a decade to explore options for PQC algorithms beyond theoretical implementations. In 2016, we announced an experiment in Chrome where a small fraction of connections between desktop Chrome and Google’s servers used a post-quantum key-exchange algorithm in addition to the elliptic-curve key-exchange algorithm that would typically be used. By adding a post-quantum algorithm in a hybrid mode with the existing key exchange, we were able to test its implementation without affecting user security.

We took this work further in 2019, announcing a wide-scale post-quantum experiment with Cloudflare. We worked together to implement two post-quantum key exchanges, integrated them into Cloudflare’s TLS stack, and deployed the implementation on edge servers and in Chrome Canary clients. Through this work, we learned more about the performance and feasibility of deploying two post-quantum key agreements in TLS, and we have continued to integrate these learnings into our technology roadmap.

In 2021, we tested broader deployment of post-quantum confidentiality in TLS and discovered a range of network products that were incompatible with post-quantum TLS. We were able to work with the vendors so that the issues were fixed in future firmware updates. By experimenting early, we resolved these issues for future deployments.

Taking action to ensure that Google is PQC ready

At Google, we’re well into a multi-year effort to migrate to post-quantum cryptography that is designed to address both immediate and long-term risks to protect sensitive information.
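The hybrid mode used in the Chrome and Cloudflare experiments described above can be sketched in a few lines: the classical and post-quantum shared secrets are combined so that the result stays secure as long as either component does. This is only a conceptual illustration, not the deployed TLS construction; the use of HMAC-SHA-256 as the combiner, the fixed label, and the placeholder secrets are all assumptions made for the example.

```python
# Conceptual sketch of a hybrid key exchange: the classical (e.g. ECDH)
# and post-quantum (e.g. KEM) shared secrets are combined into one
# session secret. HMAC-SHA-256 with a demo label stands in for a real
# key-derivation function; real protocols also length-prefix inputs.
import hashlib
import hmac

def hybrid_secret(classical: bytes, post_quantum: bytes) -> bytes:
    """Derive a single session secret from both shared secrets."""
    return hmac.new(b"hybrid-kex-demo", classical + post_quantum,
                    hashlib.sha256).digest()

# Both endpoints derive the same value from the same pair of secrets.
client = hybrid_secret(b"ecdh-shared", b"pq-kem-shared")
server = hybrid_secret(b"ecdh-shared", b"pq-kem-shared")
assert client == server
# Changing either component changes the derived secret.
assert hybrid_secret(b"ecdh-shared", b"other") != client
```

The point of the construction is that an attacker must break both the elliptic-curve component and the post-quantum component to recover the session secret, which is why the hybrid experiments could run without reducing user security.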
We have one goal: ensure that Google is PQC ready. Internally, this effort has several key priorities, including securing asymmetric encryption, in particular encryption in transit. This means using ALTS, for which we are using a hybrid key exchange, to secure internal traffic, and using TLS (consistent with NIST standards) for external traffic. A second priority is securing signatures in the case of hard-to-change public keys or keys with a long lifetime, in particular focusing on hardware, especially hardware deployed outside of Google’s control.

We’re also focused on sharing what we learn to help others address PQC challenges. For example, we recently published a paper that includes PQC transition timelines, leading strategies to protect systems against quantum attacks, and approaches for combining pre-quantum cryptography with PQC to minimize transition risks. The paper also suggests standards to start experimenting with now and provides a series of other recommendations to help organizations achieve a smooth and timely PQC transition.

Helping customers manage the transition to PQC

At Google Cloud, we are working with many large enterprises to ensure they are crypto-agile and to help them prepare for the PQC transition. We fully expect customers to turn to us for post-quantum cloud capabilities, and we will be ready. We are committed to supporting their PQC transition with a range of Google products, services, and infrastructure. As we make progress, we will continue to provide more PQC updates on Google core, cloud, and other services, and updates will also come from Android, Chrome, and other teams. We will further support our customers with Google Cloud transformation partners like the Google Cybersecurity Action Team to help provide deep technical expertise on PQC topics.

Additional references:
Google Cloud Security Foundations Guide
Google Cloud Architecture Framework
Google infrastructure security design overview
Source: Google Cloud Platform

AI Booster: how Vodafone is supercharging AI & ML at scale

One of the largest telecommunications companies in the world, Vodafone is at the forefront of building next-generation connectivity and a sustainable digital future. Creating this digital future requires going beyond what’s possible today and unlocking significant investment in new technology and change. For Vodafone, a key driver is the use of artificial intelligence (AI) and machine learning (ML), enabling predictive capabilities that enhance the customer experience, improve network performance, accelerate advances in research, and much more.

Following 18 months of hard work, Vodafone has made a huge leap forward in advancing its AI capabilities at scale with the launch of its “AI Booster” AI/ML platform. Led by the Global Big Data & AI organization under Vodafone Commercial, the platform will use the latest Google technology to enable the next generation of AI use cases, such as optimizing customer experiences, customer loyalty, and product recommendations.

Vodafone’s Commercial team has long focused on advancing its AI and ML capabilities to drive business results. Yet as demand grows, it is easier said than done to embed AI and ML into the fabric of the organization and rapidly build and deploy ML use cases at scale in a highly regulated industry. Accomplishing this means not only having the right platform infrastructure, but also developing new skills, ways of working, and processes.

Having made meaningful strides in extracting value from data by moving it into a single source of truth on Google Cloud, Vodafone had already significantly increased efficiency, reduced data costs, and improved data quality. This enabled a plethora of use cases that generate business value using analytics and data science. The next step was building industrial-scale ML capability, able to handle thousands of ML models a day across 18+ countries, while streamlining data science processes and keeping up with technological growth.
Knowing it had to do something drastically different to scale successfully, Vodafone arrived at the idea for AI Booster.

“To maximize business value at pace and scale, our vision was to enable fast creation and horizontal/vertical scaling of use cases in an automated, standardized manner. To do this, 18 months ago we set out to build a next-generation AI/ML platform based on new Google technology, some of which hadn’t even been announced yet. We knew it wouldn’t be easy. People said, ‘Shoot for the stars and you might get off the ground…’ Today, we’re really proud that AI Booster is truly taking off, and went live in almost double the markets we had originally planned. Together, we’ve used the best possible MLOps tools and created Vodafone’s AI Booster platform to make data scientists’ lives easier, maximise value, and take co-creation and scaling of use cases globally to another level,” says Cornelia Schaurecker, Global Group Director for Big Data & AI at Vodafone.

AI Booster: a scalable, unified ML platform built entirely on Google Cloud

Google’s Vertex AI lets customers build, deploy, and scale ML models faster, with pre-trained and custom tooling within a unified platform. Built upon Vertex AI, Vodafone’s AI Booster is a fully managed cloud-native platform that integrates seamlessly with Vodafone’s Neuron platform, a data ocean built on Google Cloud.

“As a technology platform, we’re incredibly proud of building a cutting-edge MLOps platform based on best-in-class Google Cloud architecture with built-in automation, scalability, and security. The result is we’re delivering more value from data science, while embedding reliability engineering principles throughout,” comments Ashish Vijayvargia, Analytics Product Lead at Vodafone.

Indeed, while Vertex AI is at the core of the platform, it’s much more than that.
With tools like Cloud Build and Artifact Registry for CI/CD, and Cloud Functions for automatically triggering Vertex Pipelines, automation is at the heart of driving efficiency and reducing operational overhead and deployment times. Today, users simply complete an online form and, within minutes, receive a fully functional AI Booster environment with all the right guardrails, controls, and approvals. Not long ago it could take months to move a model from a proof of concept (PoC) to launching live in production. By focusing on ML operations (MLOps), the entire ML journey is now more cost-effective, faster, and flexible, all without compromising security. PoC-to-production can now take as little as four weeks, an 80% reduction.

Diving a bit deeper, Vodafone’s AI Booster Product Manager, Sebastian Mathalikunnel, summarizes key features of the platform: “Our overarching vision was a single ML platform-as-a-service that scales horizontally (business use cases across markets) and vertically (from PoC to production). For this, we needed innovative solutions to make it both technically and commercially feasible. Selecting a few highlights, we:

- completely automated ML lifecycle compliance activities (drift/skew detection, explainability, auditability, etc.) via reusable pipelines, containers, and managed services;
- embedded security by design into the heart of the platform;
- capitalized on Google-native ML tooling using BQML, AutoML, Vertex AI, and others;
- accelerated adoption through standardized and embedded ML templates.”

For the last point, Datatonic, a Google Cloud data and AI partner, was instrumental in building reusable MLOps Turbo Templates, a reference implementation of Vertex Pipelines, to accelerate building a production-ready MLOps solution on Google Cloud.

“Our team is devoted to solving complex challenges with data and AI, in a scalable way. From the start, we knew the extent of change Vodafone was embarking on with AI Booster.
Through this open-source codebase, we’ve created a common standard for deploying ML models at scale on Google Cloud. The benefit to one data scientist alone is significant, so scaling this across hundreds of data scientists can really change the business,” says Jamie Curtis, Datatonic’s Practice Lead for MLOps.

Reimagining the data scientist and machine learning engineer experience

With the new technology platform in place, driving adoption across geographies and markets is the next challenge. The technology and process changes have a considerable impact on people’s roles, learning, and ways of working. For data scientists, non-core work is now handled by machines in the background, literally at the click of a button. They can spend time doing what they do best and discovering new tools to help them do the job. With AI Booster, data scientists and ML engineers have already started to drive greater value and collaborate on innovative solutions. Supported by instructor-led and on-demand learning paths with Google Cloud, AI Booster is also shaping a culture of experimentation and learning.

Together We Can

Eighteen months in the making, AI Booster would not have happened without the dedication of teams across Vodafone, Datatonic, and Google Cloud. Googlers from across the globe were engaged in supporting Vodafone’s journey and continue to help build the next evolution of the platform. Cornelia highlights that “all of this was only possible due to the incredible technology and teams at Vodafone and Google Cloud, who were flexible in listening to our requirements and even tweaking their products as a result. Alongside our ‘Spirit of Vodafone,’ which encourages experimenting and adapting fast, we’re able to optimize value for our customers and business.
A huge thank you also to Datatonic, who were a critical partner throughout this journey, and to Intel for their valuable funding contribution.”

The Google and Vodafone partnership continues to go from strength to strength, and together we are accelerating the digital future and finding new ways to keep people connected.

“Vodafone’s flourishing relationship with Google Cloud is a vital aspect of our evolution toward becoming a world-leading tech communications company. It accelerates our ability to create faster, more scalable solutions to business challenges like improving customer loyalty and enhancing customer experience, whilst keeping Vodafone at the forefront of AI and data science,” says Cengiz Ucbenli, Global Head of Big Data and AI, Innovation, Governance at Vodafone.

Find out more about the work Google Cloud is doing to help Vodafone here, and to learn more about how Vertex AI capabilities continue to evolve, read about our recent Applied ML Summit.
Source: Google Cloud Platform