SAP on Google Cloud: 2 analyst studies reveal quantifiable business benefits

Cloud migration is top of mind for most companies with SAP applications. While the advantages of the cloud for SAP customers are generally understood, the move itself can be complicated and disruptive. So what actually are the business benefits and cost savings? How long will it take to recoup such an investment? Two recently published reports from Forrester and IDC can help to quantify the benefits and ROI.

Getting answers to the million-dollar questions

Forrester and IDC bring different methodologies to the table; they asked somewhat different questions and used different models to calculate their financial KPIs. This allows you to get two different points of view on the same basic questions about value, risk, and ROI.

As it turns out, both reports found that customers who migrate their SAP environments to Google Cloud see an impressive return on their investments. From uptime and infrastructure to efficiency and productivity, both Forrester and IDC identified major benefits to companies that have made the move to Google Cloud. Let’s walk through some of the highlights from both reports.

Forrester’s TEI model spotlights the power of uptime improvements

Based on in-depth conversations and quantitative research with six companies, here are the key findings from the Forrester Total Economic Impact (TEI) study for companies running SAP systems on Google Cloud:

Direct cost savings. When they compare cloud subscription and related costs to what they spent on legacy systems and infrastructure, most IT leaders expect a cloud migration to deliver up-front savings. But according to Forrester, the companies interviewed reported average savings of more than $3 million a year, including eliminated hardware purchases, right-sized software licensing, staffing efficiencies, and other operational cost savings.

Dramatically improved uptime. Customers told Forrester that migrating SAP to Google Cloud all but eliminates downtime—planned or unplanned—as a significant IT concern.
According to Forrester, companies realized an average of $1.5 million in savings per year by avoiding the revenue and user productivity losses that had once been a fact of life for their IT teams.

Significant efficiency gains. Because Google Cloud works to mitigate performance bottlenecks, infrastructure mishaps, network delays, and more, the companies Forrester interviewed reported a yearly average of $500,000 in productivity gains for SAP business users and frontline workers.

Companies also reported an annual average of $500,000 in additional IT efficiency gains after migrating SAP to Google Cloud. This quantifies what happens when IT practitioners no longer have to deal with the bottlenecks that come with legacy systems, and are able to spend their time on tasks that actually build value and help the business. Based on the Forrester analysis, the companies interviewed could expect average three-year net benefits of about $15.4 million.

“We benefit from any technical innovation in the infrastructure area because Google Cloud is doing that for us,” one customer told Forrester. “So, whenever there’s new hardware available or new processes or whatever, I don’t have to run the specific project to migrate from A to B.”

IDC finds that good things happen when SAP downtime is reduced

The IDC report highlights four areas where Google Cloud generates the most value for customers:

1. Cutting infrastructure costs. According to IDC, customers running SAP on Google Cloud spent 31% less on infrastructure each year, or an average of $233,000 less per company. The ability to scale SAP environments dynamically and to keep them right-sized was a major factor; so were the advantages of automated infrastructure monitoring and savings on software licenses once these companies could stop overprovisioning.

2. Giving a team better things to do.
IDC found that the infrastructure, database, and security teams of the companies they interviewed reduced the time needed to maintain and manage SAP environments by an average of 66% per year, for a savings of $443,000 per company. As a result, these companies got the equivalent of a major staff expansion from their SAP migrations—giving them both the staff time and the expertise to focus on far more valuable activities.

3. Limiting unplanned downtime. These companies reported to IDC an average 98% reduction in unplanned downtime. Migrating SAP to Google Cloud significantly reduces the threat of downtime and saves the business an average of nearly $770,000 per year in lost revenue and user productivity. For some firms, the downtime savings topped $1 million per year.

4. Making users more productive. The companies interviewed told IDC that by avoiding downtime and disruptions associated with upgrade and maintenance tasks for their legacy SAP systems, they saved an average of $363,000 annually in user productivity. But there’s an even more interesting under-the-hood stat contributing to these gains: these companies reduced the time required to deploy new SAP compute and storage resources from an average of 8.8 days to one hour.

When IDC added up these and other savings associated with running SAP on Google Cloud, it found an average three-year savings of more than $3.5 million and a five-month payback period.

“We acquired another company, so basically overnight we needed to be able to deal with that increase,” said one customer IDC spoke with. “We doubled our footprint overnight, and we had to take on hundreds of additional employees. We needed a platform that we could easily scale up if we required, and that’s the benefit of running SAP on Google Cloud for us.”

Explore the reports

There is a lot to think about when considering a move of SAP systems to the cloud.
The cloud has many advantages, but migration can seem complicated and tricky; we appreciate that you are looking to understand the full picture. These papers are a great place to start. Download the reports—Forrester’s “Total Economic Impact of SAP on Google Cloud” and IDC’s “Business Value of SAP for Google Cloud Environments.” Then, get in touch.
Source: Google Cloud Platform

Helping European businesses grow and digitally transform in the cloud

As we welcome thousands of customers, partners, business leaders and developers to Google Cloud Next OnAir EMEA, our five-week virtual event, I want to take a moment to reflect on how inspiring it’s been to see organizations around the world pivoting and adapting to unprecedented circumstances. It’s why we’ve reimagined our event and taken it online to best serve our customers and partners, in the same spirit as our global Google Cloud Next OnAir conference. Throughout the next five weeks, Next OnAir EMEA will bring you key announcements, product enhancements, and best practices tailored specifically for organizations across Europe, the Middle East and Africa.

As with almost every aspect of our lives, things are a little different this year. The ongoing pandemic has changed the course of business in 2020, and likely the world in the long term. At Google Cloud, we have seen first-hand how important it is to keep our cloud up and running, enabling remote collaboration and scaling to meet changing customer demands. We continue to help businesses and their employees, as well as governments and schools, collaborate and learn during these challenging times.

Growing with our EMEA customers and partners

It’s been a big year for us in EMEA. We announced new cloud regions in France, Italy, Poland, and Spain. Our Dunant cable has landed, crossing the Atlantic Ocean from Virginia Beach in the U.S. to the French Atlantic coast, and we announced a new subsea cable, Grace Hopper, that will connect the U.S. to the U.K. and Spain. All these projects will add capacity and resilience to our network, so we can better serve customers who are taking advantage of all our Google Cloud solutions.

We are proud of our continuing work with some of the world’s biggest brands, including Carrefour, Lloyds Banking Group, Lufthansa Group, Renault, Telecom Italia and Telefonica, to name a few.
And we’re exploring new opportunities as we partner with major industry players such as Deutsche Bank and Orange. Today, we’re also announcing our new collaboration with Reckitt Benckiser to drive stronger customer engagement as the consumer health, hygiene and nutrition company embarks on wide-scale digital transformation.

We’re also excited about the findings of a new IDC study, which shows that Google Cloud’s ecosystem is thriving, growing and driving significant economic benefit for our partners in EMEA. According to IDC, the Google Cloud partner opportunity in Western Europe will increase more than 3.7 times by 2025. In addition, IDC expects Google Cloud partners to generate $5.49 in revenue for every $1 of Google Cloud products sold, increasing to $7.74 by 2025.

To support our growing customer and partner base within the region, we’ve also bolstered our EMEA regional senior leadership team, welcoming Laurence Lafont as vice president for EMEA Industries (exclusive of France), Pip White as the new managing director of the U.K. and Ireland, Daniel Holz as vice president of the DACH and Northern region, and Samuel Bonamigo as vice president for Southern Europe. With these new leaders on board, we’re strengthening our focus on our customers’ success.

Building for the future, sustainably

While we continue to grow, our commitment to do so in a sustainable manner remains unchanged. Earlier this month, Google announced our most ambitious energy goal yet: to run our business on carbon-free energy everywhere, at all times, by 2030. This means we’re aiming to always have our data centers supplied with carbon-free energy, and we are the first cloud provider to make this kind of commitment. As we learn, we’ll help develop useful tools to empower others to follow suit. For example, we’re developing tools to help our customers measure the impact of migrating to Google Cloud, report on their emissions, and reduce them.
We’re also building the Industrial Adaptive Controls platform in collaboration with DeepMind, which provides AI control of cooling systems in commercial and industrial facilities.

In addition, we’ve been collaborating with our partners and customers on a wide range of sustainability initiatives. This includes working with the World Wildlife Fund (WWF) and fashion brands, like Stella McCartney, to create an environmental data platform that helps build a more sustainable supply chain for the fashion industry. Additionally, we’re working with Unilever to leverage AI on satellite imagery to improve detection of deforestation, bringing a new standard to supply chain monitoring.

Committed to helping EMEA businesses

At Google Cloud, we are committed to being a trusted partner to businesses of all sizes and industries across EMEA, and Google Cloud Next OnAir EMEA is just one of the ways we are investing locally. Over the course of the next five weeks, we hope you’ll join us as we help organizations grow and innovate for the future through digital transformation.

Find out more about the event here.
Source: Google Cloud Platform

Engaging in a European dialogue on customer controls and open cloud solutions

At last year’s Europe-focused Google Cloud Next event, we outlined our commitment to European customers, sharing ways Google Cloud is helping European organizations transform their businesses in our cloud and address their strict data security and privacy requirements. This included expanding our existing cloud regions on the continent, growing our ecosystem of local partners, and adding compliance certifications, to name a few. Since then, we have made significant progress on all these fronts and are deeply committed to delivering additional capabilities.

In recent months, European customers and policymakers have placed an even greater emphasis on working with cloud service providers to protect customers’ most sensitive information. Based on our conversations, this focus is driven by concerns about government access to sensitive European public and private sector data, and by European customers’ reliance on global cloud service providers to support critical services and workloads.

Today, Google Cloud’s baseline controls and security features offer strong protections, meet current robust security requirements, and, in most cases, fully address customer needs. We have a long history of supporting the features that matter most to customers globally, including critical capabilities such as data residency controls, default encryption for data-at-rest, organization policy constraints, and VPC Service Controls, among many others. Our whitepaper includes more details on the capabilities you can take advantage of with Google Cloud Platform.

Through our close partnership and work with European customers and policymakers, we understand that they strive for even greater security and autonomy. At Google Cloud, we take these issues—often discussed under the umbrella term of digital sovereignty—seriously.
We are working diligently across three areas to help address digital sovereignty in the cloud computing context: data sovereignty, operational sovereignty, and software sovereignty. And we continue to listen to customers and policymakers and incorporate their feedback on the best potential path forward.

Key to our approach is our commitment to open source-based software solutions that offer control and autonomy; high capability, usability and flexibility; and robust data protection, as well as solutions that expand opportunities to partner with European cloud service providers to build local skills. You can read more about our open, partnership-oriented approach here.

Working together to address concerns

In our engagement with European customers and policymakers about their sovereignty needs, they describe several core requirements: control over all access to their data by the provider, including what type of personnel can access it and from which region; inspectability of changes to cloud infrastructure and services that impact access to or the security of their data, ensuring the provider is unable to circumvent controls or move their data out of the region; and survivability of their workloads for an extended period of time in the event that they are unable to receive software updates from the provider.

These requirements reflect three distinct pillars of sovereignty: data sovereignty, operational sovereignty, and software sovereignty. By engaging with customers and policymakers across these pillars, we can provide solutions that address their requirements, while optimizing for additional considerations like functionality, cost, infrastructure consistency, and developer experience.

Data sovereignty provides customers with a mechanism to prevent the provider from accessing their data, approving access only for specific provider behaviors that customers think are necessary.
Examples of customer controls provided by Google Cloud include storing and managing encryption keys outside the cloud, giving customers the power to grant access to these keys only on the basis of detailed access justifications, and protecting data-in-use. With these capabilities, the customer is the ultimate arbiter of access to their data.

Operational sovereignty provides customers with assurances that the people working at a cloud provider cannot compromise customer workloads. With these capabilities, the customer benefits from the scale of a multi-tenant environment while preserving control similar to a traditional on-premises environment. Examples of these controls include restricting the deployment of new resources to specific provider regions and limiting support personnel access based on predefined attributes such as citizenship or a particular geographic location.

Software sovereignty provides customers with assurances that they can control the availability of their workloads and run them wherever they want, without being dependent on or locked in to a single cloud provider. This includes the ability to survive events that require them to quickly change where their workloads are deployed and what level of outside connection is allowed. This is only possible when two requirements are met, both of which simplify workload management and mitigate concentration risks: first, when customers have access to platforms that embrace open APIs and services; and second, when customers have access to technologies that support the deployment of applications across many platforms, in a full range of configurations including multi-cloud, hybrid, and on-premises, using orchestration tooling.
Examples of these controls are platforms that allow customers to manage workloads across providers, and orchestration tooling that allows customers to create a single API that can be backed by applications running on different providers, including proprietary cloud-based and open-source alternatives.

In working to deliver these capabilities, we must align them with how we support customers’ efforts to provide operational transparency and documentation to regulators (e.g., for audits in regulated industries). Our work is an important part of the commitments we make to European customers and policymakers, including our core commitment to customer control. My blog has more details on what we are doing to enhance customer control in the cloud.

Building on an open source foundation to enable interoperability and survivability

Certain customers and policymakers don’t want to be solely dependent on a single cloud provider to protect sensitive information and deliver critical services. This is an important part of their survivability requirement, particularly in the event that a provider is forced to suspend or terminate cloud services or software licenses. We do not believe it is possible to fully address survivability requirements with a proprietary solution. Instead, solutions based on open source tools and open standards are the route to addressing customer and policymaker concerns and, more importantly, giving customers the flexibility to deploy—and, if necessary, migrate—critical workloads across or even off public cloud platforms.

An open source approach is highly differentiated from vendor solutions that keep customers tethered to a cloud service provider’s proprietary technology stack. At Google Cloud, we collaborate with the open source community to develop many of our services on open source technology and advance solutions that promote interoperability, and we also create new technologies for—and contribute to—the open source ecosystem.
We are able to do this by leveraging decades of experience in open source and in operating cloud services at scale, including creating and maintaining Kubernetes and Istio. This approach benefits customers by offering greater flexibility, and it provides ecosystem benefits such as enabling and empowering innovation and workforce development outside Google. It is also consistent with our belief that openness enables faster innovation and tighter security, and offers freedom from vendor lock-in.

Google Cloud’s open source approach is evidenced in products like Anthos, our hybrid and multi-cloud platform that provides a consistent development and operations experience for multi-cloud and on-premises environments. This approach makes it possible to leverage advanced cloud technologies with the safety net of migrating back to on-premises and operating without provider assistance if necessary.

Significantly expanding regional partnerships and collaboration

To enhance our ability to deliver these solutions to customers across Europe, we are empowering a range of local partners. This has the added benefit of helping public and private sector stakeholders build and sustain a local workforce and contribute to the European economy. By empowering European providers, we can help the region accelerate digital transformation, support digital skill development, and foster collaboration with the open source community, as well as partner on common causes like environmental responsibilities. We will share more about our progress on this front in the coming months.
Source: Google Cloud Platform

IDC confirms bright future for Google Cloud partners in EMEA

Customers come first for our team at Google Cloud EMEA, but we couldn’t deliver to our high standard without the help of our partners. They play a key role in the delivery of Google Cloud technologies and solutions to businesses all over the region. With this in mind, I’m very proud to share the results of a recent study from IDC, which highlights that Google Cloud is prospering, growing and unlocking huge commercial benefits for our EMEA partners.

According to IDC, demand for cloud-based technology and services is growing rapidly in the EMEA market, especially for capabilities such as artificial intelligence (AI), data analytics, IoT, and security. In fact, IDC puts public cloud growth at 22% year-on-year for EMEA. Whereas implementations used to be lift-and-shift scenarios, today digital transformation is the primary driver of cloud growth in organizations looking to reimagine their business models. What’s more, this need to digitally transform has only been heightened in the face of the global COVID-19 pandemic. As businesses seek to be flexible and adapt to current circumstances, they need cloud capabilities for support.

“This is good news for Google Cloud partners, who by their nature are engaged across many of these technologies,” reports the IDC study. The research predicts that partners’ revenues from Google Cloud-linked opportunities will more than triple by 2025. This is an amazing opportunity for all of our partners across the entire EMEA ecosystem as they develop and build out their Google Cloud practices. According to the IDC study, the opportunity for Western European partners will increase 3.7-fold by 2025. Moreover, for every $1 of Google Cloud technology sold, Western European partners will generate $5.49 in revenue via their own products, services, and IP (such as new apps and software). The future looks bright too, as IDC predicts this revenue stream will increase to $7.74 for every $1 of Google Cloud technology sold by 2025.
The IDC results prove there is a flourishing ecosystem around Google Cloud. It’s heartening to see how much value our partners can achieve, and will continue to achieve in future years.

The IDC study also uncovers numerous other benefits that partners are experiencing from collaborating with Google Cloud. For example, IDC identifies 50% of Google Cloud partners as at the “late stage of digital maturity,” with more than a third having “fully integrated digital into their strategies and businesses.” With this level of strong expertise in contemporary cloud technologies and solutions, it’s clear that customers can trust Google Cloud partners to deliver on support and implementation for even the most advanced technologies, such as AI and ML.

At Google Cloud, we concentrate on the delivery of end-to-end, best-in-class solutions, and most of our partners are expanding these offerings. The IDC study shows that partners who are engaged in developing their own unique IP around Google Cloud are seeing strong margins linked to these offerings. Partners are reaping the benefits of our target goal of 100% partner attach on customer sales. Strong margins are being recorded across resale, IaaS, PaaS, and SaaS add-ons, IT services, business services, and support for hardware and networking.

“We started our global partnership with Google Cloud a little over two years ago, and have already gone a long road together delivering greater value to our enterprise customers with our secure hybrid cloud, machine learning and collaboration solutions.
Google Cloud has been a partner for growth and innovation, as we jointly help European and global companies through their digital transformation,” says Wim Los, senior vice president for Atos Cloud Enterprise Solutions.

“From the collaborative solutions to the infrastructure and machine learning ones, Google Cloud has been a key strategic partner for more than 10 years. This partnership brings value to the market not only with the trusted technology but also with the training support and capacities we bring to the market to make sure companies succeed in their digital transformation,” says Sébastien Chevrel, COO and managing director of Devoteam.

“After more than a decade of partnership, we are really amazed about what we have been able to accomplish together, working hand-in-hand and with the entire Google Cloud ecosystem. This is why we wanted this relationship to grow even stronger and announced in September 2020 a new, non-binding commitment to deliver $1.5 billion in Google Cloud infrastructure and services over the next five years,” says DoiT International’s CEO, Yoav Toussia-Cohen.

“Google Cloud continues to show commitment to its partner ecosystem in EMEA, enabling its partners to capitalize on the growth, but also differentiate. Google Cloud is delivering a partner program that provides access to advanced technology and solutions, while maintaining a focus on strong partner profitability and specialization,” said Stuart Wilson, research director, European Partnering Ecosystems at IDC.

Our EMEA partners have a unique opportunity to ride the wave of cloud adoption and capitalize on the opportunities provided by being part of the Google Cloud family.
We will continue to work together to support customers spanning multiple industries who want to solve their most important challenges via the cloud. When I see the work our partners are doing in the region, I feel incredibly proud, and I’m certainly looking forward to seeing what the future brings as the Google Cloud ecosystem continues to evolve.

To read the findings in full, download the Partner Opportunity in a Cloud World IDC study here.
Source: Google Cloud Platform

Easily view your old queries with Cloud Logging recent queries

As you analyze your logs for application performance, infrastructure errors, system events, and more, you may sometimes need to look back at logs you were previously analyzing to help correlate events and identify the root cause of a problem. To help, we are excited to introduce Google Cloud Logging recent queries, which makes it easy to track and re-run your past searches as you deep dive into your log data.

With recent queries, Cloud Logging now automatically gives you the history of log searches you’ve run over the last 30 days. No more copying and pasting old queries from that doc/text file just to remember the exact syntax you previously used. All you need to do is open the “Recent” tab in the Logs Explorer to view your query history.

Recent queries is one of many recent additions to Cloud Logging that help simplify the experience for developers and operators. With features like suggested queries, the log fields panel, and the histogram, the Logs Explorer continues to evolve to help our users quickly and efficiently retrieve, view, and analyze their logs.

Recent queries is available in beta for all Google Cloud Logging users. To get started, simply open Cloud Logging and select the “Recent” tab to see your queries. For more information on Cloud Logging or recent queries, please visit our documentation.
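To make “the exact syntax you previously used” concrete, here is the kind of filter that would now show up in your “Recent” tab. This is our own illustrative example written in the Logging query language, not a query taken from the announcement:

```
resource.type="gce_instance"
severity>=ERROR
timestamp>="2020-09-01T00:00:00Z"
```

A multi-line filter like this is easy to mistype when reconstructing it from memory; re-running it from the query history avoids that entirely.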
Source: Google Cloud Platform

Google named a leader in the 2020 Gartner Magic Quadrant for Full Life Cycle API Management

We’re excited to share that Gartner has recognized Google (Apigee) as a Leader in the 2020 Magic Quadrant for Full Life Cycle API Management, marking the fifth time in a row we’ve earned this recognition. In this year’s report, Google (Apigee) is placed highest among all vendors for ability to execute. Download the full report here.

APIs are critical to enterprise business strategy because they let developers easily access, combine, and share valuable data and functionality, helping organizations adapt to disruption and reinvent themselves to meet the digital needs of customers. Partnering with the right API management vendor is therefore critical to building and scaling a successful API program, and we believe research from industry analyst firms like Gartner can help enterprises evaluate and choose the right solution.

We continue to be trusted by customers like Pitney Bowes and Experian for their most mission-critical programs. As part of our vision to accelerate enterprise digital transformations through APIs, Google (Apigee) is focused on enabling customers with comprehensive API management capabilities to accelerate application development, build API-powered digital ecosystems, and drive API economies.

Building a platform business: API monetization enables enterprises to unlock additional revenue streams and create ecosystems of developers and partners outside of the organization. By sharing valuable data and functionality with partners, enterprises can create more compelling digital products that combine their proprietary assets with complementary assets from outside the organization. This not only creates richer experiences for customers but can also expose a business to new channels and open new strategic opportunities.

Flexibility of multi-cloud and hybrid: For compliance, security, or stringent latency requirements, Apigee hybrid gives enterprises the flexibility to choose where to host APIs—on-premises, Google Cloud, hybrid cloud, or multi-cloud.
Fostering an API-first mindset: Google Cloud API Gateway helps enterprises get started with their API programs, and lets developers secure and manage APIs built on Compute Engine, GKE, App Engine, and serverless backends (Cloud Functions and Cloud Run), all without having to worry about infrastructure configuration or scaling.

Democratizing application development: As part of our Business Application Platform, we are bringing together Apigee full lifecycle API management and AppSheet no-code application development to empower all users to quickly build data-driven applications without coding. As part of this vision, we recently launched the Apigee data source for AppSheet to provide a consistent way for AppSheet users to consume services, data, and functionality via APIs, despite complex backends.

“Apigee has become the central nervous system for all communications between the digital core banking, the microservices, the front end, and the apps. Apigee has become our sun. Everything rotates around Apigee,” says Kaspar Situmorang, executive vice president, Bank Rakyat Indonesia.

We also continue to provide strategic guidance and best practices for our customers with the Apigee Compass online assessment tool, the That Digital Show podcast, the API Program Excellence e-learning series, the Day Zero initiative, and more.

We’re gratified to see Apigee positioned as a strategic partner for digital transformation in the latest Magic Quadrant. This reflects why large global brands like ABN Amro and Change Healthcare trust Apigee to drive their API programs.
Underscoring this strong business growth and customer momentum, Gartner also ranked Google (Apigee) as the leader, for the fourth consecutive year, in its Market Analysis: Full Life Cycle API Management, Worldwide, 2019 report (published 27 July 2020).

The Gartner 2020 Magic Quadrant for Full Life Cycle API Management is available for download here (requires an email address). To learn more about Apigee, visit the website here.

Gartner, Magic Quadrant for Full Life Cycle API Management, 22 September 2020, Paolo Malinverno, Kimihiko Iijima, Mark O’Neill, John Santoro, Shameen Pillai, Akash Jain

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Source: Google Cloud Platform

AI Platform Prediction goes GA with improved reliability & ML workflow integration

Machine learning (ML) is transforming businesses and lives alike. Whether it's finding rideshare partners, recommending products or playlists, identifying objects in images, or optimizing marketing campaigns, ML and prediction are at the heart of these experiences. To support businesses like yours that are revolutionizing the world using ML, AI Platform is committed to providing a world-class, enterprise-ready platform for hosting all of your transformative ML models.

As part of that commitment, we are pleased to announce the general availability of AI Platform Prediction based on a Google Kubernetes Engine (GKE) backend. The new backend architecture is designed for improved reliability, more flexibility via new hardware options (Compute Engine machine types and NVIDIA accelerators), reduced overhead latency, and improved tail latency. In addition to standard features such as autoscaling, access logs, and request/response logging available during our Beta period, we've introduced several updates that improve robustness, flexibility, and usability:

XGBoost / scikit-learn models on high-mem/high-cpu machine types: Many data scientists like the simplicity and power of XGBoost and scikit-learn models for predictions in production. AI Platform makes it simple to deploy models trained using these frameworks with just a few clicks; we'll handle the complexity of the serving infrastructure on the hardware of your choice.

Resource metrics: An important part of maintaining models in production is understanding their performance characteristics, such as GPU, CPU, RAM, and network utilization. These metrics can help you decide what hardware to use to minimize latency and optimize performance. For example, you can view your model's replica count over time to understand how your autoscaling model responds to changes in traffic, and alter minReplicas to optimize cost and/or latency. Resource metrics are now visible for models deployed on Compute Engine machine types from the Cloud Console and Stackdriver Metrics.

Regional endpoints: We have introduced new endpoints in three regions (us-central1, europe-west4, and asia-east1) with better regional isolation for improved reliability. Models deployed on the regional endpoints stay within the specified region.

VPC Service Controls (Beta): Users can define a security perimeter and deploy Online Prediction models that have access only to resources and services within the perimeter, or within another bridged perimeter. Calls to the AI Platform Online Prediction APIs are made from within the perimeter. Private IP allows VMs and services within restricted networks or security perimeters to access the AI Platform APIs without traversing the public internet.

But prediction doesn't stop with serving trained models. Typical ML workflows involve analyzing and understanding models and predictions. Our platform integrates with other important AI technologies to simplify your ML workflows and make you more productive:

Explainable AI: To better understand your business, you need to better understand your model. Explainable AI provides information about the predictions for each request and is available exclusively on AI Platform.

What-If Tool: Visualize your datasets and better understand the output of models deployed on the platform.

Continuous Evaluation: Obtain metrics about the performance of your live model based on ground-truth labeling of requests sent to your model, and decide whether to retrain or improve the model based on its performance over time.

“[AI Platform Prediction] greatly increases our velocity by providing us with an immediate, managed and robust serving layer for our models and allows us to focus on improving our features and modelling,” said Philippe Adjiman, data scientist tech lead at Waze.
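For scikit-learn and XGBoost models, deployment starts from a serialized model artifact. The sketch below shows only the local step, assuming the documented convention that the artifact is named model.pkl; the upload to Cloud Storage and the model/version creation via gcloud are omitted.

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

# Train a small example model locally.
X, y = load_iris(return_X_y=True)
model = GradientBoostingClassifier(n_estimators=20, random_state=0).fit(X, y)

# Serialize the trained model. AI Platform Prediction expects the
# artifact to be named model.pkl (or model.joblib) in the Cloud Storage
# directory you point the model version at.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Sanity check: the reloaded artifact must reproduce the predictions.
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)
assert (restored.predict(X) == model.predict(X)).all()
```

From there, the pickle is copied to a Cloud Storage bucket and referenced when creating the model version; the service handles the serving infrastructure.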
Read more about Waze's experience adopting the platform here. All of these features are available in a fully managed, cluster-free environment with enterprise support: there is no need to stand up or manage your own highly available GKE clusters. We also take care of quota management and protect your model from overload when clients send too much traffic. These features of our managed platform allow your data scientists and engineers to focus on business problems instead of managing infrastructure.

Related article: How Waze predicts carpools with Google Cloud's AI Platform
Source: Google Cloud Platform

How Waze predicts carpools with Google Cloud’s AI Platform

Waze's mission is to eliminate traffic, and we believe our carpool feature is a cornerstone that will help us achieve it. In our carpool apps, a rider (or a driver) is presented with a list of users who are relevant for their commute. From there, the rider or the driver can initiate an offer to carpool, and if the other side accepts it, it's a match and a carpool is born.

As a running example, consider a rider commuting from somewhere in Tel Aviv to Google's offices. Our goal is to present that rider a list of drivers who are geographically relevant to her commute, ranked by the likelihood that a carpool between the rider and each driver will actually happen. Finding all the relevant candidates in a few seconds involves many engineering and algorithmic challenges, and we've dedicated a full team of talented engineers to the task. In this post we focus on the machine learning part of the system responsible for ranking those candidates. In particular:

If hundreds (or more) of drivers could be a good match for our rider, how can we build an ML model that decides which ones to show her first?

How can we build the system in a way that allows us to iterate quickly on complex models in production, while guaranteeing low online latency to keep the overall user experience fast and delightful?

ML models to rank lists of drivers and riders

So, the rider in our example sees a list of potential drivers. For each such driver, we need to answer two questions:

What is the probability that our rider will send this driver a request to carpool?

What is the probability that the driver will actually accept the rider's request?

We solve this using machine learning: we build models that estimate those two probabilities based on aggregated historical data of drivers and riders sending and accepting requests to carpool.
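The two estimates then combine into a single ranking score. As a minimal sketch, assuming the score is simply the product of the two probabilities (the post does not specify the exact combination, and the names below are illustrative, not Waze's actual code):

```python
# Rank carpool candidates by the estimated probability that a carpool
# actually happens: p(rider sends a request) * p(driver accepts it).

def rank_candidates(candidates):
    """Sort drivers from highest to lowest likelihood of a match."""
    return sorted(candidates,
                  key=lambda c: c["p_send"] * c["p_accept"],
                  reverse=True)

drivers = [
    {"id": "driver_a", "p_send": 0.6, "p_accept": 0.3},  # score 0.18
    {"id": "driver_b", "p_send": 0.4, "p_accept": 0.9},  # score 0.36
    {"id": "driver_c", "p_send": 0.8, "p_accept": 0.2},  # score 0.16
]

ranked = rank_candidates(drivers)
assert [d["id"] for d in ranked] == ["driver_b", "driver_a", "driver_c"]
```

Note how a driver with a high acceptance probability can outrank one who is more likely to receive a request: both sides of the match matter.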
We use the models to sort drivers from highest to lowest likelihood that the carpool will actually happen. The models we're using combine close to 90 signals to estimate those probabilities. A few of the most important signals:

Star ratings: Higher-rated drivers tend to get more requests.

Walking distance from pickup and dropoff: Riders want to start and end their rides as close as possible to the driver's route. But the total walking distance isn't everything: riders also care about how the walking distance compares to their overall commute length. Consider two riders who each face 15 minutes of walking. For the rider with a long commute, that walk looks acceptable; for the rider whose walk is as long as the actual carpool leg, it is much less appealing. The signal capturing this, which came up as one of the most important in the model, is the ratio between the walking distance and the carpool distance. The same kind of consideration applies on the driver side, comparing the length of the detour to the driver's full drive from origin to destination.

Driver's intent: One of the most important factors in the probability that a driver accepts a request to carpool (sent by a rider) is her intent to actually carpool. We have several signals indicating a driver's intent, but the one that came up as the most important (as captured by the model) is the last time the driver was seen in the app. The more recent it is, the more likely the driver is to accept a request to carpool.

Model vs. serving complexity

In the early stage of our product, we started with simple logistic regression models to estimate the likelihood of users sending or accepting offers. The models were trained offline using scikit-learn.
The training set was obtained using a “log and learn” approach (logging signals exactly as they were at serving time) over ~90 signals, and the learned weights were injected into our serving layer. Although those models were doing a pretty good job, we observed via offline experiments the great potential of more advanced nonlinear models, such as gradient boosted classifiers, for our ranking task. Implementing a fast in-memory serving layer supporting such advanced models would require non-trivial effort, as well as an ongoing maintenance cost. A much simpler option was to delegate the serving layer to an external managed service that could be called via a REST API. However, we needed to be sure that it wouldn't add too much latency to the overall flow. To make our decision, we ran a quick proof of concept (POC) using the AI Platform Online Prediction service, which sounded like a great potential fit for our needs at the serving layer.

A quick (and successful) POC

We trained our gradient boosted model over our ~90 signals using scikit-learn, serialized it as a pickle file, and simply deployed it as-is to the Google Cloud AI Platform. Done. We got a fully managed serving layer for our advanced model through a REST API. From there, we just had to connect it to our Java serving layer (with a lot of important details to make it work, but unrelated to the pure model serving layer). Below is a very high-level schema of our offline/online training/serving architecture. The carpool serving layer is responsible for a lot of logic around computing and fetching the relevant candidates to score, but we focus here on the pure ranking ML part. Google Cloud AI Platform plays a key role in that architecture.
It greatly increases our velocity by providing us with an immediate, managed, and robust serving layer for our models, and allows us to focus on improving our features and modelling.

Increased velocity and the peace of mind to focus on our core model logic were great, but a core constraint was the latency added by an external REST API call at the serving layer. We performed various latency checks and load tests against the Online Prediction API for different models and input sizes. AI Platform provided the low double-digit-millisecond latency that was necessary for our application. In just a couple of weeks, we were able to implement and connect the components and deploy the model in production for A/B testing. Even though our previous models (a set of logistic regression classifiers) were performing well, we were thrilled to observe significant improvements in our core KPIs in the A/B test. What mattered even more to us was having a platform to iterate quickly over even more complex models, without having to deal with training/serving implementation and deployment headaches.

The tip of the (Google Cloud AI Platform) iceberg

In the future we plan to explore more sophisticated models using TensorFlow, along with Google Cloud's Explainable AI component, which will simplify the development of these models by providing deeper insights into how they perform. AI Platform Prediction's recent GA release of support for GPUs and multiple high-memory and high-compute instance types will make it easy for us to deploy more sophisticated models in a cost-effective way. Based on our early success with the AI Platform Prediction service, we plan to aggressively leverage other compelling components of GCP's AI Platform, such as the Training service with hyperparameter tuning, Pipelines, and more.
In fact, multiple data science teams and projects at Waze (ads, future drive predictions, ETA modelling) are already using or starting to explore other existing or upcoming components of the AI Platform. More on that in future posts.

Related article: AI Platform Prediction goes GA with improved reliability & ML workflow integration
Source: Google Cloud Platform

Better monitoring and logging for Compute Engine VMs

Over the past several months we've been focused on improving observability and operations workflows for Compute Engine. Today, we're excited to share that the first wave of these enhancements is now available. These include:

Significantly improved operating system support for the Cloud Monitoring and Cloud Logging agents.

The ability to rapidly deploy, update, and remove agents on groups of VMs, or all of your VMs, by policy, with as little as a single gcloud command.

New VM-specific features within the Cloud Monitoring console, which we'll discuss in an upcoming blog post.

Understanding agents

Agents remain a key way to get fine-grained visibility into a virtual machine's host operating system, and applications running on Compute Engine are no different. Out of the box, every Compute Engine instance (or managed instance group) provides some level of telemetry, including metrics for CPU utilization, uptime, disk throughput and operations, and networking operations. To capture more advanced operating system metrics like memory consumption and disk utilization, metrics from commonly used applications (databases, web proxies, etc.), and logs from your applications, you need to install the Cloud Monitoring and Cloud Logging agents on each VM.

Automatic agent installation and management

Because agents are so essential in VM environments, we've automated the process of installing, updating, and removing the Cloud Monitoring and Cloud Logging agents on groups of Compute Engine VMs, or your entire fleet, via a new set of gcloud commands. With as little as one command, you can create a policy that governs existing and new VMs, ensuring proper installation and optional auto-upgrade of both agents.
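As a sketch of what such a policy command can look like (the command group and flags were in alpha at the time of writing and may have changed; the policy name and project ID below are placeholders):

```shell
# Create an agent policy that installs, and keeps auto-upgraded, both the
# Logging and Monitoring agents on all CentOS 8 VMs in us-central1-a.
gcloud beta compute instances ops-agents policies create example-agents-policy \
    --agent-rules="type=logging,version=current-major,package-state=installed,enable-autoupgrade=true;type=metrics,version=current-major,package-state=installed,enable-autoupgrade=true" \
    --os-types=short-name=centos,version=8 \
    --zones=us-central1-a \
    --project=my-project-id
```

Once the policy exists, VMs that later join the matching zone and OS criteria are covered automatically, which is what makes the approach scale from one VM to a whole fleet.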
This is a great way to start using Cloud Monitoring or Cloud Logging right away, and to scale metrics and logs collection from a single VM to all VMs in a project. These policies can be applied to Linux virtual machines now as part of the public alpha, and will apply to Windows VMs soon.

Improved operating system support

Over the past year, we've added Cloud Monitoring and Logging agent support for a host of new operating systems.

Linux

The Monitoring and Logging agents are now compatible with 30 of Compute Engine's available Linux images, including:

CentOS 7+
Red Hat Enterprise Linux 7+
Debian 9+
SUSE Linux Enterprise Server 12+
Ubuntu 16.04+

With these additions, the Cloud Monitoring Linux agents can be used on every Compute Engine host operating system other than Container-Optimized OS, which has monitoring and logging capabilities built into the OS itself.

Windows

Cloud Monitoring has been able to capture system and SQL Server metrics for Windows virtual machines since before 2015, thanks to its Windows agent. We're currently improving the compatibility, quality, and functionality of our Windows support with a new agent that provides the following enhancements:

Capturing the same advanced OS metrics as the Cloud Monitoring Linux agent, rather than a smaller, incompatible set
Compatibility with more Windows versions
Capturing application metrics from IIS, SQL Server, and Windows performance counters

The new agent is in preview. Please contact your account manager if you would like to participate in early testing.

Wrapping up

We hope you enjoy these improvements to Cloud Monitoring and Cloud Logging, and we look forward to bringing even more capabilities to the platform. To get started, see the documentation for these new features, or open Cloud Monitoring and Logging in the Google Cloud Console.

Related article: All together now: Fleet-wide monitoring for your Compute Engine VMs
Source: Google Cloud Platform

Introducing Student Success Services from Google Cloud

The shift to remote learning at all levels of education has thrown the challenges of ensuring student success and a good student experience into sharp focus. Educational institutions want to guide students throughout their academic careers and improve graduation rates. Students want better remote learning options and ways to collaborate with peers and seek advice from instructors. There is a wealth of data that could drive decisions about these needs, but it's often locked away in legacy technologies.

We have launched Google Cloud's Student Success Services to help meet these challenges. Student Success Services is a set of tools that aims to unlock student success with personalized assistants, real-time insights, collaboration tools, and more for higher ed and K-12 learners. Using built-in artificial intelligence (AI) models and analytics to gather data and use it for decision-making, this bundle of services benefits both institutions and students by engaging students, improving remote and in-person learning, and creating a modern, fulfilling student experience.

How to improve the student experience

Google Cloud's Student Success Services includes the following services that help institutions understand student needs and quickly respond with solutions:

Virtual assistants for round-the-clock support: Use virtual assistants, created with Google's machine learning and natural language tools, to support students 24/7 with instant answers. The virtual assistants can be trained to respond instantly to questions on topics like enrollment status and registration deadlines, freeing up staff for more personal student guidance.

Tutors for personalized learning: Give students access to skills practice and guidance from intelligent technology tools.
Our APIs and AI-powered learning tools can guide students in their writing practice or coach them in reading comprehension.

Smart analytics to improve student engagement, achievement, and retention: The Unizin Data Platform, built on Google Cloud, is an institution-level data platform that aggregates, cleans, models, and stores all teaching and learning data to create a holistic view of the student. Too often, advisors and educators don't have information on the students they support until it's too late to intervene. Google's Student Success Services allows organizations to easily and securely share aggregate data, and enables them to see a unified portrait of learners, uncover insights across diverse student groups at every level, and intervene in real time.

Scalable student learning: Distance shouldn't be a barrier to student success. Google Meet allows groups of up to 250 people to talk face-to-face for classroom learning as well as employee meetings, in compliance with regulations such as HIPAA and FERPA. With Meet's premium features, meeting leaders can add closed captioning and recording while sharing meetings via learning management systems. For schools that rely on live streams, Meet can accommodate up to 100,000 viewers. And with virtual desktop infrastructure (VDI) remote learning solutions for distance learners, educators can create virtual labs and provide access to compute power remotely.

Incidence and intelligence management: Get real-time insights to detect issues and rapidly respond to risks on and off campus, as well as track student and campus health.

“We believe that a data-informed academic mission must play an essential role in helping every student reach their potential.
Every week, we see our institutions leveraging the Unizin Data Platform to engage, enrich, and empower their students with data, analytics, and insights,” says Etienne Pelaprat, Chief Technology Officer of Unizin.

Creating an equitable playing field for student success is also a goal of the University of Lynchburg in Virginia. “We have students learning with us from all over the world, especially in our online graduate programs,” says Charley Butcher, the university's director of instructional technology. Those students can join classes using Meet and a web browser from anywhere they happen to be: a benefit at any time, but especially now, when remote classes are often the only option for students. And it's not just graduate students: during the pandemic-related campus shutdown, Meet is helping all students work closely with their instructors.

The need to lift student success is certainly critical right now, but when the pandemic recedes, educators will likely still be adjusting to very different learning environments for their students. “In this year of disruption, we've seen just how much technology can impact the student experience. As teaching and learning become more digital, institutions must prioritize innovation and technology. This should go beyond learning platforms and include capabilities like artificial intelligence and predictive analytics: supports that students increasingly expect as part of their experience and that are proven to be successful,” says Joe Schaefer, Chief Transformation Officer at Strategic Education.

That's why attention to student success is critical. If you're looking for more guidance, join us for Student Success Week (register for free at g.co/cloud/student-success-week), or reach out to our team to get started.

Related article: Campuses use data analytics and virtual agents for student success
Source: Google Cloud Platform