Dataform is joining Google Cloud: Deploy data transformations with SQL in BigQuery

The value of data—and the insights it contains—only continues to grow, and Google has invested in technologies to empower teams to do more with that data for more than a decade. We were honored to be named a Leader in Gartner’s first-ever Magic Quadrant for Cloud Database Management Systems (DBMS). BigQuery, our cloud data warehouse, continues to be a place where an increasing number of enterprises across every industry turn to make sense of all this growing data.

Today, we’re making this work even easier for our customers with our acquisition of Dataform. Dataform leverages BigQuery’s innovative architecture, with its practically unlimited scale, to enable analysts and engineers to manage all their data processes within BigQuery. This combination means you can apply software development best practices to define, document, test and deploy data transformations using SQL executed within BigQuery. There’s no need to learn new programming languages or deploy and manage entirely new applications in your data stack. You can now create and manage your data transformations all within your comfortable, secure and reliable data warehouse.

Dataform brings a software engineering approach to data modeling and pipelines, making data transformations more accessible and reliable:

- Collaborate and create data pipelines—Develop data workflows in SQL and collaborate with others via Git. Include data documentation that is automatically visible to others.
- Deploy data pipelines—Keep logical data up to date by scheduling data workflows that incrementally update downstream datasets, reducing cost and latency.
- Ensure data quality—Define data quality checks in SQL and automatically receive alerts when those checks fail. View logs, version history and dependency graphs to understand changes in data.

We’re excited to welcome Dataform to Google Cloud as we continue to deliver on our mission to democratize insights across organizations. Today, we are making Dataform free to all users, and we look forward to bringing the best of Dataform and BigQuery together. You can learn more by visiting dataform.co.
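Dataform itself expresses these pipelines declaratively in SQL. As a rough illustration of the underlying idea only (not Dataform’s own API), here is a minimal sketch of a transformation plus a quality check run with the BigQuery Python client; the dataset and table names (`analytics.daily_orders`, `raw.orders`) are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes default project credentials

# Transformation: rebuild a derived table from raw data with plain SQL.
client.query(
    """
    CREATE OR REPLACE TABLE analytics.daily_orders AS
    SELECT order_date, COUNT(*) AS order_count
    FROM raw.orders
    GROUP BY order_date
    """
).result()

# Quality check: assert that no row violates a simple invariant.
rows = list(
    client.query(
        "SELECT COUNT(*) AS n FROM analytics.daily_orders WHERE order_count < 0"
    ).result()
)
assert rows[0].n == 0, "data quality check failed: negative order counts"
```

Dataform packages this pattern—transformations, assertions, scheduling, and alerting—so it is defined once in SQL rather than hand-rolled in scripts.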
Gartner, Magic Quadrant for Cloud Database Management Systems, November 23, 2020, Donald Feinberg, Adam Ronthal, Merv Adrian, Henry Cook, Rick Greenwald. Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Source: Google Cloud Platform

Unsiloing data to work toward solving food waste and food insecurity

While working on the Project Delta team, an early-stage moonshot at X that was exploring new technologies to solve the pervasive problems of food waste and food insecurity, we worked closely with Kroger and Feeding America to transform and analyze datasets using Google Cloud.

In this blog, we’ll talk about the technical effort of data un-siloing. (Check out this post for more on the overall project.) Before data can tell powerful stories, it needs to be made accessible, transformed and formatted so data sets can be joined, then reviewed with industry experts to surface underlying industry-specific relationships.

Getting to the data: Automating flows into a shared data archipelago

Much of the food system in the U.S. still operates on paper printouts and spreadsheets. While these ways of capturing, analyzing and communicating data have increased the pace and scale of business over time, they have their limits. As disparate organizations look to work together and share vast amounts of data in real time, emailing spreadsheets back and forth no longer suffices.

Kroger is a longtime partner of Feeding America—the two organizations have worked together for four decades. As part of a nationwide retail donation program, Kroger stores regularly set aside food to be donated, and Feeding America member food banks coordinate pickups and distribute the food in their communities through pantries.

As part of their company-wide Zero Hunger, Zero Waste initiative, Kroger sought to make more of their vast donation and waste database. Leading the industry, Kroger publicly committed in 2017 to donating 3 billion meals by 2025 and was keen to find as many donation opportunities as possible across their network of 2,700-plus stores nationwide. To do so, they wanted to find deeper patterns in their own store data, as well as in their food bank partners’ charity data pertaining to Kroger’s donations.

As the first retail organization in this data-unsiloing partnership, Kroger offered to share their shrink data on a daily basis. Shrink is the loss of grocery store inventory due to imperfection, spoilage, and other factors. Any item not sold to a customer is denoted as shrink and earmarked for donation, animal feed, compost, or landfill. Scan loss data represents the subset of shrink that is formally logged. While Kroger uses this information extensively across divisions internally, this was the first time they worked with two external partners.

Collaborating closely with Kroger’s business intelligence and IT teams, the Kroger Zero Hunger, Zero Waste leadership team navigated Kroger’s hybrid multi-cloud system. The path of least organizational and technical resistance to get the X team a daily data snapshot was to send an automated nightly email with an attached data file from each of their 20 operating store divisions.

Processing incoming data

With all those emails containing data files coming in, the team needed a way to process and load the data for shaping and analysis. The X team chose BigQuery, Google Cloud’s enterprise data warehouse, for its scalability and speed. To hold and process incoming emails automatically, the team set up a Cloud Storage bucket. When a new file is added to the bucket, a Pub/Sub notification triggers a Cloud Function to load the data into BigQuery automatically. Processed files in the root bucket are then archived into a “completed” folder if successfully loaded into BigQuery, or into an “error” folder if incomplete for any reason.

[Figure: Flow chart for ingesting and organizing incoming data every day.]

The team did this in two steps:

1. Set up triggers and notifications: Pub/Sub notifications can be set up directly from the Pub/Sub section of the Cloud Console. An appropriate topic was created. Then, the team configured the Cloud Storage bucket to call the Pub/Sub topic when a new data file is added to the bucket. This can be done via the command line in Cloud Shell.
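The same configuration can also be scripted with the Cloud Storage Python client rather than the Cloud Shell command line. A minimal sketch, where the project, bucket, and topic names (`my-project`, `incoming-data-bucket`, `new-data-file`) are hypothetical placeholders:

```python
from google.cloud import storage

# Connect the bucket to a Pub/Sub topic so every newly finalized object
# (i.e., completed upload) publishes a notification message.
client = storage.Client(project="my-project")       # hypothetical project ID
bucket = client.bucket("incoming-data-bucket")      # hypothetical bucket name

notification = bucket.notification(
    topic_name="new-data-file",                     # hypothetical Pub/Sub topic
    event_types=["OBJECT_FINALIZE"],                # fire only on completed uploads
    payload_format="JSON_API_V1",                   # include object metadata
)
notification.create()
```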
2. Set up the Cloud Function: The Pub/Sub notification triggers the Cloud Function, which moves the data into BigQuery. The function’s code is stored in Cloud Source Repositories and was written in Python with accompanying SQL templates. The code processes spreadsheet files into a dataframe using Pandas, then writes the dataframe into BigQuery using the BigQuery Python client library.
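The actual function lives in Cloud Source Repositories; a minimal sketch of its shape, assuming a Pub/Sub-triggered background function, CSV input, and a hypothetical destination table name:

```python
import pandas as pd
from google.cloud import bigquery

def load_file_to_bigquery(event, context):
    """Background Cloud Function: load one newly arrived file into BigQuery.

    Triggered by the Pub/Sub notification set up in step 1; Cloud Storage
    notifications carry the bucket and object names as message attributes.
    """
    bucket = event["attributes"]["bucketId"]
    name = event["attributes"]["objectId"]

    # pandas can read gs:// paths directly when the gcsfs package is installed.
    df = pd.read_csv(f"gs://{bucket}/{name}")

    client = bigquery.Client()
    job = client.load_table_from_dataframe(df, "food_data.shrink_daily")  # hypothetical table
    job.result()  # block so load errors surface before the file is archived
```

On success the file would then be moved to the “completed” folder, or to “error” on failure, as described above.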
Making data consistent: Getting to a common language

The food system lacks a common standardized language, an ontological and semantic infrastructure that everyone can baseline to and build from. Professor Matt Lange of UC Davis, who’s leading efforts toward an “Internet of Food,” often references the healthcare system, where conditions and diseases are clearly classified and coded, with a structure that drives, informs and supports all financial and operational activity in the sector. Nothing close to that exists for food.

After building data pipelines to Feeding America and Kroger, the X team’s first task was to confront disparities in food descriptors head on. How does one name a tomato, describe it, quantify it, and locate it? How do we represent a clamshell container of tomatoes consistently across all datasets from all parties? Even within one organization, there were dialects and different ways of talking about and representing the same thing. Feeding America is a nationwide network of 200 independent food banks, all with their own origin stories, practices, and non-corresponding IT systems. The X team, as humans, could understand what a data record from a food bank represented, but accurately linking those records across food banks was very difficult. As an example, even something as simple as the name of the state of Texas was logged in 27 different ways! This was common throughout the data: for storage facilities, for example, one food bank may refer to their refrigerators as REFR, while another might use REFER.

Pinpointing food locations

With a vision of matching excess food supplies to where they are most needed, the partnership prioritized standardizing the geolocation of all data records. Where a particular quantity of food originated directly impacted the recommendation of where it could go, since transporting perishable food requires time, money and, in certain cases, temperature control. Many records from Feeding America member food banks carried titles that were descriptive for their staff and useful for manual operations, but difficult for a computer to understand. For example, a retail donation from “Kroger on Main St.” makes sense to a tenured driver who has been picking up from that store for a decade, but this descriptor needed to be decoded and matched with Kroger’s description in its own donation data record, which lists the same store as Store #123.

Using Google Maps Platform, the first step was to identify the Place ID for each of Kroger’s approximately 2,700 stores, given a list of addresses. Google Maps Platform includes Place IDs, which uniquely identify a location, for more than 200 million places around the world. In parallel, food bank location descriptors like “Kroger on Main St. Frisco, AZ” were also converted into Place IDs using the Maps API’s search-based querying function. Beyond this, the food banks participating in the initial phase of this data effort serve over 18,000 pantries collectively. The partnership was keen to fully explore geospatial opportunities in the entire system, and agreed to include these locations as well. This enabled the team not only to map the flow of food from a Kroger store to the local food bank and then to the pantry, but also to explore network route optimization opportunities broadly. Using these Place IDs gave us a common language.

When working with the food bank data, however, normalizing places was not always as straightforward as querying the Maps API. While different food banks might get food from the same suppliers, these suppliers were often represented differently in each food bank’s database. Because of typos or incomplete addresses, the Maps API could return the wrong place or fail to find a result. To reconcile these entries, the team built an algorithm to determine the confidence that two places were the same before assigning a unique ID to the location (a sketch of this appears at the end of this section). This extensive effort resulted in a comprehensive picture of suppliers and pantries in the charitable food network.

Seen in isolation, three pantries pick up food from a local Fry’s (Kroger) store.
Those same three pantries also reach many other stores across the community.

Finally, the partnership recognized that food insecurity is shaped by poverty, employment, and various demographic variables, and sought to include these in the analysis. To bring in these variables, the team used the US Census API to find the block groups, statistical divisions of census tracts containing about 600 to 3,000 people, for each food bank and pantry location. This opened the door to easily bring in thousands of state and federal datasets, helping tell a richer story to stakeholders about the needs of specific communities.

Shared maps bring humans and things together in the right place. In the case of mapping in the food system, they enable the more effective use of food and the associated transportation and labor resources. Mapping all the nodes in our food system has never been more important than in these pandemic times, when there is still an abundance of food—just unevenly distributed. Knowing where that food is located is step number one.
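A minimal sketch of the geocoding step and a simple confidence check, using the googlemaps Python client; the API key and the similarity threshold are hypothetical placeholders, and the team’s actual reconciliation algorithm was more involved than this string-similarity heuristic:

```python
from difflib import SequenceMatcher

import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # hypothetical API key

def find_candidate(descriptor: str):
    """Resolve a free-text location descriptor to a Place ID candidate."""
    response = gmaps.find_place(
        input=descriptor,
        input_type="textquery",
        fields=["place_id", "name", "formatted_address"],
    )
    candidates = response.get("candidates", [])
    return candidates[0] if candidates else None

def confident_place_id(descriptor: str, threshold: float = 0.6):
    """Keep a match only when the returned address resembles the record,
    guarding against typos or incomplete addresses returning the wrong place."""
    candidate = find_candidate(descriptor)
    if candidate is None:
        return None
    score = SequenceMatcher(
        None, descriptor.lower(), candidate["formatted_address"].lower()
    ).ratio()
    return candidate["place_id"] if score >= threshold else None

# Example: confident_place_id("Kroger on Main St. Frisco, AZ")
```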
Visualizing data: Show and tell the story

As part of a network of 200 independent food banks, each with its own network of hundreds of pantries, each Feeding America member food bank can speak to its own work, but there is not yet a way to see real-time food flows in the network nationwide. This is a common theme for industry groups and organizational networks; focusing closely on specific trees can make it easy to lose sight of the forest as a whole.

One of the team’s first visuals was simply a map of where food banks were getting their food. Food banks can find donated food anywhere, and they do sometimes purchase food to supplement what they have received. This can mean that, if the right opportunity comes, they can acquire food from far away. There has been talk among food banks for many years about how routing might be made more efficient, but each can only see its part of the story; none is equipped to optimize a national logistics network.

After moving the data from multiple food banks out of their silos, the Feeding America and X teams worked together to plot the flows in Looker. The network is quite complex even with just a few food banks (see below). While this visual is easy to create and shows data that each food bank already had, the impact is in seeing the forest. There are tremendous opportunities to make more of every food bank dollar by pooling purchasing and optimizing routing. This visual is messy and not necessarily immediately actionable, but it was a powerful tool for gaining buy-in for building a national data warehouse at Feeding America. Leaders at the national office and food banking executives saw this visualization and immediately understood the purpose and potential benefits.

Supplier flows into seven participating food banks.

Tracking physical flows over time

While Kroger and Feeding America have partnered for more than 40 years, Kroger does not see where their donated food goes after it is picked up from a store. The store may receive confirmation from their food bank partner a few weeks later that 100 pounds was picked up, but Kroger did not have a way to track individual food items all the way through the food chain.

To visualize these flows, the team first reconciled all of Kroger’s stores with the food banks’ representations of those stores (a sketch appears at the end of this section). This made it possible to take inventory records for Store #123 from Kroger’s data and compare them to the donation records the food bank recorded for Store #123. Next, the food received into the food banks was traced as it moved through their inventory. Food banks, particularly in grocery rescue and food drive programs, will verify food is safe to eat and then typically aggregate it to make more useful shipments. For example, 20 different cans of mixed vegetables that came in from different stores may be combined into a case of food for a local pantry.

From this work, Kroger was able to see for the first time the ways that their donations help touch entire communities. When volunteers picked up food at Kroger stores, they broke the donation up, recombined it with others, and then sent it out to hundreds of small pantries. Even fairly small donations were coming together with others from across the community to make a huge impact, reaching hundreds of pantries and distribution points.

Food flows from a Kroger store in Arizona through a food bank and to pantries.

Solving enormous, large-scale problems like hunger starts with exploring data in new ways and visualizing for stakeholders the current state of flows, geospatially and over time. No single Kroger store was going to solve hunger in its community; no single organization was going to solve hunger across the country. Each contribution comes together to make a collective positive impact. Data, visualized well, tells the story of the work already underway, and invites others to join the mission, inspiring action in the right time and place.
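As a minimal sketch of that reconciliation step, assuming both datasets have already been tagged with shared Place IDs and using hypothetical column names and values:

```python
import pandas as pd

# Hypothetical extracts: retailer shrink records and food bank donation
# receipts, each already resolved to a shared Place ID as described earlier.
kroger = pd.DataFrame({
    "place_id": ["ChIJabc123", "ChIJdef456"],
    "store_number": ["123", "456"],
    "shrink_lbs": [140.0, 95.5],
})
food_bank = pd.DataFrame({
    "place_id": ["ChIJabc123", "ChIJdef456"],
    "donor_descriptor": ["Kroger on Main St.", "Kroger on 5th Ave."],
    "received_lbs": [100.0, 90.0],
})

# Join on the shared Place ID so "Store #123" and "Kroger on Main St."
# line up as the same physical location, then compare the quantities.
matched = kroger.merge(food_bank, on="place_id", how="inner")
matched["unaccounted_lbs"] = matched["shrink_lbs"] - matched["received_lbs"]
print(matched[["store_number", "donor_descriptor", "unaccounted_lbs"]])
```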
Putting data un-siloing into practice

When starting on a large multi-stakeholder data un-siloing initiative, be prepared for a journey with unexpected twists and turns. It is rarely straightforward to go from raw, disparate datasets to integrated and impactful analytics. As you persist through obstacles—getting data out of silos, making it consistent, and visualizing it to tell stories—remember that this effort can fundamentally reshape your business and industry in positive ways.

If you’d like to learn more and donate to these efforts, check out:

- Kroger’s Zero Hunger Zero Waste Foundation
- Feeding America
- St. Mary’s Food Bank

The X and Google team would like to thank Kroger, Feeding America, its member food banks, and St. Mary’s Food Bank for their contributions to this article.
Source: Google Cloud Platform

Silos are for food, not data—tackling food waste with technology

While 40% of food in America goes to waste, 35 million Americans (likely even more during this pandemic) are food-insecure, meaning they are without food or not sure where future meals will come from. In addition, food waste has been called the “world’s dumbest environmental problem.”

As the pandemic continues, the food system is in the spotlight. Farmers are plowing their crops back into their fields, restaurants are struggling to get back into business, and millions of newly unemployed Americans are lining up at food pantries. Leaders in the industry are looking to move more food to the right place as governments and philanthropists look to deploy capital to improve the situation. Moving things and moving money require, first and foremost, good data and a common language for describing food in the supply chain.

As an early-stage team from X, an Alphabet subsidiary, pursued this moonshot, they worked closely with Kroger and Feeding America® to explore, transform and analyze datasets using Google Cloud technology. While our food sits securely in silos and storehouses across the U.S., information about the quality and quantity of that food also sits static in silos, and that benefits no one. By sharing raw data with X as the neutral information steward, Kroger and Feeding America have discovered potential systems-level opportunities for change, beyond optimizing their own organizations.

In a world where data is a highly valued corporate asset, sharing data may be viewed as a strategic and competitive risk, not to mention the legal, operational and technical hurdles. But it can lead to huge benefits, too. We’re sharing our collective story because we’ve learned a lot about how three very different organizations can work together to achieve industry-wide goals while ensuring that each organization’s data assets are secure. We found that solving industry-wide challenges starts with sharing datasets. Here’s how un-siloing data led to advances in reducing food waste.

What are data silos and why do they exist?

Data siloing is an information management pattern in which relevant and interrelated subsystems are unable to communicate with one another in real time, due to logical, physical, technical, or cultural barriers to their interaction. For example, a human resources system may be isolated from other company systems to protect sensitive employee information; but when compensation information is updated in the finance department, information across the two systems needs to be reconciled manually.

Data silos are pervasive across industries and organizations. In government and policy circles, experts talk about “stovepiping,” application architects talk about “disparate systems,” and organizational culture consultants talk about incompatible “subcultures.” In each case, the end results are similar: even with vast amounts of data, decision makers struggle to access data, process it, find the answers they need, and respond quickly.

When X, Feeding America and Kroger came together, they first had to address underlying organizational obstacles:
- Mandates and beliefs: Each party had a different vision for coming together—some were focused on sustainability, others on food security—and thereby had different data needs. At times, the data silos in place also reinforced inconsistent beliefs. For example, certain food banks were concerned that retail donations were dwindling, while Kroger had plenty to donate but had not yet operationalized that data.
- Organizational fears: It took courage for Feeding America and Kroger to share what was behind the curtain, exposing their own challenges with data quality and standards. The individuals who led this project also had to face corporate approval processes and articulate why each organization had more to gain by sharing than by holding onto data as a form of power, and why they shouldn’t fear unlikely unintended consequences.
- Technical limitations: While the leaders who came together had influence and decision-making power, they were not the technical staff who had the authority and knowledge to access the data and implement data pipelines. In addition, neither Kroger nor Feeding America was in a position to store and analyze each other’s proprietary data.

How to break through data silos

This three-way partnership was able to break through data silos by being strategic about how to build confidence and credibility within their respective organizations. Here are the steps the partnership took.

1. Align on objectives, then bring in others. The partners came together and fully clarified their respective high-level goals, the data assets needed to achieve those goals, and overall operating principles, before bringing in their respective legal teams to draft data-sharing agreements and move through executive approval processes. By doing so, champions for this project inside each organization were able to negotiate internally with a clear rationale rather than simply following traditional company policies.

2. Think big, start small. While the partners all believed in what was possible with a combined global source of truth and reinforced this vision to their superiors, each individual leader also made it easy for their respective corporations to sponsor this effort by starting small. Rather than going immediately to scale, this team prototyped with one store and one food bank and went deep, building everything end to end. Learnings were incorporated before asking the next ten stores and ten food banks to participate.

3. Make it frictionless to share. The X team invested in working with Kroger’s and Feeding America’s data teams to set up automated processes to schedule and sequence the transfer of data to Google Cloud regularly. This detailed case study explains how to set up extract, load, transform (ELT) processes using Cloud Composer; a minimal sketch of such a pipeline appears after these steps.

4. Find a common language. Once the X team had both Kroger and Feeding America datasets in BigQuery, Google Cloud’s enterprise data warehouse, they discovered that the two organizations and their respective departments did not have a consistent language for locations, food items, quantities and other variables. There were at least 27 ways of representing Texas! The first step was to format the data to be consistent. As an example, this case study describes standardizing geolocation data using the Maps API.

5. Show insights, early and often. With initial analyses on just one store or five food banks, the partnership was able to show immediate, impactful opportunities. Examples include ways to do bulk sourcing of food between specific food banks for better pricing, and which days pantries should schedule their store pickups to get the most donated food. This earned the team additional support from sponsors and operational staff to continue scaling the broad data un-siloing effort.
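As referenced in step 3, here is a minimal sketch of such a scheduled load in Cloud Composer (managed Apache Airflow); the DAG ID, bucket, and table names are hypothetical placeholders, not the pipeline from the case study:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

# Nightly job: append each partner's daily file drop into BigQuery.
with DAG(
    dag_id="nightly_partner_load",
    start_date=datetime(2020, 1, 1),
    schedule_interval="0 6 * * *",  # every morning, after the overnight drop
    catchup=False,
) as dag:
    load_files = GCSToBigQueryOperator(
        task_id="load_partner_files",
        bucket="incoming-data-bucket",                                # hypothetical
        source_objects=["daily/*.csv"],
        destination_project_dataset_table="food_data.shrink_daily",  # hypothetical
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_APPEND",
    )
```

Scheduling the transfer inside Composer keeps the load step observable and retryable, rather than depending on ad hoc scripts at each partner.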
Check out further examples and tools.

Organizing the world’s food information

The X team working on this project has now joined the Google Food team to continue growing the partnership with Kroger and Feeding America on Google Cloud. This collaborative effort to solve for waste and hunger will continue with the confidence they need in the reliability and security of Google’s infrastructure at global scale. Learn more about the technical details of this food waste project.

If you’d like to learn more and donate to these efforts, check out:

- Kroger’s Zero Hunger Zero Waste Foundation
- Feeding America
- St. Mary’s Food Bank

The X and Google team would like to thank Kroger, Feeding America, its member food banks, and St. Mary’s Food Bank for their contributions to this article.
Source: Google Cloud Platform

Google Cloud named a leader in latest Forrester Research IaaS Platform Native Security Wave

The adoption of cloud services has created a generational opportunity to meaningfully improve information security and reduce risk. As organizations move applications and data to the cloud, they can take advantage of native security capabilities in their cloud platform. Done well, use of these engineered-in platform capabilities can simplify security to the extent that it becomes almost invisible to users, reducing operational complexity, favorably altering the balance of shared responsibility for customers, and decreasing the need for highly specialized security talent. At Google Cloud, we call the result Invisible Security, and it requires a foundation of innovative, powerful, best-in-class native security controls.

Given the importance of these capabilities to our strategy, we are happy to announce today that Forrester Research has again named Google Cloud one of just two Leaders in The Forrester Wave™: Infrastructure-as-a-Service Platform Native Security (IPNS), Q4 2020 report, and rated Google Cloud highest among the providers evaluated in the current offering category.

The report evaluates the native security capabilities and features of cloud infrastructure-as-a-service (IaaS) platform providers, such as storage and data security, identity and access management, network security, and hardware and hypervisor security. The report states that “Google has been steadily investing in its offering and has added many new security features, including Anthos (a service to manage non-Google public and private clouds) and Security Command Center Premium” and notes that the Google Cloud features of “data leak prevention (DLP) capabilities, integration support for external hardware security modules (HSMs), and third-party threat intelligence source integration are also nice.”

The report also emphasizes the increasing importance of extending consistent security capabilities across hybrid and multi-cloud deployments, stating that “vendors that can provide comprehensive IPNS, not only for their own platforms but also for competing public and private cloud and on premises workloads and platforms, position themselves to successfully evolve into their customers’ security central nervous systems” and noting in Google’s vendor profile that “Anthos is ahead of the competition when it comes to managing non-Google, third-party clouds.”

In this Wave, Forrester evaluated seven cloud platforms against 29 criteria, looking at current offerings, strategy and market presence. Of the seven vendors, Google Cloud scored highest overall in the current offering category, and received the highest possible score for its plans in the security posture management, hypervisor security, guest OS and container protection, and network security criteria.

Further, Google Cloud had the highest possible score in the execution roadmap criterion. Google Cloud continues to redefine what’s possible in the cloud with unique security capabilities like External Key Manager, Key Access Justifications, Assured Workloads, Confidential VMs, Binary Authorization, IAM Recommender, and enabling a zero trust architecture for customers with BeyondCorp.
Elaborating on Google Cloud’s roadmap, the report noted: “The vendor plans to: 1) invest in providing customers with digital sovereignty across data, operations and software in the cloud; 2) expand security for multicloud and cross-cloud environments; and 3) increase support for Zero Trust and identity-based and richer policy creation.”

Google Cloud also received the highest possible score for the partner ecosystem strategy criterion. As further validation of the strength of our platform’s native capabilities, numerous Google Cloud security partners have chosen to take advantage of our platform to run and deliver their own security offerings:

“At ForgeRock we help people safely access the connected world. We put a premium on security because our customers and our business depend on digital experiences that can withstand and prevent cyber attacks and bad actors,” said Fran Rosch, CEO of ForgeRock. “Our partnership with Google Cloud gives us access to unique security platform capabilities that help us meet customer needs and strengthens our position as a global identity and access management leader.”

We are honored to be a Leader in The Forrester Wave™ IaaS Platform Native Security Q4 2020 report, and we look forward to continuing to innovate and partner with you on ways to make your digital transformation journey safer and more secure. Download the full Forrester Wave™ IaaS Platform Native Security (IPNS), Q4 2020 report.

You can get started for free with Google Cloud today.
Source: Google Cloud Platform

Just in time for TechEd, our latest innovations for SAP customers

With SAP TechEd kicking off this week, we thought it would be a good time to update you on the ways that Google Cloud continues to enhance our offerings for SAP customers. We’ve released new capabilities both for running SAP applications on Google Cloud and for getting more out of your SAP data with our advanced analytics, AI and ML capabilities. Here’s a quick rundown of what’s new and what’s coming soon.

SAP application certifications: We continue to add to a growing list of SAP application solutions certified by SAP to run on Google Cloud. The most recent additions include:

- N2 and N2D custom machine types – SAP NetWeaver
- AMD N2D certifications up to 96 vCPUs – SAP NetWeaver
- 6TB OLAP scale-up for SAP HANA
- 12TB OLTP scale-out for SAP S/4HANA
- SAP ASE (Sybase) certification
- NetApp CVS Performance – SAP HANA (all sizes)

Scaling up SAP on bare metal to 18TB/24TB: Google Cloud already offers 6TB and 12TB VM-based offerings for specialized SAP workloads such as very large HANA deployments. For customers looking to scale beyond 12TB per server, we will have 18TB and 24TB bare metal configurations. This makes lift-and-shift migrations easier for even the largest on-premises SAP systems, clearing an even wider path to cloud migration.

Backint for SAP HANA backup: Google Cloud’s SAP-certified Cloud Storage Backint agent for SAP HANA lets customers use Cloud Storage directly for backups and recoveries for both on-premises and cloud installations of SAP databases. The Backint agent is integrated with SAP HANA, so you can store and retrieve backups directly from Cloud Storage by using the native SAP backup and recovery functions. When you use the Backint agent, you don’t need to use persistent disk storage for backups. Our latest release enhances support for large or high-frequency backups and also allows customers to supply their own encryption keys for backups, in addition to Google Cloud’s native encryption.

Connector for SAP Landscape Management (LaMa) (in preview): SAP LaMa simplifies, automates, and centralizes the management of SAP systems running in different infrastructures, whether on premises, in the cloud, or a hybrid of both. The Google Cloud connector for SAP LaMa interfaces with Google Compute Engine and Cloud Storage operations, so customers can schedule system management events connected to Google Cloud infrastructure right from SAP LaMa. System administrators can now handle management tasks such as snapshots, mounting and unmounting storage, relocating servers, and performing system refreshes without having to leave the LaMa interface.

BigQuery integration: A key goal for SAP customers is to enrich data from their SAP assets with non-SAP data in Google Cloud’s analytics solutions. SAP customers can confidently derive more use and value from their data via consolidation on BigQuery, our fully managed enterprise data warehouse. Using robust integration solutions from our partners—SAP, Informatica, Qlik, Datavard, Software AG, Boomi, and HVR—customers have more choice in how to best accelerate and simplify the delivery of SAP data to BigQuery, whether it originates from legacy SAP environments, SAP HANA, or SAP application servers. We’re also working to add real-time data connectivity and integration options with Google Cloud native solutions such as Data Fusion, so customers can build complex ETL/ELT data pipelines from their SAP systems leveraging existing skillsets and investments.
Stay tuned for more announcements in this space in 2021.

Apigee API management platform: APIs have emerged as a pillar of modern digital business practice. For businesses using SAP either in the cloud or in data centers, Apigee can provide value in three ways. First, unlocking the value of legacy systems: every company in the world has valuable data and functionality housed in its systems, and activating that value via APIs means being able to leverage those systems for faster time to market of new experiences. Second, modernizing legacy systems: Apigee provides an abstraction layer between client-facing applications and backend systems during the backend modernization process to minimize business disruption. Third, creating cloud-native, scalable services: in addition to repackaging SAP data as a microservice and providing capabilities to monetize this data, Apigee takes on some essential performance, availability and security functions, handling access control, authentication, security monitoring and threat assessment, plus throttling traffic when necessary to keep backend systems running normally while providing applications with an endpoint that can scale to suit any of your workloads.

Cloud Acceleration Program (CAP): This first-of-its-kind program empowers customers with solutions from both Google Cloud and our partners to simplify and de-risk their SAP cloud migrations. Google Cloud and our partners have created specialized migration solutions, accelerators, and methodologies both for lift and shift and for migrating to S/4HANA. We have also created new ways to extend SAP solutions to drive fresh insights quickly and efficiently. In addition, CAP provides financial incentives to defray many of the costs associated with moving SAP systems to Google Cloud and safeguard customer migrations.

Partner spotlight: OpenText

An enterprise environment generates a staggering number of documents and other unstructured content—contracts, orders, invoices, receipts, and emails, to name a few. Each requires proper governance to store and manage over its lifetime. For SAP customers, attaching these documents to each transaction is relatively easy, but their sheer volume increases database size and slows performance, and it is hard to collaborate across multiple stakeholders, applications and processes.

OpenText’s enterprise content management (ECM) turns documents into a resource rather than a burden, making them securely accessible to those who need them, whenever and wherever they need them, while improving the performance of the organization’s SAP solutions and reducing compliance risks. Now, Google Cloud has selected OpenText as its preferred ECM partner, which multiplies the advantages that OpenText brings to SAP customers by letting them take advantage of advanced technologies such as analytics and AI, streamline and automate workflows, and manage and capitalize on critical data. With Google Cloud as OpenText’s preferred partner for enterprise cloud, SAP customers using OpenText gain flexibility and more powerful capabilities, including:

- Containerized managed services with full hybrid functionality across existing on-premises infrastructure and Google Cloud.
- Offloading data: documents and other unstructured content from SAP to an integrated archiving and content management system, streamlining the SAP database to make cloud and/or SAP S/4HANA migration faster and less complex.
- Multiple deployment options on virtual machines, servers or containers, on premises or in the cloud.

There’s more to come

Our goal is to become the perfect home for your SAP solutions, offering worry-free, highly scalable infrastructure as well as the ability to extract the most value possible from data within your organization and beyond, using groundbreaking analytics and cutting-edge innovations.

Join Google Cloud (virtually) at TechEd and tune into our session, DT137, Innovate your business with Google Cloud industry solutions. To learn more about Google Cloud for SAP, including technical resources, visit https://cloud.google.com/solutions/sap.
Source: Google Cloud Platform