AWS announces the general availability of Amazon GameLift in six new regions

Today we are pleased to announce the general availability of an update to Amazon GameLift that expands global coverage for developers and enables seamless, low-latency gameplay experiences for players worldwide. Some of the world's most successful gaming companies, including Gungho and Ubisoft, rely on GameLift to deploy, operate, and scale dedicated servers for multiplayer games. With this update, game developers can now use six new regions (bringing the total from 15 to 21) and, thanks to simplified fleet management, spend less time on setup.
Source: aws.amazon.com

Pega: Optimizing business operations with SAP on Google Cloud

For Pegasystems, a leading provider of cloud software for customer engagement and intelligent automation, success depends on being able to provide highly scalable systems for its customers. But with its core SAP systems in a traditional data center, the IT department found it was spending too much time on day-to-day maintenance and too little on strategic projects that generated value for customers. In 2020, Pega chose to deploy its SAP environment, including SAP ERP Central Component (ECC) and SAP HANA data warehousing, to Google Cloud. The move has helped Pega overcome many of the challenges it faced with the previous deployment. It also made Pega's SAP systems more reliable and offered powerful new data capabilities to support the business in making the right decisions.

The challenge: A lack of agility

Although Pega ran its SAP suite in a third-party data center, it was architected with traditional servers and storage and lacked the scale and agility necessary to keep up with Pega's needs. Adding more capacity, for example, required too much work and coordination with the hosting provider, which often led to unacceptable levels of downtime. Instead of focusing on the transformational initiatives that Pega required for growth, the company's IT team was putting time and resources into making sure its SAP systems were running correctly.

The solution: Less maintenance for IT, more insights for the business

Pega considered several public cloud providers with a particular focus on enterprise architecture, flexibility, and operating costs before settling on Google Cloud. "SAP on Google Cloud is a repeatable and proven architecture. We were confident that we would be in good hands and have a smooth transition," says David Vidoni, Vice President of Information Technology at Pega. Working with Managecore, an SAP on Google Cloud Specialization partner, it took Pega just nine weeks to move 30 servers onto Google Cloud. Managecore helped Pega manage changeover, testing, and integration of the customer-facing solutions that run internally and integrate with the SAP environment. Additional assistance came from the Cloud Acceleration Program (CAP) from Google Cloud, a critical enabler for Pega that offers SAP customers tools to simplify the migration process and financial incentives to defray infrastructure costs.

For Pega, the move to Google Cloud is allowing it to deliver better service to its customers. According to Vidoni, "Google Cloud enables us to provide a higher level of service by minimizing downtime and outages. As we grow, adding capacity to meet our clients' needs is a lot simpler." Other benefits of the SAP on Google Cloud migration include reliability, scalability, and agility:

Reliability: Google Cloud infrastructure has significantly reduced the strain on Pega's IT resources. Pega has found that since moving to Google Cloud, the IT team has cut the time it spends on SAP operational maintenance by two-thirds.

New analytical capabilities: Having fewer maintenance tasks means getting more time to analyze data that reveals customer needs and how to fulfill them. The move to Google Cloud has made it easy to connect data from SAP HANA, and richer data integration makes for faster and better-informed decisions. Per Vidoni, "Data is our lifeblood. We are making it easier to find the information we're looking for to get actionable insights and to gain an edge over our competition."

Scalability and performance: Pega has benefitted from improved performance and a new ability to scale resources up and down as needed. For example, Pega will now be able to provide field teams with sandbox SAP HANA environments where they can test new integrations with its software development platform. Spinning up these development environments is much easier and faster than it was before the migration.

Security and compliance: Google Cloud has helped provide a much stronger security posture for the SAP HANA implementation. Pega's disaster recovery processes are now more streamlined, and compliance testing is easier as well.

What's next for Pega

Now that Pega's SAP applications are up and running on Google Cloud, the company is setting its sights on moving its SAP ERP environment from ECC to S/4HANA, which will allow it to take advantage of entirely new, cloud-centric capabilities within the suite. In the meantime, Vidoni continues to explore the opportunities that Google Cloud offers to transform the way Pega provides systems and services to customers at scale. "Running SAP in a future-ready environment means that we can deliver the best possible service to employees and customers today and tomorrow." Hear directly from Pegasystems about their SAP on Google Cloud deployment and how other SAP customers are having similar success.
Source: Google Cloud Platform

Introducing the latest Slurm on GCP scripts

Do you use the Slurm job scheduler to manage your high performance computing (HPC) workloads? Today, alongside SchedMD, we're announcing the newest set of features for Slurm running on Google Cloud, including support for Terraform, the HPC VM Image, placement policies, the Bulk API, and instance templates, as well as a Google Cloud Marketplace listing. Note that Slurm's support for the Bulk API is in Beta at the time of this release.

Slurm is one of the leading open-source HPC workload managers used in TOP500 supercomputers around the world. Over the past four years, we've worked with SchedMD, the company behind Slurm, to release ever-improving versions of Slurm on Google Cloud. Here's more information about these new features:

Support for Terraform
In this release, Terraform support is now generally available. The latest scripts automatically deploy a SchedMD-provided Virtual Machine (VM) image based on the Google Cloud HPC VM image, a CentOS 7-based VM image optimized for HPC workloads that we announced in February. This new image-based deployment reduces the time to deploy a Slurm cluster to just a few minutes.

Placement policies
You can now create a set of nodes on demand, per job, in a placement policy. With the previous version of our Slurm on GCP scripts, you could only enable placement policies at the cluster level. Now you can configure placement policies per partition, enabling significant improvements in latency and performance for your tightly coupled workloads.

Bulk API
Slurm is now able to use the Bulk API to create instances. This allows for faster and more efficient creation of VM instances than ever before by batching up to 1,000 instance creations into a single API call. The Bulk API also supports "regional capacity finding," and can create instances in whichever zone within a region has the necessary capacity, improving the speed and likelihood of getting the resources requested.

Instance templates
You can now specify instance templates as the definitions for creating Slurm instances.

Cloud Marketplace listing
Last but not least, we're excited to share that the Slurm on Google Cloud scripts are now available through our Cloud Marketplace. From the Google Cloud Console, you can locate and launch the latest version of Slurm on Google Cloud in just a few clicks. The Cloud Marketplace listing also provides more information about how to access additional managed services from SchedMD, helping you expand and deepen your HPC workloads on Google Cloud using Slurm.

Research organizations are taking advantage of Google Cloud's capacity with the Slurm scripts to meet increased demand for their HPC compute clusters. "When it comes to supporting cutting-edge research requiring advanced computing, there are never enough resources on-prem. Driven by the application of Artificial Intelligence in a wide spectrum of research areas, the undertaking of urgent COVID-19 research, and the increasing popularity of AI, ML and Data Science academic courses, the job wait times on our HPC cluster have been increasing. To address the increasing job wait times, and to allow researchers to evaluate the latest CPUs and GPUs, the HPC team had been evaluating the viability of bursting jobs to Google Cloud. With additional features from the Slurm on Google Cloud scripts, and offerings such as preemptible virtual machines, we decided to burst jobs that have been submitted to our on-prem cluster to GCP, enabling us to reduce job wait times and produce research results faster." – Stratos Efstathiadis, Director, Research Technology Services at NYU

Getting started
This new release was built by the Slurm experts at SchedMD. You can download it from SchedMD's GitHub repository. For more information, check out the included README. If you need help getting started with Slurm, check out the quick start guide; for help with the Slurm features for Google Cloud, check out the Slurm Auto-Scaling Cluster codelab and the Deploying a Slurm cluster on Google Compute Engine and Installing apps in a Slurm cluster on Compute Engine solution guides. If you have further questions, you can post on the Slurm on GCP Google discussion group, or contact SchedMD directly.
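To give a rough sense of the Terraform-based workflow described above, here is a minimal command-line sketch. It assumes the SchedMD slurm-gcp repository on GitHub; the directory layout, example file names, and variable file shown here are illustrative assumptions rather than a verified recipe for a specific release.

# Hedged sketch: deploy a Slurm cluster with the SchedMD Terraform scripts.
# Paths and file names are assumptions; check the README of the release you download.
git clone https://github.com/SchedMD/slurm-gcp.git
cd slurm-gcp/tf/examples/basic            # example Terraform configuration (path assumed)
cp basic.tfvars.example basic.tfvars      # set project, zone, partitions, machine types (file name assumed)
terraform init                            # download the Google provider and modules
terraform plan -var-file=basic.tfvars     # review controller, login, and compute partition resources
terraform apply -var-file=basic.tfvars    # create the cluster; remove it later with terraform destroy

Once the controller and login nodes are up, you would typically SSH in and submit jobs with sbatch as usual; the scripts' autoscaling hooks then create and delete compute instances on demand.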
Source: Google Cloud Platform

VM Manager 101: Create a disk clone before patching VMs

Earlier this year, we introduced VM Manager, a suite of tools that can be used to manage virtual machines running on Google Cloud at scale. One of the services available with VM Manager is OS patch management, which helps to apply patches to virtual machines on demand and on a schedule. Both Linux and Windows operating systems are supported, and the service uses the respective update infrastructure of the operating system (e.g. apt, ZYpp, yum, and the Windows Update Agent) to both identify and apply missing patches.

A request that comes up often when talking to customers that plan on using this service, or are already using it, is how to create a backup of the state of a virtual machine before patches are applied, in order to be able to roll back in case something goes wrong with patching or with the patches themselves. Unfortunately, this feature is not supported by VM Manager out of the box. One of the capabilities the service does support, however, is the ability to run pre-patch and post-patch scripts on each VM that is targeted for patching. Scripts running pre-patching or post-patching run on the instance and in the context of the service account that is associated with it (either the Compute Engine default service account or the one that was specified during creation). In this blog, I will explain how pre-patch scripts can be leveraged to create a crash-consistent disk clone of the attached persistent disks of a VM before patches are applied.

Considerations
This blog describes a solution to a common customer problem. The ideal solution would be a direct integration in the service that does not rely on executing the snapshot creation on the VM in the context of the associated service account; assigning the required permissions to the service account ultimately gives these permissions to any user that can log in to the VMs. Also, by making the patching of a VM dependent on taking a disk clone (this is how the sample script in this article is put together), a failure to create the clone ultimately results in not patching the VM.

Prerequisites
Setting up VM Manager and OS patch management is out of the scope of this article. Follow the instructions on Setting up VM Manager to enable VM Manager for your project.

Permissions
Creating disk clones requires at least the following permissions to be assigned to the service account associated with the VM:

compute.disks.create # on the project
compute.disks.createSnapshot # on the source disk

Scopes
The script that creates the clone ultimately runs on the VM that is being patched. This means that it is not only required to set the correct permissions on the service account associated with the VM, but the API scope needs to be set as well. Set the scope to Allow full access to all Cloud APIs.

Upload scripts
I've included sample scripts for both Linux and Windows based operating systems at the end of this section. I have tested these scripts on Debian 10, Ubuntu 20.04, the latest Container-Optimized OS, and Windows Server 2019. If you use different versions, I strongly recommend testing the scripts. Both versions of the sample script follow the same logic:

Retrieve the ID of the patch job (used to tag the snapshot for better discoverability)
Retrieve the disks associated with the VM
Create the disk clones

You need to download the appropriate version of the script and then upload it to a storage bucket (this guide explains how to do just that):

# Copy script to GCS bucket
gsutil cp clone-linux.sh gs://<BUCKET>/clone-linux.sh

Now we need to get the version of the file we just uploaded. We need to pass along the version so the patch service can pick up the right version for execution:

# Retrieve file version
gsutil ls -a gs://<BUCKET>/clone-linux.sh | cut -d'#' -f 2

Linux: Find the latest version on GitHub.
Windows: Find the latest version on GitHub.

Create patch job with pre-patch script execution
Now that the scripts have been uploaded, we can create patch jobs. These can either be on-demand or scheduled, and they can be configured to target different subsets of VM instances; more information about instance filters can be found in the documentation. The following approach creates on-demand patch jobs targeting all instances, with Linux and Windows variants. Make sure to supply the correct values for the GCS bucket and the file version for the script; a hedged example command is sketched at the end of this section.

Validate snapshot creation
Patch results / Cloud Logging: Navigate to Compute Engine, then OS patch management. Select Patch Jobs, then select the job and review the status. For more details, scroll down in the patch job execution details overlay and select View for a VM that was targeted by this job. This opens Cloud Logging and contains a detailed log of the script execution.

Clones: Navigate to Compute Engine, then Disks, and review the available disks. The name of the disk clone is the original disk name with the ID of the patch job appended. Additionally, a few labels have been set to make discovery easier.

Conclusion
I hope you enjoyed today's blog, illustrating how the pre-patch and post-patch scripts can be used to automate common enterprise requirements. While there are limitations and considerations to keep in mind, this process can be used to secure workloads before patching at scale. To learn more about VM Manager, visit the documentation, or watch our Google Cloud Next '20: OnAir session, Managing Large Compute Engine VM Fleets.
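As a concrete illustration of the patch job described above, here is a hedged sketch of an on-demand job that runs the uploaded clone script on Linux VMs before patching. The flag names follow the gcloud compute os-config patch-jobs surface as I understand it; verify them against your gcloud version, and replace <BUCKET> and <GENERATION> with your bucket and the object generation returned by the gsutil ls -a command shown earlier.

# Hedged sketch: on-demand patch job targeting all instances in the project,
# with a Linux pre-patch script that clones the attached disks first.
gcloud compute os-config patch-jobs execute \
    --instance-filter-all \
    --display-name="patch-with-disk-clone" \
    --duration="2h" \
    --pre-patch-linux-executable="gs://<BUCKET>/clone-linux.sh#<GENERATION>" \
    --pre-patch-linux-success-codes=0

A Windows job would use the corresponding --pre-patch-windows-executable flag pointing at the PowerShell version of the script.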
Source: Google Cloud Platform

Next-generation claims: Transforming vehicle accidents with AI

Editor's note: Today we're hearing from risk management software provider Solera Holdings on how they transformed their automotive claims process using machine learning from Google Cloud.

Stuck on hold with your car insurance claims department? If a fender-bender isn't enough to send your stress levels through the roof, negotiating costs and insurance deductibles with a claims adjuster probably is. At Solera Holdings, our business is automobile damage estimation. We handle around 60% of claims worldwide between insurance companies, drivers, and the automotive industry. Like anything today, when people want their cars fixed, they want it done as fast as possible. But unlike other modern services such as rideshare or food delivery, the claims department at your insurance company likely isn't quite up to speed. That's why we decided to transform Qapter, our established claims workflow platform, into a touchless intelligent claims solution.

Better safe than sorry—but no one wants slow

When I joined Solera in 2020, I came with the understanding that no one particular artificial intelligence (AI) or machine learning (ML) technology could be applied to solve every business problem, no matter how innovative or disruptive that technology might be. In my experience, solving issues always requires multiple in-house and cloud technologies. My vision was to apply AI technologies effectively to the right problems to gain and maintain competitive advantages for Solera. So, I was delighted to discover my team was already way ahead of me and had been working on a way to solve one of their biggest problems with the help of AI and ML.

Based on input from insurance companies over the years, the Solera product team knew that customers wanted an AI-based claims process. While repair estimation technology has evolved from estimation spreadsheets to three-dimensional models, modern customer expectations are fast outpacing yesterday's solutions and processes. Unfortunately, many insurance providers take a "better safe than sorry" approach to existing systems, and the end result is a customer experience that is as frustrating as it is slow. It was clear this was an area that was ripe for improvement, and with our long history of transforming the insurance and automotive industry, we wanted to be the ones to crack the case.

The challenge with any AI project is applying the right technologies to the problem at hand. It's essential to understand the space and scope so we can use technology effectively, or risk falling short. Several insurers had already tried (and failed) to use computer vision to automate the collision damage repair process. While they managed to build working in-house solutions, all of these AI projects ultimately ran into issues when it came time to scale. What could we do differently to avoid the same fate? First, we kept our focus narrow, only looking at ways to apply AI to identify vehicle damage in the collision claims workflow, not the entire repair process. We then chose to augment our existing backend systems with ML, leveraging our substantial existing database of proprietary automotive images and parts catalogs to streamline the process of offering precise methods, costs, and time estimates for repairs. Additionally, before I arrived at Solera, the team had already built a previous version of an automated claims system that helped eliminate several less successful approaches. The original version gave us a strong blueprint to work from and enabled us to reimagine Qapter's full potential when combined with the latest cloud and AI technologies. We knew where we wanted to go—all we needed was the right AI solution and the latest cloud technologies to help us transform the initial damage assessment into an AI-powered process.

Google Cloud: An AI technology toolbox with everything we needed

Our team was already experienced with cloud technology when we started looking for an AI/ML solution that could integrate with a full suite of advanced cloud technologies. While we host our own data lake for contractual reasons with our customers, our accident claim workflow was already cloud-based. We knew that choosing the right technology vendor would be critical to a successful outcome for the next-generation platform. After completing a thorough technology bake-off, we found that Google Cloud's AI/ML solutions were more sophisticated, robust, and scalable than what other vendors could offer. Having best-in-class technologies for building and deploying AI applications, such as Google Kubernetes Engine and Cloud Run, that integrate with the entire Google Cloud ecosystem played a definitive role in our decision. In short, Google Cloud had everything we needed to take full advantage of AI and ML solutions for processing touchless claims, while also providing sophisticated capabilities and tooling that speed up development and deployment instead of leaving us to worry about maintaining infrastructure.

The core value of Qapter is its ability to understand how the vehicle is composed using 3D vehicle models. We repurpose this data and put it through different workflows, such as vehicle inspection or collision estimation. Using Vision API and TensorFlow, we built a system that allows us to collect and recognize claims information, such as vehicle make and model, damage information, and parts required for repairs—all based on collision images. Starting with Vision API's simple image processing, we used its optical character recognition (OCR) to collect license plates and VINs (a minimal illustration of this step appears at the end of this article). We then used TensorFlow to build custom algorithms and machine learning models for image recognition and vehicle data extraction, which enables us to collect other important information like vehicle make and model, damage information, and parts for repairs. In addition, Cloud GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) enabled us to accelerate our data model processing and train large, complex models faster. Now, all we need is a picture of the damaged car—and Qapter does the rest. Once Qapter has the image, it compares it against our massive repository of claims images to estimate the extent of the damage, recognizes the vehicle's make and model, identifies what parts are needed, and estimates the final repair cost.

From breakdown to breakthrough

We started rolling out the new Qapter in France and the Netherlands during 2020, and there's no doubt that it has dramatically changed the entire claims experience. Our customers are thrilled with the new AI-based approach. Instead of sending a claims adjuster to examine a vehicle physically, all a driver has to do now is take a picture of the car, upload it, and start the process. It's been a game-changer—within months of the initial launch, Qapter could auto-authorize 50% of damage claims, reducing estimation costs by nearly half. It has also provided an unexpected benefit across the entire damage claims value chain during the COVID-19 pandemic. While Qapter reduces time and costs for drivers, insurers, and auto repair providers, it also cuts down on the need for human interaction. Even in a world of social distancing, necessary services must still be available. Qapter keeps the vehicle repair cycle running smoothly, so drivers can get back on the road, repair shops can continue working, and insurance companies don't have to send out employees to assess claims in person.

At Solera, we want to continue developing and building new products and services on top of the new Google Cloud framework we've created. Computer vision has a lot of applications within the damage estimation space, such as window and windshield damage, insurance coverage assessments, rental or lease returns, and fraud detection. Google Cloud isn't just a spot solution for solving an issue; it's a core competency for us that can be leveraged across the entire company.
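As mentioned above, here is a minimal, hypothetical illustration of the OCR step using the Cloud Vision API through the gcloud CLI. The file name is a placeholder, and this is only a sketch of the kind of call involved, not Solera's actual pipeline.

# Hedged sketch: detect text (e.g., a license plate or part of a VIN) in a photo of a damaged vehicle.
# The image file name is a placeholder.
gcloud ml vision detect-text ./damaged-car-front.jpg

The JSON response includes textAnnotations entries containing the recognized strings and their bounding boxes, which a downstream workflow could match against registration or VIN records.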
Source: Google Cloud Platform

What industry leaders teach us about the future of data

Almost two-thirds of leading organizations claim that creating data-rich platforms is one of the best ways they can "future proof" their business. The research, commissioned by McKinsey & Company, highlights that one of the attributes of industry winners is that they don't just think of data as a component of their business; they act as if "data is the business."

What does this mean for your company? How can your team develop practices and perspectives that will allow it to stay ahead of the game? There's no doubt that data is the essential ingredient for business transformation across analytical and transactional applications. Once generated, it powers deeper AI-driven business insights, helps companies make better real-time decisions, and is the basis for how companies build and run their data-driven applications. Google Cloud customers have taught us that there are three key dimensions to a winning data strategy: leaders seek to build architectures that are Open, Intelligent, and Flexible. In this blog, we explore what each of these means and how you can apply them.

An Open Approach

While it might be logical to believe that tightly integrated and closed IT environments allow for more value creation through better control, the pace of technology innovation has been shown to outstrip a company's ability to build the solutions it needs from a handful of technologies, let alone get all the data it needs from a single source stored in the same cloud. In its latest forecast, IDC stated that 2021 would be the year of Multi-Cloud*. This makes sense: whether you're in manufacturing, retail, or healthcare, your business requires that you work with partners who most likely have made choices different from yours: the data you need, the protocols you use, and the applications you'll collaborate around are bound to be heterogeneous. A CIO's reality is one of multiple interfaces, multiple technology stacks, and multiple clouds. And in order to win, she needs to architect environments that are both open and adaptable to this "multiplicity." This multiplicity extends beyond the choice of a cloud or a datastore. It also applies to her organization's ability to build around its partners' business models.

Open source is an important consideration for a modern enterprise stack, and as has been noted numerous times previously, open-source software has the potential to take over the world. We note that the companies that outpace their competitors' ability to innovate also partner with vendors who have invested in open source at their core. By embracing open source early, industry leaders can contribute to the growth of a wider ecosystem, and they benefit faster from the imagination unleashed by the community. Being open in 2021 means starting from the community up, embracing and enabling its choices across multiple clouds, multiple vendors, and multiple business models – commercial and open-source.

For more, we suggest:
Three ways Google Cloud delivers on hybrid and multi cloud, today here
Bringing multi-cloud analytics to your data with BigQuery Omni here
Google Open Source Site here

More Intelligent Insights

Leaders will also find that this "Open" mindset accelerates the operationalization of critical workloads such as Artificial Intelligence. According to Gartner, "by 2025, 50% of enterprises implementing AI orchestration platforms will use open-source technologies, alongside proprietary vendor offerings, to deliver state-of-the-art AI capabilities."** Being "Open" is thus a key attribute of the "Intelligent Enterprise."

But what does it mean to be "Intelligent"? We've found that "Intelligence" materializes in two ways at leading organizations: there is "Intelligence in Operation" and "Intelligence in Innovation." "Operational Intelligence" refers to the methods used to optimize the operation of infrastructure. A great example of such intelligence can be found in Google's Active Assist, which provides policy, cost, network, compute, data, and application platform intelligence. Intelligence in Operation refers to "self-tuning," "self-healing," or "self-driving" capabilities, and the use of algorithms to increase operational efficiency and reliability. The second type of intelligence refers to the use of Artificial Intelligence to improve customer experiences and accelerate the creation of insights. Product recommendation solutions can help consumers discover better products, and anomaly detection systems can help financial analysts detect fraud faster to protect customers and their company.

I often joke that "A.I." doesn't just stand for "Artificial Intelligence"; it also stands for "Applied and Invisible." The reason for this pun is that, over the years, I've learned from customers that AI has been most useful to them when it was well embedded in the applications that support them and when it was applied to specific business problems and use cases. You'll find that the opportunity to democratize the consumption of artificial intelligence comes from enabling its integration with the applications your users already know and love. Take a look at Veolia (VEOEY), a French transnational utilities company, and how it enables its non-technical employees to get answers fast through Data QnA, a natural language interface for analytics. You might also find the example of PwC familiar to your own needs: the global professional services organization uses Connected Sheets as part of its efforts to make data more accessible across its workforce. Functionality like Sheets Smart Fill or Sheets Smart Cleanup are additional ways a company can take advantage of Google AI natively built into familiar applications.

When looking for intelligence, look for modern applications that are built from AI and from the data up. Look for tools that aim to democratize access to analysis and artificial intelligence for more people. The more people get access to machine learning capabilities in applications they know and love, the faster your company will achieve its goal of becoming an "Open and Intelligent Enterprise."

For more, we suggest:
How Toyota Canada 6X their conversions by using Embedded Machine Learning here
How PwC Connected Sheets to scale data insights here
Want to get started? Use any of our Design Patterns here

Flexibility of Choice

On the way to building an open and intelligent data architecture, your company might encounter friction. You might find the pricing models of the technologies you need to combine rigid or incompatible with one another. You might find that certain technologies work well during your evaluation of pilots and at small scale but fail to perform when met with the reality of your fast-growing, real-world workloads.
And you might find that solutions that are effective for batch-level work don't work for your real-time needs, forcing you to pull from completely different toolsets. When it comes to pricing, scale, and the versatility of functionality, don't compromise. Choice and flexibility are key ingredients to your success, because the future of enterprise data architecture is composable. According to Gartner, "by 2023, 60% of organizations will combine components from three or more analytics solutions to build business applications infused with analytics that connect insights to actions."***

Beware the "law of the instrument"

The "composability" trend will have consequences for the types of vendors you decide to partner with. Increasingly you will find that the answer rarely comes from one vendor alone. Rather, value will be created through a well-coordinated ecosystem that is both technologically open and offers a choice of business models and deployment options. A key practice industry leaders observe is to "beware the law of the instrument." The "law of the instrument," or "the law of the hammer," is a cognitive bias that involves an over-reliance on a familiar tool. As Abraham Maslow said in 1966, "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."

Industry leaders study their use cases carefully. They focus on the types of scenarios they aim to enable and the productivity they accelerate by employee category (aka personas). They inspect the core capabilities of the solutions they aim to deploy in order to maximize their effectiveness around what they have been primarily built for (aka "center of design"). Next time a vendor offers its data lake solution to serve as a data warehouse, ask what you will gain and what you will lose. While convergence across these technologies is definitely occurring, your company will need to assess trade-offs before stretching the use of a particular solution beyond its "center of design." Remember, a hammer can do a lot of things, but it was primarily built to push down nails.

You may also ask that the same product be licensed to you differently based on your use cases. Take a look at Google BigQuery pricing options: the same data warehouse product can be used under three different constructs: pay-per-query (on-demand), allocation (flat-rate), or a mix of both. Another example is Dataflow FlexRS, a pricing option that reduces batch processing costs by using advanced scheduling techniques.

Examples of organizations that have successfully built Open, Intelligent, and Flexible data architectures include Unity, which combines technologies like Dataproc, Dataflow, and BigQuery. Another great example is how Vodafone executes on its vision of a Data Ocean for all users and all data. We hope you can learn from each of these customers the way we have. Please reach out to our team if there is anything we can do to help you toward a more Open, Intelligent, and Flexible world!

For more, we suggest:
Dataflow in a minute here
How Unity is "making real-time real easy" here
How Vodafone Built a Data Platform on Google Cloud below

*IDC Press Release, IDC Expects 2021 to be the Year of Multi-Cloud as Global COVID-19 Pandemic Reaffirms Critical Need for Business Agility, March 2020
**Gartner, Predicts 2021: Operational AI Infrastructure and Enabling AI Orchestration Platforms, Chirag Dekate, et al., 2 December 2020
***Gartner, Predicts 2021: Analytics, BI and Data Science Solutions — Pervasive, Democratized and Composable, Austin Kronz, et al., 5 January 2021
Source: Google Cloud Platform