Google Cloud fuels new discoveries in astronomy

From understanding our origins to predicting future events, some of the greatest breakthroughs we’ve made on Earth have come from studying the universe. High-performance computing and machine learning (ML) are accelerating this kind of research at an unprecedented pace. At Google Cloud, we’re proud to play even a small role in advancing the science of astronomy. That’s why we’re excited today to highlight new work with the Vera C. Rubin Observatory in Chile and researchers at the California Institute of Technology.

The cloud foundation for 20TB of nightly sky observations

In a pioneering collaboration, the Rubin Observatory has finalized a three-year agreement to host its Interim Data Facility (IDF) on Google Cloud. Through this collaboration, Rubin will process astronomical data collected by the observatory and make the data available to hundreds of users in the scientific community in advance of its 10-year Legacy Survey of Space and Time (LSST) project. The LSST aims to conduct a deep survey over an enormous area of sky to create an astronomical catalog thousands of times larger than any previously compiled survey. Using the 8.4-meter Simonyi Survey Telescope and the gigapixel LSST Camera, the survey will capture about 1,000 images of the sky every night for 10 years. These high-resolution images will contain data for roughly 20 billion galaxies and a similar number of stars, providing researchers with an unparalleled resource for understanding the structure and evolution of our universe over time.

By building the IDF on Google Cloud, Rubin Observatory will lay the foundation to manage a massive dataset, 500 petabytes in total, that will eventually be shared with the scientific community at scale and with flexibility.

“We’re extremely pleased to work with Google Cloud on this project, which will have a big and positive impact on our ability to deliver for the Rubin community,” says Bob Blum, acting director of operations for Rubin Observatory.

“We don’t have to build the infrastructure ourselves; it’s well-established and has been tested and improved for other users, so we benefit from that,” explains Hsin-Fang Chiang, data management science analyst and engineer for Rubin Observatory and one of the early users of the IDF.

The Rubin Observatory will use Google Cloud Storage and Google Kubernetes Engine, and Google Workspace will enable productivity and collaboration.

Image: Rubin Observatory at sunset, lit by a full moon.

Caltech researcher discovers new comet with AI

While comet sightings are relatively common, the discovery of a new comet is rare. The Minor Planet Center, which tracks the solar system’s minor bodies in space, cataloged fewer than 100 new comets in 2019, as opposed to about 21,000 new minor planets. In late August 2020, Dr. Dmitry Duev, a research scientist in the Astronomy department at Caltech, began a pilot program using Google Cloud’s tools to identify the objects observed by the Zwicky Transient Facility (ZTF) at the Palomar Observatory in Southern California. The ZTF scans the Northern skies every clear night, measuring billions of astronomical objects and registering millions of transient events. Using these images, Duev trained an ML model on Google Cloud to pinpoint comets with over 99% accuracy. On October 7, the model identified Comet C/2020 T2, the first such discovery attributed to artificial intelligence. This achievement makes the discovery of new comets possible at a greatly accelerated rate.
“Having a fast and accurate way to classify objects we see in the sky is revolutionizing our field,” Duev says. “It’s like having a myriad of highly trained astronomers at our disposal 24/7.”

Image: The orbit of comet C/2020 T2 as of October 7, 2020. Image credit: NASA/JPL-Caltech/D. Duev.

Interested in using Google Cloud to unlock the secrets of the universe?

These are just a few of the fascinating projects we’re working on with our customers in astronomy. In April 2019, the Event Horizon Telescope, a virtual combination of eight radio telescopes from all over the world, used Google Cloud virtual machine (VM) instances to produce the first image of a supermassive black hole. And, since 2018, Google has also been working in partnership with the Frontier Development Lab on applying machine learning to some of NASA’s most challenging problems in our universe: forecasting floods here on Earth, finding minerals on the moon to support a permanent base there, and predicting solar flares that can interrupt satellite communications.

To start or ramp up your own project, we offer research credits to academics using Google Cloud for qualifying projects in eligible countries. You can find our application form on Google Cloud’s website or contact our sales team. To learn more about powering research in astronomy and other fields, register for the Google Cloud Public Sector Summit, which features many research sessions. The sessions launch December 8-9 and will also be available on demand.
Source: Google Cloud Platform

Keeping students, universities and employers connected with Cloud SQL

Editor’s note: Today we’re hearing from Handshake, an innovative startup and platform that partners with universities and employers to ensure that college students have equal access to meaningful career opportunities. With over 7 million active student users, 1,000 university partners, and 500,000 employer partners, it’s now the leading early career community in the U.S. Here’s how they migrated to Google Cloud SQL.

At Handshake, we serve students and employers across the country, so our technology infrastructure has to be reliable and flexible to make sure our users can access our platform when they need it. In 2020, we expanded our online presence, adding virtual solutions and establishing new partnerships with community colleges and bootcamps to increase career opportunities for our student users.

These changes and our overall growth would have been harder to implement on Heroku, our previous cloud service platform. Our website application, running on Rails, uses a sizable cluster and PostgreSQL as our primary data store. As we grew, we found Heroku increasingly expensive at scale. To reduce maintenance costs, boost reliability, and provide our teams with increased flexibility and resources, Handshake migrated to Google Cloud in 2018, choosing to have our data managed through Google Cloud SQL.

Cloud SQL freed up time and resources for new solutions

This migration proved to be the right decision. After a relatively smooth migration over a six-month period, our databases are completely off of Heroku now. Cloud SQL is at the heart of our business. We rely on it for nearly every use case, continuing with a sizable cluster and using PostgreSQL as our sole owner of data and source of truth. Virtually all of our data, including information about our students, employers, and universities, is in PostgreSQL. Anything on our website is translated to a data model that’s reflected in our database.

Our main web application uses a monolithic database architecture: an instance with one primary and one read replica, with 60 CPUs, almost 400 GB of memory, and 2 TB of storage, of which 80 percent is utilized.

“Cloud SQL is at the heart of our business, providing our startup with enterprise-level features,” says Rodney Perez, infrastructure engineer.

Several Handshake teams use the database, including the Infrastructure, Data, Student, Education, and Employer teams. The Data team usually interacts with the transactional data, writing pipelines that pull data out of PostgreSQL and load it into BigQuery or Snowflake. We run a separate replica for all of our databases, specifically for the Data team, so they can export without a performance hit. With most managed services, there will always be maintenance that requires downtime, but with Cloud SQL, any necessary maintenance is easy to schedule. If the Data team needs more memory, capacity, or disk space, our Infrastructure team can coordinate and decide whether we need a maintenance window or a similar approach that involves zero downtime.

We also use Memorystore as a cache and heavily leverage Elasticsearch. Our Elasticsearch index system uses a separate PostgreSQL instance for batch processing. Whenever records change inside our main application, we send a Pub/Sub message that the indexers queue off of, and they use that database to help with the processing, putting the information into Elasticsearch and creating those indices.
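As a rough illustration of what a Pub/Sub-driven indexer like that can look like, here is a minimal sketch. The subscription, index, and field names are hypothetical (not Handshake’s actual pipeline), and it assumes the elasticsearch-py 8.x client:

```python
# Illustrative sketch of a Pub/Sub-driven Elasticsearch indexer.
# Subscription, index, and field names are hypothetical, not
# Handshake's actual pipeline. Assumes the elasticsearch-py 8.x client.
import json

from elasticsearch import Elasticsearch
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"                   # hypothetical
SUBSCRIPTION_ID = "record-changes-indexer"  # hypothetical

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)


def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    """Index one record-change event, then acknowledge it."""
    change = json.loads(message.data.decode("utf-8"))
    # Upsert the changed record into its index so searches stay fresh.
    es.index(
        index=change["table"],      # e.g. "jobs" or "events"
        id=change["record_id"],
        document=change["fields"],
    )
    message.ack()


# Streaming pull: blocks while messages are delivered to the callback.
streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result()
except KeyboardInterrupt:
    streaming_pull.cancel()
```

Keeping the indexer as a separate subscriber means the main application only has to publish a change event; indexing capacity can then scale independently of the web tier.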
Nimble, flexible, and planning for the future

With Cloud SQL managing our databases, we can devote resources toward creating new services and solutions. If we had to run our own PostgreSQL cluster, we’d need to hire a database administrator. If we were instead setting up a PostgreSQL instance on a Compute Engine virtual machine, without Cloud SQL’s service-level agreement (SLA) promises, our team would have to double in size to handle the work that Google Cloud now manages. Cloud SQL also offers automatic provisioning and storage capacity management, saving us additional valuable time.

We’re generally far more read-heavy than write-heavy, so our future plans include offloading more of our reads to read replicas and keeping the primary just for writes, with PgBouncer in front of the database to decide where to send each query (see the sketch below).
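One hedged sketch of what that split can look like: PgBouncer itself pools connections rather than parsing SQL, so a common pattern is to expose two PgBouncer pools, one backed by the primary and one by a replica, and let the application pick the pool per query. All hostnames and database names below are hypothetical:

```python
# Illustrative sketch of read/write splitting behind PgBouncer.
# PgBouncer pools connections but does not inspect SQL, so the
# application chooses the pool: one PgBouncer database entry points at
# the primary, another at a read replica. All names are hypothetical.
import psycopg2

PRIMARY_DSN = "host=pgbouncer.internal port=6432 dbname=app_rw user=app"
REPLICA_DSN = "host=pgbouncer.internal port=6432 dbname=app_ro user=app"


def run_query(sql, params=(), readonly=False):
    """Route reads to the replica pool and writes to the primary pool."""
    dsn = REPLICA_DSN if readonly else PRIMARY_DSN
    with psycopg2.connect(dsn) as conn:  # commits on successful exit
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall() if readonly else None


# Read-heavy traffic is offloaded to the replica...
rows = run_query("SELECT id, name FROM students LIMIT 10", readonly=True)
# ...while writes stay on the primary.
run_query("UPDATE students SET active = true WHERE id = %s", (42,))
```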
We are also exploring committed use discounts to cover a good baseline of our usage. We still want the flexibility to cut costs and reduce our usage where possible, and to realize some of those initial savings right away. We’d also like to split up the monolith into smaller databases to reduce the blast radius, so that each can be tuned more effectively to its use case.

With Cloud SQL and related services from Google Cloud freeing time and resources for Handshake, we can continue to adapt and meet the evolving needs of students, colleges, and employers.

Read more about Handshake and the solutions we found in Cloud SQL.

Source: Google Cloud Platform

Enabling Microsoft-based workloads with file storage options on Google Cloud

Enterprises are rapidly moving Microsoft and Windows-based workloads to the cloud to reduce license spend and embark on modernization strategies to fully leverage the power of cloud-native architecture. Today’s business climate requires agility, elasticity, scale, and cost optimization, all of which are far more difficult to attain by operating out of data centers. Google Cloud offers an enterprise-grade experience for Microsoft-based services and tools.

Many Windows-based workloads require a Server Message Block (SMB) file service component. For example, highly available SAP application servers running in Windows Server clusters need SMB file servers to store configuration files and logs centrally. The COVID-19 pandemic has resulted in increased demand for virtual desktop solutions to enable workers to adapt to the sudden necessity of working remotely. Those virtual desktop users often require access to SMB file servers to store documents and to collaborate with coworkers.

Fortunately, there are numerous options for SMB file services in Google Cloud that meet the varying needs of Microsoft shops. They fall into three categories: fully managed, semi-managed, and self-managed services. In this post, we’ll examine several options across those three buckets. (Note: this is by no means an exhaustive list of SMB file service providers for Google Cloud. Rather, this is a brief review of some of the common ones.)

Fully managed SMB file services

For many enterprises, reducing operational overhead is a key objective of their cloud transformation. Fully managed services provide the capabilities and outcomes without requiring IT staff to worry about mundane tasks like software installation and configuration, application patching, and backup. These managed SMB file service options let customers get their Windows applications and users to work expeditiously, reducing toil and risk. (Note that these are managed partner-provided services, so make sure to check the region you’ll be using to ensure availability.)

NetApp Cloud Volumes Service

If you work in IT and have ever managed, used, or thought about storage, chances are you’re familiar with NetApp. NetApp has been providing enterprise-grade solutions since 1992. With NetApp Cloud Volumes Service (CVS), you get highly available, cloud-native, managed SMB services that are well integrated with Google Cloud. Storage volumes can be sized from 1 to 100 TB to meet the demands of large-scale application environments, and the service includes tried-and-true NetApp features like automated snapshots and rapid volume provisioning. It can be deployed right from the Google Cloud Marketplace, managed in the Google Cloud console, supported by Google, and paid for in your Google Cloud bill.

Dell Technologies PowerScale

Dell Technologies is another leader in the enterprise storage market, and we have partnered with them to offer PowerScale on Google Cloud. PowerScale leverages an all-flash architecture for blazing-fast storage operations, and it is backward-compatible, allowing you to choose between PowerScale all-flash nodes and Isilon nodes in all-flash, hybrid, or archive configurations. The OneFS file system boasts a maximum of 50 PB per namespace; this thing scales! And as with NetApp, PowerScale in Google Cloud includes enterprise-grade features like snapshots, replication, and hybrid integration with on-premises storage.
It’s tightly integrated with Google Cloud: it can be found in the Google Cloud Marketplace, is integrated with the Google Cloud console, and is billed and supported directly by Google.

Both of these managed file storage products support up to SMBv3, making them outstanding options for supporting Windows workloads without a lot of management overhead.

Semi-managed SMB file services

Not everyone wants fully managed SMB services. While managed services take a lot of work off your plate, as a general rule they also reduce the ways in which you can customize the solution to meet your particular requirements. Therefore, some customers prefer semi-managed services, like the storage services below, so they can tailor the configurations to the exact specifications their Windows workloads need.

NetApp Cloud Volumes ONTAP

Like the fully managed NetApp Cloud Volumes Service, NetApp Cloud Volumes ONTAP (CVO) gives you the familiar features and benefits you’re likely used to with NetApp in your data center, including SnapMirror. However, as a semi-managed service, it’s well suited for customers who need enhanced control and security of their data on Google Cloud. CVO deploys into your Google Cloud virtual private cloud (VPC) on Google Compute Engine instances, all within your own Google Cloud project(s), so you can enforce policies, firewall rules, and user access as you see fit to meet internal or external compliance requirements. You will need to deploy CVO yourself by following NetApp’s step-by-step instructions. In the Marketplace, you get your choice of a number of CVO price plans, each with varying SMB storage capacity (2 TB to 368 TB) and availability. NetApp Cloud Volumes ONTAP is available in all Google Cloud regions.

Panzura Freedom Hybrid Cloud Storage

Panzura Freedom is a born-in-the-cloud hybrid file service that allows global enterprises to store, collaborate on, and back up files. It presents a single, geo-distributed file system called Panzura CloudFS that’s simultaneously accessible from your Google Cloud VPCs, corporate offices, on-premises data centers, and other clouds. The authoritative data is stored in Google Cloud Storage buckets and cached in Panzura Freedom Filers deployed locally, giving your Windows applications and users high-performing access to the file system. Google Cloud’s global fiber network and 100+ points of presence (PoPs) reduce global latency to ensure fast access from anywhere. Panzura can be found in the Google Cloud Marketplace as well.

Self-managed SMB file services

In some cases, managed services will not meet all the requirements, and not just technical ones. For example, your industry might be subject to a compliance regulation for which none of the managed services are certified. If you consider all of the fully managed and semi-managed SMB file service options and none of them are just right for your budget and requirements, don’t worry: you still have the option of rolling your own Windows SMB file service on Google Cloud. This approach gives you the most flexibility of all, along with the responsibility of deploying, configuring, securing, and managing it all. Don’t let that scare you, though: these options are likely very familiar to your Microsoft-focused staff.

A Windows SMB file server on a Google Compute Engine instance

This option is quite simple: you deploy a Compute Engine instance running your preferred version of Windows Server, install the File Server role, and you’re off to the races.
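As a rough sketch of that first step, here is how the instance itself might be created with the Compute Engine Python client library. The project, zone, and instance names are hypothetical, and the File Server role still has to be installed from within Windows once the VM boots:

```python
# Sketch: create a Windows Server VM to act as an SMB file server.
# Project, zone, and names are hypothetical; the File Server role is
# installed afterwards from within Windows.
from google.cloud import compute_v1

PROJECT, ZONE = "my-project", "us-central1-a"  # hypothetical

instance = compute_v1.Instance()
instance.name = "smb-file-server-1"
instance.machine_type = f"zones/{ZONE}/machineTypes/n2-standard-4"

boot_disk = compute_v1.AttachedDisk()
boot_disk.boot = True
boot_disk.auto_delete = True
boot_disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
    source_image="projects/windows-cloud/global/images/family/windows-2019",
    disk_size_gb=100,
)
instance.disks = [boot_disk]

nic = compute_v1.NetworkInterface()
nic.network = "global/networks/default"
instance.network_interfaces = [nic]

client = compute_v1.InstancesClient()
operation = client.insert(project=PROJECT, zone=ZONE, instance_resource=instance)
operation.result()  # block until the instance is created
```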
You’ll have all the native features of Windows at your disposal. If you’ve extended or federated your on-premises Active Directory into Google Cloud or are using the Managed Service for Active Directory, you’ll be able to apply permissions just as you do on-prem.

Persistent Disks add a great deal of flexibility to Windows file servers. You can add or expand Persistent Disks to increase the storage capacity and disk performance of your SMB file servers with no downtime. Although a single SMB file server is a single point of failure, the native protections and redundancies of Compute Engine make it unlikely that a failure will result in extended downtime. If you choose to use Regional Persistent Disks, your disks will be continuously replicated to a different Google Cloud zone, adding an additional measure of protection and rapid recoverability in the event of a VM or zone failure.

Windows clustering

If your requirements dictate that your Windows file services cannot go down, a single Windows file server will not do. Fortunately, there’s a solution: Windows Failover Clustering. With two or more Windows Compute Engine instances and Persistent Disks, you can build a highly available SMB file cluster that can survive the failure of Persistent Disks, VMs, the OS, or even a whole Google Cloud zone with little or no downtime. There are two different flavors of Windows file clusters: File Server Cluster and Scale-Out File Server (SOFS).

Windows file server clusters have been around for about 20 years. The basic architecture is two Windows servers in a Windows Failover Cluster, connected to shared storage such as a storage area network (SAN). These clusters are active-passive in nature: at any given time, only one of the servers in the cluster can access the shared storage and provide file services to SMB clients. Clients access the services via a floating IP address, front-ended by an internal load balancer. In the event of a failure of the active node, the passive node will establish read/write access to the shared storage, bind the floating IP address, and launch file services. In a cloud environment, physical shared storage devices cannot be used for cluster storage. Instead, Storage Spaces Direct (S2D) may be used. S2D is a clustered storage system that combines the persistent disks of multiple VMs into a single, highly available, virtual storage pool. You can think of it as a distributed virtual SAN.

Scale-Out File Server (SOFS) is a newer and more capable clustered file service role that also runs in a Windows Failover Cluster. Like a Windows File Server Cluster, SOFS makes use of S2D for cluster storage. Unlike a Windows File Server Cluster, SOFS is an active-active file server. Rather than presenting a floating IP address to clients, SOFS creates separate A records in DNS for each node in the SOFS role. Each node has a complete replica of the shared dataset and can serve files to Windows clients, making SOFS both vertically and horizontally scalable. Additionally, SOFS has some newer features that make it more resilient for application servers.

As mentioned before, both Windows File Server Clusters and SOFS depend on S2D for shared storage. The process of installing S2D on Google Cloud virtual machines is described here; the chosen SMB file service role may be installed afterwards. Check out the process of deploying a file server cluster role here, and the process for an SOFS role here.
Scale-Out File Server or File Server Cluster?

File Server Clusters and SOFS are alike in that they provide highly available SMB file shares on S2D. SOFS is a newer technology that provides higher throughput and more scalability than a File Server Cluster. However, SOFS is not optimized for the metadata-heavy operations common in end-user file workloads (opening, renaming, editing, copying, etc.). Therefore, in general, choose a File Server Cluster for end-user file services and choose SOFS when your applications need SMB file services. See this page for a detailed comparison of features between File Server Cluster (referred to there as “General Use File Server Cluster”) and SOFS.

Which option should I choose?

We’ve described several good options for Microsoft shops to provide their Windows workloads and users access to secure, high-performing, and scalable SMB file services. How do you choose which one is best suited for your particular needs? Here are some decision criteria you should consider:

- Are you looking to simplify your IT operations and offload operational toil? If so, look at the fully managed and semi-managed options.
- Do you have specialized technical configuration requirements that aren’t met by a managed service? Then consider rolling your own SMB file service solution as a single Windows instance or one of the Windows cluster options.
- Do you require multi-zone, fully automated high availability? If so, NetApp Cloud Volumes ONTAP and the single-instance Windows file server are off the table; they run in a single Google Cloud zone.
- Do you have a requirement for a particular Google Cloud region? If so, you’ll need to verify whether NetApp Cloud Volumes Service and NetApp Cloud Volumes ONTAP are available in the region you require. As partner services that require specialized hardware, these two services are available in many, but not all, Google Cloud regions today.
- Do you require hybrid storage capabilities spanning on-premises and cloud? If so, all of the managed options offer hybrid capabilities.
- Is your budget tight? If so, and if you’re OK with some manual planning and work to minimize the downtime that’s possible with any single point of failure, then a single Windows Compute Engine instance file server will do fine.
- Do you require geo-diverse disaster recovery? You’re in luck: every option described here offers a path to DR.

What next?

This post serves as a brief overview of several options for Windows file services in Google Cloud. Take a closer look at the ones that interest you. Once you’ve narrowed it down to the top candidates, you can go through the Marketplace pages (for the managed services) to get more info or start the process of launching the service. The self-managed options above include links to Google Cloud-specific instructions to get you started, then general Microsoft documentation to deploy your chosen cluster option.
Source: Google Cloud Platform

Pub/Sub makes scalable real-time analytics more accessible than ever

These days, real-time analytics has become critical for business. Automated, real-time decisions based on up-to-the-second data are no longer just for advanced, tech-first companies; they are becoming a basic way of doing business. According to IDC, more than a quarter of all data created will be real-time within the next five years. One factor we see driving this growth is competitive pressure to improve service and user experience quality. Another is the consumerization of many traditional businesses, where many functions that used to be performed by agents are now done by consumers themselves. Now, every bank, retailer, and service provider needs a number of user interfaces, from internal apps to mobile and web apps. These interfaces not only require fresh data to operate but also produce transaction and interaction data at unprecedented scale.

Real-time data is not just about application features. It is fundamentally about scaling operations to deliver great user experiences: up-to-date systems monitoring, alerts, customer service dashboards, and automated controls for anything from industrial machinery to customer service operations to consumer devices. It can accelerate the path from data insights to action and, in turn, increase operational responsiveness.

“With Google Cloud, we’ve been able to build a truly real-time engagement platform,” says Levente Otti, Head of Data, Emarsys. “The norm used to be daily batch processing of data. Now, if an event happens, marketing actions can be executed within seconds, and customers can react immediately. That makes us very competitive in our market.”

Real-time analytics all starts with messaging

At Google, we’ve contended with the challenge of creating real-time user experiences at vast scale from the early days of the company. A key component of our solution for this is Pub/Sub, a global, horizontally scalable messaging system. For over a decade, Google products, including Ads, Search, and Gmail, have been using this infrastructure to handle hundreds of millions of events per second. Several years ago, we made this system available to the world as Cloud Pub/Sub.

Pub/Sub is uniquely easy to use. Traditional messaging middleware offered many of the same features, but was not designed to scale horizontally or to be offered as a service. Apache Kafka, the open-source stream processing platform, solved the scalability problem by creating a distributed, partitioned log that supports horizontally scalable streaming writes and reads, and managed services inspired by the same idea have sprung up. But because these services are generally based on the notion of a fixed, local resource, such as a partition or a cluster, they still leave users to solve the problems of globally distributing data and managing capacity.

Pub/Sub took automated capacity management to an extreme: data producers need not worry about the capacity required to deliver data to subscribers, with up to 10,000 subscribers per topic supported. In fact, consumers pay for the capacity needed to read the data independently of the data producers. The global nature of Pub/Sub is unique, with a single endpoint resolving to nearby regions for fast persistence of data. On the other side, subscribers can be anywhere and receive a single stream of data aggregated from across all regions. At the same time, users retain precise control over where the data is stored and how it gets there.
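To make the publishing side concrete, here is a minimal sketch against that single global endpoint using the Python client library; the project and topic names are hypothetical:

```python
# Minimal publish sketch; project and topic names are hypothetical.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "clickstream-events")

# publish() batches messages behind the scenes and returns a future
# that resolves to the server-assigned message ID once persisted.
future = publisher.publish(
    topic_path,
    b'{"user_id": 42, "action": "page_view"}',
    source="web",  # arbitrary attributes ride along as metadata
)
print(f"Published message {future.result()}")
```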
This makes Pub/Sub a convenient way to make data available to a broad range of applications on Google Cloud and elsewhere, from ingestion into BigQuery to automated, real-time, AI-assisted decision making with Dataflow, giving data practitioners an easy way to create an integrated feedback loop.

“Our clients around the world are increasingly looking for quality real-time data within the cloud,” said Trey Berre, CME Group Global Head of Data Services. “This innovative collaboration with Google Cloud will not only make it easier for our clients to access the data they need from anywhere with an internet connection, but will also make it easier than ever to integrate our market data into new cloud-based technologies.”

Making messaging more accessible

In 2020, we have focused on making Pub/Sub even simpler. We observed that some of our users had to adapt their application design to the guarantees made by the service. Others were left building their own cost-optimized Apache Kafka clusters to achieve ultra-low-cost targets. To address these pain points, we have made Pub/Sub much easier to use for several use cases and introduced an offering that achieves an order of magnitude lower total cost of ownership (TCO) for our customers.

The cost-efficient ingestion option

We set out to build a version of Pub/Sub for customers who needed a horizontally scalable messaging service at a cost typical of cost-optimized, self-managed, single-zone Apache Kafka or similar OSS systems. The result is Pub/Sub Lite, which can match or even improve upon the TCO of running your own OSS solution. In comparison to Pub/Sub itself, Pub/Sub Lite is as much as ten times cheaper, as long as the single-zone availability and capacity management models work for your use case. This managed service is suitable for a number of use cases, including:

- Security log analysis, which is often a cost center, and where not every event must be scanned to detect threats
- Search index and serving cache updates, which are commonly “best effort” cost-saving measures and don’t require a highly reliable messaging service
- Gaming and media behavior analytics, where low price is often key to getting startups off the ground

This guide to choosing between Pub/Sub and Pub/Sub Lite and the pricing comparisons can help you decide if Lite is for you.

Comprehensive and enterprise-ready messaging that scales

This year, Pub/Sub added a number of features that will allow our users to simplify their code significantly.

Scalable message ordering: Scalable in-order message delivery is a tough problem and critical for many applications, from general change data capture (CDC) to airplane operations. We were able to make this work with only minimal changes to our APIs and without sacrificing scalability or on-demand capacity. Your applications that require ordering can now be much less stateful, and thus simpler to write and operate. There are no shards or partitions; every message for a key, such as a customer ID, arrives in order, reliably.

Dead-letter topics automatically detect messages that repeatedly cause applications to fail and put them aside for manual, offline debugging. This saves on processing time and keeps processing pipeline latency low.

Filters automatically drop messages your application does not care to receive, saving on processing and egress costs. Filters are configuration, so there is no need to write code or deploy an application. It’s that simple.
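As an illustrative sketch of the ordering feature, and of the publisher flow control covered below, both can be enabled with client configuration alone. The project, topic, limits, and ordering-key values here are hypothetical:

```python
# Sketch: ordered publishing with publisher-side flow control.
# Project, topic, limits, and ordering-key values are hypothetical.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(
        enable_message_ordering=True,
        # Publisher flow control (covered below): block publish() calls
        # once 1,000 messages or 10 MB are outstanding, rather than
        # overwhelming the client's network stack and losing data.
        flow_control=pubsub_v1.types.PublishFlowControl(
            message_limit=1_000,
            byte_limit=10 * 1024 * 1024,
            limit_exceeded_behavior=pubsub_v1.types.LimitExceededBehavior.BLOCK,
        ),
    )
)
topic_path = publisher.topic_path("my-project", "customer-events")

# Messages sharing an ordering key (here, a customer ID) are delivered
# to subscribers in publish order; no shards or partitions to manage.
for event in ("signed_up", "verified_email", "placed_order"):
    publisher.publish(topic_path, event.encode(), ordering_key="customer-42")
```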
Data residency controls: In addition to Pub/Sub’s resource location constraints, which allow organizations to dictate where Pub/Sub stores message data regardless of where it is published, we have launched regional endpoints to give you a way of connecting to Pub/Sub servers in a specific region.

Publisher flow control (Java, Python) is perhaps the most notable of many updates to our client libraries. Flow control is another surprisingly tough problem: many applications require multiple threads to publish data concurrently, which can overwhelm the client machine’s network stack and lose data unless the threads coordinate. With flow control, you can achieve very high, sustainable publish rates safely.

Also of note are configurable retry policies and subscription detachment. As one of our users recently said: “I’m going to go and use this right now.”

What’s next

We will continue to make Pub/Sub and our real-time processing tools easier to use in the coming months. You can stay up to date by watching our release notes. In the meantime, we invite you to learn more about how to get started and everything you can do with Google Cloud’s real-time stream analytics services in our documentation or by contacting the Google Cloud sales team.
Source: Google Cloud Platform