Leaf Space: Enabling next-gen satellites on Google Cloud

There is a revolution happening in the space industry. Spurred on by low rocket launch costs, component miniaturization, and digitalization, more than 15,000 satellites will launch over the next decade¹ (more than have been launched throughout the prior six decades), studying the Earth’s weather and environment, helping people and things communicate in remote locations, monitoring critical infrastructure, backhauling cellular traffic, and performing other important tasks. And if all goes as planned, Leaf Space’s “Ground Station as a Service,” which runs on Google Cloud, will be there to help.

Most of these new satellites will be launched into low Earth orbit (LEO). Unlike traditional broadcast TV satellites, which operate in a geostationary (GEO) orbit approximately 36,000 kilometers from the Earth’s surface, LEO satellites are much closer to Earth, typically at an altitude of 500 to 2,000 kilometers. For communications missions, this reduces latency (the time it takes a signal to travel from the ground to the satellite and back) and increases capacity density (the number of bits per square kilometer that the satellite can deliver). For Earth observation satellites, it increases the resolution of images and other observations the satellite is making. LEO satellites are also cheaper and faster to manufacture and launch.

However, there is a downside to using LEO. Unlike GEO satellites, which appear to be fixed in the sky, LEO satellites move relative to the Earth’s surface. As a result, for a specific LEO satellite, there is no single ground antenna that will always have that satellite within its field of view. Uninterrupted connectivity between the ground and the satellite requires multiple antennas distributed around the world. The number of antennas can be reduced if interruptions to space-to-ground communications are acceptable, but even in that scenario, several sites are desirable.

This creates a problem for new satellite operators. Launching even one satellite requires a worldwide network of antennas, and most of the time, those antennas will sit idle while the satellite is not overhead.

Ground Station as a Service

Enter companies like Leaf Space. Since it was founded in 2014, Leaf Space’s mission has been to simplify access to space with global infrastructure composed of antennas, processing equipment, and software that it offers as a service to satellite operators. A satellite operator can lease time on Leaf Space’s ground network and, when a satellite is within its field of view, use Leaf Space’s antennas and other equipment to communicate between the satellite and the ground. And because an antenna can be shared among many satellites and even satellite operators via a reservation system (much like a conference room that’s reserved by different teams over the course of the day), this lowers operating costs through more efficient utilization of resources. This model is known as Ground Station as a Service (GSaaS). Today, Leaf Space operates a network of eight such GSaaS stations across Europe and New Zealand, with plans to expand around the world.
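The LEO latency advantage described earlier is easy to sanity-check with a quick back-of-the-envelope calculation. The sketch below is purely illustrative: it assumes straight-line, speed-of-light propagation to a satellite directly overhead and ignores processing, queuing, and routing delays, so real-world figures are higher.

```python
# Back-of-the-envelope round-trip propagation delay for LEO vs. GEO altitudes.
# Simplified: satellite directly overhead, speed-of-light propagation only,
# no processing, queuing, or routing delays.

SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometers per second

def round_trip_ms(altitude_km: float) -> float:
    """Ground -> satellite -> ground propagation time in milliseconds."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

for label, altitude_km in [("LEO, 500 km", 500),
                           ("LEO, 2,000 km", 2_000),
                           ("GEO, ~36,000 km", 36_000)]:
    print(f"{label}: ~{round_trip_ms(altitude_km):.1f} ms round trip")

# Prints roughly 3 ms and 13 ms for the LEO altitudes vs. ~240 ms for GEO.
```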
Leaf Space software operating on Google Cloud

Network Cloud Engine (NCE) is the brain of Leaf Space’s GSaaS solution. NCE manages multiple satellite missions by ingesting relevant mission constraints, automatically optimizing a schedule of contacts between the satellites and the antennas, orchestrating the activity of the network by automatically configuring signal processing equipment at the ground stations, and enabling control and visibility for satellite mission control center operations teams.

NCE orchestrates the entire ground station network operation, while edge resources handle baseband processing (the process of extracting bits and bytes from an RF signal). You can see the flow of data from a satellite to its mission control center (and vice versa) in the diagram below.

NCE runs entirely on Google Cloud. A member of Google Cloud’s startup program, Leaf Space chose Google Cloud for the wealth and maturity of its services, its worldwide regions, and its high-speed network backbone. NCE uses Google Cloud services for network connectivity, data transfer, processing, and software control and orchestration. These products saved Leaf Space’s engineering team several months of development time relative to implementing these capabilities from scratch, and let the team start commercial operations and continuously roll out upgrades and new features on a weekly basis.

Key solution design decisions and performance metrics

In designing NCE, Leaf Space took advantage of Google Cloud services to establish a secure, reliable, scalable, efficient, and easy-to-maintain system by virtualizing major components of a typical ground station network backbone and avoiding any human-in-the-loop processes. NCE is composed of several components. The main ones, such as scheduling, ground station control, data transfer, routing, and APIs, are built on Google Kubernetes Engine (GKE). Additional specific tasks are handled through Cloud Functions, Cloud Load Balancing, and Compute Engine.

Building NCE on Google Cloud has enabled Leaf Space to achieve the following objectives:

- Leverage automated continuous deployment via Cloud Build and source repositories
- Utilize a multi-server distributed system with liveness probes to ensure zero downtime
- Load-balance traffic with fast autoscaling for high loads
- Avoid wasting compute resources on low-load services
- Eliminate operating system maintenance tasks, allowing more focus on development

Thanks to Google Cloud services, Leaf Space also reduced the time-to-market of new features from weeks to days by leveraging automated tools for code management: any time a new tag is pushed, the code is validated, a Docker container is built, and it is put into production on GKE. A key benefit of this approach is that the team can deploy new capabilities with zero downtime, which matters for a system that must run at high SLAs.

The Leaf Space solution uses Cloud Functions and Google Kubernetes Engine, Google products that are tightly integrated with other services such as Pub/Sub and Cloud Storage.
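As an illustration of how these pieces typically fit together (a minimal sketch, not Leaf Space’s actual code: the bucket, topic, and message format are hypothetical), a Pub/Sub-triggered Cloud Function could parse a ground-station log event, archive it to Cloud Storage, and publish an alert when a contact fails:

```python
# Hypothetical Pub/Sub-triggered Cloud Function (Python, 1st-gen background
# function signature). Archives a ground-station log event to Cloud Storage
# and publishes an alert message when a contact is reported as failed.
import base64
import json

from google.cloud import pubsub_v1, storage

storage_client = storage.Client()
publisher = pubsub_v1.PublisherClient()

BUCKET_NAME = "example-gs-logs"                              # hypothetical bucket
ALERT_TOPIC = "projects/example-project/topics/gs-alerts"    # hypothetical topic

def handle_log_event(event, context):
    """Entry point: triggered by a message on the log Pub/Sub topic."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    # Archive the raw event to Cloud Storage, keyed by station and contact IDs.
    blob_name = f"{payload['station_id']}/{payload['contact_id']}.json"
    storage_client.bucket(BUCKET_NAME).blob(blob_name).upload_from_string(
        json.dumps(payload), content_type="application/json"
    )

    # Publish an alert if the contact did not complete successfully.
    if payload.get("status") == "FAILED":
        future = publisher.publish(ALERT_TOPIC, json.dumps(payload).encode("utf-8"))
        future.result()  # wait for the publish before the function returns
```

The same event-driven pattern extends naturally to other housekeeping tasks around the ground station network.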
This integration decreases the time needed to process logs, create alerts that monitor the GSaaS network, and provide visual analysis of the data received from the satellites. The solution is inherently scalable, making it easy for Leaf Space to add new ground stations or new customers to the production environment and to handle surges in demand.

Diagram: A single NCE ground station and user running on Google Cloud.

User experience with cloud-powered GSaaS

With Google Cloud services such as GKE, Cloud Scheduler, Memorystore for Redis, Pub/Sub, and Cloud SQL, Leaf Space was able to create a GSaaS solution that customers report is easy and straightforward to use.

To access the system, a user simply provides their spacecraft details, such as orbital parameters, launch date, and baseband configuration. NCE then creates a user account and configures the GSaaS network, including accounting for any applicable regulatory requirements, such as the user’s spectrum license. The mission constraints that the automatic scheduling service needs are set through the API, and NCE then creates an optimized schedule for communications between the satellite and the antenna network. When it comes time to establish a link between the satellite and the ground, NCE spins up the edge baseband processing chain and enables data routing between the active ground station and the user interface. Any packets that are received, demodulated, and decoded are forwarded directly, in real time, to the user or the satellite.

What happens to the downlinked data?

Once the data reaches the user, customers can perform further processing and extract useful information from it, for example monitoring weather with GPS occultation sensors, detecting ships from a SAR (Synthetic Aperture Radar) acquisition, or analyzing deforestation trends from optical images. All of these analyses can be done by the user directly in the cloud environment using Google Cloud data analytics, artificial intelligence, and machine learning services. The end product can then be easily stored and distributed to end customers.

In short, Google Cloud provides an efficient way for Leaf Space to deliver GSaaS services to the space ecosystem and further open the doors for development of the space economy. Having the entire processing chain in the cloud, from the acquisition phase (through the GSaaS) to data analytics and distribution, significantly lowers delivery latencies and allows efficient distribution of the data. Together, Leaf Space and Google Cloud look forward to enabling the next generation of LEO satellites.

If you want to learn more about how Google Cloud can help your startup, visit our startup page, apply for our Startup Program, and sign up for our monthly startup newsletter to get a peek at our community activities, digital events, special offers, and more.

A big thank you to Jai Dialani, Leaf Space Sr. Business Developer, and the entire Leaf Space team for creating this solution and contributing to this blog post.

1. Euroconsult study: https://www.euroconsult-ec.com/research/WS319_free_extract_2019.pdf
Source: Google Cloud Platform

Lighter lift-and-shifts with the new Database Migration Service

“Database migrations are super fun!” – No one ever.

There can be considerable friction in moving databases from platform to platform. If you’re doing a lift-and-shift to Google Cloud, your ability to take advantage of cloud features slows down when you have to handle all the intricacies of:

- Devising a migration strategy for safely and efficiently moving data (while managing downtime)
- Assessing the impact of the migration
- Database preparation
- Secure connectivity setup
- Data replication
- Migration validation

Beyond that, there might be manual work to rewrite database code for a new engine or rewrite applications, plus deep validation of aspects like performance impact.

It goes without saying that migration to the cloud is a complex process with many moving parts and personas involved, including a network administrator to account for infrastructure and security requirements like VPNs. Most DBAs know that one of the largest risks of migrating a database is downtime, which often prevents companies from taking on the task at all. Typically, you shut down the application, create a backup of the current database schema, perform all required update operations using migration tools, restart the application, and hope that everything works. But that approach falls apart if you can’t accept any downtime. PostgreSQL users, for example, often have very large databases, which means they could be facing hours of downtime, something that isn’t realistic for most companies.

Migration tools as a fast track

A number of tools are available to help you move data from one type of database to another, or to move data to another destination like a data warehouse or data lake. Moving critical datasets (entire databases) requires latest-generation, cloud-based migration tools that can handle data replication with ease while providing enhanced security. While we’ve seen cloud-based migration tools like Alooma, Matillion, and SnapLogic, we also know cloud migration tools need to integrate well with both the source and the target systems, enabling you to migrate databases with minimal effort, downtime, and overhead.

In 2019, Alooma joined Google Cloud, bringing Alooma one step closer to delivering a full self-service database migration experience bolstered by Google Cloud technology. Alooma helps enterprise companies streamline database migration in the cloud with a data pipeline tool that simplifies moving data from multiple sources to a single data warehouse. The addition of Alooma and its expertise in enterprise and open source databases has been critical to bringing additional migration capabilities to Google Cloud.

Then, in November 2020, Google Cloud launched the new, serverless Database Migration Service (DMS) as part of our vision for meeting these modern needs in a user-friendly, fast, and reliable way for migration to Cloud SQL. While Alooma is an ETL platform for data engineers to build flexible streaming data pipelines to a cloud data warehouse for analytics, DMS is a database migration service for DBAs and IT professionals to migrate their databases to the cloud as part of their larger migration goals.

Database Migration Service is now GA

Database Migration Service makes it easy for you to migrate your data to Google Cloud. It’s a fully managed service that helps you lift and shift your MySQL and PostgreSQL workloads into Cloud SQL. You can migrate from on-premises, from self-hosted databases in Google Cloud, or from another cloud, and get a direct path to unlocking the benefits of Cloud SQL.
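Before creating a migration job, it can also help to run a quick pre-flight check against the source. The sketch below is a generic example, separate from anything DMS does for you: it uses the third-party PyMySQL package with placeholder connection details to confirm that the source is reachable and that a couple of settings commonly needed for continuous, binlog-based MySQL migration are in place. The DMS documentation has the authoritative prerequisite list.

```python
# Quick pre-flight check of a MySQL source before a migration.
# Uses the third-party PyMySQL package; connection details are placeholders.
# See the DMS documentation for the full, authoritative prerequisites.
import pymysql

conn = pymysql.connect(
    host="203.0.113.10",          # placeholder source host
    port=3306,
    user="migration_user",        # placeholder migration account
    password="example-password",
)

def variable(cursor, name):
    """Return the value of a MySQL server variable, or None if unset."""
    cursor.execute("SHOW VARIABLES LIKE %s", (name,))
    row = cursor.fetchone()
    return row[1] if row else None

with conn.cursor() as cur:
    checks = {
        "log_bin": "ON",          # binary logging must be enabled
        "binlog_format": "ROW",   # row-based binlog is typically required
    }
    for name, expected in checks.items():
        actual = variable(cur, name)
        status = "OK" if actual == expected else "CHECK"
        print(f"{status}: {name} = {actual} (expected {expected})")

conn.close()
```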
The focus of DMS is to manage the migration of your database schema, metadata, and the data itself. It streamlines the networking workflow, manages the initial snapshot and ongoing replication, provides status for the migration operation, and supports both one-time and continuous migrations. This lets you cut over with minimal downtime.

That’s a lot to absorb, but here are the three main things I want you to take away:

- With DMS, you get a simple, integrated experience that guides you through every step of the migration (not just a combination of tools to perform the assessment and migration).
- It’s serverless. You don’t need to deploy, manage, or monitor instances that run the migration. The onus of deciding on appropriate sizing, monitoring the instance, and ensuring that compute and storage are sufficient is on Google Cloud. Serverless migrations eliminate surprises and are performant at scale.
- It’s free.

Supported source databases include:

- AWS RDS MySQL 5.6, 5.7, 8.0
- Self-managed MySQL (on-premises, or on any cloud VM) 5.5, 5.6, 5.7, 8.0
- Cloud SQL for MySQL 5.6, 5.7, 8.0

Supported destination databases include:

- Cloud SQL for MySQL 5.6, 5.7, 8.0

Gabe Weiss, Developer Advocate for Cloud SQL, has gone in-depth on the various migration scenarios, how DMS works, and how to prepare Postgres instances for migration, so check out his content, as well as best practices around homogeneous migrations. For now, I’ll give you a quick sneak peek by walking you through the basic DMS experience for a new migration job.

A stroll through DMS

You can access DMS from the Google Cloud console under Databases. To get started, you’ll create a migration job. This represents the end-to-end process of moving your source database to a Cloud SQL destination.

Define your migration job

First, let’s define what kind of migration the job will run. I want to move a MySQL database I am running on Google Compute Engine to Cloud SQL, and I can choose between a one-time migration and continuous replication. For minimal downtime, I select continuous. Once I define my job, DMS shows me the source and connectivity configuration required to connect to and migrate the source database.

DMS clearly explains what you need to do by showing you the prerequisites directly in the UI. Don’t skip this step! Be sure to review these prerequisites, because you’re going to save yourself a headache during connectivity testing.

Define your source

First, you create a connection profile, a resource that represents the information needed to connect to a database. These profiles aren’t locked to an individual migration, which means you can reuse one if you want to first test out a migration, or if someone else in your organization is in charge of connecting to the database. If you’re replicating from a self-hosted database, enter the Hostname or IP address (domain or IP) and Port to access the host. If you’re replicating from a Cloud SQL database, select the Cloud SQL instance from the dropdown list.

Create a destination

The next step is to create the Cloud SQL instance to migrate the database to. This will feel familiar to anyone who has created a Cloud SQL instance before. You’ll see many of the same options, like connectivity and machine configuration. Since DMS relies on the replication technology of the database engine, you don’t have to create any resources aside from the Cloud SQL instance.

Connect your source to your target

To me, this is where the magic is, because establishing connectivity is often viewed as pretty hard.
Depending on the type of your source database and its location, you can choose among four connectivity methods:

- IP allowlists – Use this when your source database is external to Google Cloud.
- Reverse SSH tunnel – Use this to set up private connectivity through a reverse SSH tunnel on a VM in your project.
- VPCs through VPNs – Use this if your source database is inside a VPN (for example, in AWS or behind your on-premises VPN).
- VPC peering – Use this if your source database is in the same VPC on Google Cloud.

Since my source VM is in Google Cloud, I set up VPC peering. Just be sure to enable the Service Networking API to do this; it provides automatic management of the network configurations necessary for certain services.

Validate the migration job

And that’s it. I configured my source, created a Cloud SQL destination, and established connectivity between them. All that’s left is to validate the migration job setup and start my migration. Once it’s validated, I can trust that my migration will run smoothly.

You can start the job and run it immediately. Once the migration job has started, you can monitor its progress and see whether it encounters any issues. DMS will first transfer the initial snapshot of existing data, then continuously replicate changes as they happen. When the initial snapshot is migrated and continuous replication is keeping up, you can promote the Cloud SQL instance to be your primary instance and point your applications to work directly against it.

DMS guides you through the process and manages the connection between your source and Cloud SQL instance with flexible options. You can test the migration and get a reliable way to migrate, with minimal downtime, to a managed Cloud SQL instance. It’s serverless, highly performant, and free to use. If you want to give it a spin, check out the quickstart and the migration solutions guide, and let me know your thoughts and any feedback.

Looking for SQL Server migrations? You can request access to participate in the SQL Server preview.

Find me on Twitter: @stephr_wong
Source: Google Cloud Platform