Cloud CISO Perspectives: October 2021

October has been a busy month for Google Cloud. We just held our annual conference, Google Cloud Next ‘21, where we made significant security announcements for our customers of all sizes and geographies. It’s also National Cybersecurity Awareness Month, during which our security teams across Google delivered important research on new threat campaigns and product updates that provide the highest levels of account security for all users. In this month’s post, I’ll recap all of the security “Action” from Next ‘21, including product updates that deliver “secure products,” not just “security products,” and important industry momentum for tackling open source software security and ransomware.

Google Cloud Next ‘21 Recap

Google Cybersecurity Action Team: While having access to the latest, most advanced security technology is important, knowing what to transform and how best to transform it so that security becomes resilient in today’s risk and threat environment is foundational. This is why we announced the formation of the Google Cybersecurity Action Team to support the security and digital transformation of governments, critical infrastructure, enterprises, and small businesses around the world. We’re already doing a lot of this work every single day with our public and private sector customers. The Cybersecurity Action Team builds on these efforts with services and guidance across the full spectrum of what our customers need to strengthen security, from strategy to execution. Under this team, we will guide customers through the cycle of security transformation – from their first cloud adoption roadmap and implementation, through increasing their cyber-resilience preparedness for potential events and incidents, to engineering new solutions as requirements change. We describe the team’s vision in more depth in this podcast episode. If you are interested in learning more about the Google Cybersecurity Action Team, reach out to your Google Cloud account team to arrange a security briefing.

A Safer Way to Work: The way we work has fundamentally changed. Users and organizations are creating more sensitive data and information than ever before, and collaboration across organizations has become the norm. This modern way of working has many benefits, but it also creates new security challenges that legacy collaboration tools aren’t equipped to handle. During Next, we announced a new program called Work Safer to provide businesses and public sector organizations of all sizes with a hybrid work package that is cloud-first, built on a proven Zero Trust security model, and delivers up-to-date protection against phishing, malware, ransomware, and other cyberattacks. Work Safer includes best-in-class Google security products like Google Workspace, BeyondCorp Enterprise, and Titan Security Keys, along with powerful services from our cybersecurity partners CrowdStrike and Palo Alto Networks.

A Secure and Sustainable Cloud: Seeing security and sustainability come together in one announcement is uncommon, and our Unattended Project Recommender is a great example of how at Google Cloud we’re helping customers combat two pressing issues: climate change and cybersecurity. At Next we announced that the Active Assist Recommender will now include a new sustainability impact category, extending its original core pillars of cost, performance, security, and manageability. Starting with the Unattended Project Recommender, you’ll soon be able to estimate the gross carbon emissions you’ll save by removing your idle resources.
Workspace Security Updates: To further strengthen security and privacy across the Google Workspace platform, we announced four new capabilities:

- Client-side encryption for Meet: In June, we announced that client-side encryption (CSE) was available in beta for Drive, Docs, Sheets, and Slides. Now we’re bringing CSE to Google Meet, giving customers complete control over encryption keys while helping them meet data sovereignty and compliance requirements.
- Data Loss Prevention (DLP) for Chat: This is a continuation of our ongoing commitment to help organizations protect their sensitive data and information from getting into the wrong hands, without impacting the end-user experience.
- Drive labels: Now generally available, Drive labels help organizations classify files stored in Drive based on their sensitivity level.
- Additional protections against abusive content and behavior: If a user opens a file that we think is suspicious or dangerous, we’ll display a warning to help protect them and their organization from malware, phishing, and ransomware.

Distributed Cloud: From conversations with customers, we understand there are various reasons why an organization may resist putting certain workloads in the cloud; data residency and other compliance issues can be a driver. Google Distributed Cloud Hosted – one of the first products in the Distributed Cloud portfolio – builds on the digital sovereignty vision we outlined last year, supporting public-sector customers and commercial entities that have strict data residency requirements. It provides a safe and secure way to modernize an on-premises deployment.

New Invisible Security Capabilities: Over the past year, Google Cloud has been delivering on our vision of Invisible Security for our customers, where capabilities are continuously engineered into both our trusted cloud platform and market-leading products to bring the best of Google’s security to wherever your IT assets are. At Next we announced new capabilities; here are just a few, and we’ll be talking more about these next month:

- The new BeyondCorp Enterprise client connector enables identity- and context-aware access to non-web applications running in Google Cloud and non-Google Cloud environments. We are also making it easier for admins to diagnose access failures, triage events, and unblock users with the new Policy Troubleshooter feature.
- Automatic DLP is a prime example of how we are making Invisible Security a reality. It’s a game-changing capability that discovers and classifies sensitive data for all the BigQuery projects across your entire organization without you needing to do a single thing.
- Ubiquitous Data Encryption is a new solution that combines our Confidential Computing, External Key Management, and Cloud Storage products to seamlessly encrypt data as it’s sent to the cloud. Using our External Key Management solution, data can now only be decrypted and run in a Confidential VM environment, greatly limiting potential exposure. This is a groundbreaking example of how Confidential Computing and cryptography can be used to build solutions that many industries and regions with sovereignty requirements demand as they move to the cloud.

Thoughts from around the industry

OpenSSF: It’s great to see the Open Source Security Foundation announce additional funding to help the industry curb the rise in software supply chain attacks and address critical efforts like the Biden Administration’s Executive Order. Google is proud to support this new funding along with others in the industry.
The OpenSSF helps drive important work to improve security for all with projects like Security Scorecards and Allstar. I encourage every executive who wants to see meaningful improvements in their own software supply chain to get involved.

Trusted Cloud Principles: Last month, we joined the Trusted Cloud Principles initiative with many other cloud providers and technology companies. This is a great development to keep the cloud industry committed to basic human rights and the rule of law as we expand infrastructure and services around the world — all while ensuring the free flow of data, promoting public safety, and protecting privacy and data security in the cloud.

White House Ransomware Summit: Ransomware continues to be top of mind for businesses and governments of all sizes. This month we saw the White House gather representatives from 30 countries to continue combatting this growing threat through technology, finance, law enforcement, and diplomacy. To be helpful and provide insight into this form of malware, we recently released the VirusTotal Ransomware Report, analyzing 80 million ransomware samples.

Google Cloud Security Highlights

Every day we’re building enhanced security, controls, resiliency, and more into our cloud products and services. This is what we mean by our guiding principle that we can best serve our customers and the industry with secure products, not just security products. Here’s a snapshot of the latest updates and new capabilities across Google Cloud products and services since our last post.

Security: Cloud customers running high-intensity workloads (such as analytics on Hadoop) and managing their own encryption keys on top of those provided by Cloud will see better support. Keeping track of cryptographic keys is essential to managing complex systems, and the new Key Inventory Dashboard makes that arduous task much simpler. It’s also great to see the Cloud KMS PKCS #11 library, as well as capabilities for automating variable key destruction and fast key deletion. Firewalls remain an important part of security architecture, especially during migration, so we created a module within our Firewall Insights tool to help tame overly permissive firewall rules — a great benefit of a software-defined infrastructure.

Resilience: The network security portfolio secures applications from fraudulent activity, malware, and attacks. Updates to Cloud Armor, our DDoS protection and WAF service, bring four new features: integration with Google Cloud reCAPTCHA Enterprise bot and fraud management; per-client rate limiting; edge security policies; and Adaptive Protection, our ML-based, application-layer DDoS detection and WAF protection mechanism.

Sovereignty: Our EU data residency offering allows European customers to specify one of five Google Cloud regions in the EU where their data will be stored and where it will remain. Customers retain cryptographic control of their data and can even block Google administrator access thanks to the new Key Access Justifications feature.

Controls: The Policy Controller within Anthos Config Management enables the enforcement of fully programmable policies for clusters. These policies can audit and prevent changes to the configuration of your clusters to enforce security, operational, or compliance controls. The folks at USAA tell us how they use Google Cloud and security best practices to automatically onboard new hires.
We covered a lot today, and we’re excited to bring you more cybersecurity updates through the end of the year. If you’d like to have this Cloud CISO Perspectives post delivered to your inbox every month, click here to sign up. And if you missed our security sessions and spotlights at Google Cloud Next ‘21, sign up at the link to watch them on demand.
Source: Google Cloud Platform

Video walkthrough: Set up a multiplayer game server with Google Cloud

Imagine that you’re playing a video game with a friend, hosting the game on your own machine. You’re both having a great time—until you need to shut down your computer and the game world ceases to exist for everyone until you’re back online. With a multiplayer server in the cloud, you can solve this problem and create persistent, shared access that doesn’t depend on your online status. To show you how to do this, we’ve created a video that takes you through the steps to set up a private, virtual multiplayer game server with Google Cloud, with no prior experience required.

In this video, we walk through the real-world situation described above, in which one of our team members wants to create a persistent shared gaming experience with a friend. One of our training experts shows his colleague step by step how to use Compute Engine to host a multiplayer instance of Valheim from Iron Gate Studio and Coffee Stain Studios.

This tutorial doesn’t assume that you’ve done this before. Along with our in-house novice, you’ll be guided through the process of creating a virtual machine on Google Cloud and configuring it to accept connections from remote computers. Then, using Valheim as an example, we’ll show you how to set up a dedicated game server. The video also takes you through decisions about user settings and permissions, such as whether you want to allow multiple parties to manage the cloud host, and security considerations to keep in mind. We’ll talk about resource requirements and possibilities for scaling up, and break down some of the factors that will influence the cost, including a detailed explanation of the specifications we used in our walkthrough scenario.

Ready to play? Check out the Create Valheim Game Server with Google Cloud walkthrough video.
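The video works through these steps in the Cloud Console, but as a rough sketch of the same first step from the command line (the machine type, zone, and image here are assumptions, and Valheim's dedicated server defaults to UDP ports 2456-2458), the equivalent gcloud commands look something like this:

```shell
# Create a VM to host the dedicated game server (names and sizes are illustrative).
gcloud compute instances create valheim-server \
    --zone=us-central1-a \
    --machine-type=e2-standard-2 \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --tags=game-server

# Open the UDP ports the Valheim dedicated server listens on by default.
gcloud compute firewall-rules create allow-valheim \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=udp:2456-2458 \
    --target-tags=game-server
```

The video itself covers the machine size, disk, and firewall choices it recommends, along with how those choices affect cost.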
Source: Google Cloud Platform

Django ORM support for Cloud Spanner is now Generally Available

Today we’re happy to announce GA support for Google Cloud Spanner in the Django ORM. The django-google-spanner package is a third-party database backend for Cloud Spanner, powered by the Cloud Spanner Python client library. The Django ORM is a powerful standalone component of the Django web framework that maps Python objects to relational data. It provides a nice Pythonic interface to the underlying database, and includes tools for automatically generating schema changes and managing schema version history. With this integration, Django applications can now take advantage of Cloud Spanner’s high availability and consistency at scale.

We’ll follow the Django tutorial below to create a new project and start writing data to Cloud Spanner. This is a follow-up to the “Introducing Django ORM support for Cloud Spanner” blog post, which we published during the Beta launch. We have updated the tutorial to work with the Django 3.2 library.

If you’re already using Django with another database backend, you can skip down to “Migrating an existing Django project to Cloud Spanner” for instructions on switching to Cloud Spanner. You can also read the documentation here, and follow the repository here.

Changes since the Beta release

The library supports Django versions 2.2.x and 3.2.x. Both versions are long-term support (LTS) releases for the Django project. The minimum required Python version is 3.6.

NUMERIC data type support is now available. We have also added support for JSON object storage and retrieval with Django 3.2.x, but querying inside the JSONField is not supported in the current django-google-spanner release. This feature is being worked on and can be tracked here. Support for PositiveBigIntegerField, PositiveIntegerField, and PositiveSmallIntegerField was added, along with the relevant check constraints.

Installation

To use django-google-spanner, you’ll need a working Python installation and Django project. The library requires Django~=2.2 or Django~=3.2 and Python>=3.6. If you’re new to Django, see the Django getting started guide specific to the Django version you are using. For the tutorial below we will be using Django~=3.2, but the process is similar for Django~=2.2.

You’ll also need an active Google Cloud project with the Cloud Spanner API enabled. For more details on getting started with Cloud Spanner, see the Cloud Spanner getting started guide.

Django applications are typically configured to use a single database. If you’re an existing Cloud Spanner customer, you should already have a database suitable for use with your Django application. If you don’t already have a Cloud Spanner database, or want to start from scratch for a new Django application, you can create a new instance and database using the Google Cloud SDK, install the Cloud Spanner database backend package, and start a new Django project; sketches of these steps appear at the end of this section.

django-google-spanner provides a Django application named django_spanner. To use the Cloud Spanner database backend, this application needs to be the first entry in INSTALLED_APPS in your application’s settings.py file. The django_spanner application changes the default behavior of Django’s AutoField so that it generates random (instead of automatically incrementing sequential) values. We do this to avoid a common anti-pattern in Cloud Spanner usage. Configure the database engine by setting the project, instance, and database name; this configuration is included in the sketch below.
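As a minimal sketch of the setup just described (the instance, database, and project names are placeholders, and the instance configuration and node count are only examples — adjust them for your environment):

```shell
# Create a Cloud Spanner instance and database (names and config are illustrative).
gcloud spanner instances create django-instance \
    --config=regional-us-central1 \
    --description="Django demo instance" \
    --nodes=1
gcloud spanner databases create django-db --instance=django-instance

# Install the Cloud Spanner database backend package.
pip install django-google-spanner

# Start a new Django project.
django-admin startproject mysite
```

And the corresponding settings, with django_spanner registered first and the database engine pointed at the placeholder instance and database:

```python
# mysite/settings.py (excerpt)

INSTALLED_APPS = [
    'django_spanner',  # must be the first entry
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

DATABASES = {
    'default': {
        'ENGINE': 'django_spanner',
        'PROJECT': 'your-project-id',   # placeholder values
        'INSTANCE': 'django-instance',
        'NAME': 'django-db',
    },
}
```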
To run your code locally during development and testing, you’ll need to authenticate with Application Default Credentials, or set the GOOGLE_APPLICATION_CREDENTIALS environment variable to authenticate using a service account. This library delegates authentication to the Cloud Spanner Python client library, so if you’re already using this or another client library successfully, you shouldn’t have to do anything new to authenticate from your Django application. For more information, see the client libraries documentation on setting up authentication.

Working with django-google-spanner

Under the hood, django-google-spanner uses the Cloud Spanner Python client library, which communicates with Cloud Spanner via its gRPC API. The client library also manages Cloud Spanner session lifetimes, and provides sane request timeout and retry defaults.

To support the Django ORM, we added an implementation of the Python Database API Specification (or DB-API) to the client library in the google.cloud.spanner_dbapi package. This package handles Cloud Spanner database connections, provides a standard cursor for iterating over streaming results, and seamlessly retries queries and DML statements in aborted transactions. In the future we hope to use this package to support other libraries and ORMs that are compatible with the DB-API, including SQLAlchemy.

Django ships with a powerful schema version control system known as migrations. Each migration describes a change to a Django model that results in a schema change. Django tracks migrations in an internal django_migrations table, and includes tools for migrating data between schema versions and generating migrations automatically from an app’s models. django-google-spanner provides backend support for Cloud Spanner by converting Django migrations into DDL statements – namely CREATE TABLE and ALTER TABLE – to be run at migration time.

Following the Django tutorial, let’s see how the client library interacts with the Cloud Spanner API. The example that follows starts from the “Database setup” step of Tutorial 2, and assumes you’ve already created the mysite and polls apps from the first part of the tutorial.

After configuring the database backend as described above, we can run the initial migrations for the project with python manage.py migrate. Once the migrations have run, we can see the tables and indexes Django created in the Cloud Console. Alternatively, we can inspect information_schema.tables to display the tables Django created using the Google Cloud SDK; note that such a query will also return Spanner-internal tables, including SPANNER_SYS and INFORMATION_SCHEMA tables, which are filtered out in the sketch below. You can also check the table schema of any table by clicking the SHOW EQUIVALENT DDL link on the table detail page in the Cloud Console.
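For example, a query along these lines lists the tables Django created while filtering out the internal schemas (it reuses the placeholder instance and database names from the sketch above):

```shell
# List only user tables; internal tables live under the SPANNER_SYS and
# INFORMATION_SCHEMA schemas and are excluded by the empty table_schema filter.
gcloud spanner databases execute-sql django-db \
    --instance=django-instance \
    --sql="SELECT table_name FROM information_schema.tables WHERE table_schema = ''"
```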
Now, following the Playing with the API section of the tutorial, let’s create and modify some objects in the polls app and see how the changes are persisted in Cloud Spanner. The examples that follow cover three code segments from the tutorial — querying the empty Questions table, creating and saving a new Question object, and modifying an existing Question — each of which results in a SQL statement executed by Django; a sketch of these segments and their SQL appears at the end of this section. We have skipped the internal Cloud Spanner API calls that are made; those details, including the resulting API requests and their arguments, can be found in our earlier blog post. To see the generated SQL statements yourself, you can enable the django.db.backends logger.

Migrating an existing Django project to Cloud Spanner

To migrate a Django project from another database to Cloud Spanner, we can use Django’s built-in support for multiple database connections. This feature allows us to connect to two databases at once, reading from one and writing to the other.

Suppose you want to move your application’s data from SQLite to Cloud Spanner. Assuming the existing database connection is already configured as “default”, we can add a second database connection to Cloud Spanner; we’ll call this connection “spanner” (its configuration is shown in the second sketch below).

As in the tutorial, running python manage.py migrate will create tables and indexes for all models in the project. By default, migrate will run on all configured database connections, and generate DDL specific to each database backend. After running migrate, both databases should have equivalent schemas, but the new Cloud Spanner database will still be empty.

Since Django automatically generates the schema from the project’s models, it’s a good idea to check that the generated DDL follows Cloud Spanner best practices. You can adjust the project’s models accordingly in a separate migration after copying data into Cloud Spanner.

There are several options for copying data into Cloud Spanner, including using HarbourBridge to import data from a PostgreSQL or MySQL database, or Dataflow to import Avro files. Any option will work as long as the imported data matches the new schema, but the easiest (if not the fastest) way to copy data between databases is to use Django itself.

Consider the models we created in the tutorial. In the snippet shown in the third sketch below, we read all Questions and Choices from the SQLite database and then write them to Cloud Spanner. For each row in each table in the existing database, we:

- read the row and store it in memory as a Django model object,
- unset the primary key, and
- write it back to the new database, at which point it gets assigned a new primary key.

Note that we need to update foreign keys to use the newly generated primary keys too. Also note that we call question.choice_set.all() before we change question’s primary key – otherwise the QuerySet would be evaluated lazily using the wrong key!

This is a naive example, meant to be easy to understand but not necessarily fast. It makes a separate “SELECT … FROM polls_choice” query for each Question. Since we know ahead of time that we’re going to read all Choices in the database, we can reduce this to a single query with Choice.objects.all().select_related('question'). In general, it should be possible to write your migration logic in a way that takes advantage of your project’s schema, e.g. by using bulk_update instead of a separate request to write each row.
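As minimal sketches of the steps above (assuming the tutorial’s polls models, Django 3.2, and the placeholder project, instance, and database names used earlier; the exact SQL the Cloud Spanner backend emits may differ in detail from the rough statements shown in the comments):

```python
# Sketch 1: run inside "python manage.py shell", using the tutorial's polls models.
from django.utils import timezone
from polls.models import Question

# Query the empty Questions table.
print(Question.objects.all())  # <QuerySet []>
# SQL (roughly): SELECT polls_question.id, polls_question.question_text,
#                polls_question.pub_date FROM polls_question

# Create and save a new Question object.
q = Question(question_text="What's new?", pub_date=timezone.now())
q.save()
# SQL (roughly): INSERT INTO polls_question (question_text, pub_date, id) VALUES (...)
# django_spanner supplies a random id value rather than a sequential one.

# Modify an existing Question.
q.question_text = "What's up?"
q.save()
# SQL (roughly): UPDATE polls_question SET question_text = ..., pub_date = ...
#                WHERE polls_question.id = ...
```

```python
# Sketch 2: settings.py (excerpt) with the existing connection kept as "default"
# and Cloud Spanner added as a second connection named "spanner".
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    },
    'spanner': {
        'ENGINE': 'django_spanner',
        'PROJECT': 'your-project-id',   # placeholder values
        'INSTANCE': 'django-instance',
        'NAME': 'django-db',
    },
}
```

```python
# Sketch 3: a naive copy of the tutorial's data from the "default" (SQLite)
# connection to the "spanner" connection, run in the Django shell.
from polls.models import Question

for question in Question.objects.using("default").all():
    # Force evaluation of the related Choices *before* the primary key changes;
    # otherwise the lazy QuerySet would later be evaluated with the wrong key.
    choices = list(question.choice_set.all())

    # Unset the primary key and write the Question to Cloud Spanner,
    # where it is assigned a new (random) primary key.
    question.pk = None
    question.save(using="spanner")

    for choice in choices:
        # Point the foreign key at the newly generated primary key,
        # then write the Choice to Cloud Spanner as a new row.
        choice.question_id = question.pk
        choice.pk = None
        choice.save(using="spanner")
```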
This logic can take the form of a code snippet to be run in the Django shell (as above), a separate script, or a Django data migration. After migrating from the old database to Cloud Spanner, you can remove the configuration for the old database connection and rename the Cloud Spanner connection to “default”.

Limitations

Note that some Django database features are disabled because they’re not compatible with the Cloud Spanner API. As Django ships with a comprehensive test suite, you can look at the list of Django tests that we skip for a detailed list of Django features that aren’t yet supported by python-spanner-django.

Customers using the Cloud Spanner Emulator may see different behavior than the Cloud Spanner service, for instance because the emulator doesn’t support concurrent transactions. See the Cloud Spanner Emulator documentation for a list of limitations and differences from the Cloud Spanner service.

We recommend that you go through the additional limitations of both Spanner and django-google-spanner before deploying any projects that use this library. These limitations are documented here.

Getting involved

We’d love to hear from you, especially if you’re using the Cloud Spanner Python client library with a Django application now, or if you’re an existing Cloud Spanner customer who is considering using Django for new projects. The project is open source, and you can comment, report bugs, and open pull requests on GitHub.

See also

- Django Cloud Spanner Python client library documentation
- Cloud Spanner Python client library documentation
- Cloud Spanner product documentation
- Django 3.2 documentation
- Django 3.2 tutorial
Source: Google Cloud Platform