Schedule Connectivity Tests for continuous networking reachability diagnostics

As the scope and size of your cloud deployments expand, the need for automation to quickly and consistently diagnose service-affecting issues grows in parallel. Connectivity Tests – part of the Network Intelligence Center capabilities focused on Google Cloud network observability, monitoring, and troubleshooting – help you quickly troubleshoot network connectivity issues by analyzing your configuration and, in some cases, validating the data plane by sending synthetic traffic. It's common to start using Connectivity Tests in an ad hoc manner, for example to determine whether an issue reported by your users was caused by a recent configuration change. Another popular use case is verifying that applications and services are reachable post-migration, which confirms that the cloud networking design is working as intended. Once workloads are migrated to Google Cloud, Connectivity Tests help prevent regressions caused by misconfiguration or maintenance issues. As you become more familiar with the power of Connectivity Tests, you may discover use cases that call for running them on a continuous basis. In this post, we'll walk through a solution to continuously run Connectivity Tests.

Scheduling Connectivity Tests leverages existing Google Cloud tools to continuously execute tests and surface failures through Cloud Monitoring alerts. We use the following products and tools as part of this solution:

- One or more Connectivity Tests to check connectivity between network endpoints by analyzing the cloud networking configuration and (when eligible) performing live data plane analysis between the endpoints.
- A single Cloud Function to programmatically run the Connectivity Tests using the Network Management API and publish results to Cloud Logging.
- One or more Cloud Scheduler jobs that run the Connectivity Tests on a continuous schedule that you define.
- Operations Suite, which integrates logging, log-based metrics, and alerting to surface test results that require your attention.

Let's get started. In this example there are two virtual machines running in different cloud regions of the same VPC.

Connectivity Tests

We configure a connectivity test to verify that the VM instance in cloud region us-east4 can reach the VM instance in cloud region europe-west1 on port 443 using the TCP protocol. The following Connectivity Test UI example shows the complete configuration of the test. For more detailed information on the available test parameters, see the Connectivity Tests documentation.
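If you prefer to create the test programmatically rather than in the UI, the same Network Management API used later in this post can create it. The following is a minimal, hedged sketch using the Python client library; the project, zones, and instance names are placeholder assumptions for this example's environment, so check the API reference before relying on the exact field names.

# Hedged sketch: create the connectivity test with the Network Management API.
# Project, zone, and instance names below are illustrative placeholders.
from google.cloud import network_management_v1

client = network_management_v1.ReachabilityServiceClient()

test = network_management_v1.ConnectivityTest(
    source=network_management_v1.Endpoint(
        instance="projects/project6/zones/us-east4-a/instances/vm-us-east4"),
    destination=network_management_v1.Endpoint(
        instance="projects/project6/zones/europe-west1-b/instances/vm-europe-west1",
        port=443),
    protocol="TCP",
)

operation = client.create_connectivity_test(
    parent="projects/project6/locations/global",
    test_id="inter-region-test-1",
    resource=test,
)
print(operation.result(timeout=120).name)  # name in URI format, used later

The name printed at the end is the URI-format identifier referenced in the next paragraph.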
At this point you can verify that the test passes both the configuration and data plane analysis steps, which tells you that the cloud network is configured to allow the VM instances to communicate and that the packets transmitted between the VM instances were successfully passed through the network.

Before moving on to the next step, note the name of the connectivity test in URI format, which is visible in the equivalent REST response output. We'll use this value as part of the Cloud Scheduler configuration in a later step.

Create Cloud Function

Cloud Functions provide a way to interact with the Network Management API to run a connectivity test. While there are other approaches for interacting with the API, we take advantage of the flexibility in Cloud Functions to run the test and enrich the output we send to Cloud Logging. Cloud Functions also provide support for numerous programming languages, so you can adapt these instructions to the language of your choice. In this example, we use Python for interfacing with the Network Management API.

Let's walk through the high-level functionality of the code. First, the Cloud Function receives an HTTP request with the name of the connectivity test that you want to execute. By providing the name of the connectivity test as a variable, we can reuse the same Cloud Function for running any of your configured connectivity tests.

if http_request.method != 'GET':
  return flask.abort(
      flask.Response(
          http_request.method +
          ' requests are not supported, use GET instead',
          status=405))
if 'name' not in http_request.args:
  return flask.abort(
      flask.Response("Missing 'name' URL parameter", status=400))
test_name = http_request.args['name']

Next, the code runs the specified connectivity test using the Network Management API.

client = network_management_v1.ReachabilityServiceClient()
rerun_request = network_management_v1.RerunConnectivityTestRequest(
    name=test_name)
try:
  response = client.rerun_connectivity_test(request=rerun_request).result(
      timeout=60)

And finally, if the connectivity test fails for any reason, a log entry is created that we'll later configure to generate an alert.

if (response.reachability_details.result !=
    types.ReachabilityDetails.Result.REACHABLE):
  entry = {
      'message':
          f'Reran connectivity test {test_name!r} and the result was '
          'unreachable',
      'logging.googleapis.com/labels': {
          'test_resource_id': test_name
      }
  }
  print(json.dumps(entry))

There are a couple of things to note about this last portion of sample code:

- We define a custom label (test_resource_id: test_name) that is used when a log entry is written. We'll use this as part of the logs-based metric in a later step.
- We only write a log entry when the connectivity test fails.
You can customize the logic for other use cases, for example logging when tests that you expect to fail succeed, or writing logs for both successful and unsuccessful test results to generate a ratio metric.

The full example code for the Cloud Function is below.

import json
import flask
from google.api_core import exceptions
from google.cloud import network_management_v1
from google.cloud.network_management_v1 import types


def rerun_test(http_request):
  """Reruns a connectivity test and prints an error message if the test fails."""
  if http_request.method != 'GET':
    return flask.abort(
        flask.Response(
            http_request.method +
            ' requests are not supported, use GET instead',
            status=405))
  if 'name' not in http_request.args:
    return flask.abort(
        flask.Response("Missing 'name' URL parameter", status=400))
  test_name = http_request.args['name']
  client = network_management_v1.ReachabilityServiceClient()
  rerun_request = network_management_v1.RerunConnectivityTestRequest(
      name=test_name)
  try:
    response = client.rerun_connectivity_test(request=rerun_request).result(
        timeout=60)
    if (response.reachability_details.result !=
        types.ReachabilityDetails.Result.REACHABLE):
      entry = {
          'message':
              f'Reran connectivity test {test_name!r} and the result was '
              'unreachable',
          'logging.googleapis.com/labels': {
              'test_resource_id': test_name
          }
      }
      print(json.dumps(entry))
    return flask.Response(status=200)
  except exceptions.GoogleAPICallError as e:
    print(e)
    return flask.abort(500)

We use the code above to create a Cloud Function named run_connectivity_test. Use the default trigger type of HTTP and make note of the trigger URL to use in a later step:

https://us-east4-project6.cloudfunctions.net/run_connectivity_test

Under Runtime, build, connections and security settings, increase the Runtime Timeout to 120 seconds. For the function code, select Python for the Runtime. For main.py, use the sample code provided above, and configure the following dependencies for the Cloud Function in requirements.txt.

# Function dependencies, for example:
# package>=version
google-cloud-network-management>=1.3.1
google-api-core>=2.7.2

Click Deploy and wait for the Cloud Function deployment to complete.

Cloud Scheduler

The functionality to execute the Cloud Function on a periodic schedule is accomplished using Cloud Scheduler. A separate Cloud Scheduler job is created for each connectivity test you want to schedule. The following Cloud Console example shows the Cloud Scheduler configuration for our example.

Note that the Frequency is specified in unix-cron format and, in our example, schedules the Cloud Function to run once an hour. Make sure you take the Connectivity Tests pricing into consideration when configuring the frequency of the tests.

The URL parameter of the execution configuration is where we bring together the name of the connectivity test and the Cloud Function trigger from the previous steps. The format of the URL is:

{cloud_function_trigger}?name={connectivity-test-name}

In our example, the URL is configured as:

https://us-east4-project6.cloudfunctions.net/run_connectivity_test?name=projects/project6/locations/global/connectivityTests/inter-region-test-1

The following configuration options complete the Cloud Scheduler configuration:

- Change the HTTP method to GET.
- Select Add OIDC token for the Auth header.
- Specify a service account that has the Cloud Function Invoker permission for your Cloud Function.
- Set the Audience to the URL minus the query parameters, e.g. https://us-east4-project6.cloudfunctions.net/run_connectivity_test
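If you prefer to script this step instead of clicking through the console, the following is a rough sketch using the Cloud Scheduler Python client. The job ID and invoker service account shown here are assumptions for illustration, not values from this post, so verify them against your own project.

# Hedged sketch: create the hourly Cloud Scheduler job programmatically.
# The job ID and invoker service account below are illustrative assumptions.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = client.common_location_path("project6", "us-east4")

function_url = "https://us-east4-project6.cloudfunctions.net/run_connectivity_test"
test_name = "projects/project6/locations/global/connectivityTests/inter-region-test-1"

job = scheduler_v1.Job(
    name=f"{parent}/jobs/rerun-inter-region-test-1",
    schedule="0 * * * *",  # unix-cron format: once an hour
    http_target=scheduler_v1.HttpTarget(
        uri=f"{function_url}?name={test_name}",
        http_method=scheduler_v1.HttpMethod.GET,
        oidc_token=scheduler_v1.OidcToken(
            service_account_email="scheduler-invoker@project6.iam.gserviceaccount.com",
            audience=function_url,  # the URL minus the query parameters
        ),
    ),
)

client.create_job(parent=parent, job=job)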
Logs-based Metric

The logs-based metric converts the unreachable log entries created by our Cloud Function into a Cloud Monitoring metric that we can use to create an alert. We start by configuring a Counter logs-based metric named unreachable_connectivity_tests. Next, configure a filter to match the `test_resource_id` label that is included in the unreachable log messages. The complete metric configuration is shown below.

Alerting Policy

The alerting policy is triggered any time the logs-based metric increments, indicating that one of the continuous connectivity tests has failed. The alert includes the name of the test that failed, allowing you to quickly focus your effort on the resources and traffic included in the test parameters.

To create a new alerting policy, select the logging/user/unreachable_connectivity_test metric for the Cloud Function resource. Under Transform data, configure the following parameters:

- Within each time series: Rolling window = 2 minutes, Rolling window function = rate
- Across time series: Time series aggregation = sum, Time series group by = test_resource_id

Next, configure the alert trigger using the parameters shown in the figure below. Finally, configure the Documentation text field to include the name of the specific test that logged an unreachable result.

Connectivity Tests provide critical insights into the configuration and operation of your cloud networking environment. By combining multiple Google Cloud services, you can transform your Connectivity Tests usage from an ad hoc troubleshooting tool into a solution for ongoing service validation and issue detection.

We hope you found this information useful. For a more in-depth look into Network Intelligence Center, check out the What is Network Intelligence Center? post and our documentation.

Related Article: What is Network Intelligence Center?
Network Intelligence Center provides a single console for managing Google Cloud network observability, monitoring, and troubleshooting.
Source: Google Cloud Platform

Enabling real-time AI with Streaming Ingestion in Vertex AI

Many machine learning (ML) use cases, like fraud detection, ad targeting, and recommendation engines, require near real-time predictions. The performance of these predictions is heavily dependent on access to the most up-to-date data, with delays of even a few seconds making all the difference. But it's difficult to set up the infrastructure needed to support high-throughput updates and low-latency retrieval of data.

Starting this month, Vertex AI Matching Engine and Feature Store will support real-time Streaming Ingestion as Preview features. With Streaming Ingestion for Matching Engine, a fully managed vector database for vector similarity search, items in an index are updated continuously and reflected in similarity search results immediately. With Streaming Ingestion for Feature Store, you can retrieve the latest feature values with low latency for highly accurate predictions, and extract real-time datasets for training.

For example, Digits is taking advantage of Vertex AI Matching Engine Streaming Ingestion to help power their product, Boost, a tool that saves accountants time by automating manual quality control work.

"Vertex AI Matching Engine Streaming Ingestion has been key to Digits Boost being able to deliver features and analysis in real time. Before Matching Engine, transactions were classified on a 24 hour batch schedule, but now with Matching Engine Streaming Ingestion, we can perform near real-time incremental indexing – activities like inserting, updating or deleting embeddings on an existing index, which helped us speed up the process. Now feedback to customers is immediate, and we can handle more transactions, more quickly," said Hannes Hapke, Machine Learning Engineer at Digits.

This blog post covers how these new features can improve predictions and enable near real-time use cases, such as recommendations, content personalization, and cybersecurity monitoring.

Streaming Ingestion enables you to serve valuable data to millions of users in real time.

Streaming Ingestion enables real-time AI

As organizations recognize the potential business impact of better predictions based on up-to-date data, more real-time AI use cases are being implemented. Here are some examples:

- Real-time recommendations and a real-time marketplace: By adding Streaming Ingestion to their existing Matching Engine-based product recommendations, Mercari is creating a real-time marketplace where users can browse products based on their specific interests, and where results are updated instantly when sellers add new products. Once it's fully implemented, the experience will be like visiting an early-morning farmer's market, with fresh food being brought in as you shop. By combining Streaming Ingestion with Matching Engine's filtering capability, Mercari can specify whether or not an item should be included in the search results, based on tags such as "online/offline" or "instock/nostock."

Mercari Shops: Streaming Ingestion enables a real-time shopping experience

- Large-scale personalized content streaming: For any stream of content representable with feature vectors (including text, images, or documents), you can design pub-sub channels to pick up valuable content for each subscriber's specific interests. Because Matching Engine is scalable (it can process millions of queries each second), you can support millions of online subscribers for content streaming, serving a wide variety of topics that are changing dynamically.
With Matching Engine's filtering capability, you also have real-time control over what content should be included, by assigning tags such as "explicit" or "spam" to each object. You can use Feature Store as a central repository for storing and serving the feature vectors of the contents in near real time.

- Monitoring: Content streaming can also be used for monitoring events or signals from IT infrastructure, IoT devices, manufacturing production lines, and security systems, among other commercial use cases. For example, you can extract signals from millions of sensors and devices and represent them as feature vectors. Matching Engine can be used to continuously update a list of "the top 100 devices with possible defective signals" or "top 100 sensor events with outliers," all in near real time.

- Threat/spam detection: If you are monitoring signals from security threat signatures or spam activity patterns, you can use Matching Engine to instantly identify possible attacks from millions of monitoring points. In contrast, security threat identification based on batch processing often involves significant lag, leaving the company vulnerable. With real-time data, your models are better able to catch threats or spam as they happen in your enterprise network, web services, online games, and so on.

Implementing streaming use cases

Let's take a closer look at how you can implement some of these use cases.

Real-time recommendations for retail

Mercari built a feature extraction pipeline with Streaming Ingestion.

Mercari's real-time feature extraction pipeline

The feature extraction pipeline is defined with Vertex AI Pipelines, and is periodically invoked by Cloud Scheduler and Cloud Functions to initiate the following process:

1. Get item data: The pipeline issues a query to fetch the updated item data from BigQuery.
2. Extract feature vector: The pipeline runs predictions on the data with the word2vec model to extract feature vectors.
3. Update index: The pipeline calls Matching Engine APIs to add the feature vectors to the vector index. The vectors are also saved to Cloud Bigtable (and can be replaced with Feature Store in the future).

"We have been evaluating the Matching Engine Streaming Ingestion and couldn't believe the super short latency of the index update for the first time. We would like to introduce the functionality to our production service as soon as it becomes GA," said Nogami Wakana, Software Engineer at Souzoh (a Mercari group company).

This architecture design can also be applied to any retail business that needs real-time updates for product recommendations.

Ad targeting

Ad recommender systems benefit significantly from real-time features and item matching with the most up-to-date information. Let's see how Vertex AI can help build a real-time ad targeting system.

Real-time ad recommendation system

The first step is generating a set of candidates from the ad corpus. This is challenging because you must generate relevant candidates in milliseconds and ensure they are up to date. Here you can use Vertex AI Matching Engine to perform low-latency vector similarity matching and generate suitable candidates, and use Streaming Ingestion to ensure that your index is up to date with the latest ads.

Next is reranking the candidate selection using a machine learning model to ensure that you have a relevant order of ad candidates. For the model to use the latest data, you can use Feature Store Streaming Ingestion to import the latest features, and use online serving to serve feature values at low latency to improve accuracy (a sketch of an online read follows below). After reranking the ad candidates, you can apply final optimizations, such as applying the latest business logic. You can implement the optimization step using a Cloud Function or Cloud Run.
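To make the reranking step concrete, here is a minimal sketch of an online read from Feature Store using the Vertex AI SDK for Python. The featurestore, entity type, and feature IDs are assumptions for illustration, not names from the architecture above, and the exact SDK surface may differ by version.

# Hedged sketch: read the freshest feature values for reranking ad candidates.
# Featurestore, entity type, and feature IDs below are illustrative assumptions.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

entity_type = aiplatform.featurestore.EntityType(
    entity_type_name="users", featurestore_id="ads_featurestore")

# Low-latency online serving read for the user being targeted.
features = entity_type.read(
    entity_ids=["user_123"],
    feature_ids=["ctr_7d", "last_click_category", "session_length"])
print(features)  # latest feature values, returned as a pandas DataFrame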
What's Next?

Interested? The documentation for Streaming Ingestion is available and you can try it out now. Using the new feature is easy: for example, when you create an index on Matching Engine with the REST API, you can specify the indexUpdateMethod attribute as STREAM_UPDATE.

{
  displayName: "'${DISPLAY_NAME}'",
  description: "'${DISPLAY_NAME}'",
  metadata: {
    contentsDeltaUri: "'${INPUT_GCS_DIR}'",
    config: {
      dimensions: "'${DIMENSIONS}'",
      approximateNeighborsCount: 150,
      distanceMeasureType: "DOT_PRODUCT_DISTANCE",
      algorithmConfig: {treeAhConfig: {leafNodeEmbeddingCount: 10000, leafNodesToSearchPercent: 20}}
    },
  },
  indexUpdateMethod: "STREAM_UPDATE"
}

After deploying the index, you can update or rebuild the index (feature vectors) with the following format. If the data point ID exists in the index, the data point is updated; otherwise, a new data point is inserted.

{
  datapoints: [
    {datapoint_id: "'${DATAPOINT_ID_1}'", feature_vector: [...]},
    {datapoint_id: "'${DATAPOINT_ID_2}'", feature_vector: [...]}
  ]
}

Matching Engine handles data point insertion and updates at high throughput with low latency. New data point values are applied to new queries within a few seconds or milliseconds (the latency varies depending on various conditions). Streaming Ingestion is a powerful capability and very easy to use: there is no need to build and operate your own streaming data pipeline for real-time indexing and storage, yet it adds significant value to your business with its real-time responsiveness.

To learn more, take a look at the following blog posts covering Matching Engine and Feature Store concepts and use cases:

- Vertex Matching Engine: Blazing fast and massively scalable nearest neighbor search
- Find anything blazingly fast with Google's vector search technology
- Kickstart your organization's ML application development flywheel with the Vertex Feature Store
- Meet AI's multitool: Vector embeddings

Related Article: How Let's Enhance uses NVIDIA AI and GKE to power AI-based photo editing
Let's Enhance uses AI to beautify images. GKE provides auto-provisioning, autoscaling and simplicity, while GPUs provide superior process…
Source: Google Cloud Platform

No workload left behind: Extending Anthos to manage on-premises VMs

We are pleased to announce general availability of virtual machine (VM) support in Anthos. VM support is available on Anthos for bare metal (now known as Google Distributed Cloud Virtual). Customers can now run VMs alongside containers on a single, unified, Google Cloud-connected platform in their data center or at the edge.

With VM support in Anthos, developers and operations teams can run VMs alongside containers on shared cloud-native infrastructure. VM support in Anthos lets you achieve consistent container and VM operations with Kubernetes-style declarative configuration and policy enforcement, self-service deployment, observability and monitoring, all from the familiar Google Cloud console, APIs, and command-line interfaces. The Anthos VM runtime can be enabled on any Anthos on bare metal cluster (v1.12 or higher) at no additional charge.

During preview, we saw strong interest in VM support in Anthos for retail edge environments, where there is a small infrastructure footprint and a need to run new container apps and heritage VM apps on just a few hosts. In fact, a global Quick Service Restaurant (QSR), using a single VM of their existing point-of-sale solution, simulated throughput of more than 1,700 orders per hour for 10 hours, totalling more than 17,000 orders. The VM was running on the same hardware that exists at the store.

Why extend Anthos to manage VMs?

Many of our customers are modernizing their existing (heritage) applications using containers and Kubernetes. But few enterprise workloads are containerized today, and millions of business-critical workloads still run in VMs. While many VMs can be modernized by migrating to VMs in Google Cloud or to containers on GKE or Anthos, many can't — at least not right away. You might depend on a vendor-provided app that hasn't been updated to run in containers yet, need to keep a VM in a data center or edge location for low-latency connectivity to other local apps or infrastructure, or you might not have the budget to containerize a custom-built app today. How can you include these VMs in your container and cloud app modernization strategy?

Anthos now provides consistent visibility, configuration, and security for VMs and containers

Run and manage VMs and containers side by side

At the heart of VM support in Anthos is the Anthos VM Runtime, which extends and enhances the open source KubeVirt technology. We integrated KubeVirt with Anthos on bare metal to simplify the install and upgrade experience. We've provided tools to manage VMs using the command line, APIs, and the Google Cloud console. We've integrated VM observability logs and metrics with the Google Cloud operations suite, including out-of-the-box dashboards and alerts. We've included significant networking enhancements like support for multiple network interfaces for VMs and IP/MAC stickiness to enable VM mobility that is also compatible with Kubernetes pod multi-NIC. And we've added VLAN integration while also enabling customers to apply L4 Kubernetes network policies for an on-premises, VPC-like microsegmentation experience. If you're an experienced VM admin, you can take advantage of VM high availability and simplified Kubernetes storage management for a familiar yet updated VM management experience.
VM lifecycle management is built into the Google Cloud console for a simplified user experience that integrates with your existing Anthos and Google Cloud authentication and authorization frameworks.

View and manage VMs running on Anthos in the Google Cloud Console

Get started right away with new VM assessment and migration tools

How do you know if Anthos is the right technology for your VM workloads? Google Cloud offers assessment and migration tools to help you at every step of your VM modernization journey. Our updated fit assessment tool collects data about your existing VMware VMs and generates a detailed report. This no-cost report belongs to you and can be uploaded to the Google Cloud console for detailed visualization and historical views. The report provides a fit score for every VM that estimates the effort required to containerize the VM and migrate it to Anthos or GKE as a container, or migrate it to Anthos as a VM. Once you've identified the best VMs to migrate, use our no-cost updated Migrate to Containers tool to migrate VMs to Anthos from the command line or the console.

Sample fit assessment report showing VMs that can be shifted (migrated) to Anthos as VMs

Don't let business-critical VM workloads or virtualization management investments keep you from realizing your cloud and container app modernization goals. Now you can include your heritage VMs in your on-premises managed container platform strategy. Please reach out for a complimentary fit assessment and let us help you breathe new life into your most important VMs.

To learn more about all the exciting innovations we're adding to Anthos, mark your calendar and join us at Google Cloud Next '22.

Related Article: Anthos on-prem and on bare metal now power Google Distributed Cloud Virtual
Google Distributed Cloud Virtual uses Anthos on-prem or bare metal to create a hybrid cloud on your existing hardware.
Source: Google Cloud Platform

Deploy OCI artifacts and Helm charts the GitOps way with Config Sync

One principle of GitOps is to have the desired state declarations be versioned and immutable, and Git repositories play an important role as the source of truth. But can you have an alternative to a Git repository for storing and deploying your Kubernetes manifests via GitOps? What if you could package your Kubernetes manifests into a container image instead? What if you could reuse the same authentication and authorization mechanism as your container images?

To answer these questions, an understanding of OCI registries and OCI artifacts is needed. Simply put, OCI registries are the registries typically used for container images, but they can be expanded to store other types of data (aka OCI artifacts) such as Helm charts, Kubernetes manifests, Kustomize overlays, scripts, etc. Using OCI registries and OCI artifacts provides you with the following advantages:

- Fewer tools to operate: a single artifact registry can store expanded data types apart from container images.
- Built-in release archival system: OCI registries give users two sets of mutable and immutable URLs, which are tags and content-addressable ones.
- Flourishing ecosystem: standardized and supported by dozens of providers, which helps users take advantage of new features and tools developed by the large Kubernetes community.

Given these benefits, and in addition to the support of files stored in Git repositories, we are thrilled to announce two new formats supported by Config Sync 1.13 to deploy OCI artifacts:

- Sync OCI artifacts from Artifact Registry
- Sync Helm charts from OCI registries

Config Sync is an open source tool that provides GitOps continuous delivery for Kubernetes clusters. The Open Container Initiative (OCI) is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes.
OCI artifacts give you the power of storing and distributing different types of data, such as Kubernetes manifests, Helm charts, and Kustomize overlays, in addition to container images, via OCI registries. Throughout this blog, you will see how you can leverage the two new formats (OCI artifacts and Helm charts) supported by Config Sync, by using:

- oras and helm to package and push OCI artifacts
- Artifact Registry as the OCI registry to store the OCI artifacts
- A GKE cluster to host the synced OCI artifacts
- Config Sync installed in that GKE cluster to sync the OCI artifacts

Initial setup

First, you need a common setup for the two scenarios by configuring and securing access from the GKE cluster with Config Sync to the Artifact Registry repository.

Initialize the Google Cloud project you will use throughout this blog:

PROJECT=SET_YOUR_PROJECT_ID_HERE
gcloud config set project $PROJECT

Create a GKE cluster with Workload Identity, registered in a fleet to enable Config Management:

CLUSTER_NAME=oci-artifacts-cluster
REGION=us-east4
gcloud services enable container.googleapis.com
gcloud container clusters create ${CLUSTER_NAME} \
    --workload-pool=${PROJECT}.svc.id.goog \
    --region ${REGION}
gcloud services enable gkehub.googleapis.com
gcloud container fleet memberships register ${CLUSTER_NAME} \
    --gke-cluster ${REGION}/${CLUSTER_NAME} \
    --enable-workload-identity
gcloud beta container fleet config-management enable

Install Config Sync in the GKE cluster:

cat <<EOF > acm-config.yaml
applySpecVersion: 1
spec:
  configSync:
    enabled: true
EOF
gcloud beta container fleet config-management apply \
    --membership ${CLUSTER_NAME} \
    --config acm-config.yaml

Create an Artifact Registry repository to host OCI artifacts (--repository-format docker):

CONTAINER_REGISTRY_NAME=oci-artifacts
gcloud services enable artifactregistry.googleapis.com
gcloud artifacts repositories create ${CONTAINER_REGISTRY_NAME} \
    --location ${REGION} \
    --repository-format docker

Create a dedicated Google Cloud service account with fine-grained access to that Artifact Registry repository via the roles/artifactregistry.reader role:

GSA_NAME=oci-artifacts-reader
gcloud iam service-accounts create ${GSA_NAME} \
    --display-name ${GSA_NAME}
gcloud artifacts repositories add-iam-policy-binding ${CONTAINER_REGISTRY_NAME} \
    --location ${REGION} \
    --member "serviceAccount:${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com" \
    --role roles/artifactregistry.reader

Allow Config Sync to synchronize resources for a specific RootSync:

ROOT_SYNC_NAME=root-sync-oci
gcloud iam service-accounts add-iam-policy-binding \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:${PROJECT}.svc.id.goog[config-management-system/root-reconciler-${ROOT_SYNC_NAME}]" \
    ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com

Log in to Artifact Registry so you can push OCI artifacts to it in a later step:

gcloud auth configure-docker ${REGION}-docker.pkg.dev

Build and sync an OCI artifact

Now that you have completed the setup, let's illustrate the first scenario, where you want to sync a Namespace resource as an OCI image.

Create a Namespace resource definition:

cat <<EOF> test-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test
EOF

Create an archive of that file:

tar -cf test-namespace.tar test-namespace.yaml

Push that artifact to Artifact Registry. In this tutorial we use oras, but there are other tools that you can use, like crane.

oras push \
    ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v1 \
    test-namespace.tar

Set up Config Sync to deploy this artifact from Artifact Registry:

cat << EOF | kubectl apply -f -
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: ${ROOT_SYNC_NAME}
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  sourceType: oci
  oci:
    image: ${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}/my-namespace-artifact:v1
    dir: .
    auth: gcpserviceaccount
    gcpServiceAccountEmail: ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com
EOF

Check the status of the sync with the nomos tool:

nomos status --contexts $(kubectl config current-context)

Verify that the Namespace test is synced:

kubectl get ns test

And voilà! You just synced a Namespace resource as an OCI artifact with Config Sync.
Build and sync a Helm chart

Now let's see how you can deploy a Helm chart hosted in a private Artifact Registry repository.

Create a simple Helm chart:

helm create test-chart

Package the Helm chart:

helm package test-chart --version 0.1.0

Push the chart to Artifact Registry:

helm push \
    test-chart-0.1.0.tgz \
    oci://${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}

Set up Config Sync to deploy this Helm chart from Artifact Registry:

cat << EOF | kubectl apply -f -
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: ${ROOT_SYNC_NAME}
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  sourceType: helm
  helm:
    repo: oci://${REGION}-docker.pkg.dev/${PROJECT}/${CONTAINER_REGISTRY_NAME}
    chart: test-chart
    version: 0.1.0
    releaseName: test-chart
    namespace: default
    auth: gcpserviceaccount
    gcpServiceAccountEmail: ${GSA_NAME}@${PROJECT}.iam.gserviceaccount.com
EOF

Check the status of the sync with the nomos tool:

nomos status --contexts $(kubectl config current-context)

Verify that the resources of the chart are synced in the default namespace:

kubectl get all -n default

And voilà! You just synced a Helm chart with Config Sync.

Towards more scalability and security

In this blog, you synced both an OCI artifact and a Helm chart with Config Sync. OCI registries and OCI artifacts are new kids on the block that can also work alongside the Git option, depending on your needs and use cases. In one such pattern, Git still acts as the source of truth for the declarative configs, in addition to the well-established developer workflow it provides: pull requests, code reviews, branch strategy, etc. The continuous integration pipelines, triggered by pull requests or merges, run tests against the declarative configs and eventually push the OCI artifacts to an OCI registry. Finally, the continuous reconciliation of GitOps takes it from there and reconciles the desired state, now stored in an OCI registry, with the actual state running in Kubernetes. Your Kubernetes manifests as OCI artifacts are now seen just like any container images for your Kubernetes clusters, as they are pulled from OCI registries.
This continuous reconciliation from OCI registries, without interacting with Git, has a lot of benefits in terms of scalability, performance, and security, as you will be able to configure very fine-grained access to your OCI artifacts.

To get started, check out the two features, Sync OCI artifacts from Artifact Registry and Sync Helm charts from OCI registries, today. You can also find another tutorial showing how to package and push a Helm chart to GitHub Container Registry with GitHub Actions, and then deploy that Helm chart with Config Sync.

Attending KubeCon + CloudNativeCon North America 2022 in October? Come check out our session Build and Deploy Cloud Native (OCI) Artifacts, the GitOps Way during the GitOpsCon North America 2022 co-located event on October 25th. Hope to see you there!

Config Sync is open sourced. We are open to contributions and bug fixes if you want to get involved in the development of Config Sync. You can also use the repository to track ongoing work, or build from source to try out bleeding-edge functionality.

Related Article: Google Cloud at KubeCon EU: New projects, updated services, and how to connect
Engage with experts and learn more about Google Kubernetes Engine at KubeCon EU.
Source: Google Cloud Platform

Announcing Pub/Sub metrics dashboards for improved observability

Pub/Sub offers a rich set of metrics for resource and usage monitoring. Previously, these metrics were like buried treasure: they were useful for understanding Pub/Sub usage, but you had to dig around to find them. Today, we are announcing out-of-the-box Pub/Sub metrics dashboards that are accessible with one click from the Topics and Subscriptions pages in the Google Cloud Console. These dashboards provide more observability in context and help you build better solutions with Pub/Sub.

Check out our new one-click monitoring dashboards

The Overview section of the monitoring dashboard for all the topics in your project.

We added metrics dashboards to monitor the health of all your topics and subscriptions in one place, including dashboards for individual topics and subscriptions. Follow these steps to access the new monitoring dashboards:

- To view the monitoring dashboard for all the topics in your project, open the Pub/Sub Topics page and click the Metrics tab. This dashboard has two sections: Overview and Quota.
- To view the monitoring dashboard for a single topic, in the Pub/Sub Topics page, click any topic to display the topic detail page, and then click the Metrics tab. This dashboard has up to three sections: Overview, Subscriptions, and Retention (if topic retention is enabled).
- To view the monitoring dashboard for all the subscriptions in your project, open the Pub/Sub Subscriptions page and click the Metrics tab. This dashboard has two sections: Overview and Quotas.
- To view the monitoring dashboard for a single subscription, in the Pub/Sub Subscriptions page, click any subscription to display the subscription detail page, and then click the Metrics tab. This dashboard has up to four sections: Overview, Health, Retention (if acknowledged message retention is enabled), and either Pull or Push depending on the delivery type of your subscription.

A few highlights

When exploring the new Pub/Sub metrics dashboards, here are a few examples of things you can do. Please note that these dashboards are a work in progress, and we hope to update them based on your feedback. To learn about recent changes, please refer to the Pub/Sub monitoring documentation.

The Overview section of the monitoring dashboard for a single topic.

As you can see, the metrics available in the monitoring dashboard for a single topic are closely related to one another. Roughly speaking, you can obtain publish throughput in bytes by multiplying Published message count and Average message size. Because a publish request is made up of a batch of messages, dividing Published messages by Publish request count gets you the average number of messages per batch. Expect a higher number of published messages than publish requests if your publisher client has batching enabled. Some interesting questions you can answer by looking at the monitoring dashboard for a single topic are:

- Did my message sizes change over time?
- Is there a spike in publish requests?
- Is my publish throughput in line with my expectations?
- Is my batch size appropriate for the latency I want to achieve, given that larger batch sizes increase publish latency?

The Overview section of the monitoring dashboard for a single subscription.

You can find a few powerful composite metrics in the monitoring dashboard for a single subscription. These metrics are Delivery metrics, Publish to ack delta, and Pull to ack delta. All three aim to give you a sense of how well your subscribers are keeping up with incoming messages. Delivery metrics display your publish, pull, and acknowledge (ack) rates next to each other. Pull to ack delta and Publish to ack delta offer you the opportunity to drill down to any specific bottlenecks. For example, if your subscribers are pulling messages a lot faster than they are acknowledging them, expect the values reported in Pull to ack delta to be mostly above zero. In this scenario, also expect both your Unacked messages by region and your Backlog bytes to grow. To remedy this situation, you can increase your message processing power or set up subscriber flow control (see the sketch below).
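As a concrete illustration of that last remedy, here is a minimal sketch of subscriber flow control with the Pub/Sub Python client. The project ID, subscription ID, and limits are placeholder values for illustration, not recommendations.

# Hedged sketch: cap how many messages a subscriber holds outstanding at once,
# so a slow consumer does not keep pulling far ahead of what it can ack.
# Project ID, subscription ID, and limits below are illustrative placeholders.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

flow_control = pubsub_v1.types.FlowControl(
    max_messages=500,             # outstanding (pulled but unacked) messages
    max_bytes=100 * 1024 * 1024,  # outstanding bytes
)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # Process the message, then ack it so it no longer counts against the limits.
    message.ack()

streaming_pull = subscriber.subscribe(
    subscription_path, callback=callback, flow_control=flow_control)

with subscriber:
    try:
        streaming_pull.result(timeout=60)  # block briefly for demonstration
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()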
The Health section of the monitoring dashboard for a single subscription.

Another powerful composite metric available in the monitoring dashboard for a single subscription is the Delivery latency health score in the Health section. You may treat this metric as a one-stop shop to examine the health of your subscription. This metric tracks a total of five properties; each can take a value of zero or one. If your subscribers are not keeping up, zero scores for "ack_latency" and/or "expired_ack_deadlines" effectively tell you that those properties are the reason why. We prescribe how to fix these failing scores in our documentation. If your subscription is run by a managed service like Dataflow, do not be alarmed by a "utilization" score of zero. With Dataflow, the number of streams open to receive messages is optimized, so the recommendation to open more streams does not apply.

Some questions you can answer by looking at your monitoring dashboard for a single subscription are:

- What is the 99th percentile of my ack latencies?
- Is the majority of my messages getting acknowledged in under a second, allowing my application to run in near real time?
- How well are my subscribers keeping up with my publishers?
- Which region has a growing backlog?
- How frequently are my subscribers allowing a message's ack deadline to expire?

Customize your monitoring experience

Hopefully the existing dashboards are enough to diagnose a problem. But maybe you need something slightly different. If that's the case, from the dropdown menu, click Save as Custom Dashboard to save an entire dashboard in your list of monitoring dashboards, or click Add to Custom Dashboard in a specific chart to save the chart to a custom dashboard. Then, in the custom dashboard, you can edit any chart configuration or MQL query.

For example, by default, the Top 5 subscriptions by ack message count chart in the Subscriptions section of the monitoring dashboard for a single topic shows the top five attached subscriptions with the highest rate of acked messages. You can modify the dashboard to show the top ten subscriptions. To make the change, export the chart, click on the chart, and edit the line of MQL "| top 5, .max()" to "| top 10, .max()". To know more about editing in MQL, see Using the Query Editor | Cloud Monitoring.

For a slightly more complex example, you can build a chart that compares current data to past data. For example, consider the Byte Cost chart in the Overview section of the monitoring dashboard for all topics. You can view the chart in Metrics Explorer.
In the MQL tab, add the following lines at the end of the provided code snippet:

| {
    add [when: "now"]
    ;
    add [when: "then"] | time_shift 1d
  }
| union

The preceding lines turn the original chart into a comparison chart that compares data at the same time on the previous day. For example, if your Pub/Sub topic consists of application events like requests for cab rides, data from the previous day can be a nice baseline for current data and can help you set the right expectations for your business or application for the current day. If you'd prefer, update the chart type to a Line chart for easier comparison.

Set alerts

Quota limits can creep up on you when you least expect it. To prevent this, you can set up alerts that will notify you once you hit certain thresholds. The Pub/Sub dashboards have a built-in function to help you set up these alerts. First, access the Quota section in one of the monitoring dashboards for topics or subscriptions. Then, click Create Alert inside the Set Quota Alert card at the top of the dashboard. This will take you to the alert creation form with an MQL query that triggers for any quota metric exceeding 80% capacity (the threshold can be modified).

The Quota section of the monitoring dashboard for all the topics in your project.

In fact, all the provided charts support setting alerting policies. You can set up your alerting policies by first exporting a chart to a custom dashboard and then selecting Convert a provided chart to an alert chart, using the dropdown menu.

Convert a provided chart to an alert chart.

For example, you might want to trigger an alert if the pull to ack delta is positive more than 90% of the time during a 12-hour period. This would indicate that your subscription is frequently pulling messages faster than it is acknowledging them. First, export the Pull to Ack Delta chart to a custom dashboard, convert it to an alert chart, and add the following line of code at the end of the provided MQL query:

| condition gt(val(), 0)

Then, click Configure trigger. Set the Alert trigger to Percent of time series violates, the Minimum percent of time series in violation to 90%, and Trigger when condition is met for this amount of time to 12 hours. If the alert is created successfully, you should see a new chart with a red horizontal line representing the threshold and a text bubble that tells you if there have been any open incidents violating the condition.

You can also add an alert for the Oldest unacked message metric. Pub/Sub lets you set a message retention period on your subscriptions. Aim to keep your oldest unacked messages within the configured subscription retention period, and fire an alert when messages are taking longer than expected to be processed.

Making metrics dashboards that are easy to use and serve your needs is important for us. We welcome your feedback and suggestions for any of the provided dashboards and charts. You can reach us by clicking the question icon in the top right corner of the Cloud Console and choosing Send feedback. If you really like a chart, please let us know too! We will be delighted to hear from you.

Related Article: How Pub/Sub eliminates boring meetings and makes your systems scale
What is Cloud Pub/Sub? A messaging service for application and data integration!
Source: Google Cloud Platform

Sign up for the Google Cloud Fly Cup Challenge

Are you ready to take your cloud skills to new heights? We're excited to announce the Google Cloud Fly Cup Challenge, created in partnership with The Drone Racing League (DRL) and taking place at Next '22 to usher in the new era of tech-driven sports. Using DRL race data and Google Cloud analytics tools, developers of any skill level will be able to predict race outcomes and provide tips to DRL pilots to help enhance their season performance. Participants will compete for a chance to win an all-expenses-paid trip to the season finale of the DRL World Championship Race and be crowned the champion on stage.

How it works:

- Register for Next 2022 and navigate to the Developer Zone challenges to unlock the game
- Complete each stage of the challenge to advance and climb the leaderboard
- Win prizes, boost skills, and have fun!

There will be three stages of the competition, and each will increase in level of difficulty. The first stage kicks off on September 15th, where developers will prepare data and become more familiar with the tools for data-driven analysis and predictions with Google ML tools. There are over 500 prizes up for grabs, and all participants will receive an exclusive custom digital badge and an opportunity to be celebrated for their achievements alongside DRL pilots. There will be one leaderboard that accumulates scores throughout the competition, and prizes will be awarded as each stage is released.

Stage 1: DRL Recruit: Starting on September 15th, start your journey here to get an understanding of DRL data by loading and querying race statistics. You will build simple reports to find top participants and fastest race times. Once you pass this lab you will be officially crowned a DRL Recruit and progress for a chance to build on your machine learning skills and work with two more challenge labs involving predictive ML models.
Prize: The top 25 on the leaderboard will win custom co-branded DRL + Google Cloud merchandise.

Stage 2: DRL Pilot: Opening in conjunction with the first day of Next 2022 on October 11, in this next stage you will develop a model which can predict a winner in a head-to-head competition and a score for each participant, based on a pilot's profile and flight history. Build a "pilot profile card" that analyzes the number of crashes and lap times and compares them to other pilots. Fill out their strengths and weaknesses, compare them to real-life performances, predict the winner of the DRL Race in the Cloud at Next 2022, and be crowned top developer for this stage.
Prize: The first 500 participants to complete stage 2 of the contest will receive codes to download DRL's Simulator on Steam.

Stage 3: DRL Champion: Continue this journey throughout the DRL championship season, using the model developed in Stage 2. Use data from past races to score participants and predict outcomes. Provide pilots with real-life tips and tricks to help improve their performance. The developer at the top of the leaderboard at the end of December 2022 will win an expenses-paid VIP trip to DRL's final race in early 2023.
Prize: Finish in the top 3 for an opportunity to virtually present your tips and tricks to professional DRL pilots before the end of the 2022-2023 race season. Top the leaderboard as the Grand Champion and win an expenses-paid VIP experience to travel to a DRL Championship Race in early 2023 and be celebrated on stage.

For more information on prizes and terms, please visit the DRL and Google Cloud website.

Ready to fly?
The Google Cloud Fly Cup Challenge opens today and will remain available on the Next '22 portal through December 31, 2022, when the winner will be announced. We are looking forward to seeing how you innovate and build together for the next era of tech-driven sports. Let's fly!
Source: Google Cloud Platform

Try out Cloud Spanner databases at no cost with new free trial instances

We are excited to announce Cloud Spanner free trial instances, which give everyone an opportunity to try out Spanner at no cost for 90 days. Spanner is a fully managed relational database created by Google that provides the highest levels of consistency and availability at any scale. Developers love Spanner for its ease of use and its ability to automatically handle replication, sharding, disaster recovery, and strongly consistent transaction processing. Organizations across industries including finance, retail, gaming, and healthcare are using Spanner to modernize their tech stack, transform the end-user experience, and innovate quickly to uncover new possibilities. Now any developer or organization can try Spanner hands-on at no cost with the Spanner free trial instance. Watch this short video to learn more about the Spanner free trial, and get started today.

The Spanner free trial provides a Spanner instance where users can create GoogleSQL or PostgreSQL-dialect databases, explore Spanner capabilities, and prototype an application to run on Spanner. To make it easier for developers to evaluate Spanner, the free trial instance comes with built-in guided tutorials in the Google Cloud console. These tutorials provide step-by-step guidance to create a database, load a predefined schema and data, and run sample queries and transactions so that users can quickly get started with Spanner and learn key concepts.

Making Spanner more accessible for every developer and workload

Our mission is to make Spanner's unique benefits accessible to every developer and organization. So we are making it easier, faster, and more cost effective for everyone to use Spanner for applications big and small. We launched the PostgreSQL interface for Spanner so that users can build transformative applications with the fully managed experience of Spanner while using the familiar PostgreSQL dialect. We introduced Committed Use Discounts (CUDs) that give up to a 40% discount to help users reduce their database costs. We also reduced the cost of entry for production workloads with granular instance sizing to make enterprise-grade, production-ready databases available starting at approximately $65 per month. Users can even lower the cost to $40 per month by combining granular instance sizing and committed use discounts! With the launch of the Spanner free trial instance, users now have the opportunity to explore its capabilities without any upfront costs. Further, we make it easy for users to seamlessly upgrade from the free trial instance to a paid instance and scale without requiring any re-architecture or downtime.

Get started with the Spanner free trial in minutes

Both existing and new Google Cloud users are eligible for the Spanner free trial. If you are an existing Google Cloud user (i.e., you have an active Cloud Billing account), you are ready to create your Spanner free trial instance. If you are a new Google Cloud user, you need to create a Cloud Billing account first, which makes you eligible to create a Spanner free trial instance as well as receive $300 in free credits to use on one or a combination of Google Cloud products. A billing account is required for verification of identity. Don't worry, you are not charged until you choose to upgrade your Spanner free trial instance.
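Before walking through the console flow, here is a quick peek at what using a trial database from application code can look like once your instance and a database exist. This is a minimal, hedged sketch with the Cloud Spanner Python client; the instance ID, database ID, and table are placeholders rather than names created by the free trial.

# Hedged sketch: run a query against a database in the free trial instance.
# Instance ID, database ID, and the Singers table are illustrative placeholders.
from google.cloud import spanner

client = spanner.Client(project="my-project")
instance = client.instance("my-free-trial-instance")
database = instance.database("my-database")

# Strongly consistent read using a read-only snapshot.
with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        "SELECT SingerId, FirstName, LastName FROM Singers LIMIT 5")
    for row in rows:
        print(row)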
Getting started takes just a few steps: you can create a free trial instance using the Google Cloud console or the gcloud CLI. Using the Google Cloud console, click Start a free trial, then simply enter the name and select the region of your free trial instance. A Spanner instance represents the resource in which you can create your databases, while the region represents the geographic location where your databases are stored and replicated. Free trial instances are available in selected regions in the US, Europe, and Asia. If you prefer different regions or a multi-region configuration, create a paid Spanner instance instead.

Once you create your free trial instance, you can start using Spanner by launching a step-by-step tutorial in the console that provides guidance on database creation and schema design. Tutorials also help you understand salient features of Spanner, such as interleaved tables and secondary indexes, that help in realizing the true power of Spanner.

Free trial instances are meant for learning and exploring Spanner. They are limited to a 10 GB storage capacity and a 90-day trial duration, and the Spanner Service Level Agreement (SLA) doesn't apply to free trial instances. Once you familiarize yourself with Spanner using the free trial, you can seamlessly upgrade the free trial instance to a paid instance with SLA guarantees to continue with your proof of concept, or to run your production workload. Furthermore, upgrading your free trial instance gives you the full benefits of Spanner, such as creating multi-region configurations and scaling without limits.

Get started

It has never been easier to try out Spanner. With the free trial instance, you can try out Spanner at no cost for 90 days. You can even prototype an entire application for free on Google Cloud by using the Spanner free trial along with the free tier offered by other Google Cloud products such as Compute Engine and BigQuery.

- Create a 90-day Spanner free trial instance.
- Try Spanner free.
- Take a deep dive into the new trial experience and learn more about how Spanner provides unlimited scale and up to 99.999% availability.

Related Article: Come for the sample app, stay for the main course: Cloud Spanner free trial instances
Cloud Spanner now offers free trial instances with sample data and guided tutorials to try the fully managed relational database.
Source: Google Cloud Platform

Latest database innovations for transforming the customer experience

We're all accustomed to Google magic in our daily lives. When we need information like the time a store closes, the best route to a destination, or help cooking a favorite recipe, we just ask Google. Google is known for providing useful information everywhere, wherever you are. For businesses, though, it isn't so easy to get answers to important questions. In a recent HBR study, only 30% of respondents reported having a well-articulated data strategy, and at most 24% said they thought their organization was data-driven. Most of these executives would probably tell you they're drowning in data, so why are they struggling to be data-driven?

Data's journey begins in operational databases, where data is born as users and systems interact with applications. Operational databases are the backbone of applications and an essential ingredient for building innovative customer experiences. At Google Cloud, we have been focused on building a unified and open platform that provides the easiest and fastest way to innovate with data. Google's data cloud enables data teams to manage each stage of the data lifecycle, from operational transactions in databases to analytical applications across data warehouses, lakes, and data marts, through to rich data-driven experiences. For example, Merpay, a mobile payment service in Japan, wanted to build its experience on a database that met its requirements for availability, scalability, and performance. By using Cloud Spanner, Merpay was able to reduce overhead costs and devote engineering resources to developing new tools and solutions for its customers. If you're looking to bring the same kind of transformation to your organization, let's dive into the latest database announcements and how they can help you do that.

Tap into transformative database capabilities

Google's data cloud allows organizations to build transformative applications quickly with one-of-a-kind databases such as Cloud Spanner, Google Cloud's fully managed relational database with global scale, strong consistency, and industry-leading 99.999% availability. Leading businesses across industries, such as Sabre, Vimeo, Uber, ShareChat, Niantic, and Macy's, use Spanner to power their mission-critical applications and delight customers. Recent innovations in Spanner, such as granular instance sizing and the PostgreSQL interface, lower the barrier to entry and make Spanner accessible for any developer and any application, big or small. Continuing that effort, we are excited to announce Spanner free trial instances, which give developers an easy way to explore Spanner at no cost. The free trial provides a zero-cost, hands-on experience with guided tutorials and a sample database: a Spanner instance with 10 GB of storage for 90 days. Customers can upgrade to a paid instance anytime during the 90-day period to continue exploring Spanner and unlock its full capabilities, such as unlimited scaling and multi-region deployments. To learn more about the new free trial, read this blog and watch this short video, then start your free trial today in the Google Cloud console.

We're also excited to announce the preview of fine-grained access control for Spanner, which lets you authorize access to Spanner data at the table and column level.
Spanner already provides access control with Identity and Access Management (IAM), which offers a simple and consistent access control interface for all Google Cloud services. With fine-grained access control, it's now easier than ever to protect your transactional data in Spanner and ensure appropriate security controls are in place when granting access to data. Learn more about fine-grained access control in this blog.

Generate exponential value from data with a unified data cloud

Google's common infrastructure for data management has many advantages. Thanks to our highly durable distributed file systems, disaggregated compute and storage at every layer, and high-performance Google-owned global networking, we're able to provide best-in-class, tightly integrated operational and analytical data services with superior availability and scale at the best price-performance and operational efficiency. In that spirit, today we're excited to announce Datastream for BigQuery, now in preview. Developed in close partnership with the BigQuery team, Datastream for BigQuery delivers a seamless, easy-to-use experience that enables real-time insights in BigQuery with just a few steps. Datastream efficiently replicates updates directly from source systems into BigQuery tables in real time by using BigQuery's newly developed change data capture (CDC) and Storage Write API UPSERT functionality. You no longer have to spend valuable resources on complex data pipelines, self-managed staging tables, tricky DML merge logic, or manual conversion from database-specific data types into BigQuery data types. Just configure your source database, connection type, and destination in BigQuery, and you're all set.

Klook, a Hong Kong-based travel company, is one industry-leading enterprise using Datastream for BigQuery to help drive better business decisions. "Prior to adopting Datastream, we had a team of data engineers dedicated to the task of ingesting data into BigQuery, and we spent a lot of time and effort making sure that the data was accurate," said Stacy Zhu, senior manager for data at Klook. "With Datastream, our data analysts can have accurate data readily available to them in BigQuery with a simple click. We enjoy Datastream's ease of use, and its performance helps us achieve large-scale ELT data processing."

Achievers, an award-winning employee engagement software and platform, recently adopted Datastream as well. "Achievers had been heavily using Google Cloud VMs (GCE) and Google Kubernetes Engine (GKE)," says Daljeet Saini, lead data architect at Achievers. "With the help of Datastream, Achievers will be streaming data into BigQuery and enabling our analysts and data scientists to start using BigQuery for smart analytics, helping us take the data warehouse to the next level."

To make it easier to stream database changes to BigQuery and other destinations, Datastream is also adding support for PostgreSQL databases as a source, also in preview. Datastream sources now include MySQL, PostgreSQL, AlloyDB for PostgreSQL, and Oracle databases, which can be hosted on premises, on Google Cloud services such as Cloud SQL or Bare Metal Solution for Oracle, or anywhere else on any cloud. Key use cases for Datastream include real-time analytics, database replication via continuous change data capture, and enabling event-driven application architectures.
In real-world terms, real-time insights can help a call center provide better service by measuring call wait times continuously rather than retrospectively at the end of the week or month, and retailers or logistics companies that manage inventory on real-time data can be far less wasteful than those relying on periodic reports.

Accelerate your modernization journey to the cloud

In recent years, application developers and IT decision makers have increasingly adopted open-source databases to ensure application portability and extensibility and to prevent lock-in. They're no longer willing to tolerate the opaque costs or the overpriced, restrictive licensing of legacy database vendors. In particular, PostgreSQL has become the emerging standard for cloud-based enterprise applications. Many organizations are choosing to standardize on PostgreSQL to reduce the learning curve for their teams and avoid lock-in from the previous generation of databases. Google's data cloud offers several PostgreSQL database options, including AlloyDB for PostgreSQL, a PostgreSQL-compatible database that provides a powerful option for migrating, modernizing, or building commercial-grade workloads.

To help developers and data engineers modernize and migrate their applications and databases to the cloud, we're pleased to announce, in preview, that Database Migration Service (DMS) now supports PostgreSQL migrations to AlloyDB. DMS makes the journey from PostgreSQL to AlloyDB easier and faster, with a serverless migration you can set up in just a few clicks. Together with Oracle-to-PostgreSQL data and schema migration support, also in preview, DMS gives you a way to modernize legacy databases and adopt a modern, open technology platform. Learn more about PostgreSQL to AlloyDB migration with DMS in this blog.

Among organizations adopting PostgreSQL is SenseData, a platform created to improve the relationship between companies and customers and a market leader in customer success in Latin America. "At SenseData, we've built our customer success platform on PostgreSQL, and we are looking to increase our platform performance and scale for the next phase of our growth," said Paulo Souza, co-founder and CTO of SenseData. "We have a mixed database workload, requiring both fast transactional performance and powerful analytical processing capabilities, and our initial testing of AlloyDB for PostgreSQL has given impressive results, with more than a 350% performance improvement in our initial workload, without any application changes. We're looking forward to using Database Migration Service for an easy migration of multiple terabytes of data to AlloyDB."

CURA Grupo is one of Brazil's largest medical diagnostics conglomerates, with over 1,600 employees, 500 qualified physicians, and 6 million examinations conducted every year. With more than 30,000 examinations performed every day, keeping the database on premises was becoming increasingly unfeasible. CURA Grupo used Database Migration Service to migrate to Google Cloud, with synchronization between its on-premises database and the cloud, and the result was an easy transition with only about 20 minutes of downtime.

To learn more about these innovations and more, join us at Next '22, happening October 11-13, where you will also hear from customers such as PLAID, Major League Baseball, DaVita, CERC, Credit Karma, Box, and Forbes, who are all innovating faster with Google Cloud databases.
Source: Google Cloud Platform

Introducing fine-grained access control for Cloud Spanner: A new way to protect your data in Spanner

As Google Cloud's fully managed relational database offering unlimited scale, strong consistency, and availability of up to 99.999%, Cloud Spanner powers applications of all sizes in industries such as financial services, gaming, retail, and healthcare. Today, we're excited to announce the preview of fine-grained access control for Spanner, which lets you authorize access to Spanner data at the table and column level. With fine-grained access control, it's now easier than ever to protect your transactional data in Spanner and ensure appropriate controls are in place when granting access to data. In this post, we'll look at Spanner's current access control model, examine the use cases for fine-grained access control, and show how to use this new capability in your Spanner applications.

Spanner's access control model today

Spanner provides access control with Identity and Access Management (IAM). IAM provides a simple and consistent access control interface for all Google Cloud services, and with capabilities such as a built-in audit trail and context-aware access, it makes it easy to grant permissions at the instance and database level to Spanner users. The IAM model has three main parts:

Role. A role is a collection of permissions. In Spanner, these permissions allow you to perform specific actions on Spanner projects, instances, or databases. For example, spanner.instances.create lets you create a new instance, and spanner.databases.select lets you execute a SQL SELECT statement on a database. For convenience, Spanner comes with a set of predefined roles such as roles/spanner.databaseUser, which contains the permissions spanner.databases.read and spanner.databases.write, but you can define your own custom roles, too.

IAM principal. A principal can be a Google Account (for end users), a service account (for applications and compute workloads), a Google group, or a Google Workspace account that can access a resource. Each principal has its own identifier, typically an email address.

Policy. An allow policy is a collection of role bindings that bind one or more principals to individual roles. For example, you can bind roles/spanner.databaseReader to the IAM principal user@abc.xyz.

The need for more robust access controls

There are a number of use cases where you may need to define roles at a level more granular than the database level. Let's look at a few of them.

Ledger applications. Ledgers, which are useful for inventory management, cryptocurrency, and banking applications, let you look at inventory levels and apply updates such as credits or debits to existing balances. In a ledger application, you can look at balances, add inventory, and remove inventory, but you can't go back and adjust last week's inventory level to 500 widgets. This corresponds to having SELECT privileges (to look at balances) and INSERT privileges (to add or remove inventory), but not UPDATE or DELETE privileges.

Analytics users. Analytics users often need SELECT access to a few tables in a Spanner database, but should not have access to all tables in the database, nor should they have INSERT, UPDATE, or DELETE access to anything in the database. This corresponds to having SELECT privileges on a set of tables, but not all tables, in the database.

Service accounts. A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data from Google Cloud.
Each Spanner service account likely needs its own set of privileges on specific tables in the database. For example, consider a ride-sharing application that has service accounts for drivers and passengers. The driver service account likely needs SELECT privileges on specific columns of the passenger's profile table (e.g., the user's first name and profile picture), but should not be allowed to update the passenger's email address or other personal information.

The basics of fine-grained access control in Spanner

If you're familiar with role-based access control in other relational databases, you're already familiar with the important concepts of fine-grained access control in Spanner. Let's review the model:

Database privilege. Spanner now supports four types of privileges: SELECT, INSERT, UPDATE, and DELETE. All four can be granted on tables; SELECT, INSERT, and UPDATE can also be granted on individual columns.

Database role. Database roles are collections of privileges. For example, you can have a role called inventory_admin that has SELECT and INSERT privileges on the Inventory_Transactions table and SELECT, INSERT, UPDATE, and DELETE privileges on the Products table.

Because Spanner relies on IAM for identity and access management, you assign database roles to the appropriate IAM principals by managing conditional role bindings. Let's look at an example. Suppose we want to set up the IAM principal user@abc.xyz with fine-grained access to two tables: Inventory_Transactions and Products. To do this, we'll create a database role called inventory_admin and grant it to user@abc.xyz.

Step 1: Set up the IAM principal as a Cloud Spanner fine-grained access user

Until now, if you wanted to grant database-level access to an IAM principal, you'd grant them either the roles/spanner.databaseUser role or some of the permissions bundled in that role. With fine-grained access control, you can instead grant IAM principals the Cloud Spanner Fine-grained Access User role (roles/spanner.fineGrainedAccessUser). This role allows the user to make API calls to the database, but it does not confer any data access privileges beyond those granted to the public role, which by default has none. To access data, a fine-grained access user must specify the database role they want to act as.

Step 2: Create the database role

To create a role, run the standard SQL CREATE ROLE command:

    CREATE ROLE inventory_admin;

The newly created database role can be referenced in IAM policies via the resource URI projects/<project_name>/instances/<instance_name>/databases/<database_name>/databaseRoles/inventory_admin.
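Taken together, Steps 1 and 2 can also be performed from the gcloud CLI. The sketch below is illustrative rather than copied from this post; example-db and example-instance are placeholder names.

    # Step 1 (sketch): let user@abc.xyz call the database APIs as a
    # fine-grained access user; example-db and example-instance are placeholders.
    gcloud spanner databases add-iam-policy-binding example-db \
        --instance=example-instance \
        --member="user:user@abc.xyz" \
        --role="roles/spanner.fineGrainedAccessUser"

    # Step 2 (sketch): create the database role with a DDL statement.
    gcloud spanner databases ddl update example-db \
        --instance=example-instance \
        --ddl="CREATE ROLE inventory_admin;"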
Later on, we'll show how to configure an IAM policy that gives a specific IAM principal permission to act as this database role.

Step 3: Assign privileges to the database role

Next, assign the appropriate privileges to this role:

    GRANT SELECT, INSERT
    ON TABLE Inventory_Transactions
    TO ROLE inventory_admin;

    GRANT SELECT, INSERT, UPDATE, DELETE
    ON TABLE Products
    TO ROLE inventory_admin;

While you can run these statements individually, we recommend that you issue Cloud Spanner DDL statements in a single batch.

Step 4: Assign the role to an IAM principal

Finally, to allow user@abc.xyz to act as the database role inventory_admin, grant the Cloud Spanner Database Role User role to user@abc.xyz with the database role as a condition. To do this, open the database's IAM info panel and add the following condition using the IAM condition editor (a hedged gcloud equivalent appears at the end of this post):

    resource.type == "spanner.googleapis.com/DatabaseRole" &&
    resource.name.endsWith("/inventory_admin")

You can also add other conditions to further restrict access to this database role, such as scheduling access by time of day or day of week, or setting an expiration date.

Transitioning to fine-grained access control

When you're transitioning to fine-grained access control, you might want to assign both roles/spanner.databaseUser and roles/spanner.fineGrainedAccessUser to an IAM principal. When you're ready to switch that principal over to fine-grained permissions, simply revoke the databaseUser role.

Using the role as an end user

When end users log into Spanner, they can access the database using the role they've been granted, through the Google Cloud console or gcloud commands. The Go, Java, Node.js, and Python client libraries are also supported, with support for more client libraries coming soon.

Learn more

With fine-grained access control, you can set up varying degrees of access to your Spanner databases based on the user, their role, or the organization to which they belong. Fine-grained access control is in preview today and available to all Spanner customers at no additional charge. To get started with Spanner, create an instance, try it out for free with a Spanner Qwiklab, or create a free trial instance. To get started with fine-grained access control, check out About fine-grained access control, or access it directly from Write DDL statements in the Google Cloud console.
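For completeness, here is roughly what the Step 4 binding might look like from the gcloud CLI. This is a sketch, not a command from this post: the database and instance names are placeholders, and the role ID roles/spanner.databaseRoleUser and the --condition syntax are assumptions based on the role name described above.

    # Step 4 (sketch): bind the Database Role User role with a condition so
    # user@abc.xyz can act only as the inventory_admin database role.
    # The role ID and the --condition flag syntax are assumptions.
    gcloud spanner databases add-iam-policy-binding example-db \
        --instance=example-instance \
        --member="user:user@abc.xyz" \
        --role="roles/spanner.databaseRoleUser" \
        --condition='expression=resource.type == "spanner.googleapis.com/DatabaseRole" && resource.name.endsWith("/inventory_admin"),title=inventory-admin-only'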
Source: Google Cloud Platform

Come for the sample app, stay for the main course: Cloud Spanner free trial instances

Cloud Spanner is a fully managed relational database that offers unlimited scale, strong consistency, and industry-leading availability of up to 99.999%. In our ongoing quest to make Spanner more accessible to every developer and workload, we are introducing Spanner free trial instances. Now you can learn and test drive Spanner at no cost for 90 days using a trial instance that comes with 10 GB of storage. At this point you might be thinking: that's all well and good, but what can I actually do with the Spanner free trial instance, and how do I start? We're glad you asked.

To help you get the best value out of the free trial instance, we built a guided experience in the Google Cloud console that walks you through basic tasks with Spanner, such as creating and querying a database. And since databases aren't very useful without any data in them, we provide a sample data set so you can get a feel for how you might deploy Spanner in a common scenario, such as a bank's financial application. Along the way, we also highlight particularly relevant articles and videos so you can learn more about Spanner's full range of capabilities. To get started, create a free trial instance in one of the available regions.

Create an instance

Once you've created your Spanner free trial instance, you'll see a custom guide featuring Spanner's core tasks, one of which you've already completed. Now that you've created an instance, you can choose whether to create your own database or click the "Launch walkthrough" button to follow a step-by-step tutorial that explores some Spanner features and creates your first database.

Create a database with sample data

Once you complete that tutorial, you'll have an empty database ready for data and the sample application. The second tutorial teaches you how to insert the sample data set and query it, so be sure to complete the first one.

Query the database

As you progress through the second tutorial, you'll confirm that the finance application works by querying your data (a hedged gcloud query example appears at the end of this post). You can continue to play around with the sample finance app, try it in the other database dialect (it's available in both Google Standard SQL and PostgreSQL), or clean it up and create new databases of your own. Either way, your Spanner free trial instance is available at no cost for 90 days, and you can create up to 5 databases within it.

Get started

We're incredibly excited to offer the Spanner free trial instance to customers at no cost for 90 days. Organizations across industries such as finance, gaming, healthcare, and retail have built their applications on Spanner to benefit from capabilities such as industry-leading high availability and unlimited scale, and now any developer or organization can try Spanner at no cost. For more detailed instructions, check out our latest video demonstrating this experience. We hope you'll enjoy this glimpse of what Spanner has to offer and get inspired to build with Google Cloud. Get started today, and try Spanner for free.
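If you want to poke at the sample database outside the console, a query can also be run from the gcloud CLI. The sketch below uses placeholder instance, database, and table names rather than the sample app's actual identifiers.

    # Hedged sketch: query the trial database from the command line.
    # finance-db, my-free-trial, and Accounts are placeholders, not the
    # sample application's real names.
    gcloud spanner databases execute-sql finance-db \
        --instance=my-free-trial \
        --sql="SELECT * FROM Accounts LIMIT 10"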
Source: Google Cloud Platform