The new Google Cloud Region in Israel is now open

Today, we are excited to announce that the new Google Cloud region in Israel is open. We’ll be celebrating the launch at an event in Tel Aviv on November 9 — register to join us. Israel is known as the startup nation, and has long been a hub of technology innovation for startups and Google alike. We’re excited to extend that innovation-first approach to other industries, accelerating digital transformation to help create new jobs and digital experiences that better serve users in Israel. According to recent research commissioned by Google, AlphaBeta Economics (part of Access Partnership) estimates that by 2030, the Google Cloud region in Tel Aviv will contribute a cumulative USD 7.6 billion to Israel’s GDP, and support the creation of 21,200 jobs in that year alone.

The Google Cloud region in Tel Aviv (me-west1) joins our network of cloud regions around the world, delivering high-performance, low-latency services to customers of all sizes and across industries. Now that the Israel cloud region is part of the Google Cloud network, it will help local organizations connect with users and customers around the globe, and help fuel innovation and digital transformation across every sector of the economy.

Last year, Google Cloud was selected by the Israeli government to provide cloud services to government ministries. This partnership can enable the government, and private companies operating in regulated industries, to simplify the way in which users are served, create a uniform approach to digital security, and support compliance and residency requirements. Over a number of years we’ve grown our local Googler presence in both Tel Aviv and Haifa to support the growing number of customers and bring a culture of innovation to every sector of the economy. From technology, retail, and media and entertainment, to financial services and the public sector, leading organizations come to Google Cloud as their trusted innovation partner.

“With Google Cloud we are changing the way millions of people read and write, by serving our own Large Language Models on top of the most advanced GPU platform that offers unparalleled performance, availability and elasticity.” – Ori Goshen, CEO, AI21 Labs

“PayBox is supervised by the Bank of Israel and is completely hosted in the cloud. Google Cloud provides us with the tools needed to meet regulatory compliance and security obligations as well as the flexibility and agility to serve the millions of customers that rely on our app every day.” – Dima Levitin, CIO, Paybox

Israel has long been a hub of technology innovation, and we’re excited to support customers like AI21, Paybox, and others with a cloud that helps them:

- Better understand and use data: Google Cloud helps customers make better decisions with a unified data platform. We help customers reduce complexity and combine unstructured and structured data — wherever it resides — to quickly and easily produce valuable insights.
- Establish an open foundation for growth: When customers move to Google Cloud, they get a flexible, secure, and open platform that can evolve with their organization. Our commitment to multicloud, hybrid cloud, and open source offers organizations freedom of choice and allows their developers to build faster.
- Create a collaborative environment: In today’s hybrid work environment, Google Cloud provides the tools needed to help transform how people connect, create, and collaborate.
- Protect systems and users: As every company rethinks its security posture, we help customers protect their data using the same infrastructure and security services that Google uses for its own operations.
- Build a cleaner, more sustainable future: Google has been carbon neutral for our operations since 2007, and we are working to operate entirely on carbon-free energy by 2030. Today, when customers run on Google Cloud — the cleanest cloud in the industry — the energy that powers their workloads is matched with 100% renewable energy.

We’re excited to see what you build with the new Google Cloud region in Israel. Learn more about our global cloud infrastructure, including new and upcoming regions. And don’t miss the Israel launch event.

Related article: Google Cloud announces new region to support growing customer base in Israel. The new Google Cloud region in Israel will bring low latency for users in the area, as well as a full complement of Google Cloud services.
Source: Google Cloud Platform

Introducing lock insights and transaction insights for Spanner: troubleshoot lock contentions with pre-built dashboards

As a developer, DevOps engineer, or database administrator, you typically have to deal with database lock issues. Often, rows locked by queries cause lags and can slow down applications, resulting in poor user experience. Today, we are excited to announce the launch of lock insights and transaction insights for Cloud Spanner, a set of new visualization tools that help developers and database administrators quickly diagnose lock contention issues on Spanner.

If you observe application slowness, a common cause is lock contention, which happens when multiple transactions try to modify the same row. Debugging lock contention is not easy, as it requires identifying the row ranges and columns on which transactions are contending for locks. This process can be tedious and time consuming without a visual interface. Today, we are solving this problem for customers. Lock insights and transaction insights provide pre-built dashboards that make it easy to detect row ranges with the highest lock wait time, find transactions reading or writing on these row ranges, and identify the transactions with the highest latencies causing these lock conflicts.

Earlier this year, we launched query insights for debugging query performance issues. Together with lock insights and transaction insights, these capabilities give developers easy-to-use observability tools to troubleshoot issues and optimize the performance of their Spanner databases. Lock insights and transaction insights are available at no additional cost.

“Lock insights will be very helpful to debug lock contention, which typically takes hours,” said Dominick Anggara, MSc., Staff Software Engineer at Kohl’s. “It allows the user to see the big picture, and make it easy to make correlations, and then narrow down to specific transactions. That’s what makes it powerful. Really looking forward to using this in production.”

Why do lock issues happen?

Most databases take locks on data to prohibit other transactions from concurrently changing the data, in order to preserve data integrity. When you access data with the intent to change it, a lock prohibits other transactions from accessing the data while it is being modified. But when the data is locked, it can negatively impact application performance as other tasks wait to access the data. Cloud Spanner, Google Cloud’s fully managed, horizontally scalable relational database service, offers the strictest concurrency-control guarantees, so that you can focus on the logic of the transaction without worrying about data integrity. To give you this peace of mind, and to ensure consistency across multiple concurrent transactions, Spanner uses a combination of shared locks and exclusive locks at the table cell level (granularity of row-and-column), not at the whole-row level. You can learn more about the different lock modes for Spanner in our documentation.

Follow a visual journey with pre-built dashboards

With lock insights and transaction insights, developers can smoothly move from detection of latency issues, to diagnosis of lock contentions, and ultimately to identification of the transactions that are contending for locks.
Once the transactions causing the lock conflicts are identified, you can then try to identify the issues in each transaction that are contributing to the problem. You can do this by following a simple journey: quickly confirm whether the application slowness is due to lock contention, correlate the row ranges and columns with the highest lock wait time with the transactions taking locks on those row ranges, identify the transactions with the highest latencies, and analyze the transactions that are contending on locks. Let’s walk through an example scenario.

Diagnose application slowness

This journey starts by setting up an alert in Google Cloud Monitoring for latency (api/request_latencies) going above a certain threshold. The alert can be configured so that if the threshold is crossed, you are notified by email with a link to the “Monitoring” dashboard. Once you receive this alert, you click the link in the email and navigate to the “Monitoring” dashboard. If you observe a spike in read/write latency, no observable spike in CPU utilization, and a dip in throughput and/or operations per second, a possible root cause is lock contention. This combination of patterns can be a strong signal that the system is locking because transactions are contending on the same cells, even though the workload remains the same. Below, you can observe a spike between 5:45 PM and 6:00 PM. This could be due to a new application code deployment that introduced a new access pattern.

The next step is to confirm that the application slowness is indeed due to lock contention. This is where lock insights comes in. You can get to this tool by clicking “Lock insights” in the left navigation of the Spanner instance view in the Cloud Console. The first graph you see is Total lock wait time. If you observe a corresponding spike on this graph in the same time window, this confirms that the application’s slowness is due to lock contention.

Correlating row ranges, columns and transactions

Now you can select the database that is seeing the spike in total lock wait time and drill down to see the row ranges with the highest lock wait times. When you click on a row range with the highest lock wait times, a right panel opens. It shows sample lock requests for that row range, including the columns that were read from or written to, the type of lock acquired on each row-column combination (database cell), and links to view the transactions that were contending for these locks. This correlation of row ranges, columns, and transactions makes it seamless to switch between lock insights and transaction insights, as explained in the next section.

In the above screenshot, we can see that at 5:53 PM the first row range in the table (order_item(82,12)) shows the highest lock wait times. You can investigate further by looking at the transactions that were acting on the sample lock columns.
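If you prefer to script this step rather than use the dashboards, similar data is exposed through Spanner’s built-in introspection statistics. The sketch below is illustrative only: the SPANNER_SYS table and column names follow the Spanner introspection documentation as we recall it, and the filtered column name is taken from this example scenario, so verify both against your own environment before relying on them.

-- Hottest row ranges by lock wait time in recent 10-minute intervals
-- (assumes SPANNER_SYS.LOCK_STATS_TOP_10MINUTE is available).
SELECT interval_end,
       row_range_start_key,
       lock_wait_seconds,
       sample_lock_requests
FROM spanner_sys.lock_stats_top_10minute
ORDER BY interval_end DESC, lock_wait_seconds DESC
LIMIT 10;

-- Read-write transactions touching a suspect column, ordered by latency
-- (assumes SPANNER_SYS.TXN_STATS_TOP_10MINUTE; replace the column with one of
-- the sample lock columns surfaced by the query above).
SELECT fprint,
       read_columns,
       write_constructive_columns,
       avg_total_latency_seconds,
       commit_attempt_count,
       commit_abort_count
FROM spanner_sys.txn_stats_top_10minute
WHERE 'item_inventory.count' IN UNNEST(write_constructive_columns)
   OR 'item_inventory.count' IN UNNEST(read_columns)
ORDER BY avg_total_latency_seconds DESC
LIMIT 10;

The second query mirrors the “sample lock columns” filter that the transaction insights page applies for you in the next step.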
Identifying transactions with the highest write latencies causing locks

When you click “View transactions” on the lock insights page, you navigate to the transaction insights page with the top-N transactions table (by latency) filtered on the sample lock columns from the previous page (lock insights), so you view the top-N transactions in the context of the locks (and row ranges) identified earlier in the journey. In this example, we can see that the first transaction, reading from and writing to the columns item_inventory._exists and item_inventory.count, has the highest latencies and could be one of the transactions causing lock contention. We can also see that the second transaction in the table is trying to read from the same column and could be waiting on locks, since its average latency is high. We should drill down and investigate both of these transactions.

Analyzing transactions to fix lock contentions

Once you have identified the transactions causing the locks, you can drill down into these transaction shapes to analyze the root cause of lock contention. You do this by clicking the Fingerprint ID of a specific transaction in the top-N table and navigating to the Transaction Details page, where you can see a list of metrics (latency, CPU utilization, execution count, rows scanned / rows returned) over a time series for that specific transaction. In this example, when we drill down into the second transaction, we notice that it is only attempting to read, not write. By definition, the top-N transactions table (on the previous page) only shows read-write transactions, which take locks. We can also see that the abort count / total attempt count ratio (28/34) is very high, which means that most of the attempts are getting aborted.

Fixing the issue

To fix the problem in this scenario, you can convert this transaction from a read-write transaction to a read-only transaction, which prevents it from taking locks on the cell, thereby reducing lock contention and write latencies. By following this simple visual journey, you can easily detect, diagnose and fix lock contention issues on Spanner. When looking at potential issues in your application, or even when designing your application, consider these best practices to reduce the number of lock conflicts in your database.

Get started with lock insights and transaction insights today

To learn more about lock insights and transaction insights, review the documentation and watch the explainer video. Lock insights and transaction insights are enabled by default. In the Spanner console, you can click “Lock insights” and “Transaction insights” in the left navigation and start visualizing lock issues and transaction performance metrics. New to Spanner? Create a 90-day Spanner free trial instance and try Spanner for free.

Related article: Introducing Query Insights for Cloud Spanner: troubleshoot performance issues with pre-built dashboards. Spanner’s query insights is a new tool that makes it easy to debug query performance issues.
Source: Google Cloud Platform

Using Envoy to create cross-region replicas for Cloud Memorystore

In-memory databases are a critical component that delivers the lowest possible latency for your users, who might be adding items to online shopping carts, getting personalized content recommendations, or checking their latest account balances. Memorystore makes it easy for developers building these types of applications on Google Cloud to leverage the speed and powerful capabilities of the most loved in-memory store: Redis. Memorystore for Redis offers zonal high availability with a 99.9% SLA for its Standard Tier instances. In some cases, users are looking to expand their Memorystore footprint to multiple regions to support disaster recovery scenarios for regional failure, or to provide the lowest possible latency for a multi-region application deployment. We’ll show you how to deploy such an architecture today with the help of the Envoy proxy Redis filter, which we introduced in our previous blog, Scaling to new heights with Cloud Memorystore and Envoy. Envoy makes creating such an architecture both simple and extensible due to its numerous supported configurations. Let’s get started with a hands-on tutorial that demonstrates how you can build a similar solution.

Architecture Overview

Let’s start by discussing an architecture of Google Cloud native services combined with open-source software that enables a multi-region Memorystore architecture. To do this, we’ll use Envoy to mirror traffic to two Memorystore instances, which we’ll create in separate regions. For simplicity, we’ll use Memtier Benchmark, a popular CLI for Redis load generation, as a sample application to simulate end-user traffic. In practice, feel free to use your existing application or write your own. Because of Envoy’s traffic mirroring configuration, the application does not need to be aware of the various backend instances that exist and only needs to connect to the proxy. You’ll find a sample architecture below, and we’ll briefly detail each of the major components. Before we start, you’ll also want to ensure compatibility with your application by reviewing the list of Redis commands that Envoy currently supports.

Prerequisites

To follow along with this walkthrough, you’ll need a Google Cloud project with permissions to do the following:

- Deploy Cloud Memorystore for Redis instances (required permissions)
- Deploy GCE instances with SSH access (required permissions)
- Cloud Monitoring viewer access (required permissions)
- Access to Cloud Shell or another gcloud-authenticated environment

Deploying the multi-region Memorystore backend

You’ll start by deploying a backend Memorystore for Redis cache, which will serve all of your application traffic. You’ll deploy two instances in separate regions so that you can protect your deployment against regional outages. We’ve chosen the regions us-west1 and us-central1, though you are free to choose whichever regions work best for your use case. From an authenticated Cloud Shell environment, this can be done as follows:

$ gcloud redis instances create memorystore-primary --size=1 --region=us-west1 --tier=STANDARD --async
$ gcloud redis instances create memorystore-standby --size=1 --region=us-central1 --tier=STANDARD --async

If you do not already have the Memorystore for Redis API enabled in your project, the command will ask you to enable the API before proceeding. While your Memorystore instances deploy, which typically takes a few minutes, you can move on to the next steps.

Creating the Client and Proxy VMs

Next, you’ll need a VM where you can deploy a Redis client and the Envoy proxy.
To protect against regional failures, we’ll create a GCE instance per region. On each instance, you will deploy the two applications, Envoy and Memtier Benchmark, as containers. This type of deployment is referred to as a “sidecar architecture,” which is a common Envoy deployment model. Deploying in this fashion nearly eliminates any added network latency, as there is no additional physical network hop.

You can start by creating the primary region VM:

$ gcloud compute instances create client-primary --zone=us-west1-a --machine-type=e2-highcpu-8 --image-family cos-stable --image-project cos-cloud

Next, create the secondary region VM:

$ gcloud compute instances create client-standby --zone=us-central1-a --machine-type=e2-highcpu-8 --image-family cos-stable --image-project cos-cloud

Configure and Deploy the Envoy Proxy

Before deploying the proxy, you need to gather the necessary information to properly configure the Memorystore endpoints. Specifically, you need the host IP addresses of the Memorystore instances you have already created. You can gather these like so:

$ gcloud redis instances describe memorystore-primary --region us-west1 --format=json | jq -r ".host"
$ gcloud redis instances describe memorystore-standby --region us-central1 --format=json | jq -r ".host"

Copy these IP addresses somewhere easily accessible, as you’ll use them shortly in your Envoy configuration. You can also find these addresses on the Memorystore console page under the “Primary Endpoint” column. Next, you’ll need to connect to each of your newly created VM instances so that you can deploy the Envoy proxy. You can do this easily via SSH in the Google Cloud Console. After you have successfully connected to the instance, you’ll create the Envoy configuration. Start by creating a new file named envoy.yaml on the instance with your text editor of choice.
Use the following .yaml file, entering the IP addresses of the primary and secondary instances you created:

static_resources:
  listeners:
  - name: primary_redis_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 1999
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: primary_egress_redis
          settings:
            op_timeout: 5s
            enable_hashtagging: true
          prefix_routes:
            catch_all_route:
              cluster: primary_redis_instance
              request_mirror_policy:
                cluster: secondary_redis_instance
                exclude_read_commands: true
  - name: secondary_redis_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 2000
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: secondary_egress_redis
          settings:
            op_timeout: 5s
            enable_hashtagging: true
          prefix_routes:
            catch_all_route:
              cluster: secondary_redis_instance
  clusters:
  - name: primary_redis_instance
    connect_timeout: 3s
    type: STRICT_DNS
    lb_policy: RING_HASH
    dns_lookup_family: V4_ONLY
    load_assignment:
      cluster_name: primary_redis_instance
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: <primary_region_memorystore_ip>
                port_value: 6379
  - name: secondary_redis_instance
    connect_timeout: 3s
    type: STRICT_DNS
    lb_policy: RING_HASH
    load_assignment:
      cluster_name: secondary_redis_instance
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: <secondary_region_memorystore_ip>
                port_value: 6379

admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001

The various configuration interfaces are explained below:

- Admin: This interface is optional; it allows you to view configuration, statistics, and so on. It also allows you to query and modify different aspects of the Envoy proxy.
- Static_resources: This contains items that are configured during startup of the Envoy proxy. Inside this we have defined the clusters and listeners interfaces.
- Clusters: This interface allows you to define clusters, which we are defining per region. Inside a cluster configuration you define all the available hosts and how to distribute load across those hosts. We have defined two clusters, one in the primary region and another in the secondary region. Each cluster can have a different set of hosts and different load balancer policies. Since there is only one host in each cluster, you can use any load balancer policy, as all requests will be forwarded to that single host.
- Listeners: This interface allows you to expose the port on which the client connects and define the behavior for the traffic received. In this case we have defined two listeners, one for each regional Memorystore instance.

Once you’ve added your Memorystore instance IP addresses, save the file locally to your container OS VM where it can be easily referenced. Make sure to repeat these steps for your secondary instance as well. Now, you’ll use Docker to pull the official Envoy proxy image and run it with your own configuration.
On the primary region client machine, run this command:

$ docker run --rm -d -p 8001:8001 -p 6379:1999 -v $(pwd)/envoy.yaml:/envoy.yaml envoyproxy/envoy:v1.21.0 -c /envoy.yaml

On the standby region client machine, run this command:

$ docker run --rm -d -p 8001:8001 -p 6379:2000 -v $(pwd)/envoy.yaml:/envoy.yaml envoyproxy/envoy:v1.21.0 -c /envoy.yaml

For our standby region, we have changed the binding port to port 2000. This ensures that traffic from our standby clients is routed to the standby instance in the event of a regional failure which makes our primary instance unavailable. In this example, we are deploying the Envoy proxy manually, but in practice you would implement a CI/CD pipeline that deploys the Envoy proxy and binds ports depending on your region-based configuration.

Now that Envoy is deployed, you can test it by visiting the admin interface from the container VM:

$ curl -v localhost:8001/stats

If successful, you should see a printout of the various Envoy admin stats in your terminal. Without any traffic yet, these will not be particularly useful, but they allow you to ensure that your container is running and available on the network. If this command does not succeed, we recommend checking that the Envoy container is running. Common issues include syntax errors within your envoy.yaml, which can be found by running your Envoy container interactively and reading the terminal output.

Deploy and Run Memtier Benchmark

After reconnecting to the primary client instance in us-west1 via SSH, you will now deploy the Memtier Benchmark utility, which you’ll use to generate artificial Redis traffic. Since you are using Memtier Benchmark, you do not need to provide your own dataset. The utility will populate the cache for you using a series of set commands.

$ for i in {1..5}; do docker run --network="host" --rm -d redislabs/memtier_benchmark:1.3.0 -s 127.0.0.1 -p 6379 --threads 2 --clients 10 --test-time=300 --key-maximum=100000 --ratio=1:1 --key-prefix="memtier-$RANDOM-"; done

Validate the cache contents

Now that we’ve generated some data from our primary region’s client, let’s ensure that it has been written to both of our regional Memorystore instances. We can do this using the Cloud Monitoring Metrics Explorer. Next, you’ll configure the chart via “MQL”, which can be selected at the top of the explorer pane. For ease, we’ve created a query which you can simply paste into your console to populate the graph:

fetch redis_instance
| metric 'redis.googleapis.com/keyspace/keys'
| filter
    (resource.instance_id =~ '.*memorystore.*') && (metric.role == 'primary')
| group_by 1m, [value_keys_mean: mean(value.keys)]
| every 1m

If you have created your Memorystore instances with a different naming convention, or have other Memorystore instances within the same project, you may need to modify the resource.instance_id filter. Once you’re finished, ensure that your chart is viewing the appropriate time range, and you should see something like the graph below. In this graph, you should see two similar lines showing the same number of keys in both Memorystore instances.
If you want to view metrics for a single instance, you can do this by using the default monitoring graphs, which are available from the Memorystore console after selecting a specific instance.

Simulate Regional Failure

Regional failure is a rare event. We will simulate it by deleting our primary Memorystore instance and primary client VM. Let’s start by deleting our primary Memorystore instance:

$ gcloud redis instances delete memorystore-primary --region=us-west1

And then our client VM:

$ gcloud compute instances delete client-primary

Next, we’ll need to generate traffic from our secondary region client VM, which we are using as our standby application. For the sake of this example, we’ll manually perform a failover and generate traffic to save time. In practice, you’ll want to devise a failover strategy that automatically diverts traffic to the standby region when the primary region becomes unavailable; typically, this is done with the help of services like Cloud Load Balancing. Once more, SSH into the secondary region client VM from the console and run the Memtier Benchmark application as described in the previous section. You can validate that reads and writes are properly routing to the standby instance by viewing the console’s monitoring graphs once more.

Once the original primary Memorystore instance is available again, it will become the new standby instance based on our Envoy configuration. It will also be out of sync with the new primary instance, as it has missed writes during its unavailability. We do not intend to cover a detailed solution in this post, but we find that most users opt to rely on the TTLs they have set on their keys to determine when their caches will eventually be back in sync.

Clean Up

If you have followed along, you’ll want to spend a few minutes cleaning up resources to avoid accruing unwanted charges. You’ll need to delete the following:

- Any deployed Memorystore instances
- Any deployed GCE instances

Memorystore instances can be deleted like this:

$ gcloud redis instances delete <instance-name> --region=<region>

The GCE container OS instances can be deleted like this:

$ gcloud compute instances delete <instance-name>

If you created additional instances, you can simply chain them in a single command separated by spaces.

Conclusion

While Cloud Memorystore Standard Tier provides high availability, some use cases require an even higher availability guarantee. Envoy and its Redis filter make creating a multi-regional deployment simple and extensible. The outline provided above is a great place to get started, and these instructions can easily be extended to support automated region failover or even dual-region active-active deployments. As always, you can learn more about Cloud Memorystore through our documentation or request desired features via our public issue tracker.

Related article: Scaling to new heights with Cloud Memorystore and Envoy. Learn how to scale your Google Cloud Memorystore for Redis database for high-volume use cases in just a few minutes with the help of Envoy.
Source: Google Cloud Platform

Announcing open innovations for a new era of systems design

We’re at a pivotal moment in systems design. Demand for computing is growing at insatiable rates. At the same time, the slowing of Moore’s law means that improvements to CPU performance, power consumption, and memory and storage cost efficiencies have all plateaued. These headwinds are further exacerbated by new challenges in reliability and security. At Google, we’ve responded to these challenges and opportunities with system design innovations across the stack: from new custom-silicon accelerators (e.g., TPU, VCU, and IPU) and new hardware and data center infrastructure, all the way to new distributed systems and cloud solutions. But this is only the beginning. There are many more opportunities for advancement, including closely coupled accelerators for core data center functions to minimize the so-called “data center tax.” As server and data center infrastructure diverges from decades-old traditional designs to become more modular, heterogeneous, disaggregated, and software-defined, distributed systems are also entering a new epoch — one defined by optimizations for the “killer microsecond” and novel programming models optimized for low latency and accelerators. At Google, we believe that these new opportunities and challenges are best addressed together, across the industry.

Today, at the Open Compute Project (OCP) Global Summit, we are demonstrating our support of open hardware ecosystems, presenting at more than 40 talks, and announcing several key contributions:

- Server design: We will share Google’s vision for a “multi-brained” server of the future, transforming traditional server designs into more modular, disaggregated distributed systems across host computing, accelerators, memory expansion trays, infrastructure processing units, and more. We are sharing the work we are doing with all our OCP partners on the varied innovations needed to make this a reality — from modular hardware with DC-MHS, standardized management with OpenBMC and Redfish, a standardized root of trust, and standardized interfaces including CXL, NVMe and beyond.
- Trusted computing: The root of trust is an essential part of future systems. Google has a tradition of making contributions for transparent and best-in-class security, including our OpenTitan discrete security solutions on consumer devices. We are looking ahead to future innovations in confidential computing and varied use cases that require chip-level attestation at the level of a package or System on a Chip (SoC). Together with other industry leaders, AMD, Microsoft, and NVIDIA, we are contributing Caliptra, a reusable IP block for root of trust measurement, to OCP. In the coming months we will roll out initial code for the community to collectively harden together.
- Reliable computing: To address the challenges of reliability at scale, we’ve formed a new server-component resilience workstream at OCP, along with AMD, ARM, Intel, Meta, Microsoft, and NVIDIA. Through this workstream, we’ll develop consistent metrics about silent data errors and corruptions for the broader industry to track. We’ll also contribute test execution frameworks and suites, and provide access to test environments with faulty devices. This will enable the broader community — across industry and academia — to take a systems approach to addressing silicon faults and silent data errors.
- Sustainability: Finally, we’re announcing our support for a new initiative within OCP to make environmental sustainability a key tenet across the ecosystem.
Google has been a leader in environmental sustainability for many years. We have been carbon neutral since 2007, powered by 100% renewable energy since 2017, and have an ambitious goal to achieve net-zero emissions across all of our operations and value chain by 2030. In turn, as the cleanest cloud in the industry, we have helped customers track and reduce their carbon footprint and achieve significant energy savings. We’re excited to share these best practices with OCP and work with the broader community to standardize sustainability measurement and optimization in this important area.

As the industry body focused on system integration (e.g., compute, memory, storage, management, power and cooling), the OCP Foundation is uniquely positioned to facilitate the industry-wide co-design we need. Google is active in OCP, serving in leadership roles, incubating new initiatives, and supporting numerous contributions. These announcements are the latest example of our history of fostering open and standards-based ecosystems. Open ecosystems enable a diverse product marketplace, with agility in time-to-market and the opportunity to be strategic about innovation. Google’s open source leadership is multidimensional: driving industry standardization and adoption, making strong and varied community contributions to grow the ecosystem, and providing broad policy and organizational leadership along with sharing of best practices.

The four initiatives we are announcing today, in combination with the Google-led talks at the OCP Summit, provide a small glimpse into the exciting new era of systems ahead. We look forward to working with the broader OCP community and other industry organizations to build a vibrant open hardware ecosystem that supports even more innovation in this space. Please join us in this exciting journey.

Related article: Jupiter evolving: Reflecting on Google’s data center network transformation. Thanks to optical circuit switching (OCS) and wave division multiplexing (WDM) in the Jupiter data center network, Google enjoys a host of…
Source: Google Cloud Platform

Unifying data and AI to bring unstructured data analytics to BigQuery

Over one third of organizations believe that data analytics and machine learning have the most potential to significantly alter the way they run business over the next 3 to 5 years. However, only 26% of organizations are data driven. One of the biggest reasons for this gap is that a major portion of the data generated today is unstructured, which includes images, documents, and videos. It is estimated to make up roughly 80% of all data, and it has so far remained untapped by organizations.

One of the goals of Google’s data cloud is to help customers realize value from data of all types and formats. Earlier this year, we announced BigLake, which unifies data lakes and warehouses under a single management framework, enabling you to analyze, search, secure, govern and share unstructured data using BigQuery. At Next ’22, we announced the preview of object tables, a new table type in BigQuery that provides a structured record interface for unstructured data stored in Google Cloud Storage. This enables you to directly run analytics and machine learning on images, audio, documents and other file types using existing frameworks like SQL and remote functions, natively in BigQuery itself. Object tables also extend our best practices for securing, sharing and governing structured data to unstructured data, without needing to learn or deploy new tools.

Directly process unstructured data using BigQuery ML

Object tables contain metadata such as URI (Uniform Resource Identifier), content type, and size that can be queried just like other BigQuery tables. You can then derive inferences from unstructured data using machine learning models with BigQuery ML. As part of the preview, you can import open source TensorFlow Hub image models, or your own custom models, to annotate images. Very soon, we plan to enable this for audio, video, text and many other formats, along with pre-trained models to enable out-of-the-box analysis. Check out this video to learn more and watch a demo.

# Create an object table
CREATE EXTERNAL TABLE my_dataset.object_table
WITH CONNECTION us.my_connection
OPTIONS(uris=["gs://mybucket/images/*.jpg"],
        object_metadata="SIMPLE", metadata_cache_mode="AUTOMATIC");

# Generate inferences with BQML
SELECT * FROM ML.PREDICT(
  MODEL my_dataset.vision_model,
  (SELECT ML.DECODE_IMAGE(data) AS img FROM my_dataset.object_table)
);

By analyzing unstructured data natively in BigQuery, businesses can:

- Eliminate manual effort, as pre-processing steps such as tuning image sizes to model requirements are automated
- Leverage the simple and familiar SQL interface to quickly gain insights
- Save costs by utilizing existing BigQuery slots without needing to provision new forms of compute

Adswerve is a leading Google Marketing, Analytics and Cloud partner on a mission to humanize data. Twiddy & Co., Adswerve’s client, is a vacation rental company in North Carolina. By combining structured and unstructured data, Twiddy and Adswerve used BigQuery ML to analyze images of rental listings and predict click-through rates, enabling data-driven photo editorial decisions.

“Twiddy now has the capability to use advanced image analysis to stay competitive in an ever-changing landscape of vacation rental providers – and can do this using their in-house SQL skills,” said Pat Grady, Technology Evangelist, Adswerve.
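Because the metadata fields of an object table behave like ordinary columns, you can profile what is in a bucket before running any model. Here is a minimal, illustrative sketch that reuses the hypothetical my_dataset.object_table from the example above; the column names (uri, content_type, size, updated) follow the object table metadata described earlier, so verify them against your environment:

-- Recently updated JPEGs, largest first (assumes the metadata columns named above)
SELECT uri, content_type, size, updated
FROM my_dataset.object_table
WHERE content_type = 'image/jpeg'
  AND updated > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY size DESC
LIMIT 10;

A query like this is a cheap way to sanity-check the scope of an object table (file counts, sizes, stale objects) before spending slots on ML.PREDICT over the whole set.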
Process unstructured data using remote functions

Customers today use remote functions (UDFs) to process structured data with languages and libraries that are not supported natively in BigQuery. We are extending this capability to unstructured data using object tables. Object tables provide signed URLs that allow remote UDFs running on Cloud Functions or Cloud Run to process the object table content. This is particularly useful for running Google’s pre-trained AI models, including Vision AI, Speech-to-Text, and Document AI, for open source libraries such as Apache Tika, or for deploying your own custom models where performance SLAs are important. Here’s an example of an object table over PDF files being parsed with an open source library running as a remote UDF:

SELECT uri, extract_title(samples.parse_tika(signed_url)) AS title
FROM EXTERNAL_OBJECT_TRANSFORM(TABLE pdf_files_object_table,
                               ["SIGNED_URL"]);
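For context, a remote function such as parse_tika is just a SQL function definition that points at an HTTP endpoint you operate. The sketch below shows the general shape of that wiring; the function name, connection, and Cloud Functions URL here are placeholders for illustration, not the exact setup behind the example above:

-- Hypothetical remote function backed by a Cloud Function that runs Apache Tika
CREATE OR REPLACE FUNCTION samples.parse_tika(signed_url STRING) RETURNS STRING
REMOTE WITH CONNECTION `us.my_connection`
OPTIONS (
  endpoint = 'https://us-central1-my-project.cloudfunctions.net/parse-tika'
);

At query time, BigQuery sends batches of rows (the signed URLs) to the endpoint, your function fetches each object from Cloud Storage and runs the parser, and the results come back as ordinary SQL values.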
Extending more BigQuery capabilities to unstructured data

Business intelligence – The results of analyzing unstructured data, either directly in BigQuery ML or via UDFs, can be combined with your structured data to build unified reports using Looker Studio (at no charge), Looker, or any of your preferred BI solutions. This allows you to gain more comprehensive business insights. For example, online retailers can analyze product return rates by correlating them with images of defective products. Similarly, digital advertisers can correlate ad performance with various attributes of ad creatives to make more informed decisions.

BigQuery search index – Customers are increasingly using the search functionality of BigQuery to power search use cases. These capabilities now extend to unstructured data analytics as well. Whether you use BigQuery ML to produce inferences on images or use remote UDFs with Document AI to extract text from documents, the results can now be search indexed and used to support search access patterns. Here’s an example of a search index on data that is parsed from PDF files:

CREATE SEARCH INDEX my_index ON pdf_text_extract(ALL COLUMNS);

SELECT * FROM pdf_text_extract WHERE SEARCH(pdf_text, "Google");

Security and governance – We are extending BigQuery’s row-level security capabilities to help you secure objects in Google Cloud Storage. By securing specific rows in an object table, you can restrict the ability of end users to retrieve the signed URLs of the corresponding URIs present in the table. This is a shared-responsibility security model: administrators need to ensure that end users don’t have direct access to Google Cloud Storage and that signed URLs from object tables are the only access mechanism. Here’s an example of a policy for PII images that must first be processed through a blur pipeline:

CREATE ROW ACCESS POLICY pii_data ON object_table_images
GRANT TO ("group:admin@example.com")
FILTER USING (ARRAY_LENGTH(metadata)=1 AND
              metadata[OFFSET(0)].name="face_detected")

Soon, Dataplex will support object tables, allowing you to automatically create object tables in BigQuery and manage and govern unstructured data at scale.

Data sharing – You can now use Analytics Hub to share unstructured data with partners, customers and suppliers without compromising on security and governance. Subscribers can consume the rows of object tables that are shared with them, and use signed URLs for the unstructured data objects.

Getting Started

Submit this form to try these new capabilities that unlock the power of your unstructured data in BigQuery, and watch this demo to learn more. Special thanks to engineering leaders Amir Hormati, Justin Levandoski and Yuri Volobuev for contributing to this post.

Related article: Built with BigQuery: BigQuery ML enables Faraday to make predictions for any US consumer brand. How building with BigQuery ML enables Faraday to make predictions for any US consumer brand.
Source: Google Cloud Platform

EVO2CLOUD – Vodafone’s SAP migration from on-prem to Google Cloud

Editor’s note: Vodafone is migrating its SAP system, the backbone for its financial, procurement and HR services, to Google Cloud. Vodafone’s SAP system has been running on-prem for 15 years, during which time it has significantly grown in size, making this one of the largest and one of the most complex SAP migrations in EMEA. By integrating its cloud-hosted SAP system with its data ocean running on Google Cloud, Vodafone aims to introduce operational efficiency and drive innovation.

Vodafone: from telco to tech-co

Vodafone, a leading telecommunications company in Europe and Africa, is accelerating its digital transformation from a telco to a tech-co that provides connectivity and digital services such as 5G services, IoT, TV and hosting platforms. Vodafone is partnering with Google Cloud to enable various elements of this transformation — from building one of the industry’s largest data oceans on Google Cloud to driving value from data insights and deploying AI/ML models. One of Vodafone’s core initiatives is ‘EVO2CLOUD’, a strategic program to migrate its SAP workloads to Google Cloud. Vodafone uses SAP for its financial, procurement and HR services; it’s the backbone of its internal and external operations. High availability and reliability are fundamental requirements to ensure smooth operation with minimal downtime. Moreover, hosting SAP on Google Cloud is a foundation for digital innovation and maintaining cybersecurity.

EVO2CLOUD: enabling SAP on Google Cloud

When complete, EVO2CLOUD will have been one of the largest SAP to Google Cloud migrations. Over the course of two to three years, EVO2CLOUD will enable the transformation of a broad SAP ecosystem composed of more than 100 applications that have been running on-prem for the past 15 years, to a leaner, more agile and scalable deployment that is cloud-first and data-led. With EVO2CLOUD, Vodafone aims to improve operational efficiency, increase its NPS score and maximize business value by incorporating SAP into its cloud and data ecosystem, introducing data analytics capabilities to the organization and enabling future innovations. As such, EVO2CLOUD is providing standardized SAP solutions and facilitating the transition to a data-centric model that leverages real-time, reliable data to drive data-based corporate decision making.

SAP’s operating model on Google Cloud

Vodafone foresees a step change in its operating model, where it can leverage an on-demand, highly performant, and memory-optimized M1 and M2 infrastructure at a low cost. Thanks to infrastructure as code, this improved operating model will provide increased capacity, high availability, flexibility and consistent enforcement of security rules. Vodafone is also reshaping its security architecture and leveraging the latest technologies to ensure privacy, data protection, and resilient threat detection mechanisms. Furthermore, it expects to increase its release-cycle frequency from bi-annual rollouts to weekly release cycles, increasing agility and introducing features faster. In short, Vodafone wants to build agility and flexibility in all that it does — from design all the way to delivery and operations, and DevSecOps will need to be an integral part of its operating model.

Leveraging data to drive innovation

Before migrating to Google Cloud, it was difficult for Vodafone to extract and make use of its SAP data. Now with the transition to the cloud and with Google Cloud tools, it can expand how it uses its data for analytics and process mining.
This includes operations and monitoring opportunities to map data with other external sources, e.g., combining HR data from SAP with other non-SAP data, resulting in data enrichment and additional business value. Vodafone is continuing to explore opportunities with Google Cloud to identify even more ways to leverage its data.

Why Google Cloud and what’s next

Vodafone is not only building its system on Google Cloud, but rather sees this project as the first step in a three-phase transformation:

1. Redesigning the SAP environment and migrating to Google Cloud to make it ready for integration with Vodafone’s data ocean.
2. Integrating SAP with Vodafone’s data ocean, which sits on Google BigQuery.
3. Leveraging cloud-based data analytics tools to optimize data usage, processes, and how Vodafone operates its business.

Moving to Google Cloud is in line with Vodafone’s data-centric strategy, which aims to introduce enhanced features in data analytics and artificial intelligence and to serve Vodafone’s employees and customers more effectively in real time.

Transformation and change management

The migration to Google Cloud is underway, with Vodafone, Google Cloud, SAP and Accenture working together as one team to make this transformation a success.

“An innovative and strategic initiative, co-shaped with a truly integrated partnership. A daily collaboration among four parties, Vodafone, Google, SAP and Accenture are executing the cloud transformation of a complex SAP estate within a compressed timeframe, for rapid benefits realization and accelerated innovations in the cloud.” – Antonio Leomanni, EVO2CLOUD program lead, Accenture

Vodafone recently celebrated the pilot’s go-live, an important milestone in this program. Change management has been fundamental to this transformation, incorporating learning and enablement, financial governance, lifecycle management, security, architecture reviews and innovation. By focusing on these disciplines, Vodafone and Google Cloud are ensuring the success of this transformation and strengthening their partnership.

Conclusion

The SAP migration aligns with Vodafone’s data strategy by enabling a step change towards operational efficiency and innovation through the integration of SAP with Vodafone’s data ocean. The keys to the success of this ongoing migration are:

- Clear migration requirements and objectives: infrastructure availability, security and resilience
- Strong change management
- Application of the right technologies and tools

To learn more about how Google Cloud is advancing the telecommunications industry, visit us here.

Related article: Delivering data-driven IT and networks, Google Cloud expands its analytics partner ecosystem for telecommunications. Communication service providers are becoming data driven and leveraging Google Cloud and their partners to solve tough problems.
Source: Google Cloud Platform

Fortress Vault on Google Cloud: Bringing private data to NFTs

Over the past two years, the general population has become more acquainted with cryptocurrencies and the first iterations of NFTs, which were among the earliest use cases for blockchain technology. This public awareness and participation has led to a growing interest in, and demand for, Web3 technology at the enterprise level. But building trust in a new wave of technology, especially in large organizations, doesn’t happen overnight. That is why it’s critical for Web3 technologists to bring the broader benefits, use cases, and core capabilities of blockchain to the forefront of the conversation. If businesses don’t understand how this new technology can help them, how can they prioritize it among competing tech plans and resources? And without baseline protocols that account for privacy, confidential data, and IP, how can they future-proof a business?

Answering these questions and delivering trustworthy infrastructure is exactly why Scott Purcell and I founded Fortress Web3 Technologies — to bring about the next wave of Web3 utility. The company’s goal is to provide infrastructure that eliminates barriers to Web3 adoption with RESTful APIs and widgetized services that enable businesses to quickly launch and scale their Web3 initiatives. Our tools include embeddable wallets for NFTs and fungible rewards tokens; NFT minting engines; and core financial services. These include payments, compliance, and crypto liquidity via our wholly owned financial institution, Fortress Trust. Being overseen by a chartered, regulated entity ensures privacy, compliance and business continuity.

Fortress chose Google Cloud to help usher in this new-wave technology because no other cloud provider is better suited to helping regulated industries get up to scale on our Web3 infrastructure and blockchain technology. I’ll get into more specifics below, but at the highest level: IPFS (the current standard for distributed storage) is going to face major resistance when it comes to industries that are heavily regulated or deal in ownership rights. By leveraging Google Cloud, which has critical certifications such as HIPAA, Department of Defense, ISO, and Motion Picture, we’re striking the appropriate balance between decentralization and centralization, using the best of both technologies. The Fortress Vault on Google Cloud is a huge and necessary step forward as the first ever NFT-database solution to protect intellectual property, confidential documents, and other electronic records. It represents the first technology that marries privately stored content with the accessibility, privacy, portability, and provenance that blockchain provides.

Understanding Non-Fungible Tokens (NFTs)

An NFT is not an expensive jpeg. From a technical point of view, an NFT is a unique key stored in a distributed and trustless ledger we call a blockchain. This blockchain token is uniquely identifiable from any other token and acts as a digital key to authenticate ownership and unlock data held in a database. While different blockchains have adopted different standards, Ethereum standards are a good proxy for the overall concepts. Going back to the primitives, if you read the EIP-721 proposal, metadata is explicitly optional. While today’s NFT hype has indeed leveraged that technology to monetize and distribute digital art, the potential of blockchain is in the ability to digitally represent ownership of a wide variety of different asset classes on a decentralized ledger. Unique, non-fungible tokens are not a new concept.
We use them every day in technical systems for things like authentication, database keys, idempotency, and much more. Now, thanks to blockchain technology, you can take those out of their walled gardens and into an open platform that can lead to transformational utility and applications.

Take real estate, for example. Instead of a paper-based title documenting you as the owner of your home, imagine that the title is tokenized with an NFT on a blockchain. Any platform could cryptographically verify the authenticity of that form of title along with its provenance in real time and confirm that you’re the rightful owner of that property. But perhaps you don’t want the title of your property visible to others, nor the associated permits, tax documents, architectural drawings, contractor lists, and other documents. Maybe you just want banks, insurance companies, and others to be able to confirm that you are indeed the owner without revealing the details of those records. The NFT metadata records immutable, public-facing provenance, while the underlying data remains private and protected using Fortress Vault on Google Cloud. Apply that same utility to other sensitive information such as medical records, intellectual property, estate documents, corporate contracts, and other confidential information, and it’s easy to see why enterprises are just now exploring how to hold traditional assets as NFTs.

Fortress Vault: Intellectual Property, Confidential Documents, and Other Electronic Records

What NFTs and Web3 have been lacking is the ability to make the tokenized data accessible exclusively by the owner — and only the owner. NFTs are a digital key to unlock everything ranging from music and event tickets, to real estate deeds and healthcare records, to estate documents, and to everything in the world that’s digital. This is why we created the Fortress Vault. When building it, we had to make a fundamental decision: either go with a distributed and permissionless storage protocol like IPFS, Filecoin, or other blockchain-based database offerings, or work with an industry-leading cloud platform that understands data integrity and is establishing itself as the leader in the space. Ultimately, we chose Google Cloud for its industry-leading object storage, professional management, fault tolerance, and myriad certifications for architecture and data integrity.

Some of the challenges faced when vaulting a vast variety and quantity of digital content at scale include:

- Balancing data availability versus cost of storage
- Data redundancy
- Long-term archival needs
- Business continuity
- Flexibility to meet current and future needs of the rapidly evolving Web3 industry

Google Cloud is the clear leader across all of these pain points. The object lifecycle management of Google Cloud Storage enables efficient transition between storage classes when either the data matures to a certain point or it’s updated with newer files. Content in the Fortress Vault can range from on-demand data to long-term uses, such as estate planning documents that won’t be accessed for 30 years. When storing NFT data, robust disaster recovery is table stakes. We quickly gravitated to the automatic redundancy options and multi-region storage buckets that let us customize where we store our data without massive DevOps and management overhead. By leveraging Google Cloud, we can offer industry-leading retention, redundancy, and integrity for our customers’ NFT content. Working with a leader in data storage was key to making this a reality.
Additionally, Google Cloud shares our vision of bringing every industry forward into the world of Web3. We are both focused on building the critical infrastructure that allows everyone from Web3-native companies to Fortune 500 brands to navigate the strategic shift to blockchain technology.

Why Web3 Matters

“Web3” is shorthand for the “third wave” of the internet and the technological innovation that brought us here. Web 1 — the earliest internet — democratized reading and access to information, opening the doors to mass communication. Web 2 expanded on that with the ability to read and “write.” It democratized publishing by letting people directly engage in producing information through blogs, social media, gaming, and contributions to collective knowledge. Web 3 expands our technological capabilities even more with the ability to read, write, and “own.” With blockchain, we can now establish clear provenance with visibility into the origination of ownership of any tokenized asset, and we can see the chain of ownership. We can rely on this next-generation technology to track, authenticate, protect, and keep a ledger of our assets.

With the Fortress Vault on Google Cloud, we have the capability to ensure the integrity of non-public data while making it accessible via NFTs. This is a game changer for Web3 adoption, particularly in industries like music, event ticketing, gaming, finance, transportation, real estate, and healthcare. Every industry can benefit from the ability to tokenize assets on blockchain technology without leaving the trusted safety of Google Cloud data storage. The market for NFTs is everyone. And the Fortress Vault on Google Cloud is the technology evolution that makes it possible for Web3 innovators to confidently build, launch, and scale their initiatives across every industry imaginable.

Related article: What’s new in Google Cloud databases: More unified. More open. More intelligent. Google Cloud databases deliver an integrated experience, support legacy migrations, leverage AI and ML, and provide developers world-class…
Source: Google Cloud Platform

Reliable peering to access Google Cloud

Peering is often seen as a complex and nuanced topic, particularly for some of our Cloud customers. Today we'd like to demystify peering's inner workings and share how a peering policy update that requires local redundancy helps improve reliability for our users and customers. Redundancy is a well-understood and well-documented way to improve reliability. We have talked previously about how our significant investments in infrastructure and peering enable our content to reach users, and about how we are making our peering more secure.

Google Cloud on the internet

Every day, Google Cloud customers collaborate with colleagues using Workspace, leverage Google Cloud CDN to serve content to users worldwide, or deploy a Global Cloud Load Balancer to take advantage of our anycast IPs. Each use case has one thing in common: these and many other Google products rely on peering to connect Google's global network to ISPs worldwide so that traffic reaches its destination, users like you and me.

Peering delivers internet traffic

Peering is the physical fiber interconnection between networks, such as Google and your Internet Service Provider (ISP), or between Google and cloud customers, which occurs at various facilities all around the world. Its purpose is to exchange public internet traffic between networks to optimize for cost and performance. Google has built out our network to over 100 facilities worldwide to peer with networks both large and small. This is how Google provides a great experience for all of our users, reduces costs for ISPs, and offers one of several ways our cloud customers can connect to the Google network. One other common way enterprises connect to Google Cloud, often confused with peering, is Dedicated Interconnect, which offers private connectivity between your on-premises environment and Google Cloud.

Think of peering like part of a city water system, where the pipes are the fiber optic cables and the water is the bits of data coming to your phone, computer, or data center. Just as your city's water system needs to interconnect with your house's plumbing, Google's global network needs to interconnect with your neighborhood ISP to deliver all types of Google traffic. The water flowing out of your sink faucet is analogous to being able to use Google services on your home Wi-Fi.

Peering infrastructure

Thousands of networks, including Google, peer with each other all over the world every day. Networks that peer mutually agree on the locations and capacity needed to address traffic demand, cost, and performance. Since there are so many networks worldwide, it is not practical for every network to peer with every other one, so most networks retain some type of IP transit that allows their users to reach the entirety of the internet. Essentially, IP transit is a paid service that lets a network 'transit' another, well-connected network to reach the entire internet. Transit also acts as a failover path when a peering connection is unavailable, and plays an important role in ensuring the universal reachability of every endpoint on the internet. One potential downside of transit is that traffic may traverse an indirect and costly path to reach an end user, which can decrease performance compared to peering. Google's preference is to deliver all traffic over the most optimal peering paths to maximize performance; the sketch below illustrates this preference ordering.
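As a purely illustrative aid, the short Python sketch below models the preference ordering just described. It is not Google's actual routing logic; real internet routing is driven by BGP policy, and the path names, latencies, and ranking here are hypothetical assumptions used only to show why a locally redundant peer keeps traffic close when the primary link goes down.

```python
# Illustrative sketch of peering path preference and failover.
# Paths, latencies, and the ranking are hypothetical; real routing uses BGP.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    kind: str          # "peering" or "transit"
    same_metro: bool   # is the interconnect in the same metropolitan area?
    latency_ms: float
    up: bool = True

def best_path(paths):
    """Prefer available local peering, then remote peering, then IP transit."""
    candidates = [p for p in paths if p.up]
    if not candidates:
        return None
    def rank(p):
        # Lower tuple sorts first: local peering < remote peering < transit.
        return (p.kind == "transit", not p.same_metro, p.latency_ms)
    return min(candidates, key=rank)

paths = [
    Path("primary peering (same metro)", "peering", True, 2.0),
    Path("redundant peering (same metro)", "peering", True, 2.5),
    Path("peering in another city", "peering", False, 18.0),
    Path("IP transit", "transit", False, 30.0),
]

paths[0].up = False  # take the primary link down for maintenance
print(best_path(paths).name)  # prints: redundant peering (same metro)
```

Without the redundant same-metro link, the same outage would push traffic onto the remote peering or transit paths, with the correspondingly higher latency, loss, and jitter discussed below.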
When peering goes down

With any type of physical infrastructure, components can malfunction or need to be taken out of service for maintenance. The same is true for the infrastructure that supports peering. Downtime can sometimes last days or weeks, depending on the cause and the time to repair. During downtime, internet traffic to and from Google gets rerouted to failover paths. Sometimes these paths go through another peering location in the same city; sometimes traffic is rerouted hundreds or thousands of miles away to peering in a different city or even country; and in some cases it falls back to an IP transit connection if no other peering connection is available. Much of this depends on how and where a network is peered with Google. The further the traffic is physically rerouted from the intended peering connection, and the more IP transit connections are in the path, the higher the likelihood of increased latency, packet loss, or jitter, all of which can translate into a frustrating or poor user experience.

A deep and diverse peering footprint

Over many years we have built our peering with ISPs and cloud customers to be both physically redundant and locationally diverse to ensure an optimal user experience for all Google services. This translates into a deep and diverse peering interconnection footprint with networks and customers around the world. Because Google Cloud services like Premium Tier networking, Cloud VPN, and Workspace use peering to reach their end users, this type of planning helps avoid the user experience issues mentioned above.

A more stable and predictable peering interconnect

To help achieve our goal of a reliable experience for all Google users, we have recently updated our peering policy to require physical redundancy on all Google private peering connections within the same metropolitan area. This update will allow Google and ISPs to continue to exchange traffic locally during peering infrastructure outages and maintenance under most circumstances. For our customers and users this means more predictable traffic flows, consistent and stable latency, and a higher effective availability of peering, which together provide a more predictable experience with Google services, while still offering cost savings to ISPs. A multitude of factors can influence the performance of an application on the internet; however, this change is designed so that outages and maintenance on our peering infrastructure are less noticeable and less impactful. You can read more details about the change on our peering page.

Fig A – Two examples of metropolitan area peering redundancy. A redundant peering link (green) in the same metropolitan area helps keep traffic local during peering infrastructure maintenance or outages.

Working with our peering partners and customers

We are working closely with our existing and new Google Cloud customers and ISP peers to ensure we build out locally redundant peering interconnects. We also know that many networks face challenges building this configuration, so we are identifying ways to work with them. We encourage Google Cloud customers and any interested ISPs to contact us to review their redundancy topology with Google, and to review our best peering practices. To learn more about peering and to request peering with Google, please visit our Peering website.

Related Article: 20+ Cloud Networking innovations unveiled at Google Cloud Next
Updates to the Google Cloud Networking portfolio center on content delivery, migrations, security, and observability, to name a few.
Read Article
Source: Google Cloud Platform

Manage storage costs by automatically deleting expired data using Firestore Time-To-Live (TTL)

We are thrilled to announce that we have added time-to-live (TTL) support for both Firestore Native and Datastore mode! Use Firestore TTL policies to remove out-of-date data from the database. You can think of it as a managed deletion mechanism built into Firestore. Once documents or entities are considered expired, they become eligible for deletion. As with direct delete operations, a TTL deletion also notifies external services (for example, Cloud Functions triggers) when the deletion event occurs.

Common use cases

Garbage collection. TTL can be handy if your documents have a well-defined lifecycle.
Support time-relevant features natively. You can rely on TTLs to build features that depend on ephemeral data.
Security and privacy compliance. Some regulations require that data be retained no longer than a certain time. You have the flexibility to configure a different expiration at the document level, which can help you meet requirements from varying sources.

Example walkthrough

Sounds like a good candidate for your application? Let's walk through an example to see how it works from end to end. The example below uses documents and collections, but it works similarly for entities and kinds. Assume you have a database that saves lots of documents in the collection Chats, and some of them will become useless at some point in the future.

First, you need to decide on a field to use as the TTL field, and that field must contain a timestamp value. For example, you can designate the expireAt field as the TTL field, even if your documents don't contain values for this field yet. There are two ways of configuring TTL policies:
Use the gcloud CLI. You can find sample commands to view and modify TTL policies.
Use the Google Cloud console. Navigate to the Firestore Time-to-live configuration page to configure a new policy.

Now that you have configured the TTL policy, update your documents with the TTL field if they don't already have it; in this case, expireAt serves as the TTL field, as in the sketch at the end of this post. That's everything you need to do. Once a document expires, it is eligible for deletion, and Firestore will perform the deletion on your behalf.

Want to learn more? Check out the documentation, and happy databasing. Special thanks to Minh Nguyen, Lead Product Manager for Firestore, and Joseph Batchik, Software Engineer for Firestore, for contributing to this post.

Related Article: All you need to know about Firestore: A cheatsheet
Building applications is a heavy lift due to the technical complexity, which includes the complexity of backend services that are used to…
Read Article
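As a minimal sketch of the walkthrough above, the snippet below writes a Chats document with an expireAt timestamp using the google-cloud-firestore Python client. The collection name matches the example in the post, while the document contents and the seven-day lifetime are hypothetical; it assumes a TTL policy on expireAt has already been enabled via the gcloud CLI or the console as described above.

```python
# Minimal sketch: write a document whose 'expireAt' field is used by a
# Firestore TTL policy. Assumes the TTL policy on 'expireAt' for the 'Chats'
# collection group was already enabled via the gcloud CLI or the console.
from datetime import datetime, timedelta, timezone

from google.cloud import firestore

db = firestore.Client()  # uses the default project from the environment

# Create a chat message that should be cleaned up automatically in 7 days.
doc_ref = db.collection("Chats").document()
doc_ref.set(
    {
        "author": "alice",                                           # example data
        "text": "This message self-destructs.",
        "expireAt": datetime.now(timezone.utc) + timedelta(days=7),  # TTL field
    }
)

# Existing documents can be backfilled with the TTL field the same way:
# db.collection("Chats").document("old-message-id").update({"expireAt": ...})
```

Once a document's expireAt timestamp passes, it becomes eligible for deletion and Firestore removes it in the background on your behalf, as described above.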
Source: Google Cloud Platform

Best kept security secrets: How Cloud EKM can help resolve the cloud trust paradox

Whether driven by government policy, industry regulation, or geopolitical considerations, the evolution of cloud computing has led organizations to want even more control over their data and more transparency from their cloud services. At Google Cloud, one of the best tools for achieving that level of control and transparency is a bit of technological magic we call Cloud External Key Manager (Cloud EKM). Cloud EKM can help you protect your cloud data at rest with encryption keys that are stored and managed in a third-party key management system outside Google Cloud's infrastructure, and ultimately outside Google's control. This can help you achieve full separation between your encryption keys and your data stored in the cloud. Cloud EKM works with symmetric and asymmetric encryption keys, and offers organization policies that allow fine-grained control over which types of keys are used. Via Key Access Justifications (KAJ), it also offers customers a way to control each use of their keys.

At their core, many cloud security and cloud computing discussions are about the kinds of trust that Cloud EKM specifically, and encryption more broadly, can help create. While the concept of digital trust is much bigger than cybersecurity and its tripartite components of security, privacy, and compliance, one of the most crucial themes of cloud computing is the cloud trust paradox: in order to trust the cloud more, you must be able to trust it less, and external control of keys and their use can help reduce concerns over unauthorized access to sensitive data.

How it works

As described in our Cloud EKM documentation, you can use keys that you manage within a supported external key management partner to protect data within Google Cloud. You can protect data at rest in services that support CMEK, or by calling the Cloud Key Management Service API directly. Cloud EKM provides several benefits:

Key provenance: You control the location and distribution of your externally managed keys. Externally managed keys are never cached or stored within Google Cloud, and Google cannot see them. Instead, Cloud EKM communicates directly with the external key management device for each request.
Access control: You manage access to your externally managed keys. Before you can use an externally managed key to encrypt or decrypt data in Google Cloud, you must grant the Google Cloud project access to use the key. You can revoke this access at any time.
Centralized key management: You can manage your keys and access policies from a single location and user interface, whether the data they protect resides in the cloud or on your premises. The system that manages the keys is entirely outside Google's control.

In all cases, the key resides on the external system and is never sent to Google. Here's how it works:
1. Create or use an existing key in a supported external key management partner system. This key has a unique URI.
2. Grant your Google Cloud project access to use the key in the external key management partner system.
3. Create a Cloud EKM key in your Google Cloud project, using the URI for the externally managed key.

The Cloud EKM key and the external key management partner key work together to protect your data. The external key is never exposed to Google and cannot be accessed by Google employees.
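As a rough sketch of step 3, the snippet below creates an EXTERNAL-protection-level key in Cloud KMS and points a key version at an externally managed key's URI, using the google-cloud-kms Python client. The project, location, key ring, key ID, and external key URI are placeholders, and the exact request shape should be checked against the Cloud KMS documentation for your client version.

```python
# Sketch of creating a Cloud EKM key that references an externally managed key.
# Project, location, key ring, key ID, and the external key URI below are
# hypothetical placeholders; verify field names against the Cloud KMS docs.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_ring = client.key_ring_path("my-project", "us-east1", "my-key-ring")

# Step 3a: create the key shell with EXTERNAL protection level and no initial
# version (the version will reference the external key's URI).
crypto_key = client.create_crypto_key(
    request={
        "parent": key_ring,
        "crypto_key_id": "my-ekm-key",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
            "version_template": {
                "protection_level": kms.ProtectionLevel.EXTERNAL,
                "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.EXTERNAL_SYMMETRIC_ENCRYPTION,
            },
        },
        "skip_initial_version_creation": True,
    }
)

# Step 3b: create a version that points at the key held in the external key
# manager. The key material itself never leaves the external system.
version = client.create_crypto_key_version(
    request={
        "parent": crypto_key.name,
        "crypto_key_version": {
            "external_protection_level_options": {
                "external_key_uri": "https://ekm.example.com/v0/keys/abc123"  # placeholder
            }
        },
    }
)
print(version.name)
```

Note that the access grant in the partner system (step 2) must be in place before this key can be used to encrypt or decrypt data in Google Cloud.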
Furthermore, Cloud EKM can be combined with Key Access Justifications (KAJ) to establish cryptographic control over data access. KAJ with Cloud EKM can give customers the ability to deny Google Cloud administrators access to their data at rest for any reason, even in situations typically exempted from customer control, such as outages or responses to third-party data requests. KAJ does this by providing customers a clear reason why data is being decrypted, which they can use to programmatically decide whether to permit decryption and thus allow access to their data.

Previously, we've discussed three patterns where keeping the keys off the cloud may be truly necessary, or where doing so outweighs the benefits of cloud-based key management. Here's a brief summary of those three scenarios where Cloud EKM can help solve these Hold Your Own Key dilemmas.

Scenario 1: The last data to go to the cloud

As organizations complete their digital transformations by migrating data processing workloads to the cloud, there is often a pool of data that cannot be moved to the cloud. Perhaps it's the most sensitive data, the most regulated data, or the data with the toughest internal security control requirements. Finance, healthcare, manufacturing, and other heavily regulated organizations face myriad risk, compliance, and policy reasons that may make it challenging to send some of their data to a public cloud provider. However, the organization may be willing to migrate this data set to the cloud as long as it is encrypted and they have sole possession of the encryption keys.

Scenario 2: Regional regulations and concerns

Regional requirements are playing a larger role in how organizations migrate to and operate workloads in the public cloud. Some organizations are already facing situations where they are based in one country and want to use a cloud provider based in a different country, but they either aren't comfortable giving the provider access to the encryption keys for their stored data or aren't legally allowed to. Here the situations are more varied, and can include an organization's desire to stay ahead of evolving regulatory demands or industry-specific mandates. Ultimately, this scenario allows organizations to utilize Google Cloud while keeping their encryption keys in the location of their choice, and under their physical and administrative control.

Scenario 3: Centralized encryption key control

The focus here is on operational efficiency. Keeping all the keys within one system that covers multiple cloud and on-premises environments can help reduce overhead and attack surface, thus helping to improve security. As Gartner researchers concluded in their report, "Develop an Enterprisewide Encryption Key Management Strategy or Lose the Data1," organizations are motivated to reduce the number of key management tools. "By minimizing the number of third-party encryption solutions being deployed within an environment, organizations can focus on establishing a cryptographic center of excellence," the Gartner researchers said. Given that few organizations are 100% cloud-based today for workloads that require encryption, keeping keys on-premises can streamline key management. Centralizing key management gives the cloud user a central location to enforce policies around access to keys and access to data at rest, while a single set of keys can help reduce management complexity.
A properly implemented system with adequate security and redundancy outweighs the need to maintain multiple systems.

Do I need Cloud EKM?

Whether you are protecting highly sensitive data, retaining key control to address geopolitical and regional concerns, or supporting hybrid and multicloud architectures, Cloud EKM is best suited for Google Cloud customers who must keep their encryption keys off the cloud and always under their full control. To learn more about Cloud EKM, please review these resources:
Our research explaining why Google Cloud users can benefit from Cloud EKM
The most recent updates to Cloud EKM
A deeper dive into the cloud trust paradox

1. Gartner, Develop an Enterprisewide Encryption Key Management Strategy or Lose the Data, David Mahdi, Brian Lowans, March 2022.

Related Article: Best Kept Security Secrets: Tap into the power of Organization Policy Service
Organization Policy Service is a powerful tool for creating broad security guardrails in the cloud. Learn more about how this Best Kept S…
Read Article
Source: Google Cloud Platform