Accelerating app development lifecycle with managed container platforms, Firebase and CI/CD

We understand that for startups in the build phase, the highest priority is to continuously ship features based on your users' needs. There are three main focus areas when building applications:

- Development: When it comes to development, focus on the tasks that make your app unique by offloading backend setup and processing to someone else. For example, instead of setting up your own API servers and managing backend services, Firebase offers a managed experience.
- Hosting: Once you've built your app, the next step is to host it. Containers have become the de facto way of packaging applications today. You can easily run your containers in managed environments such as Google Kubernetes Engine or Cloud Run.
- Improvements: A one-time deployment is not enough. Growth is about taking in feedback from the market and improving your applications based on that feedback. We recommend incorporating CI/CD and automating improvements in your software delivery pipelines.

In this blog post, you can learn more about the tools that help you with these three focus areas.

Develop apps faster by shifting focus to business logic with Firebase

In a traditional app architecture, you would need to set up and manage an API server to direct requests to your backend. With Firebase, you can easily add features to your mobile or web app with a few lines of code, without worrying about the infrastructure. The products on Firebase help you Build, Release & Monitor, and Engage. Doing so allows your teams to:

- Add features like authentication and databases with only a few lines of code
- Understand your users and apps better using Google Analytics for Firebase, Crashlytics, and Performance Monitoring
- Send messages to engage your users with Firebase Cloud Messaging and In-App Messaging

With simple-to-use cross-platform SDKs, Firebase can help you develop applications quicker and reduce your time to market, improve app quality in less time with less effort, and optimize your app experience. Find out how you can put together these building blocks in our video on Working with Firebase.

Host apps easily with managed container platforms on Google Cloud

For startups looking to utilize resources better, containerization is the next step. With our investment in Google Kubernetes Engine (GKE) and Cloud Run, Google Cloud gives you the freedom to build with containers on a tech stack based on open source tools like Kubernetes, Knative and Istio. This means no vendor lock-in for you.

Google Kubernetes Engine

We understand that our customers are looking for autonomous and extensible platforms that are expertly run. GKE gives you a managed environment to run applications, simplified consoles to create or update your clusters with a single click, and lets you deploy applications with minimal operational overhead. Google manages your control plane, and four-way autoscaling gives you the option to fine-tune for the most optimized utilization of the resources used. These best practices are applied by default with the second mode of operation for GKE: Autopilot. It dynamically adjusts compute resources so you don't have to worry about unused capacity, and you pay only for the pods you use, billed per second for vCPU, memory and disk resource requests.
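As a rough sketch of what getting started with Autopilot can look like, the commands below create an Autopilot cluster and deploy a sample container. The cluster name, region, deployment name, and image are illustrative placeholders rather than values from this post.

# Create a GKE Autopilot cluster (name and region are placeholders).
$ gcloud container clusters create-auto startup-cluster --region=us-central1

# Fetch credentials and deploy a sample container image (deployment name and image are examples).
$ gcloud container clusters get-credentials startup-cluster --region=us-central1
$ kubectl create deployment hello-app --image=us-docker.pkg.dev/cloudrun/container/hello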
This means that you can reduce operational costs while still optimizing for production and higher workload availability. Head to Compute with Google Kubernetes Engine to quickly get started with GKE.

Cloud Run

Cloud Run lets you run containers in a fully managed serverless environment and gives you the ability to scale down to zero when there are no requests coming in. It is a great fit for stateless applications like web frontends, REST APIs, and lightweight data transformation jobs. There are three steps to any Cloud Run deployment:

1. Create a build using your source code.
2. Submit the build to store it in a container registry.
3. Deploy the application using a simple command.

This process is very similar to the usual steps followed for deployments on other platforms, but what makes Cloud Run special is that all of this can be achieved in one single command: `gcloud run deploy --source .`. Watch this in action in the video Get started on Cloud Run.

Improve and iterate more often with CI/CD solutions

Software systems are living things and need to adapt to reflect your changing priorities. Continuous integration/continuous deployment (CI/CD), as the term suggests, means that you are adding code updates and deploying them continuously. Developers' time should be spent writing code, so CI/CD steps should be triggered and run in the background when code is pushed. Let's look at the components of a CI/CD pipeline and how Google Cloud tools support them:

- Cloud Code integrates with your IDE and lets you easily write, run and debug your applications.
- Cloud Build lets you run your build steps to package and deploy your applications on any platform on Google Cloud. You can set up triggers to start builds automatically.
- Artifact Registry is where you store the intermediate artifacts created during a build. Container images stored here can be used to create newer deployments to other platforms as well.
- Cloud Deploy automates the delivery of your updated application to target environments specified by you.

Both Cloud Run and GKE come integrated with the Cloud Operations Suite, so you can monitor your application for any errors or performance issues. We know that you want to deliver bug-free features to your customers, so when you are shipping code, consider how a CI/CD pipeline can help you catch performance issues early and improve developer workflows. To set up your CI/CD pipeline on Google Cloud, refer to CI/CD on Google Cloud.

Stay in touch for more

The Google Cloud Technical Guides for Startups series has many more detailed videos and resources to support you on all steps of your growth journey. Check out our full playlist on the Google Cloud Tech channel and the handbooks and sample architectures on our website. Don't forget to subscribe to stay up to date. If you're ready to get started with Google Cloud, apply now for the Google for Startups Cloud Program. See you in the cloud.
Source: Google Cloud Platform

Cloud Logging pricing for Cloud Admins: How to approach it & save cost

Flexera's State of the Cloud Report 2022 pointed out that significant cloud spending is wasted, a major issue that is getting more critical as cloud costs continue to rise. In the current macroeconomic conditions, companies focus on identifying ways to reduce spending. To do that effectively, we need to understand the pricing model; we can then work on the challenges of cost monitoring, optimization, and forecasting. One area that often gets overlooked in budgeting is observability (logging, monitoring, and tracing). This can represent a significant cost, especially if it's not optimized. Let's explore how to understand and optimize our most voluminous data source, logs, within Google Cloud.

Cloud Logging is a fully managed, real-time log solution that allows you to ingest, route, store, search and analyze your logs to easily troubleshoot incidents using your log data. It can collect data from on-prem, Google Cloud and other clouds with open source agents that support more than 150 services. Unlike traditional licensing models or self-hosted logging solutions, Cloud Logging's pricing model is simple and based on actual usage. Let's explore the various components of Cloud Logging and address a few commonly asked questions about pricing.

Cloud Logging: components and purpose

To understand pricing better and be able to predict future costs, we need to understand the high-level components of Cloud Logging and where billing occurs in the system. There are three important components within Cloud Logging: the Cloud Logging API, the Log Router, and log buckets (Log Storage). The table below outlines the high-level components, purpose and pricing information for Cloud Logging. As indicated above, today billing in Cloud Logging occurs only for a log that is routed to and ingested into a log bucket. "Ingestion" in Cloud Logging is the process of saving log data into a log bucket, not simply processing it in the Log Router. There are three types of log buckets: Required, Default, and User-defined (custom). Only Default and User-defined buckets are billed.

Today, our logging pricing is based on the volume of logs ingested into a chargeable log bucket (Default or User-defined). All charges in Cloud Logging occur at the log bucket, and all log types incur the same cost. Logs dropped using sink filters or exclusion filters are not charged by Cloud Logging, even if these logs are routed to a destination outside of Cloud Logging. Now, we'll address frequently asked questions about the Cloud Logging pricing model.

What Cloud Logging charges will I see on my bill?

There are two types of charges your logs can potentially incur:

- An ingestion charge of $0.50/GiB, which includes 30 days of default storage. Note that the first 50 GiB per project each month falls under the free tier. You are charged based on the volume of logs ingested into the Default and User-defined log buckets.
- Logs stored beyond 30 days incur a retention charge of $0.01/GiB/month for non-Required buckets. Note that this pricing is not currently enforced; we will begin charging in early 2023.

For the latest pricing, check here.

How can I reduce my bill?

Because Cloud Logging pricing is based on actual usage, you can reduce your costs by adjusting the ingestion volume or the retention period.

- Reduce the volume of logs ingested per log bucket by identifying and keeping (ingesting) only valuable log data for analysis.
- If you do not need to keep data beyond the included 30 days, reduce the retention period (a minimal example of both options follows below).
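As a rough sketch of both levers, the commands below add an exclusion filter to the _Default sink and shorten retention on the _Default bucket. The filter expression and the 14-day value are illustrative assumptions, not recommendations from this post, and the exact flag syntax should be confirmed against the current gcloud reference.

# Drop low-value logs before they are ingested (example filter only).
$ gcloud logging sinks update _Default --add-exclusion=name=drop-debug,filter='severity<=DEBUG'

# Reduce retention on the Default bucket to 14 days (example value).
$ gcloud logging buckets update _Default --location=global --retention-days=14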
Because the first 30 days of retention are included with ingestion, reducing retention to less than 30 days will have no impact on your bill.

Does Cloud Logging charge for queries or searches, whether from the Cloud Logging UI or the client SDKs/APIs?

No, Cloud Logging does not charge for the number of queries or searches, for logs read from disk during queries, or for different log types. There is a quota limit for querying logs, though, so for integrations with SIEMs or other logging tools, it's a best practice to set up a log sink via Pub/Sub to push the logs to the downstream system.

Can I incur multiple ingestion charges?

It is possible to be charged for ingesting the same log entry into Cloud Logging log buckets multiple times. For example, if your sinks route a log entry to two log buckets, you will pay ingestion costs at both buckets. You may choose to do this to have independent retention of logs or to keep copies of logs in multiple regions for compliance reasons.

Are there different costs for hot and cold storage?

No, there are no differences between hot and cold storage. The beauty of Cloud Logging is that all logs are accessible throughout their lifespan. Cloud Logging is designed to scale easily and efficiently, which keeps logs accessible for troubleshooting, investigation and compliance whether they are seconds or years old.

How much does it cost to route logs to other destinations?

Today, Cloud Logging does not charge for centrally collecting and routing logs to other destinations such as Cloud Storage, BigQuery, and Pub/Sub. Usage rates for the destination services (Cloud Storage, BigQuery, Pub/Sub) apply.

Do logs have a generation fee?

For network telemetry logs such as VPC Flow Logs, Firewall Rules Logging and Cloud NAT logs, you might incur an additional network log generation charge if the logs are not stored in Cloud Logging. If you store your logs in Cloud Logging, network log generation charges are waived, and only Cloud Logging charges apply.

How do I understand my ingestion volume in Cloud Billing?

To determine the cost per project: go to Cloud Console -> Billing -> select the billing account -> Reports (left pane). On the right side, under Filters -> Services, select "Cloud Logging". Now, let's drill down to the cost incurred by each log bucket: select the project in the top bar, then in the left pane go to Logging -> Logs Storage. You should now see the log volume per bucket.

Putting it all together

Now that we understand pricing for Cloud Logging, we can optimize our usage. Here are four best practices:

Recommendation #1: Use the Log Router to centralize your collection and get a 360-degree view of your logs, then use exclusion filters to reduce noisy logs and send only valuable logs to the log bucket. Logs dropped using sink filters or exclusion filters are not charged by Cloud Logging, even if these logs are routed to a destination outside of Cloud Logging.

Recommendation #2: Admin Activity audit logs are captured by default for all Google Cloud services at no additional cost. Leverage the audit logs in the Required bucket by identifying use cases for your organization and configuring log-based alerts on them.

Recommendation #3: Logs can be stored cost-effectively for up to 10 years and easily accessed via Cloud Logging. Cloud Logging will begin charging customers for long-term log retention starting January 2023.
Between now and January 2023, determine the required lifespan of a log and set the appropriate retention period for each log bucket.

Recommendation #4: If you are a new customer, estimate your bill; this is a great way to compare costs with your current logging solution. If you are an existing customer, create a budget and set up alerts on your Cloud Logging bills.

In addition to analyzing log volumes by bucket, you may want to analyze the sources, projects, and so on. Metrics Explorer in Cloud Monitoring can also be used to identify costs. We will discuss this in the next blog. For more information, join us in our discussion forum. As always, we welcome your feedback. Interested in using Cloud Logging to save costs in your organization? Contact us here. We are hosting a webinar about how you can leverage Log Analytics, powered by BigQuery in Cloud Logging, at no additional cost. Register here.
Source: Google Cloud Platform

From the NFL to Google’s Data Centers: Why KP Philpot still values teamwork over everything

Editor's note: KP Philpot is the Environmental Health & Safety Manager at Alphabet's data center campus in Douglas County, Georgia. It's a long way from both a childhood on Chicago's South Side and standing in football stadiums with thousands of fans, but one thing has always held true for him: the importance of personal and team performance.

How did you come to Google?

At surface level, it was pretty direct. I was working as a site safety engineer for a contractor that was building a Google data center, and I was offered a job at Google. On a deeper level, it was a long and unexpected journey. I grew up in inner-city Chicago, and we didn't hear a lot about data center technicians and environmental engineering. We had blue-collar jobs you stuck to, or you played sports. I played football and basketball, and was recruited by colleges for both. I set three NCAA records playing linebacker at Eastern Michigan University, and then I was with the Detroit Lions and played some Arena Football. A few years after that, someone I played with in college brought me into the construction industry, and that's what I did at three other companies before arriving at Google.

How different is Google from other places?

One thing that's a breath of fresh air is that when you come to Google, it's okay to not have all the answers. I think you work more freely and more confidently when there's no expectation to know everything from day one. If someone you ask doesn't know the answer, they're interested in finding it out. There's a healthy curiosity that you don't find in most places. One other difference is that Google tends to be team oriented. That part comes naturally to me, even if it is tech. I've played on teams since I was a kid, and both my parents were athletes. On a team, everyone has a part to play. You have different people, with different skill sets, but everyone belongs. Their contributions are different, but the goal is the same.

What is a typical day like?

Many people see data centers as rooms full of servers and switches, but I assure you no two days are alike. There are many things to think about in terms of safety: a data center has a lot of moving parts, and especially when working with electricity, we have rigorous protocols to ensure safety for everyone on the site. We also take our environmental impact seriously. A big part of our environmental work is the innovative cooling system we have here in Douglas County, where we recycle local sewer water that would otherwise be put in the Chattahoochee River. As for leftover water that does not evaporate, we treat it before returning it to the river. More than that, though, it's the diversity of people you find in a data center. There may be construction people, who tend to have a lot of hands-on experience and are task focused; there are engineers and managers, who are more focused on how to optimize a process; and of course, there are Googlers. We all become interesting to each other. I get to coordinate and work alongside all of them, which I enjoy a lot.

So is team building part of the job?

Teamwork is the lens through which I see the world. I was raised by very principled people, who taught me how much your individual actions impact everyone. A family is a team as well. My grandfather would point at his first name and say, "That's my name," then point at my last name and say, "That's our name.
Every time you walk out the door, that's who you are." When I work, I see the world the same way: the need to be a principled person who's part of a larger team, constantly working to build respect and trust. Being in the NFL was more expected than being at Google, but these things don't change.
Source: Google Cloud Platform

The new Google Cloud Region in Israel is now open

Today, we are excited to announce that the new Google Cloud region in Israel is open. We'll be celebrating the launch at an event in Tel Aviv on November 9; register to join us. Israel is known as the startup nation and has long been a hub of technology innovation for startups and Google alike. We're excited to extend that innovation-first approach to other industries, accelerating digital transformation to help create new jobs and digital experiences that better serve users in Israel. According to recent research commissioned by Google, AlphaBeta Economics (part of Access Partnership) estimates that by 2030, the Google Cloud region in Tel Aviv will contribute a cumulative USD 7.6 billion to Israel's GDP and support the creation of 21,200 jobs in that year alone1.

The Google Cloud region in Tel Aviv (me-west1) joins our network of cloud regions around the world, delivering high-performance, low-latency services to customers of all sizes and across industries. Now that the Israel cloud region is part of the Google Cloud network, it will help local organizations connect with users and customers around the globe, and help fuel innovation and digital transformation across every sector of the economy. Last year, Google Cloud was selected by the Israeli government to provide cloud services to government ministries. This partnership can enable the government, and private companies operating in regulated industries, to simplify the way in which users are served, create a uniform approach to digital security, and support compliance and residency requirements. Over a number of years we've grown our local Googler presence in both Tel Aviv and Haifa to support the growing number of customers and bring a culture of innovation to every sector of the economy. From technology, retail, and media and entertainment, to financial services and the public sector, leading organizations come to Google Cloud as their trusted innovation partner.

"With Google Cloud we are changing the way millions of people read and write, by serving our own Large Language Models on top of the most advanced GPU platform that offers unparalleled performance, availability and elasticity." - Ori Goshen, CEO, AI21 Labs

"PayBox is supervised by the Bank of Israel and is completely hosted in the cloud. Google Cloud provides us with the tools needed to meet regulatory compliance and security obligations as well as the flexibility and agility to serve the millions of customers that rely on our app every day." - Dima Levitin, CIO, Paybox

Israel has long been a hub of technology innovation, and we're excited to support customers like AI21 Labs, PayBox, and others with a cloud that helps them:

- Better understand and use data: Google Cloud helps customers make better decisions with a unified data platform. We help customers reduce complexity and combine unstructured and structured data, wherever it resides, to quickly and easily produce valuable insights.
- Establish an open foundation for growth: When customers move to Google Cloud, they get a flexible, secure, and open platform that can evolve with their organization. Our commitment to multicloud, hybrid cloud, and open source offers organizations freedom of choice, helping their developers build faster.
- Create a collaborative environment: In today's hybrid work environment, Google Cloud provides the tools needed to help transform how people connect, create, and collaborate.
- Protect systems and users: As every company rethinks its security posture, we help customers protect their data using the same infrastructure and security services that Google uses for its own operations.
- Build a cleaner, more sustainable future: Google has been carbon neutral for our operations since 2007, and we are working to operate entirely on carbon-free energy by 2030. Today, when customers run on Google Cloud, the cleanest cloud in the industry, the energy that powers their workloads is matched with 100% renewable energy.

We're excited to see what you build with the new Google Cloud region in Israel. Learn more about our global cloud infrastructure, including new and upcoming regions. And don't miss the Israel launch event.
Source: Google Cloud Platform

Introducing lock insights and transaction insights for Spanner: troubleshoot lock contentions with pre-built dashboards

As a developer, DevOps engineer or database administrator, you typically have to deal with database lock issues. Often, rows locked by queries cause lags and can slow down applications, resulting in poor user experience. Today, we are excited to announce the launch of lock insights and transaction insights for Cloud Spanner, which provide a set of new visualization tools for developers and database administrators to quickly diagnose lock contention issues on Spanner.

If you observe application slowness, a common cause could be lock contention, which happens when multiple transactions try to modify the same row. Debugging lock contention is not easy, as it requires identifying the row ranges and columns on which transactions are contending for locks. This process can be tedious and time-consuming without a visual interface. Today, we are solving this problem for customers. Lock insights and transaction insights provide pre-built dashboards that make it easy to detect row ranges with the highest lock wait time, find transactions reading or writing to these row ranges, and identify the transactions with the highest latencies causing these lock conflicts.

Earlier this year, we launched query insights for debugging query performance issues. Together with lock insights and transaction insights, these capabilities give developers easy-to-use observability tools to troubleshoot issues and optimize the performance of their Spanner databases. Lock insights and transaction insights are available at no additional cost.

"Lock insights will be very helpful to debug lock contention, which typically takes hours," said Dominick Anggara, MSc., Staff Software Engineer at Kohl's. "It allows the user to see the big picture, makes it easy to make correlations, and then narrow down to specific transactions. That's what makes it powerful. Really looking forward to using this in production."

Why do lock issues happen?

Most databases take locks on data to prohibit other transactions from concurrently changing the data, in order to preserve data integrity. When you access data with the intent to change it, a lock prohibits other transactions from accessing the data while it is being modified. But when data is locked, it can negatively impact application performance as other tasks wait to access it. Cloud Spanner, Google Cloud's fully managed, horizontally scalable relational database service, offers the strictest concurrency-control guarantees, so you can focus on the logic of the transaction without worrying about data integrity. To give you this peace of mind, and to ensure consistency of multiple concurrent transactions, Spanner uses a combination of shared locks and exclusive locks at the table-cell level (the granularity of row and column), not at the whole-row level. You can learn more about the different lock modes for Spanner in our documentation.

Follow a visual journey with pre-built dashboards

With lock insights and transaction insights, developers can smoothly move from detection of latency issues to diagnosis of lock contention, and ultimately to identification of the transactions that are contending for locks.
Once the transactions causing the lock conflicts are identified, you can then try to identify the issues in each transaction that are contributing to the problem. You can do this by following a simple journey: quickly confirm whether the application slowness is due to lock contention, correlate the row ranges and columns with the highest lock wait time with the transactions taking locks on those row ranges, identify the transactions with the highest latencies, and analyze the transactions that are contending on locks. Let's walk through an example scenario.

Diagnose application slowness

This journey starts by setting up an alert in Google Cloud Monitoring for latency (api/request_latencies) going above a certain threshold. The alert can be configured so that if this threshold is crossed, you are notified by email, with a link to the Monitoring dashboard. Once you receive this alert, click the link in the email and navigate to the Monitoring dashboard. If you observe a spike in read/write latency, no observable spike in CPU utilization, and a dip in throughput and/or operations per second, a possible root cause is lock contention. A combination of these patterns can be a strong signal that the system is locking because transactions are contending on the same cells, even though the workload remains the same. Below, you can observe a spike between 5:45 PM and 6:00 PM. This could be due to a new application code deployment which might have introduced a new access pattern.

The next step is to confirm that this application slowness is indeed due to lock contention. This is where lock insights comes in. You can get to this tool by clicking "Lock insights" in the left navigation of the Spanner instance view in the Cloud Console. Here, the first graph you see is Total lock wait time. If you observe a corresponding spike on this graph in the same time window, this confirms that the application's slowness is due to lock contention.

Correlating row ranges, columns and transactions

Now you can select the database that is seeing the spike in total lock wait time and drill down to see the row ranges with the highest lock wait times. When you click a row range with the highest lock wait times, a right panel opens. It shows sample lock requests for that row range, including the columns that were read from or written to, the type of lock acquired on each row-column combination (database cell), and links to view the transactions that were contending for these locks. This correlation of row ranges, columns, and transactions makes it seamless to switch between lock insights and transaction insights, as explained in the next section. In the above screenshot, we can see that at 5:53 PM, the first row range in the table (order_item(82,12)) shows the highest lock wait times. You can investigate further by looking at the transactions acting on the sample lock columns.
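If you prefer a CLI view, the same lock statistics that power these dashboards are also exposed through Spanner's built-in SPANNER_SYS tables. The sketch below queries the 10-minute lock statistics table with gcloud; the instance and database names are placeholders, and the table and column names should be verified against the Spanner introspection documentation.

# Placeholder instance and database names; adjust for your environment.
$ gcloud spanner databases execute-sql orders-db --instance=test-instance \
    --sql="SELECT interval_end, row_range_start_key, lock_wait_seconds
           FROM SPANNER_SYS.LOCK_STATS_TOP_10MINUTE
           ORDER BY lock_wait_seconds DESC LIMIT 10"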
Identifying transactions with highest write latencies causing locks

When you click "View transactions" on the lock insights page, you navigate to the transaction insights page with the topN transactions table (by latency) filtered on the sample lock columns from the previous page (lock insights), so you view the topN transactions in the context of the locks (and row ranges) identified earlier in the journey. In this example, we can see that the first transaction, which reads from and writes to the columns item_inventory._exists and item_inventory.count, has the highest latencies and could be one of the transactions causing lock contention. We can also see that the second transaction in the table is trying to read from the same column and could be waiting on locks, since its average latency is high. We should drill down and investigate both of these transactions.

Analyzing transactions to fix lock contentions

Once you have identified the transactions causing the locks, you can drill down into these transaction shapes to analyze the root cause of the lock contention. You can do this by clicking the fingerprint ID for a specific transaction in the topN table and navigating to the Transaction Details page, where you can see a list of metrics (latency, CPU utilization, execution count, rows scanned / rows returned) over a time series for that specific transaction. In this example, we notice that when we drill down into the second transaction, it is only attempting to read, not write. By definition, the topN transactions table (on the previous page) only shows read-write transactions, which take locks. We can also see that the abort count / total attempt count ratio (28/34) is very high, which means that most of the attempts are getting aborted.

Fixing the issue

To fix the problem in this scenario, you can convert this transaction from a read-write transaction to a read-only transaction, which prevents it from taking locks on the cell, thereby reducing lock contention and write latencies. By following this simple visual journey, you can easily detect, diagnose and fix lock contention issues on Spanner. When looking at potential issues in your application, or even when designing your application, consider these best practices to reduce the number of lock conflicts in your database.

Get started with lock insights and transaction insights today

To learn more about lock insights and transaction insights, review the documentation here and watch the explainer video here. Lock insights and transaction insights are enabled by default. In the Spanner console, you can click "Lock insights" and "Transaction insights" in the left navigation and start visualizing lock issues and transaction performance metrics! New to Spanner? Create a 90-day Spanner free trial instance. Try Spanner for free.
Source: Google Cloud Platform

Using Envoy to create cross-region replicas for Cloud Memorystore

In-memory databases are a critical component that delivers the lowest possible latency for your users, who might be adding items to online shopping carts, getting personalized content recommendations, or checking their latest account balances. Memorystore makes it easy for developers building these types of applications on Google Cloud to leverage the speed and powerful capabilities of the most loved in-memory store: Redis. Memorystore for Redis offers zonal high availability with a 99.9% SLA for its Standard Tier instances. In some cases, users are looking to expand their Memorystore footprint to multiple regions to support disaster recovery scenarios for regional failure, or to provide the lowest possible latency for a multi-region application deployment. We'll show you how to deploy such an architecture today with the help of the Envoy proxy Redis filter, which we introduced in our previous blog, Scaling to new heights with Cloud Memorystore and Envoy. Envoy makes creating such an architecture both simple and extensible thanks to its numerous supported configurations. Let's get started with a hands-on tutorial which demonstrates how you can build a similar solution.

Architecture overview

Let's start by discussing an architecture of Google Cloud native services combined with open source software that enables a multi-region Memorystore deployment. To do this, we'll use Envoy to mirror traffic to two Memorystore instances which we'll create in separate regions. For simplicity, we'll use Memtier Benchmark, a popular CLI for Redis load generation, as a sample application to simulate end user traffic. In practice, feel free to use your existing application or write your own. Because of Envoy's traffic mirroring configuration, the application does not need to be aware of the various backend instances and only needs to connect to the proxy. You'll find a sample architecture below, and we'll briefly detail each of the major components. Before we start, you'll also want to ensure compatibility with your application by reviewing the list of Redis commands which Envoy currently supports.

Prerequisites

To follow along with this walkthrough, you'll need a Google Cloud project with permissions to do the following:

- Deploy Cloud Memorystore for Redis instances (required permissions)
- Deploy GCE instances with SSH access (required permissions)
- Cloud Monitoring viewer access (required permissions)
- Access to Cloud Shell or another gcloud-authenticated environment

Deploying the multi-region Memorystore backend

You'll start by deploying a backend Memorystore for Redis cache which will serve all of your application traffic. You'll deploy two instances in separate regions to protect the deployment against regional outages. We've chosen us-west1 and us-central1, though you are free to choose whichever regions work best for your use case. From an authenticated Cloud Shell environment, this can be done as follows:

$ gcloud redis instances create memorystore-primary --size=1 --region=us-west1 --tier=STANDARD --async
$ gcloud redis instances create memorystore-standby --size=1 --region=us-central1 --tier=STANDARD --async

If you do not already have the Memorystore for Redis API enabled in your project, the command will ask you to enable the API before proceeding. While your Memorystore instances deploy, which typically takes a few minutes, you can move on to the next steps.

Creating the client and proxy VMs

Next, you'll need a VM where you can deploy a Redis client and the Envoy proxy.
To protect against regional failures, we'll create a GCE instance per region. On each instance, you will deploy the two applications, Envoy and Memtier Benchmark, as containers. This type of deployment is referred to as a "sidecar architecture", which is a common Envoy deployment model. Deploying in this fashion nearly eliminates any added network latency, as there is no additional physical network hop. You can start by creating the primary region VM:

$ gcloud compute instances create client-primary --zone=us-west1-a --machine-type=e2-highcpu-8 --image-family cos-stable --image-project cos-cloud

Next, create the secondary region VM:

$ gcloud compute instances create client-standby --zone=us-central1-a --machine-type=e2-highcpu-8 --image-family cos-stable --image-project cos-cloud

Configure and deploy the Envoy proxy

Before deploying the proxy, you need to gather the information required to configure the Memorystore endpoints: the host IP addresses for the Memorystore instances you have already created. You can gather these like so:

$ gcloud redis instances describe memorystore-primary --region us-west1 --format=json | jq -r ".host"
$ gcloud redis instances describe memorystore-standby --region us-central1 --format=json | jq -r ".host"

Copy these IP addresses somewhere easily accessible, as you'll use them shortly in your Envoy configuration. You can also find these addresses on the Memorystore console page under the "Primary Endpoint" column. Next, connect to each of your newly created VM instances so that you can deploy the Envoy proxy. You can do this easily via SSH in the Google Cloud Console; more details can be found here. After you have successfully connected to the instance, you'll create the Envoy configuration. Start by creating a new file named envoy.yaml on the instance with your text editor of choice.
Use the following .yaml file, entering the IP addresses of the primary and secondary instances you created:

static_resources:
  listeners:
  - name: primary_redis_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 1999
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: primary_egress_redis
          settings:
            op_timeout: 5s
            enable_hashtagging: true
          prefix_routes:
            catch_all_route:
              cluster: primary_redis_instance
              request_mirror_policy:
              - cluster: secondary_redis_instance
                exclude_read_commands: true
  - name: secondary_redis_listener
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 2000
    filter_chains:
    - filters:
      - name: envoy.filters.network.redis_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.redis_proxy.v3.RedisProxy
          stat_prefix: secondary_egress_redis
          settings:
            op_timeout: 5s
            enable_hashtagging: true
          prefix_routes:
            catch_all_route:
              cluster: secondary_redis_instance
  clusters:
  - name: primary_redis_instance
    connect_timeout: 3s
    type: STRICT_DNS
    lb_policy: RING_HASH
    dns_lookup_family: V4_ONLY
    load_assignment:
      cluster_name: primary_redis_instance
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: <primary_region_memorystore_ip>
                port_value: 6379
  - name: secondary_redis_instance
    connect_timeout: 3s
    type: STRICT_DNS
    lb_policy: RING_HASH
    load_assignment:
      cluster_name: secondary_redis_instance
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: <secondary_region_memorystore_ip>
                port_value: 6379

admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001

The various configuration interfaces are explained below:

- admin: This interface is optional; it allows you to view configuration, statistics, and so on. It also allows you to query and modify different aspects of the Envoy proxy.
- static_resources: This contains items that are configured during startup of the Envoy proxy. Inside it we have defined the clusters and listeners interfaces.
- clusters: This interface allows you to define clusters, which we are defining per region. Inside each cluster configuration you define all the available hosts and how to distribute load across those hosts. We have defined two clusters, one in the primary region and another in the secondary region. Each cluster can have a different set of hosts and a different load balancer policy. Since there is only one host in each cluster, you can use any load balancer policy, as all requests will be forwarded to that single host.
- listeners: This interface allows you to expose the port the client connects to and define the behavior of the traffic received. In this case we have defined two listeners, one for each regional Memorystore instance.

Once you've added your Memorystore instance IP addresses, save the file locally to your Container-Optimized OS VM where it can be easily referenced. Make sure to repeat these steps for your secondary instance as well. Now, you'll use Docker to pull the official Envoy proxy image and run it with your own configuration.
On the primary region client machine, run this command:

$ docker run --rm -d -p 8001:8001 -p 6379:1999 -v $(pwd)/envoy.yaml:/envoy.yaml envoyproxy/envoy:v1.21.0 -c /envoy.yaml

On the standby region client machine, run this command:

$ docker run --rm -d -p 8001:8001 -p 6379:2000 -v $(pwd)/envoy.yaml:/envoy.yaml envoyproxy/envoy:v1.21.0 -c /envoy.yaml

For the standby region, we have changed the binding port to port 2000. This ensures that traffic from the standby clients is routed to the standby instance in the event of a regional failure which makes the primary instance unavailable. In this example we are deploying the Envoy proxy manually, but in practice you will implement a CI/CD pipeline which deploys the Envoy proxy and binds ports depending on your region-based configuration. Now that Envoy is deployed, you can test it by visiting the admin interface from the container VM:

$ curl -v localhost:8001/stats

If successful, you should see a printout of the various Envoy admin stats in your terminal. Without any traffic yet, these will not be particularly useful, but they allow you to confirm that your container is running and available on the network. If this command does not succeed, we recommend checking that the Envoy container is running. Common issues include syntax errors in your envoy.yaml, which can be found by running your Envoy container interactively and reading the terminal output.

Deploy and run Memtier Benchmark

After reconnecting to the primary client instance in us-west1 via SSH, you will now deploy the Memtier Benchmark utility, which you'll use to generate artificial Redis traffic. Since you are using Memtier Benchmark, you do not need to provide your own dataset; the utility will populate the cache for you using a series of SET commands.

$ for i in {1..5}; do docker run --network="host" --rm -d redislabs/memtier_benchmark:1.3.0 -s 127.0.0.1 -p 6379 --threads 2 --clients 10 --test-time=300 --key-maximum=100000 --ratio=1:1 --key-prefix="memtier-$RANDOM-"; done

Validate the cache contents

Now that we've generated some data from our primary region's client, let's ensure that it has been written to both regional Memorystore instances. We can do this using Metrics Explorer in Cloud Monitoring. Next, configure the chart via MQL, which can be selected at the top of the explorer pane. For ease, we've created a query which you can simply paste into your console to populate your graph:

fetch redis_instance
| metric 'redis.googleapis.com/keyspace/keys'
| filter
    (resource.instance_id =~ '.*memorystore.*') && (metric.role == 'primary')
| group_by 1m, [value_keys_mean: mean(value.keys)]
| every 1m

If you have created your Memorystore instances with a different naming convention or have other Memorystore instances within the same project, you may need to modify the resource.instance_id filter. Once you're finished, ensure that your chart is viewing the appropriate time range, and you should see something like the graph below. In this graph, you should see two similar lines showing the same number of keys in both Memorystore instances.
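For a quick spot check outside of Cloud Monitoring, you can also compare key counts directly with redis-cli from the client VM. This is a minimal sketch that assumes the VM can reach both Memorystore endpoints over the VPC; the key name, image tag, and the <..._memorystore_ip> placeholders are illustrative and not part of the original tutorial.

# Write one key through the Envoy listener, then compare key counts on both instances.
$ docker run --rm --network="host" redis:6.2 redis-cli -p 6379 SET canary hello
$ docker run --rm --network="host" redis:6.2 redis-cli -h <primary_region_memorystore_ip> DBSIZE
$ docker run --rm --network="host" redis:6.2 redis-cli -h <standby_region_memorystore_ip> DBSIZE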
If you want to view metrics for a single instance, you can use the default monitoring graphs, which are available from the Memorystore console after selecting a specific instance.

Simulate regional failure

Regional failure is a rare event. We will simulate it by deleting our primary Memorystore instance and primary client VM. Let's start by deleting the primary Memorystore instance:

$ gcloud redis instances delete memorystore-primary --region=us-west1

And then the client VM:

$ gcloud compute instances delete client-primary

Next, we'll need to generate traffic from our secondary region client VM, which we are using as our standby application. For the sake of this example, we'll manually perform a failover and generate traffic to save time. In practice, you'll want to devise a failover strategy to automatically divert traffic to the standby region when the primary region becomes unavailable; typically this is done with the help of services like Cloud Load Balancing. Once more, SSH into the secondary region client VM from the console and run the Memtier Benchmark application as described in the previous section. You can validate that reads and writes are properly routing to the standby instance by viewing the console's monitoring graphs once more. Once the original primary Memorystore instance is available again, it will become the new standby instance based on our Envoy configuration. It will also be out of sync with the new primary instance, as it missed writes during its unavailability. We do not intend to cover a detailed solution in this post, but we find that most users rely on the TTLs they have set on their keys to determine when their caches will eventually be in sync.

Clean up

If you have followed along, you'll want to spend a few minutes cleaning up resources to avoid accruing unwanted charges. You'll need to delete the following:

- Any deployed Memorystore instances
- Any deployed GCE instances

Memorystore instances can be deleted like so:

$ gcloud redis instances delete <instance-name> --region=<region>

The GCE Container-Optimized OS instances can be deleted like so:

$ gcloud compute instances delete <instance-name>

If you created additional instances, you can simply chain them in a single command separated by spaces.

Conclusion

While Cloud Memorystore Standard Tier provides high availability, some use cases require an even higher availability guarantee. Envoy and its Redis filter make creating a multi-regional deployment simple and extensible. The outline provided above is a great place to get started, and these instructions can easily be extended to support automated region failover or even dual-region active-active deployments. As always, you can learn more about Cloud Memorystore through our documentation or request desired features via our public issue tracker.
Source: Google Cloud Platform

Announcing open innovations for a new era of systems design

We're at a pivotal moment in systems design. Demand for computing is growing at an insatiable rate. At the same time, the slowing of Moore's law means that improvements in CPU performance, power consumption, and memory and storage cost efficiency have all plateaued. These headwinds are further exacerbated by new challenges in reliability and security. At Google, we've responded to these challenges and opportunities with system design innovations across the stack: from new custom-silicon accelerators (e.g., TPU, VCU, and IPU) and new hardware and data center infrastructure, all the way to new distributed systems and cloud solutions.

But this is only the beginning. There are many more opportunities for advancement, including closely coupled accelerators for core data center functions to minimize the so-called "data center tax." As server and data center infrastructure diverges from decades-old traditional designs to become more modular, heterogeneous, disaggregated, and software-defined, distributed systems are also entering a new epoch, one defined by optimizations for the "killer microsecond" and novel programming models optimized for low latency and accelerators. At Google, we believe that these new opportunities and challenges are best addressed together, across the industry. Today, at the Open Compute Project (OCP) Global Summit, we are demonstrating our support of open hardware ecosystems, presenting at more than 40 talks, and announcing several key contributions:

Server design: We will share Google's vision for a "multi-brained" server of the future, transforming traditional server designs into more modular, disaggregated distributed systems across host computing, accelerators, memory expansion trays, infrastructure processing units, and more. We are sharing the work we are doing with all our OCP partners on the varied innovations needed to make this a reality: modular hardware with DC-MHS, standardized management with OpenBMC and Redfish, a standardized root of trust, and standardized interfaces including CXL, NVMe and beyond.

Trusted computing: The root of trust is an essential part of future systems. Google has a tradition of making contributions for transparent and best-in-class security, including our OpenTitan discrete security solutions on consumer devices. We are looking ahead to future innovations in confidential computing and varied use cases that require chip-level attestation at the level of a package or system on a chip (SoC). Together with other industry leaders, AMD, Microsoft, and NVIDIA, we are contributing Caliptra, a reusable IP block for root of trust measurement, to OCP. In the coming months we will roll out initial code for the community to collectively harden together.

Reliable computing: To address the challenges of reliability at scale, we've formed a new server-component resilience workstream at OCP, along with AMD, ARM, Intel, Meta, Microsoft, and NVIDIA. Through this workstream, we'll develop consistent metrics about silent data errors and corruptions for the broader industry to track. We'll also contribute test execution frameworks and suites, and provide access to test environments with faulty devices. This will enable the broader community, across industry and academia, to take a systems approach to addressing silicon faults and silent data errors.

Sustainability: Finally, we're announcing our support for a new initiative within OCP to make environmental sustainability a key tenet across the ecosystem.
Google has been a leader in environmental sustainability for many years. We have been carbon neutral since 2007, powered by 100% renewable energy since 2017, and have an ambitious goal to achieve net-zero emissions across all of our operations and value chain by 2030. In turn, as the cleanest cloud in the industry, we have helped customers track and reduce their carbon footprint and achieve significant energy savings. We're excited to share these best practices with OCP and work with the broader community to standardize sustainability measurement and optimization in this important area.

As the industry body focused on system integration (compute, memory, storage, management, power and cooling), the OCP Foundation is uniquely positioned to facilitate the industry-wide codesign we need. Google is active in OCP, serving in leadership roles, incubating new initiatives, and supporting numerous contributions. These announcements are the latest example of our history of fostering open and standards-based ecosystems. Open ecosystems enable a diverse product marketplace, agility in time to market, and the opportunity to be strategic about innovation. Google's open source leadership is multidimensional: driving industry standardization and adoption, making strong and varied community contributions to grow the ecosystem, and providing broad policy and organizational leadership and sharing of best practices. The four initiatives we are announcing today, in combination with the Google-led talks at the OCP Summit, provide a small glimpse into the exciting new era of systems ahead. We look forward to working with the broader OCP community and other industry organizations to build a vibrant open hardware ecosystem that supports even more innovation in this space. Please join us in this exciting journey.
Source: Google Cloud Platform

Unifying data and AI to bring unstructured data analytics to BigQuery

Over one third of organizations believe that data analytics and machine learning have the most potential to significantly alter the way they run their business over the next three to five years. However, only 26% of organizations are data driven. One of the biggest reasons for this gap is that a major portion of the data generated today is unstructured, which includes images, documents, and videos. It is estimated to account for up to 80% of all data, and it has so far remained largely untapped by organizations.

One of the goals of Google's data cloud is to help customers realize value from data of all types and formats. Earlier this year, we announced BigLake, which unifies data lakes and warehouses under a single management framework, enabling you to analyze, search, secure, govern and share unstructured data using BigQuery. At Next '22, we announced the preview of object tables, a new table type in BigQuery that provides a structured record interface for unstructured data stored in Google Cloud Storage. This enables you to directly run analytics and machine learning on images, audio, documents and other file types using existing frameworks like SQL and remote functions, natively in BigQuery itself. Object tables also extend our best practices for securing, sharing and governing structured data to unstructured data, without needing to learn or deploy new tools.

Directly process unstructured data using BigQuery ML

Object tables contain metadata such as URI (Uniform Resource Identifier), content type, and size that can be queried just like other BigQuery tables. You can then derive inferences on unstructured data using machine learning models with BigQuery ML. As part of the preview, you can import open source TensorFlow Hub image models, or your own custom models, to annotate images. Very soon, we plan to enable this for audio, video, text and many other formats, along with pre-trained models to enable out-of-the-box analysis. Check out this video to learn more and watch a demo.

# Create an object table
CREATE EXTERNAL TABLE my_dataset.object_table
WITH CONNECTION us.my_connection
OPTIONS(uris=["gs://mybucket/images/*.jpg"],
        object_metadata="SIMPLE", metadata_cache_mode="AUTOMATIC");

# Generate inferences with BQML
SELECT * FROM ML.PREDICT(
  MODEL my_dataset.vision_model,
  (SELECT ML.DECODE_IMAGE(data) AS img FROM my_dataset.object_table)
);

By analyzing unstructured data natively in BigQuery, businesses can:

- Eliminate manual effort, as pre-processing steps such as tuning image sizes to model requirements are automated
- Leverage the simple and familiar SQL interface to quickly gain insights
- Save costs by utilizing existing BigQuery slots without needing to provision new forms of compute

Adswerve is a leading Google Marketing, Analytics and Cloud partner on a mission to humanize data. Twiddy & Co., a vacation rental company in North Carolina, is Adswerve's client. By combining structured and unstructured data, Twiddy and Adswerve used BigQuery ML to analyze images of rental listings and predict click-through rates, enabling data-driven photo editorial decisions.
"Twiddy now has the capability to use advanced image analysis to stay competitive in an ever-changing landscape of vacation rental providers, and can do this using their in-house SQL skills," said Pat Grady, Technology Evangelist, Adswerve.

Process unstructured data using remote functions

Customers today use remote functions (UDFs) to process structured data with languages and libraries that are not supported in BigQuery. We are extending this capability to process unstructured data using object tables. Object tables provide signed URLs that allow remote UDFs running on Cloud Functions or Cloud Run to process the object table content. This is particularly useful for running Google's pre-trained AI models, including Vision AI, Speech-to-Text, and Document AI, running open source libraries such as Apache Tika, or deploying your own custom models where performance SLAs are important. Here's an example of an object table created over PDF files that are parsed using an open source library running as a remote UDF:

SELECT uri, extract_title(samples.parse_tika(signed_url)) AS title
FROM EXTERNAL_OBJECT_TRANSFORM(TABLE pdf_files_object_table,
                               ["SIGNED_URL"]);

Extending more BigQuery capabilities to unstructured data

Business intelligence: The results of analyzing unstructured data, either directly in BigQuery ML or via UDFs, can be combined with your structured data to build unified reports using Looker Studio (at no charge), Looker, or any of your preferred BI solutions. This allows you to gain more comprehensive business insights. For example, online retailers can analyze product return rates by correlating them with images of defective products. Similarly, digital advertisers can correlate ad performance with various attributes of ad creatives to make more informed decisions.

BigQuery search index: Customers are increasingly using the search functionality of BigQuery to power search use cases. These capabilities now extend to unstructured data analytics as well. Whether you use BigQuery ML to produce inferences on images or use remote UDFs with Document AI to extract document content, the results can be search indexed and used to support search access patterns. Here's an example of a search index on data that is parsed from PDF files:

CREATE SEARCH INDEX my_index ON pdf_text_extract(ALL COLUMNS);

SELECT * FROM pdf_text_extract WHERE SEARCH(pdf_text, "Google");

Security and governance: We are extending BigQuery's row-level security capabilities to help you secure objects in Google Cloud Storage. By securing specific rows in an object table, you can restrict the ability of end users to retrieve the signed URLs of the corresponding URIs present in the table.
Extending more BigQuery capabilities to unstructured data

Business intelligence – The results of analyzing unstructured data, either directly in BigQuery ML or via UDFs, can be combined with your structured data to build unified reports using Looker Studio (at no charge), Looker, or any of your preferred BI solutions, giving you more comprehensive business insights. For example, online retailers can analyze product return rates by correlating them with images of defective products, and digital advertisers can correlate ad performance with attributes of their ad creatives to make more informed decisions.

BigQuery search index – Customers are increasingly using the search functionality of BigQuery to power search use cases, and these capabilities now extend to unstructured data as well. Whether you use BigQuery ML to run inference on images or remote UDFs with Document AI to extract text from documents, the results can be search indexed and used to support search access patterns. Here's an example of a search index on data parsed from PDF files:

CREATE SEARCH INDEX my_index ON pdf_text_extract(ALL COLUMNS);

SELECT * FROM pdf_text_extract WHERE SEARCH(pdf_text, "Google");

Security and governance – We are extending BigQuery's row-level security capabilities to help you secure objects in Google Cloud Storage. By securing specific rows in an object table, you can restrict end users from retrieving the signed URLs of the corresponding URIs in the table. This is a shared responsibility security model: administrators need to ensure that end users don't have direct access to Google Cloud Storage, so that signed URLs obtained from object tables are the only access mechanism. Here's an example of a policy that restricts PII images (those where a face was detected) so that they are first processed through a blur pipeline:

CREATE ROW ACCESS POLICY pii_data ON object_table_images
GRANT TO ("group:admin@example.com")
FILTER USING (ARRAY_LENGTH(metadata)=1 AND
  metadata[OFFSET(0)].name="face_detected")

Soon, Dataplex will support object tables, allowing you to automatically create object tables in BigQuery and to manage and govern unstructured data at scale.

Data sharing – You can now use Analytics Hub to share unstructured data with partners, customers and suppliers without compromising on security and governance. Subscribers can consume the rows of object tables that are shared with them and use signed URLs to access the unstructured data objects.

Getting Started

Submit this form to try these new capabilities and unlock the power of your unstructured data in BigQuery, and watch this demo to learn more.

Special thanks to engineering leaders Amir Hormati, Justin Levandoski and Yuri Volobuev for contributing to this post.

Related Article: Built with BigQuery: BigQuery ML enables Faraday to make predictions for any US consumer brand. How building with BigQuery ML enables Faraday to make predictions for any US consumer brand.
Source: Google Cloud Platform

EVO2CLOUD – Vodafone’s SAP migration from on-prem to Google Cloud

Editor's note: Vodafone is migrating its SAP system, the backbone of its financial, procurement and HR services, to Google Cloud. Vodafone's SAP system has been running on-premises for 15 years, during which time it has grown significantly in size, making this one of the largest and most complex SAP migrations in EMEA. By integrating its cloud-hosted SAP system with its data ocean running on Google Cloud, Vodafone aims to improve operational efficiency and drive innovation.

Vodafone: from telco to tech-co

Vodafone, a leading telecommunications company in Europe and Africa, is accelerating its digital transformation from a telco to a tech-co that provides connectivity and digital services such as 5G, IoT, TV and hosting platforms. Vodafone is partnering with Google Cloud to enable various elements of this transformation, from building one of the industry's largest data oceans on Google Cloud to driving value from data insights and deploying AI/ML models. One of Vodafone's core initiatives is 'EVO2CLOUD', a strategic program to migrate its SAP workloads to Google Cloud. Vodafone uses SAP for its financial, procurement and HR services; it's the backbone of its internal and external operations. High availability and reliability are fundamental requirements to ensure smooth operation with minimal downtime. Moreover, hosting SAP on Google Cloud is a foundation for digital innovation and for maintaining cybersecurity.

EVO2CLOUD: enabling SAP on Google Cloud

When complete, EVO2CLOUD will be one of the largest SAP-to-Google Cloud migrations. Over the course of two to three years, EVO2CLOUD will transform a broad SAP ecosystem of more than 100 applications that have been running on-premises for the past 15 years into a leaner, more agile and scalable deployment that is cloud-first and data-led. With EVO2CLOUD, Vodafone aims to improve operational efficiency, increase its NPS and maximize business value by incorporating SAP into its cloud and data ecosystem, introducing data analytics capabilities to the organization and enabling future innovation. As such, EVO2CLOUD is providing standardized SAP solutions and facilitating the transition to a data-centric model that leverages real-time, reliable data to drive corporate decision making.

SAP's operating model on Google Cloud

Vodafone foresees a step change in its operating model, where it can leverage on-demand, highly performant, memory-optimized M1 and M2 infrastructure at low cost. Thanks to infrastructure as code, this improved operating model will provide increased capacity, high availability, flexibility and consistent enforcement of security rules. Vodafone is also reshaping its security architecture and leveraging the latest technologies to ensure privacy, data protection, and resilient threat detection. Furthermore, it expects to increase its release frequency from bi-annual rollouts to weekly release cycles, increasing agility and introducing features faster. In short, Vodafone wants to build agility and flexibility into all that it does, from design all the way through delivery and operations, and DevSecOps will be an integral part of its operating model.

Leveraging data to drive innovation

Before migrating to Google Cloud, it was difficult for Vodafone to extract and make use of its SAP data. With the transition to the cloud and with Google Cloud tools, it can expand how it uses that data for analytics and process mining. This includes operations and monitoring opportunities and the ability to map SAP data with other external sources, for example combining HR data from SAP with non-SAP data, resulting in data enrichment and additional business value.
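What such enrichment could look like once SAP data has been replicated into the data ocean on BigQuery is sketched below, purely as an illustration; every dataset, table and column name is hypothetical and not Vodafone's actual schema.

-- Hypothetical sketch: enrich replicated SAP HR records with a non-SAP source in BigQuery.
-- sap_replica.hr_master and facilities.badge_activity are illustrative names only.
SELECT
  hr.employee_id,
  hr.cost_center,
  COUNT(b.badge_event_ts) AS badge_events_last_30d
FROM sap_replica.hr_master AS hr
LEFT JOIN facilities.badge_activity AS b
  ON b.employee_id = hr.employee_id
  AND b.badge_event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY hr.employee_id, hr.cost_center;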
Vodafone is continuing to explore opportunities with Google Cloud to identify even more ways to leverage its data.

Why Google Cloud, and what's next

Vodafone is not only rebuilding its SAP system on Google Cloud; it sees this project as the first step in a three-phase transformation:
1. Redesigning the SAP environment and migrating it to Google Cloud to make it ready for integration with Vodafone's data ocean.
2. Integrating SAP with Vodafone's data ocean, which sits on Google BigQuery.
3. Leveraging cloud-based data analytics tools to optimize data usage, processes, and how Vodafone operates its business.

Moving to Google Cloud is in line with Vodafone's data-centric strategy, which aims to introduce enhanced data analytics and artificial intelligence capabilities and to serve Vodafone's employees and customers more effectively and closer to real time.

Transformation and change management

The migration to Google Cloud is underway, with Vodafone, Google Cloud, SAP and Accenture working together as one team to make this transformation a success.

"An innovative and strategic initiative, co-shaped with a truly integrated partnership. A daily collaboration among four parties, Vodafone, Google, SAP and Accenture are executing the cloud transformation of a complex SAP estate within a compressed timeframe, for rapid benefits realization and accelerated innovations in the cloud." – Antonio Leomanni, EVO2CLOUD program lead, Accenture

Vodafone recently celebrated the pilot's go-live, an important milestone in this program. Change management has been fundamental to this transformation, incorporating learning and enablement, financial governance, lifecycle management, security, architecture reviews and innovation. By focusing on these disciplines, Vodafone and Google Cloud are ensuring the success of this transformation and strengthening their partnership.

Conclusion

The SAP migration aligns with Vodafone's data strategy by enabling a step change toward operational efficiency and innovation through the integration of SAP with Vodafone's data ocean. The keys to the success of this ongoing migration are:
- Clear migration requirements and objectives: infrastructure availability, security and resilience
- Strong change management
- Application of the right technologies and tools

To learn more about how Google Cloud is advancing the telecommunications industry, visit us here.

Related Article: Delivering data-driven IT and networks, Google Cloud expands its analytics partner ecosystem for telecommunications. Communication Service Providers are becoming data driven and leveraging Google Cloud and their partners to solve tough problems.
Source: Google Cloud Platform

Fortress Vault on Google Cloud: Bringing private data to NFTs

Over the past two years, the general population has become more acquainted with cryptocurrencies and the first iterations of NFTs, which were among the earliest use cases for blockchain technology. This public awareness and participation has led to a growing interest in, and demand for, Web3 technology at the enterprise level. But building trust in a new wave of technology, especially in large organizations, doesn't happen overnight. That is why it's critical for Web3 technologists to bring the broader benefits, use cases, and core capabilities of blockchain to the forefront of the conversation. If businesses don't understand how this new technology can help them, how can they prioritize it among competing tech plans and resources? And without baseline protocols that account for privacy, confidential data, and IP, how can they future-proof a business?

Answering these questions and delivering trustworthy infrastructure is exactly why Scott Purcell and I founded Fortress Web3 Technologies: to bring about the next wave of Web3 utility. The company's goal is to provide infrastructure that eliminates barriers to Web3 adoption with RESTful APIs and widgetized services that enable businesses to quickly launch and scale their Web3 initiatives. Our tools include embeddable wallets for NFTs and fungible rewards tokens; NFT minting engines; and core financial services. These include payments, compliance, and crypto liquidity via our wholly-owned financial institution, Fortress Trust. Being overseen by a chartered, regulated entity ensures privacy, compliance and business continuity.

Fortress chose Google Cloud to help usher in this new-wave technology because no other cloud provider is better suited to helping regulated industries get up to scale on our Web3 infrastructure and blockchain technology. I'll get into more specifics below, but at the highest level: IPFS (the current standard for distributed storage) is going to face major resistance in industries that are heavily regulated or deal in ownership rights. By leveraging Google Cloud, which holds critical certifications such as HIPAA, Department of Defense, ISO, and Motion Picture, we're striking the appropriate balance between decentralization and centralization, using the best of both technologies.

The Fortress Vault on Google Cloud is a huge and necessary step forward as the first NFT-database solution to protect intellectual property, confidential documents, and other electronic records. It is the first technology that marries privately stored content with the accessibility, privacy, portability, and provenance that blockchain provides.

Understanding Non-Fungible Tokens (NFTs)

An NFT is not an expensive jpeg. From a technical point of view, an NFT is a unique key stored in a distributed and trustless ledger we call a blockchain. This blockchain token is uniquely identifiable from any other token and acts as a digital key to authenticate ownership and unlock data held in a database. While different blockchains have adopted different standards, Ethereum's standards are a good proxy for the overall concepts. Going back to the primitives, if you read the EIP-721 proposal, metadata is explicitly optional. While today's NFT hype has indeed leveraged that technology to monetize and distribute digital art, the potential of blockchain lies in the ability to digitally represent ownership of a wide variety of asset classes on a decentralized ledger. Unique, non-fungible tokens are not a new concept.
We use them every day in technical systems for things like authentication, database keys, idempotency, and much more. Now, thanks to blockchain technology, you can take those out of their walled gardens and into an open platform that can lead to transformational utility and applications.

Take real estate, for example. Instead of a paper-based title documenting you as the owner of your home, imagine that the title is tokenized with an NFT on a blockchain. Any platform could cryptographically verify the authenticity of that form of title along with its provenance in real time and confirm that you're the rightful owner of that property. But perhaps you don't want the title of your property visible to others, nor the associated permits, tax documents, architectural drawings, contractor lists, and other documents. Maybe you just want banks, insurance companies, and others to be able to confirm that you are indeed the owner without revealing the details of those records. The NFT metadata records immutable public-facing provenance, while the underlying data remains private and protected using Fortress Vault on Google Cloud. Apply that same utility to other sensitive information such as medical records, intellectual property, estate documents, corporate contracts, and other confidential information, and it's easy to see why enterprises are just now exploring how to hold traditional assets as NFTs.

Fortress Vault: Intellectual Property, Confidential Documents, and Other Electronic Records

What NFTs and Web3 have been lacking is the ability to make the tokenized data accessible exclusively to the owner, and only the owner. NFTs are a digital key to unlock everything ranging from music and event tickets, to real estate deeds and healthcare records, to estate documents, to everything else in the world that's digital.

This is why we created the Fortress Vault. When building it, we had to make a fundamental decision: either go with a distributed and permissionless storage protocol like IPFS, Filecoin, or other blockchain-based database offerings, or work with an industry-leading cloud platform that understands data integrity and is establishing itself as the leader in the space. Ultimately, we chose Google Cloud for its industry-leading object storage, professional management, fault tolerance, and myriad certifications for architecture and data integrity.

Some of the challenges faced when vaulting a vast variety and quantity of digital content at scale include:
- Balancing data availability against the cost of storage
- Data redundancy
- Long-term archival needs
- Business continuity
- Flexibility to meet the current and future needs of the rapidly evolving Web3 industry

Google Cloud is the clear leader across all of these pain points. The object lifecycle management of Google Cloud Storage enables efficient transitions between storage classes when data matures to a certain point or is updated with newer files. Content in the Fortress Vault can range from on-demand data to long-term uses, such as estate planning documents that won't be accessed for 30 years. When storing NFT data, robust disaster recovery is table stakes. We quickly gravitated to the automatic redundancy options and multi-region storage buckets that let us customize where we store our data without massive devops and management overhead. By leveraging Google Cloud, we can offer industry-leading retention, redundancy, and integrity for our customers' NFT content.

Working with a leader in data storage was key to making this a reality.
Additionally, Google Cloud shares our vision of bringing every industry forward into the world of Web3. We are both focused on building the critical infrastructure that allows everyone from Web3-native companies to Fortune 500 brands to navigate the strategic shift to blockchain technology.

Why Web3 Matters

"Web3" is shorthand for the "third wave" of the internet and the technological innovation that brought us here. Web 1, the earliest internet, democratized reading and access to information, opening the doors to mass communication. Web 2 expanded on that with the ability to read and "write": it democratized publishing by letting people directly engage in producing information through blogs, social media, gaming, and contributions to collective knowledge. Web 3 expands our technological capabilities even further with the ability to read, write, and "own." With blockchain, we can now establish clear provenance, with visibility into the origination of ownership of any tokenized asset and into the chain of ownership. We can rely on this next-generation technology to track, authenticate, protect, and keep a ledger of our assets.

With the Fortress Vault on Google Cloud, we can ensure the integrity of non-public data while making it accessible via NFTs. This is a game changer for Web3 adoption, particularly in industries like music, event ticketing, gaming, finance, transportation, real estate, and healthcare. Every industry can benefit from the ability to tokenize assets on blockchain technology without leaving the trusted safety of Google Cloud data storage. The market for NFTs is everyone. And the Fortress Vault on Google Cloud is the technology evolution that makes it possible for Web3 innovators to confidently build, launch, and scale their initiatives across every industry imaginable.

Related Article: What's new in Google Cloud databases: More unified. More open. More intelligent. Google Cloud databases deliver an integrated experience, support legacy migrations, leverage AI and ML and provide developers world class…
Source: Google Cloud Platform