Strengthening our European data sovereignty offerings with Assured Workloads for EU

European organizations, both public and private, are migrating their operations and data to the cloud in increasing numbers. In doing so, they need confidence they can meet their unique needs for security, privacy, and digital sovereignty. Key requirements include the ability to store data within a European geographic region, to ensure that support is provided by EU personnel, and to control administrative access to their customer data and the encryption keys used to protect that data. To help meet these needs for customers using Google Cloud Platform, we are pleased to announce the general availability of Assured Workloads for EU. As covered in detail in our introductory blog post, this product allows GCP customers to create and maintain workloads with:

- Data residency in their choice of EU Google Cloud regions
- Personnel access and customer support restricted to EU persons located in the EU
- Cryptographic control over data access, including customer-managed encryption keys

Let's look at how to configure a cloud workload with these controls using the Google Cloud Console.

Configuring Assured Workloads for EU

Assured Workloads functions at the folder level of an organization, allowing specific controls to be applied to and enforced selectively for cloud workloads with sovereignty requirements. The first step in creating an Assured Workloads folder is to choose where data will be stored. Selecting the European Union option provides access to two different types of Assured Workloads controls:

- EU Regions and Support: This option, now in General Availability, allows customers to restrict storage of their data to the EU, in addition to restricting support and access to EU persons.
- EU Regions and Support with sovereignty controls: This option, now in Public Preview, builds on the capabilities of EU Regions and Support and provides additional levels of sovereign control by encrypting customer data with externally stored and managed keys from Cloud External Key Manager (EKM) and signing Access Approval requests with those same external keys.

Signed Access Approval is a new feature that adds a layer of assurance for actions authorized through Access Approval, a platform control which requires explicit customer consent before administrative access to customer data or configurations is permitted. It signs approvals you grant via Access Approval with an external key from your External Key Manager, helping to verify that an access request was approved by an outside party. Signed Access Approval is currently available for customer configurations that use Thales external key management systems and is coming soon to other external key management systems that integrate with EKM.

Customers can apply either option for Assured Workloads for EU at the folder level, giving them the flexibility to run some workloads with EU Regions and Support while applying the additional cryptographic controls to workloads that require a higher level of data sovereignty. In either case, Assured Workloads configures and enforces the chosen controls automatically.

Customer choices for digital sovereignty

Assured Workloads for EU is the latest in a series of offerings from Google Cloud that deliver what we call Software Defined Community Clouds: cloud infrastructure provisioned for exclusive use by a specific set of organizations, with controls tailored to their specific jurisdictional needs.
Assured Workloads includes offerings for customer groups in the United States, Canada (in Preview), and now the European Union, and will continue to expand to other regions around the world. Customers may have additional operational sovereignty needs focused on the independent operation and verification of these controls. That is why, as part of our ‘Cloud. On Europe’s Terms.’ initiative, we’ve announced sovereign cloud solutions powered by Google Cloud and offered through trusted partners such as T-Systems in Germany, Thales in France, and Minsait in Spain. For many organizations, however, the ability to meet data sovereignty requirements for specific workloads will be a meaningful step forward in their digital sovereignty journey.

Take the next step

Assured Workloads EU Regions and Support is now generally available for Google Compute Engine, Persistent Disk, BigQuery, Google Cloud Storage, and Cloud KMS (EKM), with EU Regions and Support with sovereignty controls now available in Preview for the same services. Read more about both offerings in our documentation. To learn more, please contact Google Cloud Sales.
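For teams that script their environment setup, here is a minimal sketch of creating an Assured Workloads folder with the EU Regions and Support regime from the command line. The organization ID, billing account, location, and display name are placeholders, and the exact flags and supported values should be verified against the Assured Workloads and gcloud documentation before use.

```
# Hedged sketch: create an Assured Workloads folder for EU Regions and Support.
# ORGANIZATION_ID and BILLING_ACCOUNT_ID are placeholders; confirm flag names and
# the supported --compliance-regime and --location values in the documentation.
gcloud assured workloads create \
  --organization=ORGANIZATION_ID \
  --location=europe-west4 \
  --display-name="eu-sovereign-workloads" \
  --compliance-regime=EU_REGIONS_AND_SUPPORT \
  --billing-account=billingAccounts/BILLING_ACCOUNT_ID
```

Once the folder exists, projects created under it inherit the data residency and personnel access controls described above, which Assured Workloads enforces automatically.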
Source: Google Cloud Platform

How Newsweek increased total revenue per visit by 10% with Recommendations AI

Newsweek provides the latest news, in-depth analysis, and ideas about international issues, technology, business, culture, and politics to its readers around the world. While editors pick the best articles to display on the home page and topic pages, it is also critical for Newsweek to offer a personalized experience by delivering fresh and relevant article recommendations tailored to the unique interests of each reader. This need became even more important during the pandemic, as readers wanted to be kept informed about the latest news and understand its impact on their own lives and businesses.

Personalization with Recommendations AI

Google has spent years delivering recommended content across flagship properties such as Google Ads, Google Search, and YouTube. Recommendations AI takes advantage of Google's expertise in recommendations and is powered by state-of-the-art machine learning models. It is also a fully managed service with automated model training and recommendation serving infrastructure that has helped meet Newsweek's planet-scale needs.

Newsweek had been concerned that a sizable fraction of its users left the website after reading only one article, and as a result was evaluating deploying ML-based recommendations on its article detail pages to increase user engagement. Newsweek and Google Cloud expected that highly personalized recommendations from Recommendations AI would help readers find the articles they would be most interested in, thereby significantly increasing the click-through rate (CTR) of the recommendations being shown.

Newsweek ran A/B tests on both desktop and mobile to compare its existing solution with content recommendations from Recommendations AI, which leverage a user's reading history along with article metadata such as categories, titles, and publish time to ensure that recommendations are relevant, fresh, and personalized. The result was a strong improvement in business metrics.

"Google Cloud Recommendations AI has not only improved our CTR by 50%-75% and subscription conversion rate by 10%, but also allowed us to increase total revenue per visit by 10%," says Michael Lukac, Newsweek's Chief Technology Officer. "The fully managed service, advanced AI, and real-time personalization have allowed us to make an improvement in our user engagement. It has improved the diversity of content and personalized assets to the individual reader. Newsweek has been able to easily create and edit models from the dashboard while retraining them daily to handle changing catalogs."

Next Steps

Newsweek has seen tremendous benefit from Recommendations AI's ability to create a superior reader experience with personalization, and sees opportunities to further improve the reader's journey by having Google cover more real estate on its site, app, and other channels such as personalized newsletters. To explore what Google Cloud's Recommendations AI can do for your business, click here.
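For readers curious what a recommendation request looks like under the hood, here is a hedged sketch of calling the predict method of the Retail API, which powers Recommendations AI. The project ID, placement name, and user event payload are placeholders and assumptions, not Newsweek's configuration; check the Retail API reference for the exact request shape.

```
# Hedged sketch: request recommendations from a Recommendations AI placement.
# PROJECT_ID, the placement name, and the user event values are placeholders.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://retail.googleapis.com/v2/projects/PROJECT_ID/locations/global/catalogs/default_catalog/placements/recommended_for_you_default:predict" \
  -d '{
    "userEvent": {
      "eventType": "detail-page-view",
      "visitorId": "visitor-123",
      "productDetails": [{"product": {"id": "article-456"}}]
    }
  }'
```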
Source: Google Cloud Platform

Cloud CISO Perspectives: January 2022

I'm excited to share our first Cloud CISO Perspectives post of 2022. It's already shaping up to be an eventful year for our industry, and we're only in month one. There's a lot to recap in this post, including the U.S. government's recent efforts to address critical security issues, like open source software security and zero trust architectures. We've also released new resources from our Google Cybersecurity Action Team, like the Cloud Security Megatrends and the Boards of Directors whitepaper on cloud risk governance.

Cloud Security Megatrends

We're often asked if the cloud is more secure than on-prem (and why), so we shared our answer in a recent blog post. At Google Cloud, security by design is our priority. We've long adopted zero-trust principles for our baseline security architectures and built a global network that relies on defense-in-depth layers to protect against configuration errors and attacks. But security is always evolving, and that is why we also take advantage of the following megatrends:

- Economy of scale: Decreasing the marginal cost of security raises the baseline level of security.
- Shared fate: A flywheel of increasing trust drives more transition to the cloud, which compels even higher security and even more skin-in-the-game from the cloud provider.
- Healthy competition: The race by deep-pocketed cloud providers to create and implement leading security technologies is the tip of the spear of innovation.
- Cloud as the digital immune system: Every security update the cloud gives the customer is informed by some threat, vulnerability, or new attack technique, often identified by someone else's experience. Enterprise IT leaders use this accelerating feedback loop to get better protection.
- Software-defined infrastructure: Cloud is software defined, so it can be dynamically configured without customers having to manage hardware placement or cope with administrative toil. From a security standpoint, that means specifying security policies as code and continuously monitoring their effectiveness.
- Increasing deployment velocity: Because of cloud's vast scale, providers have had to automate software deployments and updates, usually with automated continuous integration/continuous deployment (CI/CD) systems. That same automation delivers security enhancements, resulting in more frequent security updates.
- Simplicity: Cloud becomes an abstraction-generating machine for identifying, creating, and deploying simpler default modes of operating securely and autonomically.
- Sovereignty meets sustainability: The cloud's global scale and ability to operate in localized and distributed ways creates three pillars of sovereignty. This global scale can also be leveraged to improve energy efficiency.

If you're an IT decision maker, pay attention to these megatrends: they will continue to drive and reinforce cloud security and will outpace the security of on-prem infrastructure well into the future.

U.S. Federal government cybersecurity momentum

Open source software security: Earlier this month, Google participated in the White House Summit on open source software security. The meeting came at a critical time for the industry following December's Log4j vulnerabilities, and was both a recognition of the challenge and an important first step toward addressing it. The open source software ecosystem is not homogeneous, despite the fact that the industry often thinks of or treats it this way. Some of it, like Linux, is highly curated, while other critical software is supported through diffuse communities including technology companies and other stakeholders. There is also a long tail of many other critical projects driven by a dedicated community of maintainers around the world, including Googlers. In light of this reality, we welcomed the chance to share our recommendations to advance the future of open source software security. Some work we've done includes founding the Open Source Security Foundation, which has already been instrumental in making security improvements. We've also helped drive a number of key security initiatives within the open source community, including Security Scorecards, the SLSA framework to improve the security and integrity of open source packages, and Secure Open Source Rewards to financially incentivize improvements to critical open source security projects.

OMB's Federal zero trust strategy: The publication of the Office of Management and Budget's zero trust architecture strategy marks an important step for the U.S. federal government's efforts to modernize under Executive Order 14028. Google Cloud supports this approach, which recognizes the immense security benefits offered by modern computing architectures. For the past decade, Google has successfully applied zero trust principles through our BeyondCorp and BeyondProd frameworks for providing end-user access and securing our cloud workloads. And we've brought these best practices from our own journey to global governments and businesses of any size through solutions like BeyondCorp Enterprise and capabilities like Binary Authorization and Anthos Service Mesh, which are embedded in Anthos, our managed application platform. For Federal agencies embarking on this zero trust journey, the Google Cybersecurity Action Team will offer our expertise by conducting Zero Trust Foundations strategy workshops, which can help organizations in the public and private sectors develop actionable and achievable strategies and plans for zero trust implementation.

Google Cybersecurity Action Team Highlights

Here are the latest updates, products, services, and resources across our security teams this month:

Security

- Democratizing security operations: We recently announced that Siemplify, a leading security orchestration, automation and response (SOAR) provider, is joining Google Cloud to help companies better manage their threat response. Providing a proven SOAR capability with Chronicle's approach to security analytics is an important step forward in our vision to advance invisible security and democratize security operations for every organization.
- Security by design: The Highmark Health security team is using "secure-by-design" techniques to address the security, privacy, and compliance aspects of its Living Health solution with Google Cloud's Professional Services Organization (PSO). Google has long advocated for and followed security-by-design principles, which is why we're continuously building enhanced security, controls, resiliency, and more into our cloud products and services.
- Secure collaboration for hybrid work environments: The Google Workspace team shared its recommendations for businesses as they prepare for the future of work, where the hybrid/flexible work model is becoming standard practice and a new approach to security is essential.
- Anthos Policy Controller CIS Benchmark enforcement: A big part of our shared fate philosophy is to build secure products and not just security products. A recent example of this in action is embedding CIS benchmark policy conformance in the Anthos Policy Controller. We believe the more we embed approaches like this into our products, the more application and infrastructure teams can intrinsically embed security at the start and reduce toil for the security team.
- DevOps for technology-driven organizations and startups: A key success factor for many security programs is the partnership and integration with development teams, and there are some great resources and lessons in our DORA research.
- Security by design with Chrome OS: ABN AMRO's Asia-Pacific region team recently shared how they are using Chrome OS and CloudReady to work securely in the cloud, reduce total cost of ownership, and add flexibility for employees. This is a great example of secure-by-design principles in the use of Chromium.

Risk & Compliance

- Boards of Directors summary guide to cloud risk governance: The latest whitepaper from the Google Cybersecurity Action Team outlines how boards of directors can prioritize safe, secure, and compliant adoption processes for cloud technologies within their organizations.
- TruSight Risk Assessment of Google Cloud: TruSight recently released a comprehensive risk assessment report on Google Cloud. Our Enterprise Trust team collaborated on this robust assessment of Google Cloud services to validate the design and implementation of controls. TruSight's risk assessment of our security controls will help customers accelerate and complete their risk management due diligence.
- Data governance: Check out this new blog series on data governance, where our teams explain the role of data governance, its importance, and the processes needed to run an effective data governance program. Implementing data governance will help maximize the value derived from business data, build user trust, and ensure compliance with required security measures.

Controls and Products

- Encrypting Data Fusion: To help meet the security, privacy, and compliance requirements of customers in regulated industries like finance or the public sector, we announced the general availability of Customer-Managed Encryption Keys (CMEK) integration for Cloud Data Fusion, which enables encryption of both user data and metadata at rest with a key that customers can control through our Cloud Key Management Service (KMS).

Don't forget to sign up for our newsletter if you'd like to have our Cloud CISO Perspectives post delivered to your inbox every month. We'll be back next month with more updates and security-related news.
Source: Google Cloud Platform

Optimize the cost of your Google Cloud VMware Engine deployments

Google Cloud VMware Engine allows you to deploy a managed VMware environment with dedicated hardware in minutes, with the flexibility to add and remove ESXi nodes on demand. This flexibility is particularly convenient for quickly adding compute capacity as needed. However, it is important to implement cost optimization strategies and review processes so that increased cost does not come as a surprise. Since hardware needs to be added to ESXi clusters to accommodate increased workload capacity, additional care needs to be taken when scaling out Private Clouds.

In this blog, we explore best practices for operating Google Cloud VMware Engine Private Clouds with a focus on optimizing overall cost.

Google Cloud VMware Engine Billing Principles

Customers are billed hourly by the number of VMware ESXi nodes that have been deployed. Similar to the commitment-based pricing model that Compute Engine offers for Virtual Machine instances, Google Cloud VMware Engine offers committed use discounts for one- and three-year terms. For detailed pricing information, refer to the pricing section on the Google Cloud VMware Engine Product Overview page.

Cost Optimization Strategy #1: Apply Committed Use Discounts

Committed Use Discounts (CUDs) are discounts based on a commitment to running a number of Google Cloud VMware Engine nodes in a particular region for either a one- or three-year term. CUDs for a three-year commitment can provide up to a 50% discount if their cost is invoiced in full at the start of the contract. As you might expect, commitment-based discounts cannot be canceled once purchased, so you need confidence in how many nodes you will run in your Private Cloud. Apply this discount for the minimum number of nodes you will run over a one- or three-year period, and revise the number of needed nodes regularly if Google Cloud VMware Engine is the target platform of your data center migration.

Cost Optimization Strategy #2: Optimize your Storage Consumption

If your workloads consume a lot of storage (e.g. backup systems, file servers, or large databases), you may need to scale out the cluster to add additional vSAN storage capacity. Consider the following optimization strategies to keep storage consumption low:

1. Apply custom storage policies which use RAID 5 or RAID 6 rather than RAID 1 (the default storage policy) while achieving the same Failures to Tolerate (FTT). The FTT number is a key metric, as it is directly linked to the monthly uptime SLA of the ESXi cluster. A storage policy using RAID 1 with FTT=1 incurs a 100% storage overhead, since a RAID 1 encoding scheme mirrors blocks of data for redundancy. Similarly, if FTT=2 is required (e.g. for more critical workloads that require 99.99% uptime availability), RAID 1 produces a 200% storage overhead. A RAID 5 configuration is more storage efficient, as a storage policy with FTT=1 can be achieved with only a 33% storage overhead. Likewise, RAID 6 can provide FTT=2 with only a 50% storage overhead, which means 50% of storage consumption can be saved compared to using the default RAID 1 storage policy. Note, however, that there is a tradeoff: RAID 1 requires fewer I/O operations to the storage devices and may provide better performance.

2. Create new disks using the "Thin Provisioning" format. Thin-provisioned disks save storage space as they start small and expand as more data is written to the disk.

3. Avoid backups filling up vSAN storage. Backup tools such as Actifio provide integrations between VMware environments and Cloud Storage, allowing operators to move backups to a cheaper storage location. Backup data which needs to be retained long-term should be stored in Cloud Storage buckets with lifecycle policies that move data to a cheaper storage class after a certain time period (see the lifecycle policy sketch at the end of this post).

4. Enable deduplication and compression on the vSAN cluster to reduce the amount of redundant data blocks and hence the overall storage consumption.

Cost Optimization Strategy #3: Rightsize ESXi Clusters

1. Size ESXi clusters such that CPU, memory, and storage utilization are at a high level but the cluster can still absorb the outage of an ESXi node without any failure of workloads. Operating a cluster with resource utilization close to full capacity might cause an outage in case of a sudden hardware failure. Keeping the highest resource utilization metric (CPU, memory, or storage) at approximately 70% allows safe operation of the cluster while still using its capabilities.

2. Start new Private Cloud deployments with single-node clusters and expand them when needed. Google Cloud VMware Engine has recently added the ability to create private clouds that contain a single node for testing and proofs of concept. Single-node clusters can only be kept for a maximum of 60 days and do not have any SLA. However, they provide a great way to minimize cost during integration with day-2 tooling, configuration, and testing. Once the Private Cloud landing zone is fully configured, the one-node cluster can be extended to a three-node cluster to become eligible for an SLA.

3. Consolidate ESXi clusters if possible: If you are running workloads on multiple clusters, review whether clusters can be consolidated for a more efficient balancing of resources across clusters. As an example, if workloads are divided by OS type or by production and non-production, a higher resource utilization may be achieved if clusters are consolidated. However, take care to review whether there are other constraints which would prevent consolidation, such as licensing requirements. If VMs need to run on specific nodes for licensing reasons, consider DRS affinity rules to pin workloads to specific nodes.

4. Consolidate Private Clouds if possible: Review whether Private Clouds can be consolidated if you run more than one. Each Private Cloud requires its own set of management VMs, which adds overhead to the overall resource consumption.

Cost Optimization Strategy #4: Review Resource Utilization of Workloads

1. Review the resource utilization of VMs on an ongoing basis after running applications in a steady state. Extract vCenter metrics programmatically or visually to produce right-sizing recommendations. For instance, if you notice that VMs are allocated more CPU and memory resources than they need, tune the VM parameters during a scheduled downtime of the application (this requires a reboot). Consider scheduling the execution of a script which extracts CPU and memory utilization statistics from vCenter and stores the data in a convenient format such as a CSV file. As an example, this can be implemented using PowerShell from a script execution host. Define criteria to characterize workloads as over- or underutilized by comparing their average CPU and memory utilization over a minimum of 30 days with a reasonable threshold value. Example conditions (thresholds can be tuned to meet your requirements):

- CPU usage (30-day and 1-year average) is less than 50%, and
- CPU usage (30-day and 1-year maximum) is less than 80%

Despite the above recommendations, avoid making abrupt changes and always carefully review the data against the requirements of the workloads.

2. Use Cloud Monitoring with the standalone agent integration to review cluster and workload metrics. Follow the installation guide to enable metrics forwarding and integrate vCenter metrics with Cloud Monitoring.

3. Consider using third-party tooling, such as VMware vROps, to get insights into capacity and utilization to help with right-sizing if the workload is CPU- or memory-bound (see this blog post for more details). Note that vROps requires an additional license and needs to be installed per VM/host.

Managing Cost Optimization – People and Process

The efficacy of cost optimization also hinges on the readiness of people and processes to support and run the operations.

1. Set up a cost optimization function – In any cloud operating model, cost management and optimization is not the responsibility of a single team but requires a coordinated effort from multiple teams and roles. Sponsorship and support from executive leadership is needed to make optimization a shared top priority and to build a central cost optimization function within your Cloud Center of Excellence (CoE), involving Finance (for defining optimization goals and monitoring spend), Architecture (for reviewing optimization options), and Operations/SRE (for implementing the options). Additionally, engage business and application stakeholders to validate the availability and performance impact on workloads.

2. Adopt a crawl-walk-run approach – Cost optimization is a continuous, ongoing operation and follows an enterprise's cloud adoption maturity curve. Define supporting processes and tools as you start, and refine them as you scale.

3. Prioritize the optimization options – While optimization can bring significant cost savings, it comes at a cost of resource effort and time. Prioritize your options based on the potential savings versus the estimated level of effort to identify the most impactful ways to reduce spend and realize quick wins.

4. Report and measure – Identify key metrics of interest (e.g. cost per user or per tenant/customer) and define KPIs to continuously measure optimization outcomes and success against them.

Refer to our Cloud FinOps whitepaper for a holistic framework on driving financial accountability and business value realization. Also check the principles of cloud cost optimization blog for additional guidance on tools and cost optimization best practices.

Call to Action

In this blog we have listed several strategies to reduce the overall cost of Private Clouds, which differ in their implementation effort. Quick wins that can be implemented to reduce cost in the short term include the use of CUDs, specifically if VMware Engine will be used as a platform for workloads for at least one year, as well as custom storage policies to optimize overall vSAN storage consumption. Optimization strategies which include the adoption of processes to monitor utilization metrics of clusters and VMs provide helpful insights on whether workloads are oversized. Adjustments to workload sizing, however, should not be made hastily and require careful review of metrics in a steady state, so this type of optimization tends to pay off over the longer term.

At Google Cloud, we have developed an architecture framework to help you optimize your spend and adopt Google Cloud VMware Engine while maximizing your returns on your journey to the cloud. If you are interested in more information, please contact your Google Cloud account team.

A special thanks to Wayne Chu for his contributions and help with the implementation of cost optimization processes with our enterprise customers.
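As a concrete illustration of the Cloud Storage lifecycle policy mentioned under Cost Optimization Strategy #2, here is a minimal sketch that moves backup objects older than 30 days to a colder storage class. The bucket name and the 30-day threshold are placeholder values rather than recommendations; tune them to your own retention requirements.

```
# Hedged sketch: age out long-term backup data to a cheaper storage class.
# Bucket name and age threshold are placeholders.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-vmware-backup-bucket
```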
Source: Google Cloud Platform

Google Tau VMs deliver over 40% price-performance advantage to customers

In November 2021, we announced the general availability of Tau VMs. Since then, Google Cloud's Tau VMs with Google Kubernetes Engine (GKE) have unlocked value for many customers who are now using Tau VMs for their production workloads, such as Ascend, who achieved over 125% higher performance; Nylas, who gained over 40% higher price-performance; and OpenX, who achieved 40% better price-performance while at the same time reducing their application latency by 62%.

T2D is the first instance type in the Tau VM family and is built on the latest 3rd Generation AMD EPYC™ processors, offering 42% higher price-performance compared to general-purpose VMs from any of the leading public cloud vendors. Tau VMs offer a leading combination of performance, price, and full x86 compatibility, giving customers the lowest-cost solution for scale-out workloads. Tau VMs are available in predefined shapes with up to 60 vCPUs per VM, 4 GB of memory per vCPU, networking up to 32 Gbps, and a slew of storage options including Standard, Balanced, and Performance PD. Tau VMs are also available as Spot VMs, offering an over 60% discount compared to on-demand pricing.

For customers looking for advanced container orchestration, GKE delivers high levels of reliability, security, and scalability, and has supported Tau VMs since the day they became available on Google Cloud. Tau VMs are ideal for CPU-bound workloads such as web serving with encryption, video encoding, compression/decompression, image processing, and horizontally scaled applications. Using Tau VMs along with GKE's cost-optimization best practices can help lower your total cost of ownership. You can add Tau VMs to new or existing GKE clusters by specifying the Tau T2D machine type in your GKE node pools through the Cloud Console or by using --machine-type in gcloud (see the sketch at the end of this post).

Here is what some of our customers have to say about Tau VMs:

Ascend provides a unified analytics and data engineering platform, and chose Tau VMs along with GKE to run their data-intensive workload, primarily because of Tau's absolute performance and price-performance advantage.

"Our core capability at Ascend is bringing together data ingestion, transformation, delivery, orchestration and observability into a single platform. To operate at scale and keep pace with our telemetry data production rates, high single-threaded performance is critical. With Google Cloud's Tau VMs with Google Kubernetes Engine (GKE), we are able to achieve over 125% higher performance than previous generation families. This has completely changed our ability to query historical metrics. Where previously metric queries against historical data over ranges longer than a couple hours were difficult, we can now easily query data ranges of multiple weeks." – Joe Stevens, Tech Lead – Infrastructure, Ascend.io

Nylas is a pioneer and leading provider of productivity infrastructure solutions for modern software. In the past year, Nylas has been using GKE in its journey to reinvent its architecture and provide its enterprise customers with bi-directional universal email sync, security compliance with the highest enterprise standards, and industry-specific machine learning services.

"For our core application, Google's Tau VMs with Google Kubernetes Engine deliver over 40% better price-performance than Amazon's Graviton-based VMs. Further, Tau VMs maintain x86 compatibility and eliminate the need to maintain a separate stack for ARM. We are moving our workload from Amazon Web Services to Google Cloud to take advantage of these benefits." – David Ting, SVP of Engineering, Nylas

OpenX operates an independent ad exchange. Operating 100% on Google Cloud has enabled OpenX to achieve improved performance, scalability, speed, and global reach.

"At OpenX, our ad exchange services over 200 billion requests every day. Getting the best combination of performance and price from the infrastructure is critically important for us. We use multiple Google Kubernetes Engine (GKE) clusters across geographic regions with autoscaling to power our ad-delivery components. Running Google Cloud's Tau VMs with GKE has enabled over 40% better price-performance and a 62% latency reduction for our application as compared to the prior generation family. We have made the move to Tau VMs for our application to take advantage of these benefits." – Paul T. Ryan, CTO, OpenX

We are excited to see Tau VMs adding value for so many of our customers by enabling industry-leading price-performance for a variety of workloads. If you haven't tried Tau VMs yet, give them a try today in our Iowa, Netherlands, and Singapore regions and move your production workloads to Tau VMs. Tau VMs will be arriving in additional regions and zones in the coming weeks. You can provision GKE node pools based on Tau VMs and explore how you can take advantage of improved price-performance for your scale-out containerized workloads. To get started, go to the Google Cloud Console, select Google Kubernetes Engine, and choose Tau T2D for your GKE nodes. To learn more about Tau VMs or other Compute Engine VM options, check out our machine types and pricing pages.
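As referenced above, here is a minimal sketch of adding a Tau T2D node pool to an existing GKE cluster with gcloud. The cluster name, node pool name, region, machine type size, and node count are placeholders; check the gcloud container node-pools reference for the full set of supported flags.

```
# Hedged sketch: add a Tau T2D node pool to an existing GKE cluster.
# All names and sizes below are placeholders.
gcloud container node-pools create tau-t2d-pool \
  --cluster=my-cluster \
  --region=us-central1 \
  --machine-type=t2d-standard-8 \
  --num-nodes=3
```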
Source: Google Cloud Platform

Bigtable Autoscaling: Deep dive and cost saving analysis

Cloud Bigtable is a fully managed service that can swiftly scale to meet performance and storage demands with the click of a button. If you're currently using Bigtable, you might configure your cluster sizes for peak throughput or programmatically scale to match your workload. Bigtable now supports autoscaling for improved manageability, and in one of our experiments autoscaling reduced the cost of a common diurnal workload by over 40%.

With autoscaling enabled, you only pay for what you need; Bigtable automatically adds or removes capacity in response to the changing demands of your workloads. Autoscaling lets you spend more time on your business and less time managing your infrastructure, thanks to the reduced overhead of capacity provisioning. Autoscaling works on both HDD and SSD clusters, and is available in all Bigtable regions.

We'll look at when and how to use this feature, go through a performance analysis of autoscaling in action, and finally see how it can impact your database costs.

Enabling Autoscaling

Cloud Bigtable autoscaling is configured at the cluster level and can be enabled using the Cloud Console, the gcloud command-line tool, the Cloud Bigtable Admin API, or the Bigtable client libraries.

With autoscaling enabled, Bigtable automatically scales the number of nodes in a cluster in response to changing capacity utilization. The business-critical risks associated with incorrect capacity estimates are significantly lowered: over-provisioning (unnecessary cost) and under-provisioning (missed business opportunities).

Autoscaling can be enabled for existing clusters or configured with new clusters. You'll need two pieces of information: a target CPU utilization and a range to keep your node count within. No complex calculations, programming, or maintenance are needed. One constraint to be aware of is that the maximum node count in your range cannot be more than 10 times the minimum node count. Storage utilization is a factor in autoscaling, but the targets for storage utilization are set by Bigtable and are not configurable.

Below are examples showing how to use the Cloud Console and gcloud to enable autoscaling. These are the fastest ways to get started.

Using Cloud Console

When creating or updating an instance via the Cloud Console, you can choose between manual node allocation and autoscaling. When autoscaling is selected, you configure your node range and CPU utilization target.

Using command line

To configure autoscaling via the gcloud command-line tool, modify the autoscaling parameters when creating or updating your cluster, as shown in the sketch at the end of this section.

Transparency and trust

On the Bigtable team, we performed numerous experiments to ensure that autoscaling performs well with our customers' common workloads. It's important that you have insight into Cloud Bigtable's autoscaling performance, so you can monitor your clusters and understand why they are scaling. We provide comprehensive monitoring and audit logging to ensure you have a clear understanding of Bigtable's actions. You're able to connect Bigtable activity to your billing and performance expectations and fine-tune the autoscaling configuration to ensure your performance expectations are maintained.
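Here is a hedged sketch of those gcloud commands, covering both updating an existing cluster and creating a new cluster with autoscaling enabled. The instance ID, cluster ID, zone, and target values are placeholders; confirm the exact flag names in the current gcloud bigtable reference.

```
# Hedged sketch: enable autoscaling on an existing Bigtable cluster.
# All IDs and target values below are placeholders.
gcloud bigtable clusters update my-cluster \
  --instance=my-instance \
  --autoscaling-min-nodes=3 \
  --autoscaling-max-nodes=30 \
  --autoscaling-cpu-target=60

# Hedged sketch: create a new cluster with autoscaling enabled from the start.
gcloud bigtable clusters create my-new-cluster \
  --instance=my-instance \
  --zone=us-central1-b \
  --autoscaling-min-nodes=3 \
  --autoscaling-max-nodes=30 \
  --autoscaling-cpu-target=60
```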
Below is the Bigtable cluster monitoring page with graphs for metrics and logs for the cluster.

When is autoscaling right for your workload?

Bigtable is flexible for a variety of use cases with dynamic traffic profiles. Bigtable autoscaling may not always be the right configuration for your business, so here are some guidelines for when autoscaling is ideal.

When to use autoscaling

- You're an existing Bigtable user who wants to optimize costs while maintaining performance for your cluster. For example: diurnal traffic patterns that you might see with online retail.
- You're a new Bigtable user or have a new workload. Provisioning enough capacity to meet unknown use cases is hard.
- Your business is growing, and you're not sure of the extent of future growth. You want to be prepared to scale for any opportunity.

What autoscaling won't solve

- Certain batch workloads. Autoscaling will react to a sharp increase in traffic (a "step" or batch upload of data). However, Bigtable will still need to rebalance the data and traffic against a rapid increase in nodes, and this may cause a performance impact as Bigtable works to rebalance.
- Hotspotting. Autoscaling is likely not the correct solution for resolving hotspotting or "hot tablets" in your Bigtable cluster. In these scenarios it is best to review data access patterns and row key / schema design considerations.

Autoscaling in Action

Cloud Bigtable's horizontal scalability is a core feature, derived from the separation of compute and storage. Updating the number of nodes in a Bigtable instance is fast whether or not you use autoscaling. When you add nodes to your cluster, Bigtable rebalances your data across the additional nodes, thus improving the overall performance of the cluster. When you scale down your cluster, Bigtable rebalances the load from the removed nodes onto the remaining nodes.

With autoscaling enabled, Bigtable monitors the cluster's utilization target metrics and reacts in real time to scale for the workload as needed. Part of the efficiency of Bigtable's native autoscaling solution is that it connects directly to the cluster's tablet servers to monitor metrics, so any necessary autoscaling actions can be taken rapidly. Bigtable then adds or removes nodes based on the configured utilization targets. Bigtable's autoscaling logic scales up quickly to match increased load, but scales down slowly in order to avoid putting too much pressure on the remaining nodes.

Example workload

Let's look at one of the experiments we ran to ensure that autoscaling performance was optimal in a variety of scenarios. The scenario for our experiment is a typical diurnal traffic pattern: active users during peak times and a significant decrease during off-peak times. We simulated this by creating a Bigtable instance with 30 GB of data per node and performing point reads of 1 KB. We'll get some insights from this experiment using Bigtable's monitoring graphs. You can access the cluster's monitoring graphs by clicking on the cluster ID from the Bigtable instance overview page in the Cloud Console.

Bigtable instance overview page in the Cloud Console

Having clicked through to the cluster overview page, you can see the cluster's node and CPU utilization monitoring graphs as seen below.
Bigtable cluster overview page in the Cloud Console

The node count graph shows a change from 3 nodes to 27 nodes and back down to 3 nodes over a period of 12 hours. The graph shows the minimum and maximum node counts configured, as well as the recommended number of nodes for your current CPU load, so you can easily check that those are aligned. The recommended number of nodes for the CPU target (orange line) is closely aligned with the actual number of nodes (blue line) as CPU utilization increases, since scaling up happens quickly to keep up with throughput. As CPU utilization decreases, the actual number of nodes lags behind the recommended number of nodes. This is in line with the Bigtable autoscaling policy of scaling down more conservatively to avoid putting too much pressure on the remaining nodes.

In the CPU utilization graph we see a sawtooth pattern. As it reaches a peak, we can compare both graphs to see that the number of nodes is adjusted to maintain the CPU utilization target. As expected, CPU utilization drops when Bigtable adds nodes and steeply increases when nodes are removed. In this example (a typical diurnal traffic pattern), the throughput is always increasing or decreasing. For a different workload, such as one where your throughput changes and then holds at a rate, you would see more consistent CPU utilization. On the cluster overview page, we are also able to see the logs and understand when the nodes are changing and why.

Logs on the Bigtable cluster overview page in the Cloud Console

To get more insights, you can go to the instance monitoring view. Here we can see even more graphs showing the experiment workload activity. Note that the diurnal traffic pattern mentioned above is in line with autoscaling behavior: as throughput increases, node count increases, and as throughput decreases, node count decreases.

Bigtable instance overview page in the Cloud Console

Cost evaluation

Custom dashboard in the Cloud Console Metrics Explorer showing node count average, node count, and read throughput

This experiment workload ran for 12 hours. Let's see how the costs would change for this scenario with and without autoscaling.

- Assume a Bigtable node cost¹ of $0.65 per node per hour.
- Comparing the number of nodes and cost when using autoscaling versus scaling for peak²: 15.84 nodes on average with autoscaling / 27 nodes scaled for peak = 0.587.
- The number of nodes required is 58.7% of the peak when using autoscaling in this scenario. This is a potential approximate cost saving of 41.3% when using Bigtable native autoscaling in this example.

These savings can be significant when you're working with large amounts of data and queries per second.

Summary

Autoscaling with Bigtable provides a managed way to keep your node count and costs aligned with your throughput.

- Get started: Enable autoscaling via the Cloud Console or command line.
- Check performance: Keep an eye on your latency with the Bigtable monitoring tools and adjust your node range.
- Reduce costs: While maintaining a 60% CPU utilization target in our example scenario, the cost of the diurnal workload was 58.7% of what it would have been when scaling for peak.

1. See Bigtable pricing: https://cloud.google.com/bigtable/pricing
2. 'Scale for peak' is the provisioning policy adopted by many DB operational managers to ensure the peak load is supported.
Source: Google Cloud Platform

Google Cloud launches new dedicated Digital Assets Team

Blockchain technology is yielding tremendous innovation and value creation for consumers and businesses around the world. As the technology becomes more mainstream, companies need scalable, secure, and sustainable infrastructure on which to grow their businesses and support their networks. We believe Google Cloud can play an important role in this evolution.

Building on our existing work with blockchain developers, exchanges, and other companies in this space, we are announcing today a new, dedicated Digital Assets Team within Google Cloud to support our customers' needs in building, transacting, storing value, and deploying new products on blockchain-based platforms. This new team will enable our customers to accelerate their efforts in this emerging space and help underpin the blockchain ecosystems of tomorrow.

What We're Doing Today (and Into the Future)

Blockchain and distributed-ledger-based companies like Hedera, Theta Labs, and Dapper Labs have already chosen to build on top of Google Cloud for scalability, flexibility, and security. Moving forward, Google Cloud's Digital Assets Team will undertake a number of short- and long-term initiatives to support companies in the digital assets/blockchain ecosystem, including:

- Providing dedicated node hosting/remote procedure call (RPC) nodes for developers, allowing users to deploy blockchain validators on Google Cloud via a single click ("click to deploy").
- Participating in node validation and on-chain governance with select partners.
- Helping developers and users host their nodes on the cleanest cloud in the industry, supporting their environmental, social, and governance initiatives.
- Supporting on-chain governance via participation from Google Cloud executives and senior engineers.
- Hosting several public BigQuery datasets on our Marketplace, including full blockchain transaction history for Bitcoin, Ethereum, Bitcoin Cash, Dash, Litecoin, Zcash, Theta, Hedera Hashgraph, Band Protocol, Polygon, XRP, and Dogecoin.
- Driving co-development and integration into Google's robust partner ecosystem, including participating in the Google Cloud Marketplace.
- Embracing joint go-to-market initiatives with our ecosystem partners where Google Cloud can be the connective tissue between traditional enterprise and blockchain technologies.

As we build out our team, we're also exploring opportunities in the future to enable Google Cloud customers to make and receive payments using cryptocurrencies.

Why Partner with Google Cloud

Our customers in this space, both traditional firms seeking to implement blockchain strategies and blockchain-native companies such as exchanges, app providers, and decentralized platforms, are choosing Google Cloud for three key reasons:

First, their businesses and their developer ecosystems can build on the industry's cleanest cloud. Growing in a sustainable manner is top of mind for many businesses, but is particularly relevant in the blockchain space, where the ability to run and scale sustainably is critical. Google is carbon neutral today, and we've announced our goal to run on carbon-free energy, 24/7, at all of our data centers by 2030. We've also rolled out the ability for customers to choose a Google Cloud region in which to run based on carbon footprint data.

Second, developers building on blockchain-based platforms can benefit from Google's world-class developer platform. Google Cloud infrastructure ensures that developers can speed up the delivery of software and data on the blockchain, delivering fast access to applications for users.
Third, Google Cloud technologies and services will ensure that blockchain-based companies can scale securely and reliably. Google can ensure that data, applications, games, or digital assets like NFTs will be delivered on a stable, secure, and trusted global network.

Google Cloud's Approach to Blockchain and Digital Assets

Blockchains and digital assets are changing the way the world stores and moves its information, as well as its value. As an infrastructure provider, Google Cloud views the evolution of blockchain technology and decentralized networks today as analogous to the rise of open source and the internet 10-15 years ago. Just as open source developments were integral to the early days of the internet, blockchain is yielding innovation and value creation for consumers and businesses. As the technology becomes more mainstream, companies will need scalable, secure infrastructure on which to grow their businesses and support their networks.

As such, we're applying Google Cloud technology to the blockchain market with the following principles:

- Consistent with Google Cloud's core business: We are specialists in data-powered innovation with leading infrastructure, industry solutions, and other cutting-edge technology. We pursue blockchain projects and partnerships that align with our mission and our expertise.
- User trust and governance: Blockchain networks raise novel questions concerning legal compliance and user privacy. We will maintain our commitment to our users through a robust focus on privacy and user trust, as well as an uncompromising focus on compliance with applicable laws.
- Network-agnostic: Google's infrastructure will seek to preserve optionality of networks for the benefit of users.

We're inspired by the work already done in the digital assets space by our customers, and we look forward to providing the infrastructure and technologies to support what's possible with blockchain technologies in the future. If you're eager to learn more about Google Cloud's new Digital Assets Team, please reach out to your Google Cloud sales representative or partner manager.
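To get a feel for the public BigQuery datasets mentioned above, here is a hedged sketch of a bq command-line query against the public Bitcoin dataset. The dataset path follows the bigquery-public-data convention, but the column names used are assumptions; check the table schema in the BigQuery UI before relying on them.

```
# Hedged sketch: count daily transactions in the public Bitcoin dataset with bq.
# Column names are assumptions; verify against the table schema.
bq query --use_legacy_sql=false '
  SELECT DATE(block_timestamp) AS day, COUNT(*) AS tx_count
  FROM `bigquery-public-data.crypto_bitcoin.transactions`
  WHERE block_timestamp >= TIMESTAMP("2022-01-01")
  GROUP BY day
  ORDER BY day'
```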
Source: Google Cloud Platform

Cloud Bigtable launches Autoscaling plus new features for optimizing costs and improved manageability

Cloud Bigtable is a fully managed, scalable NoSQL database service for large operational and analytical workloads, used by leading businesses across industries such as The Home Depot, Equifax, and Twitter. Bigtable has more than 10 exabytes of data under management and processes more than 5 billion requests per second at peak. Today, we're announcing the general availability of autoscaling for Bigtable, which automatically adds or removes capacity in response to the changing demand for your applications. With autoscaling, you only pay for what you need, and you can spend more time on your business instead of managing infrastructure.

In addition to autoscaling, we recently launched new capabilities for Bigtable that reduce cost and management overhead:

- A 2X storage limit that lets you store more data for less, particularly valuable for storage-optimized workloads.
- Cluster groups that provide flexibility in determining how you route your application traffic, to ensure a great experience for your customers.
- More granular utilization metrics that improve observability, enable faster troubleshooting, and support workload management.

Let's discuss these capabilities in more detail.

Optimize costs and improve manageability with autoscaling

The speed of digitization has increased in most aspects of life, driving up consumption of digital experiences. The ability to scale applications up and down to quickly respond to shifts in customer demand is now more critical for businesses than ever before. Autoscaling for Bigtable automatically scales the number of nodes in a cluster up or down according to the changing demands of usage. It significantly lowers your risk of over-provisioning and incurring unnecessary costs, and of under-provisioning, which can lead to missed business opportunities. Bigtable now natively supports autoscaling with direct access to the Bigtable servers to provide a highly responsive autoscaling solution.

Customers are able to set up an autoscaling configuration for their Bigtable clusters using the Cloud Console, gcloud, the Bigtable Admin API, or our client libraries. It works on both HDD and SSD clusters, and is available in all Bigtable regions. You can set the minimum and maximum number of nodes for your Bigtable autoscaling configuration in the Cloud Console as shown below.

Once you have set up autoscaling, it is helpful to understand what autoscaling is doing, when, and why, so you can reconcile its actions against billing and performance expectations. We have invested significantly in comprehensive monitoring and audit logging to provide developers with granular metrics and pre-built charts that explain how autoscaling makes decisions.

2X the storage limit

Data is being generated at a tremendous pace, and numerous applications need access to that data to deliver superior customer experiences. Many data pipelines supporting these applications require high-throughput, low-latency access to vast amounts of data while keeping the cost of compute resources under control. In order to meet the needs of storage-driven workloads, Bigtable has doubled the storage capacity per node so that you can store more data for less and don't have to compromise on your data needs. Bigtable nodes now support 5 TB per node (up from 2.5 TB) for SSD and 16 TB per node (up from 8 TB) for HDD. This is especially cost-effective for batch workloads that operate on large amounts of data.

Manageability at scale with cluster groups

Businesses today need to serve users across regions and continents and ensure they provide the best experience to every user, no matter the location. We recently launched the capability to deploy a Bigtable instance in up to 8 regions so that you can place the data as close to the end user as possible. A greater number of regions helps ensure your applications are performant wherever your customers are located, for a consistent customer experience. Previously, an instance was limited to four regions.

With a global presence, there are typically multiple applications that require access to the replicated data. Each application needs to ensure that its serving-path traffic does not see increased latency or reduced throughput because of a potential 'noisy neighbor' when additional workloads need access to the data. To provide improved workload management, we recently launched App Profile cluster group routing. Cluster group routing provides finer-grained workload isolation management, allowing you to configure where to route your application traffic. This allows you to allocate Bigtable clusters to handle certain traffic, such as batch workloads, without directly impacting the clusters being used to serve your customers.

Greater observability

Having detailed insight into how your Bigtable resources are being utilized to support your business is crucial for troubleshooting and optimizing resource allocation. The recently launched CPU utilization by app profile metric includes method and table dimensions. These additional dimensions provide more granular observability into the Bigtable cluster's CPU usage and how your Bigtable instance resources are being used. These observability metrics tell you which applications are accessing which tables with which API method, making it much easier to quickly troubleshoot and resolve issues.

Learn more

- To get started with Bigtable, create an instance or try it out with a Bigtable Qwiklab.
- Check out YouTube videos for a step-by-step introduction to how Bigtable can be used in real-world applications like personalization and fraud detection.
- Learn how you can migrate data from HBase to Bigtable.
Source: Google Cloud Platform

Expanding support for early-stage startups on Google Cloud

Startups are uniquely adept at solving difficult challenges, and Google is committed to partnering with these organizations and delivering technology to help them do so as they start, build, and grow. Over the past year, we’ve deepened our focus on helping startups scale and thrive in the cloud, including launching new resources and mentorship programs, hosting our first-ever Google Cloud Startup Summit, growing our team of startup experts, and more.With the new year in full swing, I’m excited to roll out several new offerings and updates designed to support startups even more effectively.First, we will align Google Cloud’s startup program with Google for Startups to ensure startup customers enjoy a consistent experience across all of Google—including Google Cloud infrastructure and services—and to provide founders access to Google mentors, products, programs, and best practices. Going forward, our program will be the Google for Startups Cloud Program.Next, we’ll deepen our commitment to supporting founders that are just starting out, when access to the right technology and expertise can have a massive impact on their company’s growth trajectory. Early-stage startups are particularly well-positioned to move quickly and solve problems, but they need the ability to scale with minimal costs, to pivot to address a new opportunity, and to leverage expertise and resources as they navigate new markets and investors.  Supporting early-stage startups is a key goal of the Google for Startups Cloud Program, and today I’m thrilled to announce a new offer for funded startups that will make it easier for these companies to get access to the technology and resources they need. Providing new Google Cloud credits for early-stage startupsStarting now, the Google for Startups Cloud Program will cover the first year of Google Cloud usage for investor-backed startups, through series A rounds, up to $100,000. For most startups, this will mean they can begin building on Google Cloud at no cost, ensuring they can focus on innovation, growth, and customer acquisition. In their second year of the program, startups will have 20% of their Google Cloud usage costs covered, up to an additional $100,000 in credits.This new offering will make it simpler for startups to access to Google Cloud’s capabilities in AI, ML, and analytics, and to rapidly build and scale on Google Cloud infrastructure with services like Firebase and Google Kubernetes Engine (GKE).Learn more about this new offer and eligibility requirements here.Connecting startup customers to Google know-how and supportWe know that navigating decisions as a fast-scaling startup can be challenging. Last year, we introduced our global Startup Success Team as a dedicated Google Cloud point of contact for startups in our program as they build. Now that this team is fully up and running, we’re expanding it to all qualified, early-stage startups in the Google for Startups Cloud Program. These guides will get to know the unique needs of each startup throughout their two years in the program, and will help connect them with the right Google teams to help resolve any technical, go-to-market, or credit questions along the way. As a customer grows in their usage and expertise with Google Cloud, they’ll be connected to our startup expert account teams to continue their journey.   The Google for Startups Cloud Program joins Google’s numerous offerings for entrepreneurs. 
In addition to receiving mentorship, tailored resources, and technical support from Google subject matter experts, participating startups are eligible for additional Google product benefits to help their business, including Google Workspace, Google Maps, and more. Founders can take advantage of workshops, events, and technical training courses, as well as Google for Startups programs and partner offerings. They can also tap into a supportive network of peers through our new C2C Connect digital community just for founders and CTOs building on Google Cloud.

Helping startups focus on innovation, not infrastructure

Our goal is to help startups move fast now, without creating technical debt that will slow them down later. With our fully managed, serverless offerings like Cloud Run, Firestore, Firebase, and BigQuery, startups can spend their time on their roadmap rather than on infrastructure management. And as they go from MVP to product to scale, startups don't need to overhaul their architecture—Google Cloud services scale with them.

That's how Nylas, a startup focused on business productivity, is able to rapidly scale its platform and support larger, enterprise customers, all while growing its revenue by 5X. FLYR Labs is helping airlines better manage revenue and forecast demand, with a platform powered by Google Cloud data and AI capabilities and running on GKE. Sniip is rapidly growing adoption of its app that helps people more easily track and pay bills, leveraging GKE to scale quickly and Cloud Run to empower its developers.

With Google Cloud, startups benefit from a business and technology partnership to help them build and go to market. We'll work with founders from early prototypes to global scale as they expand to new markets. Startups around the world are choosing to build with Google Cloud. Join us and let's get solving.
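For readers who want to see how the year-two credit works out in practice, here is a tiny, purely illustrative calculation; the helper function and the dollar figures in the examples are our own assumptions, based only on the 20% coverage rate and $100,000 cap described above, and are not part of the program's terms.

```python
def second_year_credit(usage_cost_usd: float) -> float:
    """Illustrative only: 20% of year-two Google Cloud usage,
    capped at an additional $100,000 in credits."""
    return min(0.20 * usage_cost_usd, 100_000)

# A startup spending $250,000 in year two would receive $50,000 in credits;
# spending $600,000 would hit the $100,000 cap.
print(second_year_credit(250_000))  # 50000.0
print(second_year_credit(600_000))  # 100000.0
```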
Source: Google Cloud Platform

Sprinklr and Google Cloud join forces to help enterprises reimagine their customer experience management strategies

Enterprises are increasingly seeking out technologies that help them create unique experiences for customers with speed and at scale. At the same time, customers want flexibility when deciding where to manage their enterprise data, particularly when it comes to business-critical applications.

That's why I'm thrilled that Sprinklr, the unified customer experience management (Unified-CXM) platform for modern enterprises, has partnered with Google Cloud to accelerate its go-to-market strategy and grow awareness among our joint customers. Sprinklr will work closely with our global sales force, benefitting from our deep relationships with enterprises that have chosen to build on Google Cloud. In line with Google Cloud's mission to accelerate every organization's ability to digitally transform their business through data-powered innovation, Sprinklr's primary objective is to empower the world's largest and most loved brands to make their customers happier by listening, learning, and taking action through insights. With this strategic partnership now in place, Sprinklr and Google Cloud will go to market together with the end customer as our sole focus.

Traditionally, brands have adopted point solutions to manage segments of the customer journey. In isolation, these may work — but they rarely work collaboratively, even when vendors build "Frankenstacks" of disconnected products. These solutions can't deliver a 360° view of the customer, and they often reinforce departmental silos. All of which creates point-solution chaos.

Sprinklr's approach is fundamentally different and offers a way out of that point-solution chaos. As the first platform purpose-built for unified customer experience management (Unified-CXM) and trusted by the enterprise, Sprinklr's industry-leading AI and powerful Care, Marketing, Research, and Engagement solutions enable the world's top brands to learn about their customers, understand the marketplace, and reach, engage, and serve customers on all channels to drive business growth. Sprinklr was built from the ground up as a platform-first solution, designed to evolve and grow with the rapid expansion of digital channels and applications. The results? Faster innovation. Stronger performance. And a future-proof strategy for customer engagement on an enterprise scale.

"Sprinklr works with large, global companies that want flexibility when deciding where to manage their enterprise data and consider our platform a business-critical application," said Doug Balut, Senior Vice President of Global Alliances, Sprinklr. "Giving our customers the opportunity to manage Sprinklr on Google Cloud empowers them to create engaging customer experiences while maintaining the high security, scalability, and performance they need to run their business."

To learn more about this exciting partnership and the challenges we jointly solve for customers, check out the recent conversation between Google Cloud's VP of Marketing, Sarah Kennedy, and Sprinklr's Chief Experience Officer, Grad Conn. Or read the press release on the partnership.
Source: Google Cloud Platform