How to run SAP on Google Cloud when high availability is high priority

Over the past couple of years, businesses across every industry have faced unexpected challenges in keeping their enterprise IT systems safe, secure, and available to users. Many have experienced sudden spikes or drops in demand for their products and services, and most are now operating in a hybrid work environment. In such changing conditions, with business requirements and expectations constantly evolving, it is a best practice to periodically revisit your IT system service-level objectives (SLOs) and agreements (SLAs) and ensure they are still aligned with your business needs.

Adapting to these new requirements can be especially complex for companies that run their SAP enterprise applications in on-premises environments. These organizations are often already struggling with running business-critical SAP instances, which can be complex and costly to maintain. They know how much their users depend on these systems and how disruptive unplanned outages can be, so they see the on-premises setup, backed by major investments in high availability (HA) systems and infrastructure, as the best way to ensure the security and availability of these essential applications. IT organizations charged with running on-premises SAP landscapes must, in many cases, also manage a growing number of other business-critical applications, all while under pressure to do more with less.

For many organizations, this is an unsustainable approach. In fact, according to a SIOS survey on trends in HA solutions, companies were already struggling to hold the line on on-premises application availability:

- 95% of the companies surveyed reported at least occasional failures in the HA services that support their applications.
- 98% reported regular or occasional application performance issues, and 71% reported them once or more per month.
- When HA application issues occurred, companies surveyed spent 3–5 hours, on average, identifying and fixing the problem.

Things aren't getting easier for these companies. Today's IT landscape is dominated by risk, uncertainty, and the prospect of belt-tightening down the road. At the same time, it's especially important now to keep your SAP applications, the software at the heart of your company, secure, productive, and available for the business.

At Google Cloud, we've put a lot of thought into solving the challenges around high availability for SAP environments. We recognize this as a potential make-or-break issue for customers, and we prioritize giving them a solution: a reliable, scalable, and cost-effective SAP environment, built on a cloud platform designed to deliver high availability and performance.

When you use Google Cloud, you get many services that are designed to be fault tolerant or highly available. The concepts are similar, but understanding the difference can save you time and effort when designing your architecture. We consider fault-tolerant components to be fully redundant mechanisms, where any failure of these components is designed to be seamless to system availability. These include components like storage (Cloud Storage, Persistent Disk) and networking (Google's network, Cloud DNS, Cloud Load Balancing). Highly available services, by contrast, provide an automated recovery mechanism for all the relevant architectural components, also known as single points of failure, which minimizes the recovery time objective (RTO) and recovery point objective (RPO). This usually involves replicating components and automating the failover process between them.
Four levels of SAP high availability on Google Cloud

Understanding how to give SAP customers the right availability solution starts with recognizing that each customer will have different target availability SLAs, and those targets will vary depending on their business needs, budgets, SAP application use cases, and other factors. Let's look at the SAP high availability landscape, covering the infrastructure, operating system, and application availability components, and what you would need to consider for your SAP system's overall availability strategy.

Level 1: Infrastructure

Many customers find that simply moving their SAP system from on-premises to Google Cloud can increase their system's uptime, because they are able to leverage the platform's embedded security, networking, compute, and storage features, which are highly available by default.

Compute services

For compute services, Compute Engine has three built-in capabilities that are especially important and can reduce or even eliminate disruptions to applications due to hardware failures:

Live Migration: When a customer's VM instances are running on a host system that needs scheduled maintenance, Live Migration moves the VM instance from one host to another without triggering a restart or disrupting the application. This is a built-in feature that every Google Cloud user gets at no additional cost. It works seamlessly and automatically, no matter how large or complex a user's workloads happen to be. Thanks to Live Migration, Google Cloud conducts hardware maintenance and applies hypervisor security patches and updates globally without ever having to ask a single customer to restart a VM, because our maintenance does not impact running applications.

Memory Poisoning Recovery (MPR): Even the highest-quality hardware can break at some point, and memory errors are the most common type of hardware malfunction (see Google Cloud's study on memory reliability). Modern CPU architectures have native features like Error Correction Code (ECC), which enable hosts to recover from correctable errors. Uncorrectable errors, however, will crash and restart all VMs on the host, resulting in unexpected downtime. If you run HANA databases, you also have to account for the time it takes to load the data back into memory; in that case, a host crash can cause hours of business-critical service downtime, depending on the database size. Google Cloud developed a solution that integrates the CPU's native error-handling capabilities, SAP HANA, and Google Cloud capabilities to reduce disruptions and downtime due to memory errors. With MPR, the uncorrectable memory error is detected and isolated until the VMs can be live migrated off of the affected host. If the uncorrectable error is found on a VM hosting SAP HANA, MPR sends a signal to SAP HANA which, with Fast Restart enabled, reloads only the affected memory from disk, resolving the issue without downtime in most situations. Subsequently, all VMs on the affected host are live migrated to a healthy host to prevent any downtime or disruption to the customer's applications running on those VMs.

Automatic Restart: In the rare case when an unplanned shutdown cannot be prevented, this feature swings into action and automatically restarts the VM instance on a different host. When necessary, it calls a user-defined startup script to ensure that the application running on top of the VM restarts at the same time. The goal is to ensure the fastest possible recovery from an unplanned shutdown, while keeping the process as simple and reliable as possible for users.
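Both Live Migration and Automatic Restart correspond to per-instance availability policies that you can set when creating a VM. As a minimal sketch (the instance name, zone, and machine type are illustrative placeholders; the two flags shown are in fact the defaults):

# Create a VM that live-migrates during host maintenance and
# restarts automatically on a healthy host after a failure.
gcloud compute instances create sap-app-vm-1 \
    --zone=us-central1-a \
    --machine-type=n2-highmem-32 \
    --maintenance-policy=MIGRATE \
    --restart-on-failure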
These services aim to increase the uptime of a single node, but highly critical workloads need resilience against compute-related failures, including a complete zone outage. To cover this, Compute Engine offers a monthly uptime percentage SLA of 99.99% for instances distributed across multiple zones.

Network File System (NFS) storage

Another important component of highly available SAP infrastructure is Network File System (NFS) storage, which is used for SAP shared files, such as the interfaces directory and transport management. Google Cloud offers several file-sharing solutions, including the first-party Filestore Enterprise and third-party solutions such as NetApp CVS-Performance, both offering a 99.99% availability SLA. (For more information comparing NFS solutions on Google Cloud, see the available documentation.)

Level 2: Operating system

A critical part of the failover mechanism is clustering compute components at the operating system level. Clustering allows for fast detection of component failures and triggers the failover procedures, minimizing application downtime. Clustering at the OS level on Google Cloud is very similar to the on-premises approach, with a couple of improved features. Both SUSE Linux Enterprise Server (SLES) and Red Hat Enterprise Linux (RHEL) implement Pacemaker as the cluster resource manager and provide cluster agents designed for Google Cloud, which allow Pacemaker to seamlessly manage functions and features like STONITH fencing, VIP routes, and storage actions. When deploying OS clusters on Google Cloud, customers can also avail themselves of the HA/DR provider hooks that allow SAP HANA to send out notifications to ensure a successful failover without data loss. For more information, see the detailed documentation for configuring HA clusters on RHEL and on SLES in our SAP high availability deployment guides. Windows-based workloads use Microsoft failover clustering technology and have special features on Google Cloud to enable and configure the cluster; detailed documentation is available here.
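To make the fencing piece concrete, here is a minimal sketch of defining a STONITH resource with the fence_gce agent on a RHEL/Pacemaker node (the instance, zone, and project values are illustrative placeholders; a production cluster needs many more resources, as the deployment guides describe):

# Define a fencing device that can reset the peer node through
# the Compute Engine API.
pcs stonith create fence-sap-node-1 fence_gce \
    port=sap-node-1 \
    zone=us-central1-a \
    project=my-sap-project \
    pcmk_host_list=sap-node-1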
Level 3: Database

Every SAP environment depends on a central database system to store and manage business-critical data, so any SAP high availability solution must consider how to maintain the availability and integrity of this database layer. In addition, SAP systems support a variety of database systems, many of which employ different mechanisms to achieve high availability. By supporting and documenting HA architectures for SAP HANA, MaxDB, SAP ASE, IBM Db2, Microsoft SQL Server, and Oracle workloads (with our Bare Metal Solution, you can use HA-certified hardware and even install Oracle RAC), Google Cloud gives customers the freedom to decide how to balance the costs and benefits of HA for their SAP databases.

SAP HANA System Replication (HSR) is one of the most important application-native technologies for ensuring HA for any SAP HANA system. It works by replicating data continuously from a primary system to one or more secondary systems, and that data can be preloaded into memory to allow for a rapid failover if there's a disaster. Google Cloud supports and complements HSR by supporting synchronous replication for SAP instances that reside in any zone within the same region. That means users can place their primary and secondary instances in different zones to keep them protected against a single point of failure in either zone. Other database systems like SAP ASE or IBM Db2 offer similar functionality and are also supported on Google Cloud infrastructure. The low network latency between zones in the same region, coupled with our tools for automated deployments, gives companies the choice to run a variety of database HA options tailored to their current business needs. Review our latest documentation for a current list of supported database systems and reference architectures.

Level 4: Application server

SAP's NetWeaver architecture helps users avoid app-server bottlenecks that can threaten HA uptime requirements. Google Cloud takes that advantage and runs with it by giving customers the high availability compute and networking capabilities they need to protect against data loss through synchronization and to get the most reliability and performance from SAP NetWeaver. The architecture uses one OS-level cluster (SLES or RHEL) with the Pacemaker cluster resource manager and STONITH fencing for the ABAP SAP Central Services (ASCS) and Enqueue Replication Server (ERS), each with its own internal load balancer (ILB) for the virtual IP. Detailed documentation for deploying and configuring HA clusters can be found for both RHEL and SLES in our NetWeaver high availability planning guides. Distributing application server instances across multiple zones of the same region provides the best protection against zonal failures while still providing great performance to the end user. Through automated deployments, your IT team can quickly react to changes in demand and spin up additional instances in moments to keep the SAP system up and running, even during peak periods.

Other ways Google Cloud supports high availability SAP systems

There are many other ways Google Cloud can help maximize SAP application uptime, even in the most challenging circumstances. Consider a few examples, and keep in mind how tough it can be for enterprises, even larger ones, to implement similar capabilities at an affordable cost.

Geographic distribution and redundancy. Google Cloud's global footprint currently includes 30 regions, divided into 91 zones, and over 140 points of presence. By distributing key Google Cloud services across multiple zones in a region, most SAP users can achieve their availability goals without sacrificing performance or affordability.

Powerful and versatile load-balancing capabilities. For many enterprises, load balancing and distribution is another key to maintaining the availability of their SAP applications. Google Cloud meets this need with a range of load-balancing options, including global load balancing that can direct traffic to a healthy region closest to users. Cloud Load Balancing reacts instantaneously to changes in users, traffic, network, backend health, and other related conditions. And, as a software-defined service, it avoids the scalability and management issues many enterprises encounter with physical load-balancing infrastructure. Another important load-balancing service for highly available SAP systems is the Internal Load Balancer, which lets you automate the virtual IP (VIP) implementation between the primary and secondary systems.
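As a minimal sketch of that VIP pattern (all names, the address, and the region are illustrative placeholders, and the backend service with its failover-aware health check is assumed to already exist):

# Reserve an internal address to serve as the SAP VIP.
gcloud compute addresses create sap-hana-vip \
    --region=us-central1 \
    --subnet=sap-subnet \
    --addresses=10.0.0.10

# Front the primary/secondary instance groups with an internal
# passthrough load balancer on that address.
gcloud compute forwarding-rules create sap-hana-vip-rule \
    --region=us-central1 \
    --load-balancing-scheme=INTERNAL \
    --address=sap-hana-vip \
    --ip-protocol=TCP \
    --ports=ALL \
    --subnet=sap-subnet \
    --backend-service=sap-hana-backend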
Tools that keep developers focused and productive. Google Cloud's serverless platform includes managed compute and database products that offer built-in redundancy and load balancing. It allows a company's SAP development teams to deploy side-by-side extensions to the SAP systems without worrying about the underlying infrastructure. By using Apigee API Management, companies can provide a scalable interface to their SAP systems for these extensions, which protects the backend system from traffic peaks and malicious attacks. Google Cloud also supports CI/CD through native tools and integrations with popular open-source technologies, giving modern DevOps organizations the tools they need to deliver software faster and more securely. Moreover, Google Cloud's Cortex Framework provides accelerators and best practices to reduce risk, complexity, and costs when innovating alongside SAP, and unlocks the best of Google Cloud's analytics in a seamless setup that brings more value to the business.

Flexible, full-stack monitoring. Cloud Monitoring gives enterprises deep visibility into the performance, uptime, and overall health of their SAP environments. It collects metrics, events, and metadata from Google Cloud, Amazon Web Services, hosted uptime probes, application instrumentation, and even application components such as Cassandra, Nginx, Apache Web Server, Elasticsearch, and many others. With a custom monitoring agent for SAP HANA and the Cloud Operations Ops Agent, Cloud Monitoring uses this data to power flexible dashboards and rich visualization tools, helping SAP teams identify and fix emerging issues before they affect the business.

Explore your HA options

We've only scratched the surface when it comes to understanding the many ways Google Cloud supports and extends HA for SAP instances. For an even deeper dive, our documentation goes into more technical detail on how you can set up a high availability architecture for SAP landscapes using Google Cloud services.

Introducing model co-hosting to enable resource sharing among multiple model deployments on Vertex AI

When deploying models to the Vertex AI prediction service, each model is by default deployed to its own VM. To make hosting more cost-effective, we're excited to introduce model co-hosting in public preview, which allows you to host multiple models on the same VM, resulting in better utilization of memory and computational resources. The number of models you choose to deploy to the same VM will depend on model sizes and traffic patterns, but this feature is particularly useful for scenarios where you have many deployed models with sparse traffic.

Understanding the Deployment Resource Pool

Co-hosting support introduces the concept of a Deployment Resource Pool, which groups models together to share resources within a VM. Models can share a VM if they share an endpoint, but also if they are deployed to different endpoints. For example, let's say you have four models and two endpoints, as shown in the image below. Model_A, Model_B, and Model_C are all deployed to Endpoint_1 with traffic split between them, and Model_D is deployed to Endpoint_2, receiving 100% of the traffic for that endpoint. Instead of having each model assigned to a separate VM, we can group Model_A and Model_B to share a VM, making them part of DeploymentResourcePool_X. We can also group models that are not on the same endpoint, so Model_C and Model_D can be hosted together in DeploymentResourcePool_Y. Note that for this first release, models in the same resource pool must also have the same container image and version of the Vertex AI pre-built TensorFlow prediction containers. Other model frameworks and custom containers are not yet supported.

Co-hosting models with Vertex AI Predictions

You can set up model co-hosting in a few steps. The main difference from the usual workflow is that you'll first create a DeploymentResourcePool, and then deploy your model within that pool.

Step 1: Create a DeploymentResourcePool

You can create a DeploymentResourcePool with the following command. There's no cost associated with this resource until the first model is deployed.

PROJECT_ID={YOUR_PROJECT}
REGION="us-central1"
VERTEX_API_URL=REGION + "-aiplatform.googleapis.com"
VERTEX_PREDICTION_API_URL=REGION + "-prediction-aiplatform.googleapis.com"
MULTI_MODEL_API_VERSION="v1beta1"

# Give the pool a name
DEPLOYMENT_RESOURCE_POOL_ID="my-resource-pool"

CREATE_RP_PAYLOAD = {
  "deployment_resource_pool": {
    "dedicated_resources": {
      "machine_spec": {
        "machine_type": "n1-standard-4"
      },
      "min_replica_count": 1,
      "max_replica_count": 2
    }
  },
  "deployment_resource_pool_id": DEPLOYMENT_RESOURCE_POOL_ID
}
CREATE_RP_REQUEST = json.dumps(CREATE_RP_PAYLOAD)

!curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://{VERTEX_API_URL}/{MULTI_MODEL_API_VERSION}/projects/{PROJECT_ID}/locations/{REGION}/deploymentResourcePools \
-d '{CREATE_RP_REQUEST}'
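As an optional sanity check, and assuming the same v1beta1 REST surface and notebook variables used above, you can list the project's deployment resource pools to confirm the pool was created:

!curl \
-X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://{VERTEX_API_URL}/{MULTI_MODEL_API_VERSION}/projects/{PROJECT_ID}/locations/{REGION}/deploymentResourcePools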
Step 2: Create a model

Models can be imported to the Vertex AI Model Registry at the end of a custom training job, or you can upload them separately if the model artifacts are saved to a Cloud Storage bucket. You can upload a model through the UI or with the SDK using the following command:

# REPLACE artifact_uri with the GCS path to your artifacts
my_model = aiplatform.Model.upload(
    display_name='text-model-1',
    artifact_uri='gs://{YOUR_GCS_BUCKET}',
    serving_container_image_uri='us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-7:latest')

When the model is uploaded, you'll see it in the Model Registry. Note that the deployment status is empty since the model hasn't been deployed yet.

Step 3: Create an endpoint

Next, create an endpoint via the SDK or the UI. Note that this is different from deploying a model to an endpoint.

endpoint = aiplatform.Endpoint.create('cohost-endpoint')

When your endpoint is created, you'll be able to see it in the console.

Step 4: Deploy the model in a Deployment Resource Pool

The last step before getting predictions is to deploy the model within the DeploymentResourcePool you created.

MODEL_ID={MODEL_ID}
ENDPOINT_ID={ENDPOINT_ID}

MODEL_NAME = "projects/{project_id}/locations/{region}/models/{model_id}".format(project_id=PROJECT_ID, region=REGION, model_id=MODEL_ID)
SHARED_RESOURCE = "projects/{project_id}/locations/{region}/deploymentResourcePools/{deployment_resource_pool_id}".format(project_id=PROJECT_ID, region=REGION, deployment_resource_pool_id=DEPLOYMENT_RESOURCE_POOL_ID)

DEPLOY_MODEL_PAYLOAD = {
  "deployedModel": {
    "model": MODEL_NAME,
    "shared_resources": SHARED_RESOURCE
  },
  "trafficSplit": {
    "0": 100
  }
}
DEPLOY_MODEL_REQUEST = json.dumps(DEPLOY_MODEL_PAYLOAD)
pp.pprint("DEPLOY_MODEL_REQUEST: " + DEPLOY_MODEL_REQUEST)

!curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://{VERTEX_API_URL}/{MULTI_MODEL_API_VERSION}/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:deployModel \
-d '{DEPLOY_MODEL_REQUEST}'

When the model is deployed, you'll see it ready in the console. You can deploy additional models to this same DeploymentResourcePool for co-hosting, using the same endpoint we created already or a new one.

Step 5: Get a prediction

Once the model is deployed, you can call your endpoint the same way you're used to:

x_test = ["The movie was spectacular. Best acting I've seen in a long time and a great cast. I would definitely recommend this movie to my friends!"]
endpoint.predict(instances=x_test)

What's next

You now know the basics of how to co-host models on the same VM. For an end-to-end example, check out this codelab, or refer to the docs for more details. Now it's time for you to start deploying some models of your own!

Cloud SQL – SQL Server Performance Analysis and Query Tuning

The following blog covers popular performance analysis tools and techniques database administrators can use to tune and optimize Cloud SQL for SQL Server. Common performance challenges are described in each section, along with tools and strategies to analyze, address, and remediate them. After reviewing this blog, consider applying the tools and processes detailed in each section to a non-production database to gain a deeper understanding of how they can help you manage and optimize your databases. We will also publish a follow-up blog that walks through common performance issues and how to troubleshoot and remediate them using the tools and processes described here.

1. Getting started: Connecting to your Cloud SQL for SQL Server instance.

The most common use cases for connecting to Cloud SQL include connecting from a laptop over VPN and using a jump host in GCP. SQL Server DBAs who connect from a local laptop over VPN using SQL Server Management Studio (SSMS) should review this Quickstart document for connecting to a Cloud SQL instance that is configured with a private IP address. DBAs may also prefer to connect to a single jump host for centralized management of multiple Cloud SQL for SQL Server instances. In this scenario, a Google Compute Engine (GCE) VM is provisioned and DBAs use a Remote Desktop Protocol (RDP) tool to connect to the jump host and manage their Cloud SQL databases. For a comprehensive list of options, see connecting to Cloud SQL for SQL Server.

2. Activity monitoring: What's running right now?

When responding to urgent support calls, DBAs have an immediate need to determine what is currently running on the instance. Historically, DBAs have relied on system stored procedures such as sp_Who and sp_Who2 to support triage and analysis. To determine what's running right now, consider installing Adam Machanic's sp_WhoIsActive stored procedure. To view currently running statements and obtain details on their plans, use the statement below. Note that in the following example, the procedure sp_WhoIsActive has been installed in the dbo schema of the dbtools database.

EXEC dbtools.dbo.sp_WhoIsActive @get_plans=1

Also see Brent Ozar's sp_BlitzFirst stored procedure, which is included in the SQL-Server-First-Responder-Kit. Review the documentation for examples.

3. Optimizing queries using the SQL Server Query Store.

Query optimization is best performed proactively as a weekly DBA checklist item. The SQL Server Query Store feature can help with this, providing DBAs with query plans, history, and useful performance metrics. Before enabling Query Store, it is a good idea to review the Microsoft article Monitoring performance by using the Query Store. Query Store is enabled at the database level and must be enabled for each user database, as in the example below.

ALTER DATABASE <<DBNAME>>
SET QUERY_STORE = ON (WAIT_STATS_CAPTURE_MODE = ON);

After enabling Query Store, review its configuration using SSMS: right-click the database and view the Query Store properties. Review the Microsoft article Monitoring performance by using the Query Store for more information on properties and settings.

Screenshot: Query Store has been enabled on the AdventureWorksDW2019 database.

After enabling Query Store on a busy instance, query data will normally be available for review within a few minutes. Alternatively, run a few test queries to generate data for analysis.
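If you prefer T-SQL to the SSMS reports, you can query the documented Query Store catalog views directly. A minimal sketch, run here through sqlcmd (the server address, login, and database name are illustrative placeholders):

# List the ten queries with the highest total duration captured
# by Query Store.
sqlcmd -S 10.0.0.5 -U sqlserver -d AdventureWorksDW2019 -Q "
SELECT TOP 10
    q.query_id,
    SUM(rs.avg_duration * rs.count_executions) AS total_duration,
    MAX(qt.query_sql_text) AS query_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id
GROUP BY q.query_id
ORDER BY total_duration DESC;"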
Next, expand the Query Store node to explore the available reports. In the example below, I selected "Top Resource Consuming Queries". I then sorted the table by total duration and reviewed the execution plan for the top resource-consuming query. In reviewing the execution plan, I noted that a table scan was occurring. I was able to remediate the issue by asking the user to modify their query to select specific columns rather than all columns, and by adding a non-clustered index to the underlying table to include the required columns.

Example index change:

CREATE NONCLUSTERED INDEX NCIX_dbo_scenario1_LastName_INCLUDE_FirstName_BirthDate
    ON [dbo].[scenario1] (LastName) INCLUDE (FirstName, BirthDate);
GO

To track a query over time, right-click the query and select "Track Query". The plans from before and after the index change are shown below. Select both plans, then choose "Compare Plans" to view the pre- and post-change differences.

SQL Server Query Store is a helpful built-in performance tuning tool that Cloud SQL DBAs can use to capture, analyze, and tune T-SQL statements. It makes sense to spend time learning how Query Store can help you manage and optimize your SQL Server databases.

4. Analyzing instance and database health, configuration and performance.

The SQL Server community offers many free tools and scripts for reviewing and analyzing SQL Server instances and databases. A few popular script resources are noted below.

Glenn Berry's SQL Server Diagnostic Queries are useful for assessing on-premises instances when planning for a migration, and for analyzing configurations and performance once databases are running in GCP. For more information on how to use the SQL Server Diagnostic Queries, and for help interpreting the results, review Glenn's YouTube videos.

Brent Ozar's SQL-Server-First-Responder-Kit is another popular community tool for quickly assessing and analyzing SQL Server instances. Note that Cloud SQL for SQL Server does not support installing objects in the master database, so it is recommended that you create a separate database for the scripts. Many DBAs create a tools database (for example, dbtools) and install scripts and procedures there. Review the documentation and Brent's how-to videos for tips on installing and using the kit.

5. Configuration and performance levers to reduce locking and blocking.

Performance problems related to locking and blocking may be reduced by scaling up the instance and by optimizing database objects like tables, queries, and stored procedures. While increasing instance capacity may provide quick wins in the short term, optimizing SQL and application code results in better stability and performance over the long term.

Instance cores and storage capacity

Increasing cores and storage capacity, also known as scaling up, has an immediate effect on IO throughput and performance, and many workload performance issues may be mitigated by increasing CPU and storage configuration settings. Disk performance is based on the disk size and the number of vCPUs; add more storage space and vCPUs to increase IOPS and throughput.
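Scaling up is a single operation on Cloud SQL. A minimal sketch (the instance name and sizes are illustrative placeholders; note that changing CPU or memory restarts the instance):

gcloud sql instances patch my-sqlserver-instance \
    --cpu=8 \
    --memory=32GB \
    --storage-size=500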
Read Committed Snapshot Isolation (RCSI)

If you find yourself adding NOLOCK hints to your queries in an attempt to reduce contention and speed things up, it's probably a good idea to look at Read Committed Snapshot Isolation. When READ_COMMITTED_SNAPSHOT is turned on, the SQL Server engine uses row versioning instead of locking. For more information, see Kendra Little's blog post How to Choose Between RCSI and Snapshot Isolation Levels to determine whether RCSI is right for your database workloads.

Forced parameterization

If you run across an application that generates a lot of dynamic SQL or executes SQL without parameters, you may see a lot of CPU time wasted on creating new plans for SQL queries. In some cases, forced parameterization can help your database performance when you are not able to change or influence application coding standards. For more on forced parameterization and how it can be applied, see SQL Server Database Parameterization option and its Query Performance effects.

6. Managing indexes and statistics: SQL Server maintenance.

Ola Hallengren's SQL Server Maintenance Solution is a SQL Server community-standard database maintenance solution. In an on-premises or GCE environment, a DBA may choose to install the entire maintenance solution, including the backup scripts. Since backups are handled internally by Cloud SQL, a DBA may choose to install only the statistics and indexing procedures and their supporting objects. Visit https://ola.hallengren.com/ to learn more about the solution, and take time to review the associated scripts, instructions, documentation, and examples of how to install and use it.

Conclusion

Proactively managing and tuning your Cloud SQL for SQL Server databases lets DBAs spend less time on production support calls and increases the performance, efficiency, and scalability of their databases. Many of the tools and recommendations noted in this blog are also applicable to SQL Server databases running on GCE. Once you become familiar with the tools and processes featured here, consider integrating them into your database workflows and management plans.

Hear how this Google M&A Lead is helping to build a more diverse Cloud ecosystem

Editor's note: Wayne Kimball, Jr. is Principal for Google Cloud's Mergers & Acquisitions business. He is also the founder of Black+TechAmplify, a corporate development initiative to accelerate the growth of traditionally untapped, diverse founders. In both cases, he says, it's about creating empathy around people and processes to create greater value.

You're a "boomerang," or returning Googler. How do you view your career?

It's been a fun journey, with a lot of excitement, work, and rewards. I studied engineering at North Carolina A&T State University, the nation's largest HBCU. I was student body president, and one of my goals was to enhance the technology experience for fellow students by convincing the university to transition to Gmail. A friend of a friend helped put me in touch with Google, who then came to campus for strategic conversations with administrators and to meet with students. The school ended up adopting Gmail, and soon thereafter, I was offered a job at Google, becoming its first-ever hire from North Carolina A&T.

I started out doing technology operations as a PeopleOps rotational associate, but pretty soon I realized that I wanted to be on the sales side, as I like seeing the dynamic of people, technology, and business. Eventually I left Google, went to business school, then did strategy and M&A work at a couple of places before I came back to do it at Cloud.

What drew you to M&A?

Working in sales taught me all about growing organically. M&A is exciting because the work drives enterprise value through inorganic acquisitions. I find it truly rewarding to identify and merge the various points of view, and make them ultimately work seamlessly together. We always have to start with customer focus, but that can mean a lot of things. When it comes to an acquisition, Google Cloud is first a customer that needs to get the right acquisition, and subsequently the company we acquire is a customer that needs to be acclimated to being part of Google. I always say, "Change should happen with people, not to people."

Is Black+TechAmplify a passion project, or an extension of what you do inside Cloud?

It's a bit of both. There are almost ten Google employees now supporting Black+TechAmplify, but the project started with me asking, "How many of our acquisitions have been Black-owned or women-owned?" It wasn't a lot. So we set out to identify tech startups that were Black-owned, and to develop more resources and exposure, including ways to partner and grow with Google. After two cohorts, the companies have raised over $20 million in additional funding. We feel like it's a model we can extend to other founders from underrepresented groups.

I also partner closely with Google for Startups to support their review and selection of startup applicants, and serve as an advisor for Black and Latinx founders in their programs. It's also encouraging to see a number of other companies, in addition to Google, now leaning into this kind of activity.

Is there a common theme in what you're doing at Cloud?

I'd say it's all focused on accelerating the growth of Google Cloud by accelerating the value capture for the customer, wherever they are. In every case, to accelerate value capture you have to look after people, making sure they are treated well.
If you look after that, the profit will eventually come.

Drive Hockey Analytics uses Google Cloud to deliver pro-level sports tracking performance to youth

In ice hockey's earlier days, National Hockey League (NHL) coaches made their most important decisions based on gut instinct. Today, experience and instinct are still vital, but NHL coaches now have another essential tool at their disposal: powerful data analytics. Before and after every game, coaches and even players meticulously pore over game data and review detailed statistics to improve performance and strategy. And while this is a win for the NHL, higher-end data analytics tools have typically been out of reach for youth hockey teams, largely because capturing game performance data on the ice is expensive, complicated, and time-consuming.

We built Drive Hockey Analytics to democratize pro-level analytics and help young players develop their gameplay and build a higher hockey IQ. Coaches and parents can now easily and affordably track 3,000 data points per second from players, sticks, and pucks. Drive Hockey Analytics, which takes 15 minutes to set up at the rink after initial calibration, converts these raw data points into actionable statistics and insights to improve player performance in real time and boost post-game training.

Scaling a market-ready stick and puck tracking platform on Google Cloud

Drive Hockey Analytics began as an engineering project in the MAKE+ prototype lab of the British Columbia Institute of Technology (BCIT). We quickly realized that we couldn't transform Drive Hockey Analytics into a market-ready stick and puck tracking platform without shifting more resources to R&D. After meeting with the dedicated Google Startup Success Managers from the Google for Startups Cloud Program, and with their support, we decided to migrate from AWS to Google Cloud so our small team could reduce IT costs and accelerate time to market. Google Cloud solutions make everything easier to build, scale, and secure.

We immediately took advantage of Google Cloud's highly secure-by-design infrastructure to implement robust user authentication and institute strict privacy controls to comply with the Children's Online Privacy Protection Act (COPPA). In just days, we enabled coaches and players to access individual analytics dashboards and more securely share key statistics, such as speed, acceleration, agility and edgework, zone time, and positioning, with teammates and family. We also separated performance and personal storage data on Google Cloud, encrypted containers with Google Kubernetes Engine (GKE), and wrote third-party applications and pipelines that autoscale with Spark on Google Cloud. These processes could have taken us weeks or even months if we had to manually design and integrate all these security capabilities on our own.

To build our interactive player analytics engine, we leveraged TensorFlow, BigQuery, and MongoDB Atlas on Google Cloud. With the simple and flexible architecture offered in Google Cloud, we quickly moved from concept to code, and from code to state-of-the-art predictive models. We now collect and analyze thousands of data points every second to identify key performance metrics, break out game intelligence, and deliver actionable recommendations. Coaches and players can leverage this data to increase team possession of the puck, optimize player positions, reduce shot attempts, and score more goals.

In the future, we plan to explore additional Google products and services such as Google Cloud Tensor Processing Units (TPUs), Google Cloud Endpoints for OpenAPI, and Google Ads.
These solutions will enable us to further expand our ML stack, leverage streaming data from wearables and cameras, and reach new markets.

Bringing pro-level sports analytics to youth hockey

The Startup Success team has been instrumental in helping us rapidly transform Drive Hockey Analytics from a university engineering project into a top-shelf player and puck tracking system. Their guidance and responsiveness are amazing, with a human touch that stands out compared to services from other technology providers. We especially want to highlight the Google Cloud research credits that help us affordably explore new solutions to address extremely large dataset challenges. Thanks to these credits, we successfully process thousands of data points in streams and batches, apply ML-driven logic, and run resource-efficient queries. Google Cloud research credits also give us access to dedicated startup experts, managed compute power, vast amounts of secure storage, and the potential to join the Google Cloud Marketplace.

Demand for Drive Hockey Analytics continues to grow, and we constantly evolve our platform based on input from youth teams and coaches. We're looking to go fully to market in 2023. With Drive Hockey Analytics, youth teams are putting on their mitts and taking control of the puck as they improve real-time player performance and help their team count more wins. We can't wait to see what we accomplish next as we continue transforming dusters into barnburners by democratizing advanced analytics that were once only available to pro sports teams.

If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.

Google Cloud’s innovation-first infrastructure

Organizations are driving the complete transformation of their business by inventing new ways to accomplish their objectives using the cloud: from making core processes more efficient, to improving how they reach and better serve their customers, to achieving insights through data that fuel innovation. Cloud infrastructure belongs at the center of every organization's transformation strategy. We see a vast landscape of opportunity to innovate in our cloud's core capabilities that will have long-standing impact on the speed and simplicity of building solutions on Google Cloud. From data management and machine learning to security and sustainability, we continue to invest deeply in infrastructure innovation that generates value from the foundation upward.

We focus on three defining attributes of our infrastructure that help our customers accelerate through innovation:

- Optimized: Customers want solutions that meet their specific needs. They want to build and run apps where they need them, tailored for popular workloads, industry solutions, and specific outcomes, whether that is high performance, cost savings, or a balance of both. Their workloads should just run better on Google Cloud.
- Transformative: Transformation is more than "lifting and shifting" infrastructure to the cloud for cost savings and convenience. Transformative infrastructure integrates the best of Google's AI and ML capabilities to drive faster innovation, while meeting the most stringent security, sovereignty, and compliance needs.
- Easy: As cloud platforms become more versatile, they can become very complex to adopt and operate. Reducing your operational burden is possible with an easy-to-use cloud platform. Our customers often tell us that Google Cloud makes complex tasks seem simple, and this is a product of intentional engineering.

Google's 20+ years of technology leadership are built on a culture of innovation and focus on our customers. Here are some examples of new innovation we are bringing in these areas.

Solutions that are optimized for what matters most to you

Let's start with optimizing for price-performance. Last year, we launched Tau VMs, optimized for cost-effective performance of scale-out workloads. Tau T2D leapfrogged every leading public cloud provider in both performance and total cost of ownership, delivering up to 42% better price-performance than comparable VMs from any other leading cloud. Today, we are delighted to announce that we are offering customers more choice with the addition of Arm-based machines to the Tau VM family. Powered by Ampere® Altra® Arm-based processors, T2A VMs deliver exceptional single-threaded performance at a compelling price, making them ideal for scale-out, cloud-native workloads. Developers now have the option of choosing the optimal architecture to test, develop, and run their workloads.

Cost optimization is a major goal for many of our customers. Spot VMs let you take advantage of our idle machine cycles at deep discounts: a guaranteed 60% off on-demand pricing, with savings of up to 91%. They are the perfect choice for batch jobs and fault-tolerant workloads in high performance computing, big data, and analytics. Customers told us that they would like to see less variability and more predictability in the pricing of Spot VMs. We have heard you loud and clear: our Spot VMs offer the least variability (prices change at most once per month) and more predictable pricing compared to other leading clouds.
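Requesting a Spot VM is a one-flag change from an on-demand instance. A minimal sketch (the instance name, zone, and machine type are illustrative placeholders):

# Create a Spot VM; --instance-termination-action controls what
# happens when capacity is reclaimed (STOP or DELETE).
gcloud compute instances create batch-worker-1 \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --provisioning-model=SPOT \
    --instance-termination-action=STOP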
Optimizing for global scale is critical to meet the high demands of today's consumers, especially when it comes to video streaming. Launched in May 2022, Media CDN is optimized to deliver an immersive video streaming experience at global scale. Available in more than 1,300 cities, Media CDN leverages the same infrastructure that YouTube uses to deliver content to over 2 billion users around the world. Customers including U-NEXT and Stan have quickly rolled out Media CDN to deliver a modern, high-quality experience to their viewers.

Another emerging opportunity is the rise of distributed systems and distributed workers, and the ability to build and run apps wherever needed. With Google Distributed Cloud, we now extend Google Cloud infrastructure and services to different physical locations (or distributed environments), including on-premises or co-location data centers and a variety of edge environments. Anthos powers all Google Distributed Cloud offerings, delivering a common control plane for building, deploying, and running your modern containerized applications at scale, wherever you choose. For greater choice, we have designed Google Distributed Cloud as a portfolio of hardware, software, and services with multiple offerings to address the specific requirements of your workloads and use cases. You can choose from our Edge, Virtual, and Hosted offerings to meet the needs of your business.

Driving transformation through AI/ML and security

The pace of innovation in the field of machine learning continues to accelerate, and Google has long been a pioneer. From Search and YouTube to Play and Maps, ML has helped bring out the best our products have to offer. We've made it a point to make the best of Google available to our customers, and JAX and Cloud TPU v4 are two great examples. JAX is a cutting-edge open-source ML framework developed by Google researchers. It's designed to give ML practitioners more flexibility and allow them to more easily scale their models to the largest of sizes. We recently made Cloud TPU v4 pods available to all our customers through our new ML hub. This cluster of Cloud TPU v4 pods offers 9 exaflops of peak aggregate performance and runs on 90% carbon-free energy, making it one of the fastest, most efficient, and most sustainable ML infrastructure hubs in the world. Cloud TPU v4 has enabled researchers to train a variety of sophisticated models, including natural language processing and recommender models. Customers are already seeing the benefits, including Cohere, who saw a 70% improvement in training times, and LG Research, who used Cloud TPU v4 to train their 300-billion-parameter multi-modal model.

On the security front, increasing cybersecurity threats have every company rethinking its security posture. Our investments in a planet-scale network that is secure, performant, and reliable are matched by our lead in defining industry-wide frameworks and standards that help customers better secure their software supply chain. Last year, Google introduced SLSA (Supply chain Levels for Software Artifacts), an end-to-end framework for ensuring the integrity of artifacts throughout the software supply chain. It is an open-source equivalent of many of the processes we have been implementing internally at Google. We challenge ourselves to enable security without complex configuration or performance degradation.
One example of this is our Confidential VMs, where data is stored in a trusted execution environment outside of which it is impossible to view the data or the operations performed on it, even with a debugger. Another is Cloud Intrusion Detection System (Cloud IDS), which provides network threat detection built on ML-powered threat analysis that processes over 15 trillion transactions per day to identify new threats, with 4.3 million unique security updates made each day. With the highest possible rating of AAA from CyberRatings.org, Cloud IDS has proven efficacy in blocking virtually all evasions.

Developer-first ease of use

Making your transformation journey simpler, with easy-to-use tools to accelerate your innovation, is our priority. Today, we are introducing Batch in preview, a fully managed job scheduler that helps customers run thousands of batch jobs with a single command. It's easy to set up and supports throughput-oriented workloads, including those requiring MPI libraries. Jobs run on auto-scalable resources, giving you more time to work on the greatest areas of value. This improves the developer experience for executing HPC, AI/ML, and data processing workloads such as genomics sequencing, media rendering, financial risk modeling, and electronic design automation.

Continuing our innovation for greater ease of use, we recently announced the availability of the new HPC toolkit. This open-source tool from Google Cloud enables you to easily create repeatable, turnkey HPC clusters based on proven best practices, in minutes. It comes with several blueprints and broad support for third-party components such as the Slurm scheduler and Intel DAOS and DDN Lustre storage.

System performance and awareness of what your infrastructure is doing are closely tied to security, but to do this well, it needs to be easy. We recently introduced Network Analyzer to help customers transform reactive workflows into proactive processes and reduce network and service downtime by automatically monitoring VPC network configurations. Network Analyzer is part of our Network Intelligence Center, providing a single console for Google Cloud network observability, monitoring, and troubleshooting.

This is just a sample of what we are doing at Google Cloud to provide infrastructure that gives customers the freedom to securely innovate and scale from on-premises, to edge, to cloud on an easy, transformative, and optimized platform. To learn more about how customers such as Broadcom and Snap are using Google Cloud's flexible infrastructure to solve their biggest challenges, be sure to watch our Infrastructure Spotlight event, aired today.

Expanding the Tau VM family with Arm-based processors

Organizations that are developing ever-larger, scale-out applications will leave no stone unturned in their search for a compute platform that meets their needs. For some, that means looking to the Arm® architecture. Known for delivering excellent performance-per-watt efficiency, Arm-based chips are already ubiquitous in mobile devices and have proven themselves for supercomputing workloads. At Google Cloud, we're also excited about using Arm chips for the next generation of scale-out, cloud-native workloads.

Last year, we added Tau VMs to Compute Engine, a new family of VMs optimized for cost-effective performance for scale-out workloads. Today we are thrilled to announce the Preview release of our first VM family based on the Arm architecture, Tau T2A. Powered by Ampere® Altra® Arm-based processors, T2A VMs deliver exceptional single-threaded performance at a compelling price. Tau T2A VMs come in multiple predefined VM shapes, with up to 48 vCPUs per VM and 4 GB of memory per vCPU. They offer up to 32 Gbps of networking bandwidth and a wide range of network-attached storage options, making Tau T2A VMs suitable for scale-out workloads including web servers, containerized microservices, data-logging processing, media transcoding, and Java applications.

Google Cloud customers and developers now have the option of choosing an Arm-based Google Cloud VM to test, develop, and run their workloads on the optimal architecture. Several of our customers have had private preview access to T2A VMs for the last few months and have had a great experience with these new VMs. Below is what a few of them have to say.

"Our drug discovery research at Harvard includes several compute-intensive workloads that run on SLURM using VirtualFlow[1]. The ability to run our workloads on tens of thousands of VMs in parallel is critical to optimize compute time. We ported our workload to the new T2A VM family from Google and were up and running with minimal effort. The improved price-performance of the T2A will help us screen more compounds and therefore discover more promising drug candidates." – Christoph Gorgulla, Research Associate, Harvard University

"In recent years, we have come to rely on Arm-based servers to power our engineering activity at lower cost and higher performance compared to legacy environments. The introduction of the Arm Neoverse N1-based T2A instance allows us to diversify our use of cloud compute on Arm-based hardware and leverage Google Compute Engine to build the exact virtual machine types we need, with the convenience of Google Kubernetes Engine for containerized workloads." – Mark Galbraith, Vice President, Productivity Engineering, Arm

Ampere Computing has been a key partner for Google Cloud in delivering this VM. "Ampere® Altra® Cloud Native Processors were designed from the ground up to meet the demands of modern cloud applications," said Jeff Wittich, Chief Product Officer, Ampere Computing. "Our close collaboration with Google Cloud has resulted in the launch of the new price-performance optimized Tau T2A instances, which enable demanding scale-out applications to be deployed rapidly and efficiently."

Integration with Google Cloud services

Google Cloud is ramping up its support for Arm. T2A VMs support most popular Linux operating systems, such as RHEL, CentOS, Ubuntu, and Rocky Linux. In addition, T2A VMs support Container-Optimized OS to bring up Docker containers quickly, efficiently, and securely.
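Creating a T2A VM looks just like creating any other Compute Engine instance, aside from the machine type and an Arm OS image. A minimal sketch (the instance name and image choice are illustrative placeholders):

# Create a Tau T2A (Arm) VM with an arm64 Ubuntu image.
gcloud compute instances create my-t2a-vm \
    --zone=us-central1-a \
    --machine-type=t2a-standard-4 \
    --image-family=ubuntu-2204-lts-arm64 \
    --image-project=ubuntu-os-cloud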
Further, developers building applications on Google Cloud can already use several Google Cloud services with T2A VMs, with more coming later this year:

- Google Kubernetes Engine: Google Kubernetes Engine (GKE) is the leading platform for organizations looking for advanced container orchestration. Starting today, GKE customers can run their containerized workloads on the Arm architecture with T2A. Arm nodes come packed with key GKE features, including the ability to run in GKE Autopilot mode for a hands-off experience. Read more about running your Arm workloads with GKE here.
- Batch: Our newly launched Batch service supports T2A. As of today, users can run batch jobs on T2A instances to optimize the cost of running their workloads.
- Dataflow: Dataflow is a fully managed streaming analytics service that minimizes latency, processing time, and cost through autoscaling and batch processing. You can now use T2A VMs with your Dataflow workloads.

Extensive ISV partner ecosystem

While Arm chips are relative newcomers to data center workloads, there's already a robust ecosystem of ISV support for Tau T2A VMs. In fact, Ampere lists more than 100 applications, databases, cloud-native software packages, and programming languages that already run on Ampere-based T2A VMs, with more being added all the time. Further, ISV partners that have validated their solutions on T2A VMs have been impressed by the ease with which they were able to port their software.

"Momento's serverless cache enables developers to accelerate database and application performance at scale. Over the past few months, we have become intimately familiar with Google Cloud's new T2A VMs. We were pleasantly surprised with the ease of portability to Arm instances from day one. The maturity of the T2A platform gives us the confidence to start using these VMs in production. Innovations like T2A VMs in Google Cloud help us continuously innovate on behalf of our customers." – Khawaja Shams, CEO, Momento. Learn more about Momento's T2A experience.

"SchedMD's Slurm open-source workload manager is designed specifically to satisfy the demanding needs of compute-intensive workloads. We are thrilled with the introduction of the T2A VMs on Compute Engine. The introduction of T2A will give our customers more choice of virtual machines for their demanding workload management needs using Slurm." – Nick Ihli, Director of Cloud and Solutions Engineering, SchedMD

"At Rescale, we help our customers deliver innovations faster with high performance computing built for the cloud. We are excited to now offer T2A VMs to our customers, with compelling price-performance to further drive engineering and scientific breakthroughs. With Arm-based VMs on Google Cloud, we are able to offer our customers a larger portfolio of solutions for computational discovery." – Joris Poort, CEO, Rescale

"Canonical Ubuntu is a popular choice for developers seeking a third-party server operating system running on Google Cloud, and we are very happy to provide Ubuntu as the guest OS for users of Compute Engine on Google Cloud's new Arm-based VMs, with support for our most recent long-term supported versions. Once migrated, users will find a completely familiar environment with all the packages and libraries they know and rely on to manage their workloads." – Alexander Gallagher, VP of Cloud Sales, Canonical
Extensive ISV partner ecosystem

While Arm chips are relative newcomers to data center workloads, there’s already a robust ecosystem of ISV support for Tau T2A VMs. In fact, Ampere lists more than 100 applications, databases, cloud-native software packages, and programming languages that already run on Ampere-based T2A VMs, with more being added all the time. Further, ISV partners that have validated their solutions on T2A VMs have been impressed by how easily they were able to port their software.

“Momento’s serverless cache enables developers to accelerate database and application performance at scale. Over the past few months, we have become intimately familiar with Google Cloud’s new T2A VMs. We were pleasantly surprised with the ease of portability to Arm instances from day one. The maturity of the T2A platform gives us the confidence to start using these VMs in production. Innovations like T2A VMs in Google Cloud help us continuously innovate on behalf of our customers.” – Khawaja Shams, CEO, Momento. Learn more about Momento’s T2A experience.

“SchedMD’s Slurm open-source workload manager is designed specifically to satisfy the demanding needs of compute-intensive workloads. We are thrilled with the introduction of the T2A VMs on Compute Engine. The introduction of T2A will give our customers more choice of virtual machines for their demanding workload management needs using Slurm.” – Nick Ihli, Director of Cloud and Solutions Engineering, SchedMD

“At Rescale, we help our customers deliver innovations faster with high performance computing built for the cloud. We are excited to now offer T2A VMs to our customers, with compelling price-performance to further drive engineering and scientific breakthroughs. With Arm-based VMs on Google Cloud, we are able to offer our customers a larger portfolio of solutions for computational discovery.” – Joris Poort, CEO, Rescale

“Canonical Ubuntu is a popular choice for developers seeking a third-party server operating system running on Google Cloud, and we are very happy to provide Ubuntu as the guest OS for users of Compute Engine on Google Cloud’s new Arm-based VMs, which support our most recent long-term supported versions. Once migrated, users will find a completely familiar environment with all the packages and libraries they know and rely on to manage their workloads.” – Alexander Gallagher, VP of Cloud Sales, Canonical

To help you get started, we’re providing customers, ISVs, and ecosystem partners access to T2A VMs at no charge for a trial period, to help jumpstart development on Ampere Arm-based processors. When Tau T2A reaches General Availability later this year, we’ll continue to offer a generous trial program that provides up to 8 vCPUs and 32 GB of RAM at no cost.

Pricing and availability

Tau T2A VMs are price-performance optimized for your cloud-native applications. A 32-vCPU VM with 128 GB of RAM is priced at $1.232 per hour for on-demand usage in us-central1. T2A VMs are currently in Preview in several Google Cloud regions: us-central1 (Iowa – zones A, B, F), europe-west4 (Netherlands – zones A, B, C), and asia-southeast1 (Singapore – zones B, C), with General Availability coming in the months ahead. We look forward to working with you as you explore using Ampere Arm-based T2A VMs for your next scale-out workload in the cloud.

To learn more about Tau T2A VMs or other Compute Engine VM options, check out our machine types and pricing pages. To get started, go to the Google Cloud Console and select T2A for your VMs.

1. https://www.nature.com/articles/s41586-020-2117-z

Run your Arm workloads on Google Kubernetes Engine with Tau T2A VMs

At Google Kubernetes Engine (GKE), we obsess over customer success. One major way we continue to meet the evolving demands of our customers is by driving innovations in the underlying compute infrastructure. We are excited to now give our customers the ability to run their containerized workloads using the Arm® architecture! Earlier today, we announced Google Cloud’s virtual machines (VMs) based on the Arm architecture on Compute Engine. Called Tau T2A, these VMs are the newest addition to the Tau VM family, which offers VMs optimized for cost-effective performance for scale-out workloads. We are also thrilled to announce that you can run your containerized workloads on the Arm architecture using GKE. Arm nodes come packed with the key GKE features you love on the x86 architecture, including the ability to run in GKE Autopilot mode for a hands-off experience, or on GKE Standard clusters where you manage your own node pools. See “Key GKE features supported with Arm-based VMs” below for more details.

“The new Arm-based T2A virtual machines (VMs) supported on Google Kubernetes Engine (GKE) provide cloud customers with the higher-performance and energy-efficient options required to run their modern containerized workloads. The Arm engineering team has collaborated on Kubernetes CI/CD enablement, and we look forward to seeing the ease of use and ecosystem support that comes with Arm support on GKE.” – Bhumik Patel, Director of Software Ecosystem Development, Infrastructure Line of Business, Arm

Starting today, Google Cloud customers and developers can run their Arm workloads on GKE in Preview1 by selecting a T2A machine shape during cluster or node pool creation, either through gcloud or the Google Cloud console. Check out our tutorial video to get started!

Some of our customers who had early access to T2A VMs highlighted the ease of working with their Arm workloads on GKE.

“Arcules offers cloud-based video surveillance as a service for multi-site customers that’s easy to use, scalable, and reliable – all within an open platform and supported by customer service that truly cares. We are excited to run our workloads using Arm-based T2A VMs with Google Kubernetes Engine (GKE). We were thoroughly impressed by how easily we could provision Arm nodes on a GKE cluster, independently and alongside x86-based nodes. We believe that this multi-processor architecture will help us reduce costs while providing a better experience for our customers.” – Benjamin Rowe, Cloud and Security Architect, Arcules

Key GKE features supported with Arm-based VMs

While the T2A is Google Cloud’s first VM based on the Arm architecture, we’ve ensured that it comes with support for some of the most critical GKE features, with more on the way.

Arm Pods on GKE Autopilot – Arm workloads can be easily deployed on Autopilot with GKE version 1.24.1-gke.1400 or later in supported regions1 by specifying both the scale-out compute class (which also enters Preview today) and the Arm architecture using node selectors or node affinity. See the docs for an example Arm workload deployment on Autopilot, and the minimal sketch after this list.

Ease of use in creating GKE nodes – You can provision Arm nodes with GKE version 1.24 or later by using the Container-Optimized OS (COS) with containerd node image and selecting the T2A machine series. In other words, GKE automatically provisions the correct node image to be compatible with your choice of x86 or Arm machine series.

Multi-architecture clusters – GKE clusters support scheduling workloads on multiple compute (x86 and Arm) architectures. A single cluster can have only x86 nodes, only Arm nodes, or a combination of both. You can even run the same workloads on both architectures in order to evaluate the optimal architecture for your workloads.

Networking and security features – Arm nodes support the latest GKE networking features, such as GKE Dataplane V2 and creating and enforcing a GKE network policy. GKE security features such as Workload Identity and Shielded Nodes are also supported on Arm nodes.

Scalability features – When running your Arm workloads, you can use GKE’s best-in-class scalability features, such as cluster autoscaler (CA), node auto-provisioning (NAP), and horizontal and vertical Pod autoscaling (HPA/VPA).

Support for Spot VMs – GKE supports T2A Spot VMs out of the box to help save costs on fault-tolerant workloads.
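As a concrete illustration of the Autopilot path, here is a minimal, hypothetical Deployment that combines the scale-out compute class with the arm64 architecture node selector described above. The workload name and image path are placeholders, and the exact label keys should be checked against the current GKE documentation:

# Minimal sketch: deploy an Arm workload on GKE Autopilot by selecting
# the Scale-Out compute class and the arm64 architecture.
# The deployment name, image path, and label keys are assumptions.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arm-demo
  template:
    metadata:
      labels:
        app: arm-demo
    spec:
      # Request the Scale-Out compute class and Arm nodes.
      nodeSelector:
        cloud.google.com/compute-class: Scale-Out
        kubernetes.io/arch: arm64
      containers:
      - name: app
        image: us-docker.pkg.dev/MY_PROJECT/demo/arm-app:latest  # placeholder
EOF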
Enhanced developer tools

We’ve updated many popular Google Cloud developer tools to let you create containerized workloads that run on GKE nodes with both Arm and x86 architectures, simplifying the transition to developing for Arm or multi-architecture GKE clusters. When using Cloud Code IDE extensions or Skaffold on the command line, you can build Arm containers locally using Dockerfiles, Jib, or Ko, then iteratively run and debug your applications on GKE. With Cloud Code and Skaffold, building locally for GKE works automatically regardless of whether you’re developing on an x86- or Arm-based machine. Whether you build Arm or multi-architecture images, Artifact Registry can be used to securely store and manage your build artifacts before deploying them. If you develop on Arm-based local workstations, you can use Minikube to emulate GKE clusters with Arm nodes locally, while taking advantage of simplified authentication with Google Cloud using the gcp-auth addon. Finally, Google Cloud Deploy makes it easy to set up continuous delivery to Arm and multi-architecture GKE clusters, just as it does with x86 GKE clusters. Updating a pipeline for these Arm-inclusive clusters is as simple as pointing your Google Cloud Deploy pipeline to an image registry with the appropriate architecture image.
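To make the multi-architecture workflow concrete, here is a hedged sketch using Docker buildx to produce a single image tag containing both x86 and Arm variants; the Artifact Registry path is a placeholder assumption:

# Sketch: build and push a multi-architecture (x86 + Arm) image to
# Artifact Registry. The registry path is a placeholder.
docker buildx create --use
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t us-docker.pkg.dev/MY_PROJECT/demo/arm-app:latest \
    --push .

Each GKE node then pulls the variant matching its architecture from the image’s manifest list.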
A robust DevOps, security, and observability ecosystem

We’ve also partnered with leading CI/CD, observability, and security ISVs to ensure that our partner solutions and tooling are compatible with Arm workloads on GKE. You can use the following partner solutions to run your Arm workloads on GKE straight out of the box.

Datadog provides comprehensive visibility into all your containerized apps running on GKE by collecting metrics, logs, and traces to help surface performance issues and provide context when troubleshooting. Starting today, you can use Datadog when running your Arm workloads on GKE. Learn more.

Dynatrace uses its software intelligence platform to track the availability, health, and utilization of applications running on GKE, thereby helping surface anomalies and determine their root causes. You can now use these features of Dynatrace with GKE Arm nodes. Learn more.

Palo Alto Networks’ Prisma Cloud Daemonset Defenders enforce security policies for your cloud workloads, while Prisma Cloud Radar displays a comprehensive visualization of your GKE clusters as well as their containers and nodes, so you can easily identify risks and investigate incidents. Use Prisma Cloud Daemonset Defenders with GKE Arm nodes for enhanced cloud workload security. Learn more.

Splunk Observability Cloud provides developers and operators with deep visibility into the composition, state, and ongoing issues within a cluster. You can now use Splunk Observability Cloud when running your Arm workloads on GKE. Learn more.

Agones is an open source platform built on top of Kubernetes that helps you deploy, host, scale, and orchestrate dedicated game servers for large-scale multiplayer games. Through a combination of efforts from the community and Google Cloud, Agones now supports the Arm architecture starting with the 1.24.0 release of Agones. Learn more.

Try out GKE Arm today!

To help you make the most of your experience with GKE Arm nodes, we’re providing guides to help you learn more about Arm workloads on GKE, create clusters and node pools with Arm nodes, build multi-arch images for Arm workloads, and prepare an Arm workload for deployment to your GKE cluster. To get started with running Arm workloads on GKE, check out the tutorial video!

1. T2A VMs are currently in Preview in several Google Cloud regions: us-central1 (Iowa – zones A, B, F), europe-west4 (Netherlands – zones A, B, C), and asia-southeast1 (Singapore – zones B, C).

Moving off CentOS? Introducing Rocky Linux Optimized for Google Cloud

As CentOS 7 reaches end of life, many enterprises are considering their options for an enterprise-grade, downstream Linux distribution on which to run their production applications. Rocky Linux has emerged as a strong alternative that, like CentOS, is 100% compatible with Red Hat Enterprise Linux. In April 2022, we announced a customer support partnership with CIQ, the official support and services partner and sponsor of Rocky Linux, as the first step in providing a best-in-class, enterprise-grade supported experience for Rocky Linux on Google Cloud.

Today we’re excited to announce the general availability of Rocky Linux Optimized for Google Cloud. We developed this collection of Compute Engine virtual machine images in close collaboration with CIQ so that you get optimal performance when using Rocky Linux on Compute Engine to run your CentOS workloads.

These new images contain customized variants of the Rocky Linux kernel and modules that optimize networking performance on Compute Engine infrastructure, while retaining bug-for-bug compatibility with community Rocky Linux and Red Hat Enterprise Linux. The high-bandwidth networking enabled by these customizations benefits virtually any workload and is especially valuable for clustered workloads such as HPC (see this page for more details on configuring a VM with high bandwidth).

Going forward, we’ll collaborate with CIQ to publish both the community and Optimized for Google Cloud editions of Rocky Linux for every major release, and both sets of images will receive the latest kernel and security updates provided by CIQ and the Rocky Linux community. And of course, we’ll offer support with CIQ for both of these images, per our partnership.

Rocky Linux Optimized for Google Cloud lets you take advantage of everything Compute Engine has to offer, including day-one support for our latest VM families, GPUs, and high-bandwidth networking. And for customers building for a multi-cloud deployment environment, the community Rocky images have you covered.

Starting today, Rocky Linux 8 Optimized for Google Cloud is available for all x86-based Compute Engine VM families (and soon for the new Arm-based Tau T2A), with version 9 soon to follow. Give it a try and let us know what you think.
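As a quick sketch, launching one of these images is a standard instance creation. The instance name, zone, and machine type below are placeholders, and the image family name is our assumption about the published images, so verify it before use:

# Sketch: create a VM from a Rocky Linux 8 Optimized for Google Cloud image.
# Instance name, zone, and machine type are placeholders; the image family
# name is an assumption -- verify with:
#   gcloud compute images list --project=rocky-linux-cloud
gcloud compute instances create rocky-demo \
    --zone=us-central1-a \
    --machine-type=n2-standard-4 \
    --image-family=rocky-linux-8-optimized-gcp \
    --image-project=rocky-linux-cloud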

Multicloud reporting and analytics using Google Cloud SQL and Power BI

After migrating databases to Google Cloud, Cloud SQL developers and business users can use familiar business intelligence tools and services like Microsoft Power BI to connect to and report from Cloud SQL MySQL, PostgreSQL, and SQL Server databases. The ability to quickly migrate databases to Google Cloud without having to worry about refactoring or developing new reporting and BI tools is a key capability for businesses migrating to Cloud SQL. Organizations can migrate today, and then replatform databases and refactor reporting in subsequent project phases.

The following guide demonstrates the key steps to configure Power BI reporting from Cloud SQL. While your environment and requirements may vary, the design remains the same.

To begin, create three Cloud SQL instances, each with a private IP address (a sample gcloud command appears at the end of this walkthrough).

After creating the database instances, create a Windows VM in the same VPC as the Cloud SQL instances. Install and configure the Power BI Gateway on this VM along with the required ODBC connectors.

Download and install the ODBC connectors for PostgreSQL and MySQL:
PostgreSQL: https://www.postgresql.org/ftp/odbc/versions/msi/
MySQL: https://dev.mysql.com/downloads/connector/odbc/

Configure a System DSN for each database connection (SQL Server, PostgreSQL, and MySQL). The traffic between the Cloud SQL instance and the VM hosting the data gateway stays inside the Google VPC and is encrypted via encryption in transit in Google Cloud. To add an additional layer of SSL encryption for the data inside the Google VPC, configure each System DSN to use Cloud SQL SSL/TLS certificates.

Next, download, install, and configure the Power BI Gateway. Note that the gateway may be installed in an HA configuration; this walkthrough assumes a single standalone gateway. The on-premises data gateway setup walks you through creating a new gateway, validating the gateway configuration, and reviewing the logging settings and HTTPS mode. Make sure that outgoing HTTPS traffic is allowed to exit the VPC.

Next, download and open Power BI Desktop. Log into Power BI and select “Manage gateways” to configure data sources. Add a data source for each Cloud SQL instance, then test the data source connections.

Optionally, load test data into each database instance; for example, create a simple table containing demo data in each source database.

Launch Power BI Desktop and log in. Add data sources and create a report: select “Get data”, add ODBC connections for Cloud SQL SQL Server, PostgreSQL, and MySQL, then create a sample report with data from each instance.

Using the Power BI publish feature, publish the report to the Power BI service. Once the report and data sources are published, update the data sources in the Power BI workspace to point to the data gateway data sources, and map the datasets to the Cloud SQL database gateway connections. Optionally, schedule a refresh time.

To perform an end-to-end test, update the test data and refresh the reports to view the changes. You can also use “Publish to Power BI Service” to publish Power BI reports that were developed with Power BI Report Builder to a workspace (Power BI Premium capacity is required).
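For the instance-creation step referenced at the start of this walkthrough, here is a hedged sketch for the PostgreSQL instance. The instance name, tier, region, and VPC path are placeholder assumptions, and private services access must already be configured on the network:

# Sketch: create a Cloud SQL for PostgreSQL instance reachable only via
# private IP. Name, tier, region, and network path are placeholders;
# repeat with --database-version=MYSQL_8_0 or SQLSERVER_2019_STANDARD
# for the other two instances (SQL Server also requires --root-password).
gcloud sql instances create pg-demo \
    --database-version=POSTGRES_14 \
    --tier=db-custom-2-8192 \
    --region=us-central1 \
    --network=projects/MY_PROJECT/global/networks/my-vpc \
    --no-assign-ip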
Conclusion

Hopefully this blog was helpful in demonstrating how Power BI reports and dashboards can connect to Google Cloud SQL databases using the Power BI Gateway. You can also use the Power BI Gateway to connect to your BigQuery datasets and to databases running on Compute Engine VMs. For more information on Cloud SQL, please visit the Google Cloud SQL page.