ESG quantifies benefits of moving Microsoft workloads to Google Cloud

Customers tell us there are many benefits and opportunities to reduce costs that can be unlocked by migrating and modernizing Microsoft and Windows Server-based workloads to Google Cloud. We recently worked with the Enterprise Strategy Group (ESG) to run an economic validation study across Google Cloud customers who have migrated or modernized their Microsoft and Windows Server-based workloads to the cloud. What they found quantifies and reinforces what we are hearing from customers. According to the ESG study, customers that move Windows workloads to Google Cloud can see the following benefits:

- Significantly reduced licensing and hardware costs, ranging from 32% to 88% depending on the workload
- Improved agility and a better customer experience, for example, 65% improved load times
- Reduced risk and an improved security posture
- The ability to leverage managed services for license usage and manageability efficiency
- Up to a 32% TCO savings with Google Cloud sole-tenant nodes, based on a three-year modeled cost of operating 520 Windows Server 2016 workloads (with BYOL)

Google Cloud offers a first-class experience for Microsoft workloads like SQL Server and any Windows Server-based applications, all backed by enterprise-grade support. Moving Windows workloads to Google Cloud lets you increase IT agility and reduce your on-prem footprint. We simplify the proof-of-concept and technology validation process to reduce risk during the migration. Google Cloud can also help optimize your license expense and exposure by increasing license usage efficiency on the underlying infrastructure through innovative features like custom VM shapes and sole-tenant nodes. Managed services for SQL Server and Active Directory, robust Windows container support, and an opinionated modernization path off of the Microsoft stack to open source provide the tools you need to achieve your strategic IT goals.

We are pleased that this report further validates and complements the value that customers realize from choosing Google Cloud for their Windows Server and Microsoft workloads. Register to download the report to get all the details.
Source: Google Cloud Platform

Updates on Google collaborations with Cisco featured at WebexOne

Over the past three years, Google Cloud has worked closely with Cisco to deliver a number of customer-focused solutions in areas such as hybrid cloud, multicloud, work transformation, and contact center integrations. Earlier this week, we were excited to share updates on our joint work in the collaboration space at WebexOne, Cisco's digital collaboration conference, which brings together global customers to share the latest on remote work, customer service, and more. These developments to our partnership include enabling Webex Contact Center with Google Cloud's Contact Center AI (CCAI) solution, which is powered by Google Cloud artificial intelligence (AI) and Natural Language Processing capabilities, and bringing Cisco Webex Expert on Demand to Glass Enterprise Edition.

Making contact centers more customer-centric with AI

As the world continues to adjust to a new way of working, the role of contact centers has become even more critical to businesses, governments, and individuals. According to IDC, more than 70 percent of buyers point to customer experience as the most important consideration in their purchasing decisions. Businesses have realized that providing seamless and effective customer service to their buyers increases customer loyalty, and increasingly, that this can be achieved with AI-powered contact center solutions. In fact, IDC predicts that by 2025, AI-powered enterprises will be able to achieve Net Promoter Scores that are 1.5 times higher than those of their competitors.

Recently, Cisco integrated its Contact Center solutions with Google Cloud's Contact Center AI solution, making it easy for businesses to quickly complement their existing Cisco-powered contact center services with virtual agents, and to support their customers with 24/7, self-service systems. By leveraging Google Cloud capabilities in ML, natural language understanding, and speech recognition and synthesis, this joint solution from Google Cloud and Cisco helps customers get answers to questions quickly, through natural and efficient conversations. CCAI also supports contact center agents, helping them address questions and problems with easy access to documents and information.

"The contact center is going through a renaissance, and artificial intelligence is playing a key role during this very exciting time," said Omar Tawakol, VP and GM for Cisco's Contact Center group. "By combining Google Cloud's Natural Language Processing and AI capabilities that are able to deduce a consumer's intent with our industry leading skills based routing capabilities that are able to match an agent's specific skill with the AI determined intent, we're able to create a unique and differentiated fusion of human and AI that will empower agents and delight customers."
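CCAI's virtual agents are built on Dialogflow, Google Cloud's conversational AI service. As a rough illustration of the kind of exchange such an agent handles, here is a minimal Python sketch that sends one text utterance to a Dialogflow agent and reads back the detected intent. The project, session, and utterance are hypothetical, and this is not the Webex Contact Center integration itself, which is wired up through Cisco's Contact Center AI APIs.

```python
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

# Hypothetical project and session; replace with your own Dialogflow agent details.
PROJECT_ID = "my-contact-center-project"
SESSION_ID = "caller-session-0001"

session_client = dialogflow.SessionsClient()
session = session_client.session_path(PROJECT_ID, SESSION_ID)

# One turn of conversation: the caller's utterance goes in as text...
text_input = dialogflow.TextInput(text="How do I check my claim status?", language_code="en-US")
query_input = dialogflow.QueryInput(text=text_input)

response = session_client.detect_intent(
    request={"session": session, "query_input": query_input}
)

# ...and the agent returns the detected intent plus a fulfillment message.
result = response.query_result
print("Detected intent:", result.intent.display_name)
print("Agent reply:", result.fulfillment_text)
```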
Improving service and responsiveness to Illinois citizens with AI

While industries ranging from retail and e-commerce to financial services are leveraging AI solutions to support consumers, the use cases for AI increasingly involve delivering critical assistance to citizens, as governments step up digital access to services and information.

In the spring of 2020, the State of Illinois faced an unprecedented surge in applications for unemployment benefits, with more than a million claims submitted between March 1 and May 9, nearly 12 times the volume the department processed during the same period in 2019. Compounding the challenge was the need to transition state employees to remote work due to COVID-19.

The Illinois Department of Employment Security (IDES) chose Google Contact Center AI to enable their virtual agents to quickly and effectively serve citizens through chat and voice. Cisco's Contact Center AI APIs connect the Google Cloud Dialogflow service to the state's Cisco contact center and communications system. Their web and phone systems were up and running by late April, with virtual agents answering 40,000 after-hours calls per day to provide immediate, real-time assistance with questions about eligibility, filing claims, and more. Importantly, according to initial analysis, the state was also able to generate cost savings through the solution.

Extending Cisco Webex Expert on Demand on Glass Enterprise Edition 2

The Cisco Webex Expert on Demand application for Glass Enterprise Edition 2 empowers remote frontline workers such as retail store employees, manufacturing line workers, and field service technicians by enabling hands-free collaboration in the field, around the world. Expert on Demand connects these professionals to experts who can guide them step by step in real time.

Real-time collaboration is a key use case for Glass Enterprise Edition 2, a wearable device that provides hands-on workers and professionals with glanceable information in a comfortable, lightweight profile designed to be worn all day. With a transparent heads-up display and a point-of-view camera, Glass helps onsite workers collaborate with others while staying focused on the task at hand. Remote workers dialed into Cisco Webex Expert on Demand can see exactly what onsite workers see as they perform their jobs and communicate directly with them to provide real-time assistance. If you're interested in participating in this customer preview program, visit google.com/glass/contact/business to place a request.

Google Cloud and Cisco keep collaborating to innovate

Innovation is the cornerstone of Google Cloud's global partnering strategy, and as you can see, there is an incredible amount of innovation happening between Cisco and Google Cloud. We take pride in these collaborations and look forward to offering more joint ventures in the future, with the goal of helping our mutual customers get work done. We look forward to your business joining us on this journey.

Want to learn more? For additional details on the Google-Cisco partnership, visit:

- Cisco and Google Cloud
- Google Cloud Contact Center AI
Source: Google Cloud Platform

Dataproc Hub makes notebooks easier to use for machine learning

Dataproc is a fast, easy-to-use, fully managed cloud service for running open source tools such as Apache Spark, Presto, and Apache Hadoop clusters in a simpler, more cost-efficient way. Today, with the general availability of Dataproc Hub and the launch of our machine learning initialization action, we are making it easier for data scientists to use IT-governed, open source, notebook-based machine learning with horizontally scalable compute powered by Spark.

Our enterprise customers running machine learning on Dataproc require role separation between IT and data scientists. With Dataproc Hub, IT administrators can pre-approve and create Dataproc configurations that meet cost and governance constraints. Data scientists can then create personal workspaces backed by those IT pre-approved configurations and spin up scalable, distributed Dataproc clusters with a single click. Jupyter notebooks let data scientists interactively explore and prepare the data and train their models using Spark and additional OSS machine learning libraries. These on-demand Dataproc clusters can be configured with autoscaling and auto-deletion policies and can be started and stopped manually or automatically.

We have received very positive feedback from our enterprise customers, especially on the role separation, and we want to make Dataproc setup even easier with the new machine learning initialization action. Having worked with enterprises across industries, we have observed common requirements for Dataproc data science configurations, which we are now packaging together in our machine learning initialization action. You can further customize the initialization action and add your own libraries to build a custom image. This simplifies Dataproc ML cluster creation while providing data scientists a cluster with:

- Python packages such as TensorFlow, PyTorch, MXNet, scikit-learn, and Keras
- R packages including XGBoost, Caret, randomForest, and sparklyr
- Spark-BigQuery Connector: a Spark connector to read and write data from and to BigQuery
- Dask and Dask-Yarn: Dask is a Python library for parallel computing with APIs similar to the most popular Python data science libraries, such as Pandas, NumPy, and scikit-learn, enabling data scientists to use standard Python at scale. (There is also a Dask initialization action available for Dataproc.)
- RAPIDS on Spark (optional): the RAPIDS Accelerator for Apache Spark combines the power of the RAPIDS cuDF library with the scale of the Spark distributed computing framework. The accelerated shuffle configuration leverages GPU-to-GPU communication and RDMA capabilities to deliver reduced latency and costs for select ML workloads.
- K80, P100, V100, P4, or T4 NVIDIA GPUs and drivers (optional)
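As a concrete illustration, here is a minimal sketch of creating such a cluster programmatically with the Dataproc Python client. The project, region, machine types, and the Cloud Storage path of the initialization action are all placeholders; point executable_file at the published machine learning initialization action (or your customized copy of it) for your region.

```python
from google.cloud import dataproc_v1  # pip install google-cloud-dataproc

# Hypothetical values; substitute your own project, region, and bucket paths.
project_id = "my-project"
region = "us-central1"

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "ml-notebook-cluster",
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-8"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-8"},
        # Placeholder path: point this at the machine learning initialization
        # action, or at your customized copy of it, in Cloud Storage.
        "initialization_actions": [
            {"executable_file": "gs://my-bucket/init-actions/mlvm.sh"}
        ],
    },
}

# create_cluster returns a long-running operation; result() blocks until done.
operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
print("Cluster created:", operation.result().cluster_name)
```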
Considerations when building a Dataproc cluster for machine learning

Data scientists predominantly infer business events from data events. In collaboration with business owners, they then develop hypotheses and build models that leverage machine learning to generate actionable insights. The ability to understand how business events translate into data events is a critical factor for success. Our enterprise users need to consider many factors before selecting the appropriate Dataproc OSS machine learning environment. Points of consideration include:

Data access: Data scientists need access to long-term historical data to make business event inferences and generate actionable insights. Access to data at scale, in proximity to the processing environment, is critical for large-scale analysis and machine learning. Dataproc includes predefined open source connectors to access data on Cloud Storage and in BigQuery storage. Using these connectors, Dataproc Spark jobs can seamlessly access data on Cloud Storage in various open source data formats (Avro, Parquet, CSV, and many more), as well as data in BigQuery storage in native BigQuery format.

Infrastructure: Data scientists need the flexibility to select the appropriate compute infrastructure for machine learning. This compute infrastructure comprises the VM type, associated memory, and attached GPUs and TPUs for accelerated processing. The ability to select from a wide range of options is critical for optimizing performance, results, and costs. Dataproc provides the ability to attach K80, P100, V100, P4, or T4 NVIDIA GPUs to Dataproc compute VMs. RAPIDS libraries leverage these GPUs to deliver a performance boost to select Spark workloads.

Processing environment: There are many open source machine learning processing environments, such as Spark ML, Dask, RAPIDS, Python, R, TensorFlow, and more. Data scientists usually have a preference, so we're focused on enabling as many of the open source processing environments as possible. At the same time, data scientists often add custom libraries to enhance their data processing and machine learning capabilities. Dataproc supports the Spark and Dask processing frameworks for running machine learning at scale. Spark ML comes with standard implementations of machine learning algorithms, and you can use them on datasets already stored on Cloud Storage or BigQuery. Some data scientists prefer ML implementations from Python libraries for building their models; essentially, swapping a couple of statements lets you switch from standard Python libraries to Dask. You can select the appropriate processing environment to suit your specific machine learning needs.

Orchestration: Many iterations are required in an ML workflow because of model refinement and retuning, so data scientists need a simple way to automate data processing and machine learning graphs. One design pattern is building a machine learning pipeline for modeling; another is scheduling the notebook used in interactive modeling. Dataproc workflow templates enable you to create simple workflows, and Cloud Composer can be used to orchestrate complex machine learning pipelines.

Metadata management: Dataproc Metastore enables you to store associated business metadata alongside the table metadata for easy discovery and communication. Dataproc Metastore, currently in private preview, enables a unified view of open source tables across Google Cloud.

Notebook user experience: Notebooks allow you to interactively run workloads on Dataproc clusters. Data scientists have two options for using notebooks on Dataproc:

- You can use Dataproc Hub to spin up a personal cluster with a Jupyter notebook experience, using IT pre-approved configurations, with one click. IT administrators can select the appropriate processing environment (Spark or Dask) and the compute environment (VM type, cores, and memory configuration), and can optionally attach GPU accelerators along with RAPIDS for performance gains on some machine learning workloads. For cost optimization, IT administrators can configure autoscaling and auto-deletion policies, and data scientists can also manually stop the cluster when it is not in use.
- You can configure your own Dataproc cluster, selecting the appropriate processing environment and compute environment along with the notebook experience (Jupyter or Zeppelin) using Component Gateway.
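Whichever option you choose, a typical first notebook cell simply reads data into a Spark DataFrame. Here is a minimal sketch that uses the Spark-BigQuery connector included in the machine learning initialization action; the table and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("notebook-example").getOrCreate()

# Read a BigQuery table into a Spark DataFrame through the Spark-BigQuery connector.
# The table name is a placeholder; use any table your cluster's service account can read.
df = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales_dataset.transactions")
    .load()
)

df.printSchema()
# "store_id" is a hypothetical column used only to show a distributed aggregation.
df.groupBy("store_id").count().show()
```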
Data scientists need a deep understanding of how data represents business transactions and events, and the ability to leverage the innovation in OSS machine learning and deep learning, notebooks, and Dataproc Hub to deliver actionable insights. At Google, we focus on understanding the complexity and limitations of the underlying frameworks, OSS, and infrastructure capabilities, and we are actively working to simplify the OSS machine learning experience so that you can focus more on understanding your business and generating actionable insights, and less on managing the tools used to generate them. Check out Dataproc, let us know what you think, and help us build the next-generation OSS machine learning experience that is simple, customizable, and easy to use.
Source: Google Cloud Platform

Preparing your MySQL database for migration with Database Migration Service

Recently, we announced the new Database Migration Service (DMS) to make it easier to migrate databases to Google Cloud. DMS is an easy-to-use, serverless migration tool that provides minimal-downtime database migration to Cloud SQL for MySQL (Preview) and Cloud SQL for PostgreSQL (available in Preview by request). In this post, we'll cover some of the tasks you need to complete to prepare your MySQL database for migration with DMS.

What types of migrations are supported?

When we talk about migrations, we usually mean either an offline migration or a minimal-downtime migration using continuous data replication. With Database Migration Service (DMS) for MySQL, you can do both: you have the option of a one-time migration or a continuous migration.

Version support

DMS for MySQL supports source database versions 5.5, 5.6, 5.7, or 8.0, and it supports migrating to the same version or one major version higher. When migrating to a different version than your source database, your source and destination databases may have different values for the sql_mode flag. The SQL mode defines what SQL syntax MySQL supports and what types of data validation checks it performs. For instance, the default SQL mode values are different between MySQL 5.6 and 5.7. As a result, with the default SQL modes in place, a date like 0000-00-00 would be valid in version 5.6 but not in version 5.7. Additionally, with the default SQL modes, there are changes to the behavior of GROUP BY between version 5.6 and version 5.7. Check to ensure that the values for the sql_mode flag are set appropriately on your destination database. You can learn more about the sql_mode flag and what the different values mean in the MySQL documentation.

Prerequisites

Before you can proceed with the migration, there are a few prerequisites you need to complete. We have a quickstart that shows all the steps for migrating your database, but what we want to focus on in this post is what you need to do to configure your source database. We'll also briefly describe setting up a connection profile and configuring connectivity.

Configure your source database

There are several steps you need to take to configure your source database. Please note that, depending on your current configuration, a restart of your source database may be necessary to apply the required settings.

Stop DDL write operations

Before you begin to migrate data from the source database to the destination database, you must stop all Data Definition Language (DDL) write operations, if any are running on the source. This script can be used to verify whether any DDL operations were executed in the past 24 hours, or whether any active operations are in progress.

server_id system variable

One of the most important items to set up on your source database instance is the server_id system variable. If you are not sure what your current value is, you can check it by running this on your mysql client:

SELECT @@GLOBAL.server_id;

The value must be equal to or greater than 1. If you are not sure how to configure server_id, you can look at this page. Although this value can be changed dynamically, replication is not automatically started when you change the variable unless you restart your server.

Global transaction ID (GTID) logging

The gtid_mode flag controls whether or not global transaction ID logging is enabled and what types of transactions the logs can contain. Make sure that gtid_mode is set to ON or OFF, as ON_PERMISSIVE and OFF_PERMISSIVE are not supported with DMS. To find out which gtid_mode you have on your source database, run the following command:

SELECT @@GLOBAL.gtid_mode;

If the value for gtid_mode is set to ON_PERMISSIVE or OFF_PERMISSIVE, note that the value can only be changed one step at a time. For example, if gtid_mode is set to ON_PERMISSIVE, you can change it to ON or to OFF_PERMISSIVE, but not to OFF in a single step. Although the gtid_mode value can be changed dynamically without requiring a server reboot, make sure you change it globally; otherwise, it will only be valid for the session where the change occurred and it won't have an effect when you start the migration via DMS. You can learn more about gtid_mode in the MySQL documentation.

Database user account

The user account that you use to connect to the source database needs to have these global privileges:

- EXECUTE
- RELOAD
- REPLICATION CLIENT
- REPLICATION SLAVE
- SELECT
- SHOW VIEW

We recommend that you create a specific user for the purpose of the migration, and you can temporarily leave the host access for this database user as %. More information on creating a user can be found here. The password of the user account used to connect to the source database must not exceed 32 characters in length. This is an issue specific to MySQL replication; for more information about the MySQL user password length limitation, see MySQL Bug #43439.
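If you prefer to script this step, here is a minimal sketch that creates such a migration user and grants the privileges listed above. It assumes the mysql-connector-python package and uses hypothetical connection details; adjust the host pattern and password to your own policies, keeping the password at 32 characters or fewer.

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical admin credentials for the source instance.
conn = mysql.connector.connect(host="source-db.example.com", user="admin", password="***")
cur = conn.cursor()

# Create a dedicated migration user (password must be 32 characters or fewer)
# and grant only the global privileges DMS needs.
cur.execute("CREATE USER 'dms_migration'@'%' IDENTIFIED BY 'short-strong-password'")
cur.execute(
    "GRANT EXECUTE, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW VIEW "
    "ON *.* TO 'dms_migration'@'%'"
)
cur.execute("FLUSH PRIVILEGES")

conn.commit()
cur.close()
conn.close()
```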
DEFINER clause

Because a MySQL migration job doesn't migrate user data, sources that contain metadata defined by users with the DEFINER clause will fail when invoked on the new Cloud SQL replica, as the users don't yet exist there. You can identify which DEFINER values exist in your metadata by using these queries. Check if there are entries for either root%localhost or users that don't exist in the target instance.

SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.EVENTS;
SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.ROUTINES;
SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.TRIGGERS;
SELECT DISTINCT DEFINER FROM INFORMATION_SCHEMA.VIEWS;

If your source database does contain this metadata, you can do one of the following:

- Update the DEFINER clause to INVOKER on your source MySQL instance prior to setting up your migration job.
- Create the users on your target Cloud SQL instance before starting your migration job:
  1. Create a migration job without starting it. That is, choose Create instead of Create & Start.
  2. Create the users from your source MySQL instance on your target Cloud SQL instance using the Cloud SQL API or UI.
  3. Start the migration job from the migration job list or the specific job's page.

Binary logging

Enable binary logging on your source database, and set retention to a minimum of 2 days. We recommend setting it to 7 days to minimize the likelihood of losing the log position. You can learn more about binary logging in the MySQL documentation.

InnoDB

All tables, except tables in system databases, must use the InnoDB storage engine. If you need more information about converting to InnoDB, you can reference this documentation on converting tables from MyISAM to InnoDB.
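To gauge how much conversion work is ahead of you, you can list the tables that are not yet on InnoDB and generate the corresponding ALTER statements. Here is a minimal sketch, again assuming mysql-connector-python and hypothetical credentials; review the generated statements before running them, since each conversion rewrites the table.

```python
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(host="source-db.example.com", user="admin", password="***")
cur = conn.cursor()

# Find user tables that are not already on InnoDB, skipping system schemas.
cur.execute(
    """
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE engine <> 'InnoDB'
      AND table_type = 'BASE TABLE'
      AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
    """
)

for schema, table in cur.fetchall():
    # Print the conversion statement rather than executing it blindly.
    print(f"ALTER TABLE `{schema}`.`{table}` ENGINE=InnoDB;")

cur.close()
conn.close()
```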
Set up a connection profile

A connection profile represents all the information you need to connect to a data source. You can create a connection profile on its own or in the context of creating a specific migration job. Creating a source connection profile on its own is useful if the person who has the source access information is not the same person who creates the migration job. You can also reuse a source connection profile definition in multiple migration jobs. Learn more about connection profiles and how to set them up in the documentation.

Configure connectivity

DMS offers several different ways to set up connectivity between the destination Cloud SQL database and your source database. There are four connectivity methods you can choose from:

- IP allowlisting
- Reverse SSH tunnel
- VPCs through VPNs
- VPC peering

The connectivity method you choose will depend on the type of source database and on whether it resides on-premises, in Google Cloud, or in another cloud provider. For a more in-depth look at connectivity, you can read this blog post.

Extra Resources

Now that you've learned how to prepare your MySQL database for migration, you can visit the DMS documentation to get started, or continue learning by reading these blog posts:

- Best practices for homogeneous database migrations
- Database Migration Service connectivity – A technical introspective
- Closing the gap: migration completeness when using Database Migration Service

Try out DMS in the Google Cloud console. It's available at no additional charge for native lift-and-shift migrations to Cloud SQL.
Source: Google Cloud Platform

Ensuring financial stability starts with database stability

Editor's note: We are hearing today from Freedom Financial Network, a provider of technology-based solutions that help consumers overcome debt and establish financial security. To meet the demands of their growing suite of services across the organization, they moved from Rackspace to Google Cloud SQL.

At Freedom Financial Network, our products and services have helped hundreds of thousands of consumers reduce and consolidate their debt. Our suite of customized solutions is driven by a core architecture of intelligent decision-making microservices, shared across the organization, that depend upon independent instances. Before making the switch to Google Cloud, we used Rackspace's solution. But over the past year, during a period of significant growth, we realized that we needed to free up our infrastructure and platform teams to provide more comprehensive support across the enterprise. We also wanted to drive growth by transitioning from a monolithic to a microservices architecture, helping us expand our suite of consumer products and giving our internal teams more flexible, self-service access to our infrastructure.

On Rackspace, with our existing monolithic architecture, we were managing large clusters of instances running on MySQL. Each of Freedom Financial Network's business units had one large cluster of instances. Rackspace was managing those clusters, and thus taking work off our hands, but we had very little control over these databases. Every small change, such as resizing a disk, would take at least a couple of weeks. Because of that, our database instances were vastly overprovisioned and expensive. We saw that Google Cloud could host and manage all of our databases, saving us valuable time and resources, and that Google Cloud SQL's versatility would allow us to build flexible, secure solutions that would meet the needs of our teams and our customers. We were able to break down our clusters into many smaller instances that we can manage entirely through automation, without adding overhead.

A complex migration made easier by Google Cloud

Our migration involved the transformation of our monolithic architecture into a microservices architecture, deployed on Google Kubernetes Engine (GKE) and using the Cloud SQL Proxy in a sidecar container pattern or the Go proxy library to connect to Cloud SQL. Each microservice uses its own schema, and schemas can be grouped in shared instances or hosted on dedicated instances for higher-load applications. We successfully leveraged Google Cloud's new Database Migration Service (DMS) to migrate our databases from Rackspace to Cloud SQL. We used it to migrate three separate production databases, with five total schemas and an overall size of close to 1 TB of data, with less than 15 minutes of downtime. Ultimately, the migration was successful and largely painless. We've shut down our services at Rackspace, and all of our databases are running on Google Cloud's managed services now. DMS was really the only option because of the size of our databases: we estimated that doing a "dump and load" migration would have required application downtime in excess of 12 hours, not to mention the hours we would have spent doing prep work.
Using Cloud SQL as our database foundation

Since completing the migration, Cloud SQL has helped us meet our goals around security, scale, and flexibility. We now deploy a robust set of microservices and instances; following a recent resizing, we have an estimated 180 instances consuming 350 CPUs and roughly 1,300 GB of RAM. Our microservices include everything from simple use cases and application configuration databases to larger, more complex databases that hold information used frequently by business teams. We save so much time by not having to manage those 180 instances.

"With Google Cloud SQL, we save time and resources no longer managing 180 instances. We know that we are going to grow, and our current structure is better suited for that growth." - Mathieu Dubois

Our Platform Team now uses Terraform to create new resources for other Freedom Financial Network teams in Google Cloud. For example, when a team starts a new project and needs a new instance, all they have to do is submit a pull request using the custom Terraform module we've built on top of the default Cloud SQL provider. By creating a module, we ensure that all of the instances are created consistently. The module configures and manages common default options, such as the size of the instance, whether to add a read replica, and high availability, while adhering to our regular naming conventions.

We've recently switched to using Workload Identity on GKE, which gives us a lot of flexibility around permissions. Each of our microservices has a Kubernetes service account, which is linked through Workload Identity to a Google Cloud service account, and we grant that account only the permissions it needs. This allows us to ensure that each microservice only accesses the instances it needs to perform its tasks. A huge benefit of the Cloud SQL Proxy is its security features, which allow us to enforce SSL connections to the databases and ensure that the databases aren't accessible from the outside. We can segregate our data more easily, boosting reliability, and with greater database segregation we can limit the blast radius of a potential incident. All of Cloud SQL's out-of-the-box services, including monitoring, help us flag potential problems with instances.

With Google Cloud managing our databases, we can focus more time and resources on supporting our other teams. With every team running faster, Freedom Financial Network as a whole can operate faster: we can solve business problems more efficiently and drive growth in a greater diversity of new areas and customer products. With Google Cloud SQL, our new structure is optimized for our expected growth.

Explore Freedom Financial Network and learn more about Google Cloud SQL.
Source: Google Cloud Platform