Devices and zero trust

In a zero trust environment, every device has to earn trust before it is granted access. When determining whether access should be granted, the security system relies on device metadata, such as what software is running or when the OS was last updated, and checks whether the device meets the organization’s minimum bar for health. Think of it like your temperature: under 100 degrees and you are safe, but go over and you are now medically in fever territory, and you may not be allowed into certain venues.

Zero trust relies on WHO you are and WHAT you are using to determine access.

In this issue of GCP Comics we focus on devices, and how they play into a zero trust environment.

Device data can take many forms, and can come from many sources. We recommend collecting multiple types of data from multiple systems and using it to make well-informed decisions on which devices get access to your important systems.

What are some of those data types?

- Operating system version: to help you limit access for older, unsupported releases
- Patch date: to find out if there are unpatched vulnerabilities present
- Last check-in date: to understand how long this machine has been ‘offline’
- Binaries installed: to see if there’s any known malware or dangerous executables
- Executables run recently: to see if anything fishy is still running
- Disk encryption: to see if the device complies with data protection policies
- Location data: to restrict access to some tools to only specific cities, states, or countries
- User(s) logged in recently: to see if other people might be sharing this device

And where can you gather the data? There are many sources, including:

- DNS servers
- DHCP servers
- Local agents
- Mobile device management solutions
- OS-specific management tools
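To make this concrete, here is a minimal sketch of how a policy engine might combine a few of these signals into an allow/deny decision. The record fields and thresholds are hypothetical stand-ins, not taken from any particular product:

```python
from datetime import datetime, timedelta

# Hypothetical inventory record, stitched together from sources such as
# an MDM solution and a local agent.
device = {
    "os_version": (13, 4),
    "last_patch": datetime(2021, 2, 20),
    "last_checkin": datetime(2021, 3, 18),
    "disk_encrypted": True,
    "known_bad_binaries": [],
}

def meets_health_bar(d, now):
    """Return True if the device clears the organization's minimum bar."""
    checks = [
        d["os_version"] >= (13, 0),                    # no unsupported releases
        now - d["last_patch"] <= timedelta(days=30),   # patched recently
        now - d["last_checkin"] <= timedelta(days=7),  # hasn't gone dark
        d["disk_encrypted"],                           # data protection policy
        not d["known_bad_binaries"],                   # no known-bad executables
    ]
    return all(checks)

now = datetime(2021, 3, 19)
print("grant access" if meets_health_bar(device, now) else "deny access")
```

In practice each signal would come from a different system, and the bar itself, like the fever threshold above, is a policy decision your organization has to make.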
For more on this topic, check out the following resources:

- OSQuery – Open source endpoint visibility
- Endpoint Verification – Google Cloud inventory management
- BeyondCorp: Building a Healthy Fleet
- BeyondCorp: Design to Deployment at Google

Want more GCP Comics? Visit gcpcomics.com and be sure to follow us on Twitter at @pvergadia and @maxsaltonstall so you don’t miss the next issue!

Source: Google Cloud Platform

Google and the National Science Foundation expand access to cloud resources

As part of our commitment to ensuring more equitable access to computing power and training resources, Google Cloud will contribute research credits and training to projects funded through a new initiative by the National Science Foundation (NSF) called the Computer and Information Science and Engineering Minority-Serving Institutions Research Expansion (CISE-MSI) program. This program seeks to support research capacity at MSIs by broadening funded research in a range of areas supported by the programs of NSF’s CISE directorate. The research areas include those covered by the following CISE programs:

- Algorithmic Foundations (AF)
- Communications and Information Foundations (CIF)
- Foundations of Emerging Technologies (FET)
- Software and Hardware Foundations (SHF)
- Computer and Network Systems Core (CNS Core)
- Human-Centered Computing (HCC)
- Information Integration and Informatics (III)
- Robust Intelligence (RI)
- OAC Core Research (OAC Core)
- Cyber-Physical Systems (CPS)
- Secure and Trustworthy Cyberspace (SaTC)
- Smart and Connected Communities (S&CC)
- Smart and Connected Health (SCH)

For this program, CISE has started with a focus on MSIs, which include Historically Black Colleges and Universities, Hispanic-Serving Institutions, and Tribal Colleges and Universities. MSIs are central to inclusive excellence: they foster innovation, cultivate current and future undergraduate and graduate computer and information science and engineering talent, and bolster long-term U.S. competitiveness. This initial round of proposal applications is due by April 15.

NSF funds research and education in most fields of science and engineering and accounts for about one-fourth of federal support to academic institutions for fundamental research. Since 2017, we’ve been proud to partner with the NSF to expand access to cloud resources and research opportunities. We provided $3 million in Google Cloud credits to the NSF’s BIGDATA grants program. We committed $5 million in funding to support the National AI Research Institute for Human-AI Interaction and Collaboration. We also have an ongoing commitment to facilitate cloud access for NSF-funded researchers as one of the cloud providers for the NSF’s CloudBank.

Digging into the details: a Google/NSF Q&A

For more on this partnership, we spoke to Alice Kamens, strategic projects and program manager for higher education at Google, and Dr. Fay Cobb Payton, program director in the NSF’s CISE directorate, to explain why this new CISE-MSI funding initiative is so important.

Can you explain what drove this new program?

Payton: At NSF, we assessed our award portfolios and recognized that we could do better in terms of the number of minority-serving institutions engaged through the various research programs offered by the CISE directorate. In 2019 and 2020, we held a series of CISE-MSI workshops to talk to HBCU, HSI, and TCU faculty about how we could better support them. It was really community-driven rather than a top-down approach.

Kamens: At the same time, we at Google were assessing our research funding initiatives and noticing the same under-representation of minority-serving institutions in our programs.
We wanted to make sure our resources were reaching researchers and faculty at MSIs. That’s when we heard about the NSF’s forthcoming MSI-RE program and met with Fay to see how we could help expand the program’s capacity.

Payton: On the basis of many conversations with my colleague, Deep Medhi, program director for the CloudBank project, and CISE leadership including Erwin Gianchandani, NSF’s deputy assistant director for CISE, as well as Gurdip Singh, division director for Computer and Network Systems, we decided to focus on building research capacity and research partnerships within and across MSIs. Building on existing CISE partnerships, we wanted to create pathways to expose and train future generations in core research.

What are the main benefits for MSIs and researchers?

Payton: We are offering about $7 million in funding to support researchers with a focus on specific CISE programs named above and in the CISE-MSI solicitation. This program encourages cross-fertilization, either across institutional types and researchers, or across faculty who may not get a chance to engage because of their workloads at MSIs, particularly those with a heavy focus on teaching.

Kamens: Google will provide Google Cloud credits for up to $100,000 per Principal Investigator (PI), as well as training worth $35,000 in live, instructor-led workshops. These matching credits expand the total award amount each PI can access, while the workshops cover the fundamentals of cloud technology, advanced skills, and curriculum and training to help faculty bring the cloud into their courses.

What impact do you expect it will have now, and down the road?

Payton: In the short term, a first cohort of about 10 to 15 proposals will be funded this year. In the longer term, we also want to foster increased engagement with researchers across their careers, beyond simply writing proposals and receiving grants. There’s a breadth of opportunities for science at NSF, such as CAREER awards, computing workshops, and review panel service. Establishing relationships with program directors really matters. Through a continued series of CISE “mini-labs,” we are working to better enable relationship-building among MSI researchers and CISE program directors.

Kamens: At Google we often hear from researchers that the ability to use cloud computing to get an answer to a question in hours rather than days can fundamentally shift the way that they conduct research. Our goal is to accelerate time to discovery and cutting-edge research in academia. It’s critical to us that all researchers, regardless of institution type or size, have access to the resources they need, and can harness Google Cloud as they see fit to help accelerate their research.

What’s around the next corner?

Kamens: In the next few years I think the cloud will be a driver for so much that we do. From researchers and employees to teachers and students, we will all need to become fluent in the power of the cloud.

Payton: This is just the beginning of our outreach. I’d like to think that this solicitation is version 1.0. We’ve already come up with ways to improve the next round!

To learn more, visit the NSF’s Computer and Information Science and Engineering Minority-Serving Institutions Research Expansion program solicitation and apply by April 15. Review NSF’s Dear Colleague Letter announcing this partnership. You can download an informational webinar as well as proposal development workshops for applicants through the American Society for Engineering Education.
To estimate cloud computing costs, consult the CloudBank resources page.

Google Cloud has also expanded its global research credits program for qualifying projects in the following countries: Japan, Korea, Malaysia, Brazil, Mexico, Colombia, Chile, Argentina, and Singapore. To start or ramp up your own project, check out our application form.

Disclaimer: The inclusion of NSF in this blog post is informative of a funding opportunity only. It is not intended to endorse the company, or its products or services.
Source: Google Cloud Platform

This bears repeating: Introducing the Echo subsea cable

It’s an old story, but one that bears repeating: Google works hard to build infrastructure that connects people, geographies, and businesses. Today, we’re announcing Google’s investment in Echo, a new subsea cable from the United States to Asia.

Echo will run from Eureka, California to Singapore, with a stop-over in Guam and plans to also land in Indonesia. Additional landings are possible in the future. Echo will be the first-ever cable to directly connect the U.S. to Singapore with direct fiber pairs over an express route. It will decrease latency for users connecting to applications running in the Google Cloud Platform (GCP) regions in the area, home to some of the world’s most vibrant financial and technology centers.

Infrastructure investments such as these can have a substantial, measurable impact on regional economic activity. For instance, according to a recent study of Google’s APAC network infrastructure between 2010 and 2019, network investment there led to 1.1 million additional jobs and an extra $430 billion (USD) in aggregate GDP for the APAC region.

Echo’s architecture is designed for maximum resilience: its unique trans-Pacific route to Southeast Asia avoids crowded, traditional paths to the north. The cable is expected to be ready for service in 2023. We look forward to the expanded connectivity that Echo will bring to Southeast Asia, enabling new opportunities for people and businesses in the region.
Source: Google Cloud Platform

Rethinking ‘rehost, replatform, rearchitect’: Cloud migration for the real world

When helping customers plan large-scale migrations of applications to the cloud, we here on the Professional Services team sometimes observe them pouring countless hours into top-down evaluation of their application estate, categorizing applications into discrete migration strategies like “rehost”, “replatform”, “refactor” and so on. It’s a well[1] established[2] industry[3] practice[4] in which the only open point for debate, it seems, is whether there are 5, 6, 7 or 8 distinct “R’s”. In practice, the popular “R” migration strategies aren’t really strategies at all; they’re placeholders for all the things you don’t yet know about your applications. We find that an IT organization’s policies and capabilities do more to determine the migration path than anything else, and ultimately override any architect’s prior top-down planning.

What’s more, almost no application falls squarely into any one of the migration strategies across all of its layers. The database might get replatformed from self-hosted PostgreSQL to managed Cloud SQL, while the application layer might get rehosted as the same VM on Compute Engine and the load balancer might get replaced with Google’s Global Load Balancer.

What most organizations usually need is a more holistic approach to migration. That’s why when our Professional Services teams engage with customers, we pay little attention to these semantics. In this early planning stage we’re more interested in people and processes before we focus on the technology. First, we may recommend you consider the Google Cloud Adoption Framework, and determine whether your cloud migration needs to be tactical, strategic or transformational. Then, armed with that insight, it’s time to consider three fundamental migration approaches.

Migration factory

The migration factory approach, in which we laterally copy or deploy virtual machines or containers in bulk, works well if the applications are simple and similar enough, and if the team handling the migration can execute it autonomously without needing to coordinate much with individual application teams. This scaled approach is well suited for initiatives that are infrastructure-led rather than application-led, in particular wholesale data center exits, and can be expedited with solutions like Google Cloud VMware Engine or specialized tools like Migrate for Compute Engine or our Database Migration Service.

By the same token, the migration factory approach doesn’t lend itself well to a change management process (through internal policy or external regulation) in which each application must individually undergo a manual review. The effort of reviewing and controlling for the inevitable changes outweighs the time saved from the automated migration of the code and data itself. The same is true if you need to make material changes to your CI/CD tool chain before you can deploy to your cloud environment. And once you factor in the change process, the commercial case for an as-is migration to the cloud becomes less compelling.

In summary, there are some specific scenarios in which the migration factory approach is the best option, but there are many other scenarios that don’t meet the criteria.

Greenfield software development

At the other extreme of the migration factory approach, there’s the option not to migrate an existing application at all, but rather to develop a new “greenfield” or “cloud-native” application instead. This approach essentially follows textbook agile software development practices.
While there is nothing specific to the cloud about this approach, the cloud’s managed and serverless services lend themselves particularly well to accelerating the development of such a software solution.

A newly developed application must be treated as a product and not as a project, meaning the team does not abandon its work after the last milestone has been reached. Rather, it remains dedicated to the application and continuously refines and expands its functionality in perpetuity (or until the application is deprecated). In this regard, the approach is fundamentally different from any kind of migration. There are no predetermined schedules and the architect does not unilaterally dictate requirements; there are only sprints of incrementally feature-rich, usable software. The team tasked with developing the application must be small, cross-functional and long-lived. It also takes considerably more time to develop a new application than to migrate an existing one.

Modernization factory

The majority of migrations that we see our customers engage in involve some degree of modernization on an application-by-application basis.

Software modernization falls on a wide spectrum, involving a multitude of independent tactics and best practices to improve the scalability, availability, security and durability of the application itself, while also reducing the amount of toil in deploying and operating the application. It’s through these incremental modernizations that customers usually realize their true TCO savings, and it’s also the reason why improving their DevOps capabilities naturally goes hand in hand with cloud adoption. In the cloud, because everything is an API, everything can be automated.

For example, provisioning consistent project environments with the help of infrastructure as code can constitute a noteworthy modernization. Decoupling state from stateless logic can have great scaling benefits (see the sketch below). Removing shell access from all servers and allowing only for changes to be pushed through a CI/CD pipeline can be a game changer for your security posture. There are a hundred more examples of incremental software modernizations, and none of them squarely fit into a migration strategy “R”.

A simplified illustration of a gradual modernization evolution
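To make one of these tactics concrete, here is a minimal sketch of decoupling state from stateless logic. The class and method names are hypothetical, and the in-memory ExternalStore stands in for a managed service such as Memorystore or Cloud SQL:

```python
# Before: state lives inside the process, so adding replicas breaks sessions.
class StatefulCartService:
    def __init__(self):
        self._carts = {}  # lost on restart, invisible to other replicas

    def add_item(self, user_id, item):
        self._carts.setdefault(user_id, []).append(item)


# After: the handler is stateless; all state lives in an external store,
# so any replica can serve any request.
class ExternalStore:
    """Stand-in for a managed store (e.g. Memorystore or Cloud SQL)."""
    def __init__(self):
        self._data = {}

    def append(self, key, value):
        self._data.setdefault(key, []).append(value)


class StatelessCartService:
    def __init__(self, store):
        self._store = store  # injected, shared by all replicas

    def add_item(self, user_id, item):
        self._store.append(f"cart:{user_id}", item)


store = ExternalStore()
service = StatelessCartService(store)
service.add_item("user-42", "baseball")
```

Because the handler itself holds no state, instances can be added, removed, or replaced freely, which is what lets the application scale horizontally.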
When we work with our customers to land their applications with some degree of modernization on Google Cloud, we take stock of the common application layers (or “archetypes”) found in their current estate and mutually agree on a limited set of target cloud services, a set of modernization tactics, and a degree of process automation through CI/CD and infrastructure as code. Any additional modernization effort above this baseline should be postponed until after the application has landed in production in the cloud.

A modernization factory requires a hybrid team construct, composed of a small cross-functional app team that is familiar with the individual application, plus a “factory” team that is familiar with the portfolio of target services and modernization tactics to be employed across all applications. While the app team enters and leaves the factory together with its application, the factory team stays in place and processes one application (and one app team) at a time.

Crucially, the factory team must include representation from, and hold accountable, all decision makers who have the authority to block the successful completion of the migration. Think of the factory team as a matrixed microcosm of the business that transcends organizational hierarchies and binds everyone to a single shared objective: to successfully land an application in the cloud.

Last but not least, the modernization factory provides a central forum in which to train people on the new cloud operating model. This helps avoid prematurely training swathes of IT employees on technologies for which they don’t have an immediate need, while also ensuring that the app teams have the skills and ways of working to be successful on their own after the migration concludes.

Once the first modernization factory has a proven track record of success, additional factories can be spun up to migrate more applications in parallel.

Recommendation

So, which approach should you take for your applications? To develop a real migration strategy, first answer why your organization is adopting the cloud. How you approach it follows from that.

For example, harkening back to the Google Cloud Adoption Framework, if your objective is to reduce cost with minimal change to your applications (i.e., tactical), then we recommend taking the migration factory approach. If your objective is to maximize the benefits you get from your software (i.e., strategic), then we recommend taking the modernization factory or the greenfield software development approach. If your objective is to innovate with net new applications that solve new business problems (i.e., transformational), then we recommend taking the greenfield software development approach.

After you’ve established why your organization wants to adopt cloud and have agreed on how you would like to approach it, everything else will begin to fall into place, including things that don’t begin with the letter “R”.

To learn more about migrating to Google Cloud, check out our data center migration center or sign up for a free cloud migration cost assessment.

1. Gartner’s 5 R’s
2. AWS’s 6 R’s
3. Citrix’s 7 R’s
4. Infosys’ 8 R’s
Source: Google Cloud Platform

Check, please! Billing in Cloud Storage

If you’d rather listen to this as a podcast, check out Google Cloud Reader. So far, we’ve talked a lot about how to use Cloud Storage, from managing data to optimizing performance, uploading, downloading, and buckets. But there’s one important topic that we haven’t talked about: the price tag.

Analogy time! I’ve found that having a grocery list keeps me from overspending at the store (even if I’m shopping on an empty stomach), and this kind of budget planning comes in handy for Cloud Storage, too. Instead of snack specifics, you’ll need other information, but the same principles apply: as a general rule, it’s a good idea to predict and track your data usage so you can anticipate your monthly costs. Details in the documentation, and below.

Let’s get to it.

Pricing overview

To start, it’s good to understand how pricing is broken down. For Cloud Storage, pricing is a compilation of four components:

- Data storage: the amount of data stored in your buckets; rates vary depending on the storage class of your data and the location of your buckets.
- Network usage: the amount of data read from or moved between your buckets.
- Operations usage: the actions you take in Cloud Storage, like listing the objects in your buckets.
- Retrieval and early deletion fees: applicable only to data stored in the less frequently accessed storage classes: Nearline, Coldline, and Archive.

Each of these components has its own pricing tables that show cost based on factors such as region and operation type, which means that each company’s total cost is going to be based upon its specific requirements. So as much as I’d like to tell you exactly what your bottom line will be in this blog post, I can’t. But let’s focus on what we *can* accomplish in this post, and that’s a high-level overview of the various costs and the tools you can use to manage them!

Pricing calculator

Allow me to be the first to introduce you to your budgeting best friend, the Google Cloud pricing calculator! I’ll walk you through the different sections so you’re ready to go when the time comes to enter your own information.

Data storage

Data storage costs apply to the at-rest storing of your data in Cloud Storage. For a quick refresher, “at rest” means that the data is physically on the disk itself, and not somewhere in transit throughout the network, or only temporarily housed there. For another quick refresher, there are four storage classes: Standard, Nearline, Coldline, and Archive.

Standard Storage is appropriate for storing data that is frequently accessed, such as serving website content, interactive workloads, or data supporting mobile and gaming applications. For Standard Storage, the monthly cost is the only cost you need to plan for. For the other three storage classes, however, you’ll want to consider the minimum storage duration of that data, as well as any retrieval costs.

For example, Coldline Storage has a minimum storage duration of 90 days, and a retrieval cost of, say, two pennies per GB. So the cheaper monthly cost is completely worth it if you only want to access this data twice a year. If you find yourself accessing or updating the data every week, you’ll end up spending more money than if you had selected Standard Storage to begin with.
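To put rough numbers on that trade-off, here is a minimal sketch comparing annual costs for the two classes under different access patterns. The per-GB rates are illustrative placeholders only; always check the current pricing tables:

```python
# Illustrative per-GB rates only; real prices vary by region and change
# over time, so consult the Cloud Storage pricing tables.
STANDARD_PER_GB_MONTH = 0.020
COLDLINE_PER_GB_MONTH = 0.004
COLDLINE_RETRIEVAL_PER_GB = 0.02   # the "two pennies" retrieval fee

def annual_cost(gb, monthly_rate, retrieval_rate, reads_per_year):
    """At-rest storage for a year plus retrieval fees for full reads."""
    return gb * monthly_rate * 12 + gb * retrieval_rate * reads_per_year

gb = 1000  # 1 TB of data
for reads in (2, 52):  # twice a year vs. every week
    standard = annual_cost(gb, STANDARD_PER_GB_MONTH, 0.0, reads)
    coldline = annual_cost(gb, COLDLINE_PER_GB_MONTH,
                           COLDLINE_RETRIEVAL_PER_GB, reads)
    print(f"{reads:>2} full reads/year: "
          f"Standard ${standard:,.2f} vs Coldline ${coldline:,.2f}")
```

With these placeholder rates, Coldline wins handily at two reads a year but costs several times more than Standard when the whole dataset is read weekly, which is exactly the trade-off the retrieval fees are designed to encode.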
So that’s something to consider when setting things up.

Network costs

When discussing network costs, we need to distinguish between egress and ingress:

- Egress represents data sent from Cloud Storage, like when reading data.
- Ingress represents data sent to Cloud Storage, like when writing data.

Important note: network ingress is always free.

For network egress, there are three categories to consider. First, there’s egress that moves or copies data to other Cloud Storage buckets, or that occurs when other Google Cloud services access your data. This is considered “network egress within Google Cloud” and is free within regions, such as reading data in a US-EAST1 bucket into a US BigQuery dataset. Pricing then applies for egress between regions or across continents. Second, there’s “specialty network services”: when you use certain Google Cloud network products, such as Cloud CDN or Cloud Interconnect, egress pricing is based on their pricing tables. Third, all other egress is considered “general network usage” and is billed based upon which continent the data is traveling to.

Operations usage

An operation is an action that makes changes to, or retrieves information about, buckets and objects in Cloud Storage. Operations are divided into three categories: Class A, Class B, and free. For a full list of the operations that fall into each class, check the documentation. As a brief overview:

- Class A includes creating storage buckets and objects.
- Class B includes retrieving storage objects.
- Free operations are primarily deletions.

Retrieval and early deletion fees

Because Nearline Storage, Coldline Storage, and Archive Storage are intended for storing infrequently accessed data, there are additional costs associated with retrieval, as well as minimum storage durations. But more about that in the documentation.

Closing out!

Stay tuned for more posts on making the most of Cloud Storage. Learn more about your storage options in Cloud Storage Bytes. If you want to know more about pricing, check out the documentation for the most up-to-date information for your particular use case, more examples, and tutorials.
Source: Google Cloud Platform

Four ways CSPs can harness data, automation, and AI to create business value

Telecommunications companies sit on a veritable goldmine of data they can use to drive new business opportunities, improve customer experiences, and increase efficiencies. There’s so much data, in fact, that a significant challenge lies in ingesting, processing, refining, and using that data efficiently enough to inform decision-making as quickly as possible, often in near real time.

According to a new study by Analysys Mason, telecommunications data volumes are growing worldwide at 20% CAGR, and network data traffic is expected to reach 13 zettabytes by 2025. To stay relevant as the industry evolves, communications service providers (CSPs) need to manage and monetize their data more effectively to:

- Deliver new user experiences and B2B2X services, with the “X” being customers and entities in previously untapped industries, and unlock new revenue streams.
- Transform operations by harnessing data, automation, and artificial intelligence (AI)/machine learning (ML) to drive new efficiencies, improved network performance, and decreased CAPEX/OPEX across the organization.

Here are four key data management and analytics challenges CSPs face, and how cloud solutions can help.

1. Reimagining the user experience means CSPs need to solve for near-real-time data analytics challenges.

Consider being able to suggest offers to customers at the right place and time, based on their interactions. Or imagine being able to maximize revenue generation by dynamically adjusting offers to macro and micro groups based upon trends you discover during a campaign. These types of programs, which reduce churn and increase up-sell/cross-sell, are made possible when you can correlate your data across systems and get actionable insights in near real time.

Now, when it comes to effective decision-making in near real time, lightning speed is critical. Low latency is required for use cases like delivering location-based offers while customers are still on-site, or detecting fraud fast enough during a transaction to minimize losses. Cloud vendors can offer the speed and scale to tackle the streaming data required for near-real-time data processing. At Google, we understand these requirements because they are core to our business, and we’ve developed the technologies to meet them at scale. Google Cloud’s BigQuery, for example, is a serverless and highly scalable cloud data warehouse that supports streaming ingestion and super-fast queries at petabyte scale. Google infrastructure technologies like Dremel, Colossus, Jupiter and Borg that underpin BigQuery were developed to address Google’s global data scalability challenges. Google Cloud’s full stream analytics solution is built upon Pub/Sub and Dataflow, and supports the ingestion, processing, and analysis of fluctuating volumes of data for near-real-time business insights. Furthermore, CSPs can also take advantage of Google Cloud Anthos, which offers the ability to place workloads closer to the customer, whether within an operator’s own data center, across clouds, or even at the edge, enabling the speed required for latency-sensitive use cases.
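As a small illustration of the ingestion step, here is a sketch that streams event rows into BigQuery with the google-cloud-bigquery client library. The project, dataset, and schema are hypothetical, and a production pipeline would typically put Pub/Sub and Dataflow in front of this:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical table of network events; create it ahead of time with a
# matching schema (cell_id STRING, latency_ms INT64, ts TIMESTAMP).
table_id = "my-telco-project.network.events"

rows = [
    {"cell_id": "site-1042", "latency_ms": 38, "ts": "2021-03-18T09:15:00Z"},
    {"cell_id": "site-0007", "latency_ms": 121, "ts": "2021-03-18T09:15:01Z"},
]

# Streaming inserts make rows available for query within seconds,
# which is what enables the near-real-time use cases described above.
errors = client.insert_rows_json(table_id, rows)
if errors:
    print("Encountered errors while inserting rows:", errors)
```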
What’s more, according to Justin van der Lande, principal analyst at Analysys Mason, “real-time use cases require an action to take place based on changes in streaming data, which predicts or signifies a fresh action.” They also require constant model validation and optimization. Therefore, using ML tools like TensorFlow in the cloud can help improve models and prevent them from degrading. Cloud-based services also let CSP developers build, deploy, and train ML models through APIs or a management platform, so models can be deployed quickly with the appropriate validation, testing, and governance. Google Cloud AutoML enables users with limited ML expertise to train high-quality models specific to their business needs.

2. Driving CSP operational efficiencies requires streamlining fragmented and complex sets of tools.

Over time, many CSPs have built up highly fragmented and complex sets of software tools, platforms, and integrations for data management and analysis. A legacy of M&A activity over years means different departments or operating companies may have their own tools, which adds to the complexity of procuring and maintaining them, and can also impact an operator’s ability to make changes and roll out new functionality quickly.

Cloud providers offer CSPs access to advanced data and analytics tools with rich capabilities that are continuously updated. Google Cloud, for instance, offers Looker, which enables organizations to connect, analyze, and visualize data across Google Cloud, Azure, AWS, or on-premises databases, and is ideal for streaming applications. In addition, hyperscale cloud vendors work with a wide ecosystem of technology partners, enabling operators to adopt more standardized data tools that support a wider variety of use cases and are more open to new requirements.

For example, Google Cloud partnered with Amdocs, helping CSPs consolidate, organize, and manage data more effectively in the cloud to lower costs, improve customer experiences, and drive new business opportunities. Amdocs DataONE extracts, transforms, and organizes data using a telco-specific and TM Forum-compliant Amdocs Logical Data Model. The solution runs on Google Cloud SQL, a fully managed and scalable relational database solution that allows you to more efficiently organize and improve the accessibility, availability, and visibility of your operational and analytical data. The Amdocs data solution can also integrate with BigQuery to take advantage of built-in ML. Finally, Amdocs Cloud Services offers a practice to help CSPs migrate, manage, and organize their data so they can extract the strategic insights needed to maximize business value.

3. Leveraging cloud and automation can help CSPs reduce cost and overhead as data volumes continue to rise.

One of the most powerful motivations for CSPs to adopt a cloud-based data infrastructure may be the prospect of lowering operational and capital costs. Analysys Mason predicts that IT and software capital spending for CSPs will approach $45 billion by 2025, and IT operational expenses will be more than double that amount. These costs are set to rise as operators support new digital services and growing data volumes. With cloud services, you pay for the capacity you use, not the servers you own. This not only saves on infrastructure-related capital costs, but also takes advantage of the efficiencies cloud computing achieves through scale, and means that all maintenance and updates are built into a predictable monthly bill.

Additionally, CSPs experience demand peaks and valleys, daily and annually, to accommodate busy internet traffic hours and high-audience events like the Super Bowl. Building infrastructure to accommodate these peaks wastes resources and reduces your return on capital. Customer demand may also fluctuate beyond these expected cycles, and large workloads like big data queries or ad hoc analytics and reports also make it difficult to predict your capacity needs. Cloud computing offers fast scaling up and down, even autoscaling, which isn’t always easy to do with on-premises systems.

4. Increasing customer lifetime value requires high-quality, complete data for timely decision-making.

Finally, CSPs need to utilize data and analytics to better understand how to engage with customers and deliver greater, more personalized services in order to increase overall customer lifetime value. This requires the ability to analyze and act on a complete set of quality data quickly enough to inform sound decision-making. For example, without high-quality and timely data on your most valued customers, you may not be able to spot customers who are about to churn; conversely, you may offer discounts to customers who were not about to churn in the first place.

According to van der Lande, there are five main attributes required of a good data set: data quality, governance, speed, completeness, and shareability (see Chart 1). Put another way, your data is only as good as how fast you can capture, transform, and load it from a myriad of back-end systems, front-end systems, and networks, how complete it is, and how easily you can share a 360° view with the right decision-makers. It is also important to consider how well that data is governed. Considerations such as data lineage, data source, categorization of PII data, and regulatory requirements are very important as you look to build trust in the data quality and ultimately the insights. What’s more, as data volumes grow, the more difficult it becomes to ensure their quality, governance, and completeness.

Chart 1: The main CSP challenges related to data (Source: Analysys Mason)

Operators can create a single operational data store in the cloud and use ML-driven preparation tools to improve data quality and completeness. Cloud vendors can also provide enterprise-grade security tools with the ability to manage access rights, as well as automated administration to ensure proper governance. The cloud supports near-real-time, end-to-end streaming pipelines for big data analytics that would otherwise quickly strain in-house systems. In addition, solutions like Google Cloud’s BigQuery Omni, powered by Anthos, give CSPs a consistent data analysis and infrastructure management experience, regardless of their deployment environment.

The telecommunications industry has a unique opportunity to mine the massive amount of data its systems generate to improve customer experiences, operate more efficiently, create innovative new products, and uncover use cases to generate new revenue opportunities faster. But as long as CSPs rely on rigid on-premises infrastructure, they’re unlikely to capitalize on this valuable resource. In a world where near-real-time decision-making is more critical than ever, the cloud can help provide the agility, scale, and flexibility necessary to process and analyze this growing volume of data, helping CSPs remain not just relevant, but competitive.

Download the complete Analysys Mason whitepaper, co-sponsored with Amdocs and Intel, to learn more.
Source: Google Cloud Platform

Batter up! Anthos for bare metal helps MLB gear up for upcoming season

It’s been a long, cold winter, after a long, strange year. The global pandemic impacted the last baseball season, and it may do so again. But Spring Training is finally here, and it’s all about optimism and new beginnings. We’re excited for this season, and we’re really excited about our new architecture powered by Anthos on bare metal.

We already use an array of services from Google Cloud: we broadcast mlb.com out of Google Cloud, relying on Compute Engine, Google Kubernetes Engine (GKE), Cloud SQL, Load Balancing, and Cloud Storage, to name a few. We also use GKE for test and development, and BigQuery for analytics. And for the second season now, we run Anthos in our ballparks to host applications that need to run in the park for performance reasons. Take our Statcast baseball metric platform: cameras collect data on everything from pitch speed to ball trajectories to player poses, which gets fed into the Statcast pipeline in real time. Statcast transforms that data into on-screen analytics that announcers use as part of their game-time color commentary. Obviously, minimizing the time between when the bat hits the ball and when it’s displayed on screen is hugely important to the fan’s viewing experience.

Last year, our Anthos servers ran on top of VMware, but the plan had always been to run Anthos on bare metal, because it would help us simplify the stack we have to maintain in our 31 parks. So when Anthos on bare metal became generally available in November, we pushed forward with our partner Arctiq to deploy it for the upcoming season. By eliminating the virtualization layer, Anthos on bare metal makes it easier to swap out a server in the event of a hardware failure (our ballparks weren’t designed as climate-controlled data centers, so hardware failures happen more often than you would think). When a failure happens, we simply drop in a new server and image it with Ubuntu and Anthos; the cluster heals itself and automatically redeploys our apps.

This kind of remote operation is particularly valuable in a pandemic. During the height of the 2020 season, local ordinances precluded most vendors and technicians from being on-site, making ease of replacement all the more important. 2021 will present many of the same restrictions. Anthos on bare metal also makes us more agile. For example, the Toronto Blue Jays are starting their season in Dunedin, FL this year. If the team is able to go back to Toronto this year, Anthos on bare metal makes it easy for us to follow them there.

Going forward, we have big plans for Anthos. Over time, Anthos clusters will become a multi-tenant resource that we can offer to anyone that needs access to low-latency compute in the ballpark, like a food vendor or entertainment provider. Rather than having every vendor provide their own silo, we provide Anthos as a service.

The 2020 MLB season was a season like no other, and we’re expecting the 2021 season to throw us its fair share of curveballs too. But when it comes to our in-park server infrastructure, Google Cloud and Anthos on bare metal have us feeling pretty good about the future. It’s time to play ball!

Major League Baseball trademarks and copyrights are used with permission of Major League Baseball. Visit MLB.com.
Source: Google Cloud Platform

TELUS International migrates key customer experience app to Google Cloud

As more services and applications go online, ensuring a frictionless customer experience is vital to building brand loyalty, capturing more sales, and optimizing profits. But if your underlying technology isn’t reliable, it’s easy to lose customers to the competition. For TELUS International, a leading digital customer experience innovator, ensuring the reliability of its online tools and services is crucial to its team’s mission to design, build, and deliver high-tech, high-touch customer experiences for some of the world’s most respected brands.

TELUS International bundles Verint, a workforce-management application from a Google Cloud partner of the same name, with its Cloud Contact Center platform to help North American call centers optimize customer service activities on the phone and online. TELUS International also uses Verint’s solution internally for its business process outsourcing.

So, as part of its own digital transformation journey, TELUS International migrated Verint from its legacy on-premises data center to Google Cloud. This move will help its global service centers optimize customer service activities for improved performance, leveraging automation and AI-based analytics and insights to achieve better business outcomes. Currently, TELUS International has approximately 30,000 users on the Verint platform, so ensuring that it’s running on a reliable cloud platform like Google Cloud is vital.

Fast, painless migration

Martin Viljoen, VP of Information Technology at TELUS International, says the Verint migration from on-premises to Google Cloud was fast and seamless. “It took us about a week to stand up the infrastructure, which could have taken up to a year or more on-premises,” he says, given the traditional back and forth with hardware vendors to beef up the data center and solve problems along the way. “We didn’t have to worry about hardware availability on the fly. If you miss something, you just click and add it. You don’t have to write up a purchase order and wait for six weeks for delivery,” with much more time needed to get the new hardware operational.

In all, Viljoen says it took only 4–6 weeks to go from system design to production. “Everything we needed was readily available,” he says. “And at the end of the day, it was very successful.”

Simple, quick provisioning

Every migration has its challenges, but TELUS International found this project’s ‘bumps in the road’ much easier to navigate in Google Cloud. For example, Viljoen says the company started with a load balancer that was inexpensive but didn’t provide all the functionality needed. “We just went down the menu and selected the F5 load balancer,” which the company is currently using on-premises. “It was a very simple, very quick provisioning process and it proved why we are in the cloud. You can just pick any service if or when needed.” He says doing the same thing with an off-the-shelf load balancer and running into the same issue would have delayed the project for months. Getting F5 configured for the cloud was also easy; the company simply replicated its on-premises configuration in Google Cloud.

One-click backup

Backing up into Google Cloud is also simple for TELUS International. “All you do is right-click,” says Viljoen. “On-premises, I would have to order oodles of bandwidth or buy a massive storage array. We’re also backing up our on-premises data center into Google Cloud because it’s so easy to do.
It’s a no-brainer.”

Exceptional performance

With Verint Workforce Engagement and Google Cloud, TELUS has a world-class customer engagement platform to empower its remote and globally distributed workforce to support exceptional customer experiences, while gaining real-time insight into business operations so it can adjust as needed to meet today’s ever-changing demands, both within contact centers and throughout the enterprise.

TELUS International’s clients are also benefiting from improved performance on Google Cloud. “On a server, antivirus software is running and it eats up half of your resources,” Viljoen says. Customers with resource-intensive jobs have reported dramatic improvements in speed, getting reports in minutes instead of hours.

As a result of all these gains, TELUS International plans to migrate more of Verint to Google Cloud, including key components of the application’s workforce management solution as well as its call and screen recording feature. Having this data in Google Cloud will make it more accessible and open up new possibilities for data analytics and integration with other services, such as the company’s telephony platform, which is also on Google Cloud.

Viljoen says, “We’re still at the low-hanging-fruit stage with Google Cloud, and we’re going to get deeper into the platform. The next step is to integrate other services that either our company or our clients are mandating. We’re a growing and evolving global organization. Having incremental tools and services in the cloud has made all aspects of our business a lot easier, including our integrations. Once things are in the cloud, it’s just a lot simpler to enable our business.”

At Google Cloud, we’re here to help you craft the right migration for you and your business, just like we did with TELUS International. Get started by signing up for a free migration cost assessment, or visit our data center migration solutions page to learn more. Let’s get migrating!
Source: Google Cloud Platform

Using BigQuery Administrator for real-time monitoring

When doing analytics at scale with BigQuery, understanding what is happening and being able to take action in real time is critical. To that end, we are happy to announce Resource Charts for BigQuery Administrator. Resource Charts provide a native, out-of-the-box experience for real-time monitoring and troubleshooting of your BigQuery environments. They make it easy to understand your historical patterns across slot consumption, job concurrency, and job performance, allowing you to take actions to ensure your BigQuery environment continues to run smoothly. Specifically, Resource Charts can help you:

- Determine how your resources are being consumed across several dimensions, like projects, reservations, and users, so you can take remediating actions like pausing a troublesome query.
- Manage capacity, by helping you understand how your resources are being consumed over time and optimize your BigQuery environment’s slot capacity.

Taking Resource Charts for a spin

Let’s say you start the morning with a hot coffee in hand, and suddenly several colleagues complain their queries are running slower than expected. You open up Resource Charts and immediately see there was a spike in slot usage. But what caused the spike? You zoom into the time range when the spike happened and group by different dimensions. When looking at the job dimension, you see that a new scheduled query job has been eating up a significant portion of your slot resources for the past 10 minutes. You find the query in Job history and click Cancel Job, and your BigQuery environment returns to normal. You just diagnosed your BigQuery environment, identified the outlier, and remediated the situation, all before you had a chance to put your coffee cup down.

Resource Charts leverage BigQuery’s INFORMATION_SCHEMA tables to render these visuals. This means all the data is also available for you to query directly, allowing you to create your own dashboards and monitoring processes. To help you get started, you can find example INFORMATION_SCHEMA queries on GitHub that show an organization’s slot and reservation utilization, job execution, and job errors. You can also view Google Data Studio dashboard templates built from these queries.
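As a sketch of what querying this data directly might look like, the following assumes the google-cloud-bigquery Python client and the JOBS_BY_PROJECT view in the US region; adjust the region qualifier and time window for your environment:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses your default project and credentials

# Top slot consumers over the last hour, similar to grouping the
# Resource Charts by the user dimension.
query = """
SELECT
  user_email,
  SUM(total_slot_ms) / (1000 * 60 * 60) AS slot_hours
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
GROUP BY user_email
ORDER BY slot_hours DESC
LIMIT 10
"""
for row in client.query(query).result():
    print(f"{row.user_email}: {row.slot_hours:.1f} slot-hours")
```

A query along these lines is roughly what the slot consumption chart renders when grouped by user.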
Resource Charts for BigQuery Administrator is available today in Public Preview for customers using Reservations, and we hope it makes it easier for you to manage your BigQuery environments. You can learn more about how to use Resource Charts here.

Source: Google Cloud Platform

Run a transformed supply chain—see how at Google’s Digital Supply Chain summit

At the start of 2020, who knew that our supply chains would be so disrupted that we’d have to worry about having enough toilet paper or paper towels? Yet the early days of the COVID-19 pandemic resulted in many disruptions and unanticipated events. On a practical level, the sudden changes in consumer behavior placed supply chains in the spotlight, and revealed to many, including consumers everywhere, the fragility of our logistics networks.

Of course, this disruption wasn’t isolated to household items. Entire modes of purchasing shifted dramatically (and perhaps permanently). At the end of 2019, 16% of global sales were e-commerce. Within four months, that number grew to 33%. Supply chain companies were forced to adapt almost overnight to massive shifts to e-commerce and rapidly changing delivery models. The follow-on effects of this shift have been equally dramatic, including a shortage of shipping containers. COVID-19 lockdowns resulted in fewer people in the ports, which caused shipping traffic jams, which in turn led to a sharp rise in container shipping costs. And let’s not forget perhaps the most visible manifestation of why supply chains are the backbone of the global economy: the massive worldwide effort to deliver and distribute COVID-19 vaccines.

The limitations of today’s supply chains

It’s not that supply chain professionals haven’t invested in better ways to predict demand, deliver or fulfill orders, and manage inventory. In fact, according to IDC Research, investments in supply chain management and service delivery are projected to grow by 34% in just the next three years, from $48 billion today to $64 billion by the end of 2023. However, there are still significant limitations to overcome, particularly in three key areas:

- Visibility: Companies don’t have enough information about their inventories to react to the uncertainties of profound change.
- Flexibility: Companies running standard processes are slow to adapt to the changes.
- Intelligence: Without streamlined, cleaned, and actionable data, companies can’t accurately predict and meet demand.

Supply chains, then, are due for innovation. Unlike the manufacturing sector overall, which has adopted everything from AI and robotics to smart factories, supply chains have made only relatively small adjustments to their standard processes.

Join the supply chain transformation at our summit

To help companies discuss and address these pressing issues, we’re hosting a Digital Supply Chain Summit on March 30, 2021, bringing together more than 300 senior supply chain and logistics leaders from across the world. At this event, attendees will learn how they can create a digital supply chain platform that enables them to deliver exceptional customer experiences; how to build resilient and sustainable supply chains; and how to run supply chains autonomously through the use of AI, ML, and other advanced technologies.

Among the featured industry speakers are Kuehne+Nagel and J.B. Hunt, both among the world’s leading transportation and logistics companies, who are digitizing their supply chains to enable every process, person, and team. They will discuss their digital transformation journeys, particularly how they’re leveraging the cloud, artificial intelligence, and data analytics to unlock new levels of efficiency and business performance.
Also at the event, you’ll hear from the leaders of Google’s own supply chain and data center operations, who will discuss how the cloud-based solutions they’ve deployed have driven real impact and business results. Finally, industry practice leaders from Accenture and Deloitte will share how you can architect a customer-centric digital supply chain and a connected digital thread across your extended value chain.

Why be average when you can be unique?

As a company running a supply chain, you don’t need to stick with the status quo. Your transformation can be achieved by leveraging data to power individualized processes, which will set your company apart from the pack. We’re helping companies build a cloud-based digital supply chain platform based on four capabilities. First, using a digital supply chain twin, a digital representation of the physical supply chain, as its core. Second, using intelligence to anticipate and predict potential outcomes. Third, allowing end users to access information from whatever device they are using, anywhere. And fourth, embracing partnering when it comes to applications. Supply chain companies succeed when they complement their existing systems with new technologies, making it easier to innovate, adapt, and overcome limitations.

Take the first step to transformation

We welcome you to register for the Google Cloud Digital Supply Chain Summit to get a comprehensive look at how companies are digitally transforming their supply chain and logistics operations. This online event is taking place on March 30. We hope it will help you identify steps you can take today to advance your own digital strategies with cloud-based solutions, data analytics, and AI.
Source: Google Cloud Platform