Hola, South America! Announcing the Firmina subsea cable

Today, we’re announcing Firmina, an open subsea cable being built by Google that will run from the East Coast of the United States to Las Toninas, Argentina, with additional landings in Praia Grande, Brazil, and Punta del Este, Uruguay. Firmina will be the longest cable in the world capable of running entirely from a single power source at one end of the cable if its other power source(s) become temporarily unavailable—a resilience boost at a time when reliable connectivity is more important than ever. As people and businesses have come to depend on digital services for many aspects of their lives, Firmina will improve access to Google services for users in South America. With 12 fiber pairs, the cable will carry traffic quickly and securely between North and South America, giving users fast, low-latency access to Google products such as Search, Gmail and YouTube, as well as Google Cloud services.

Single-end power source capability is important for reliability, a key priority for Google’s network. With submarine cables, data travels as pulses of light inside the cable’s optical fibers. That light signal is amplified every 100 km with a high-voltage electrical current supplied at landing stations in each country. While shorter cable systems can enjoy the higher availability of power feeding from a single end, longer cables with large fiber-pair counts make this harder to do. Firmina breaks this barrier—connecting North to South America, the cable will be the longest ever to feature single-end power feeding capability. Achieving this record-breaking, highly resilient design is accomplished by supplying the cable with a voltage 20% higher than in previous systems.

Celebrating the world’s visionaries

We sought to honor a luminary who worked to advance human understanding and social justice. The cable is named after Maria Firmina dos Reis (1825–1917), a Brazilian abolitionist and author whose 1859 novel, Úrsula, depicted life for Afro-Brazilians under slavery.
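To make the ~100 km amplifier spacing concrete, here is a back-of-the-envelope sketch. The spacing comes from the article; the route length used below is an illustrative assumption, since the exact cable length is not published here.

```python
AMPLIFIER_SPACING_KM = 100  # repeater spacing stated in the article

def amplifier_count(cable_length_km: float, spacing_km: float = AMPLIFIER_SPACING_KM) -> int:
    """Rough count of in-line optical amplifiers needed along a route."""
    return int(cable_length_km // spacing_km)

# A US East Coast to Las Toninas route on the order of 12,000 km is an
# assumed placeholder for illustration only.
print(amplifier_count(12_000))  # on the order of 120 repeaters
```

Each of those amplifiers draws power from the high-voltage feed supplied at the landing stations, which is why single-end feeding over such a long route requires the higher voltage described above.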
A mixed-race woman and intellectual, Firmina is considered Brazil’s first novelist. With this cable, we’re thrilled to draw attention to her pioneering work and spirit. You can learn more about Firmina in this Google Doodle.

Including Firmina, we now have investments in 16 subsea cables, such as Dunant, Equiano and Grace Hopper, and consortium cables like Echo, JGA, INDIGO, and Havfrue. We’re continuing our work of building out a robust global network and infrastructure, which includes Google data centers and Google Cloud regions around the world. Learn more about our infrastructure.
Source: Google Cloud Platform

New research reveals what’s needed for AI acceleration in manufacturing

While the promise of artificial intelligence transforming the manufacturing industry is not new, long-running experimentation hasn’t yet led to widespread business benefits. Manufacturers remain in “pilot purgatory”: Gartner reports that only 21% of companies in the industry have active AI initiatives in production. However, new research from Google Cloud reveals that the COVID-19 pandemic may have spurred a significant increase in the use of AI and other digital enablers among manufacturers. According to our data—which polled more than 1,000 senior manufacturing executives across seven countries—76% have turned to digital enablers and disruptive technologies such as data and analytics, cloud, and artificial intelligence (AI) as a result of the pandemic. And 66% of manufacturers who use AI in their day-to-day operations report that their reliance on AI is increasing.

The top three sub-sectors deploying AI to assist in day-to-day operations are automotive/OEMs (76%), automotive suppliers (68%), and heavy machinery (67%).

In fact, Bryan Goodman, Director of Artificial Intelligence and Cloud, Ford Global Data & Insight and Analytics, shares: “Our new relationship with Google will supercharge our efforts to democratize AI across our business, from the plant floor to vehicles to dealerships. We used to count the number of AI and machine learning projects at Ford. Now it’s so commonplace that it’s like asking how many people are using math. This includes an AI ecosystem that is fueled by data, and that powers a ‘digital network flywheel.’”

Moving from edge cases to mainstream business needs

Why are manufacturers now turning to AI in increasing numbers? Our research shows that companies that currently use AI in day-to-day operations are looking for assistance with business continuity (38%), making employees more efficient (38%), and being helpful for employees overall (34%).
It’s clear that AI/ML technology can augment manufacturing employees’ efforts, whether by providing prescriptive analytics like real-time guidance and training, flagging safety hazards, or detecting potential defects on the assembly line.

In terms of specific AI use cases called out by the research, two main areas emerged: quality control and supply chain optimization. In the quality control category, 39% of surveyed manufacturers who use AI in their day-to-day operations use it for quality inspection and 35% for product and/or production line quality checks. At Google Cloud, we often speak with manufacturers about AI for visual inspection of finished products. Using AI vision, production line workers can spend less time on repetitive product inspections and can instead focus on more complex tasks, such as root cause analysis. In the supply chain optimization category, manufacturers said they tapped AI for supply chain management (36%), risk management (36%), and inventory management (34%).

In our day-to-day work, we’re seeing many manufacturers rethink their supply chains and operating models to better accommodate the increased volatility brought about by the pandemic and to support the secular trend of consumers asking for increasingly individualized products. We’ll share more on deglobalization in the third installment of our manufacturing insights series.

AI use differs by geography, but not for the reasons you may think

The extent to which AI is already being used today varies considerably between geographies, according to our research.
While 80% and 79% of manufacturers in Italy and Germany respectively report using AI in day-to-day operations, that percentage plummets in the United States (64%), Japan (50%) and Korea (39%).

It’s tempting to attribute this disparity to an “AI talent gap.” But although it is the most commonly cited barrier, just under a quarter (23%) of manufacturers surveyed believe they don’t have the talent to properly leverage AI. Cost, too, does not appear to be a roadblock (cited by 21% of those surveyed). Rather, from our observations, the missing link appears to be having the right technology platform and tools to manage a production-grade AI pipeline. This is the focus of our efforts and of others in the space, as we believe the cloud can truly help the industry make a step change.

Looking ahead: The Golden Age of AI for manufacturing

The key to widespread adoption of AI lies in its ease of deployment and use. As AI becomes more pervasive in solving real-world problems for manufacturers, we see the industry moving away from “pilot purgatory” to the “golden age of AI.” The manufacturing industry is no stranger to innovation, from the days of mass production, to lean manufacturing, six sigma and, more recently, enterprise resource planning. AI promises to bring even more innovation to the forefront. To learn more about these findings and more, download our infographic here and our full report here.

Research methodology

The survey was conducted online by The Harris Poll on behalf of Google Cloud, from October 15 – November 4, 2020, among 1,154 senior manufacturing executives in France (n=150), Germany (n=200), Italy (n=154), Japan (n=150), South Korea (n=150), the UK (n=150), and the U.S. (n=200) who are employed full-time at a company with more than 500 employees, and who work in the manufacturing industry with a title of director level or higher.
The data in each country were weighted by number of employees to bring them into line with actual company size proportions in the population. A global post-weight was applied to ensure equal weight of each country in the global total.
Source: Google Cloud Platform

NCR and Google Cloud are helping grocers rapidly reinvent the retail experience

In recent years, the grocery industry has had to shift to facilitate a wider variety of checkout journeys for customers. This has meant ensuring a richer transaction mix, including mobile shopping, online shopping, in-store checkout, cashierless checkout, or any combination thereof, like buy online, pickup in store (BOPIS). What’s more, in the past year and a half alone, grocers have had to offer consumers new ways to shop for essentials. This has included needing to rapidly integrate or build on-demand delivery apps, offer curbside pickup with near-instant fulfillment, and support touchless and cashless checkout experiences. Searches on Google Maps for retailers in the US with curbside pickup options have increased by 9,000% since March 2020, and we believe these trends from 2020 will continue to define the future of grocery shopping.

The future of grocery will require agility and openness

First, the need to rapidly adapt to changing consumer habits will be the new normal. Grocers will increasingly look to digitally transform legacy retail systems and modernize point of sale (POS) platforms to deliver and scale omnichannel experiences as quickly as possible. This necessitates a more agile and open architectural approach to technology: one built on microservices that leverages APIs so that new applications and experiences can be built, integrated, and delivered faster.

Automation and data-driven retailing will be table stakes

In order for retailers to blend what they’re offering in the store with digital experiences more efficiently, they will also need to automate more. For example, with automation and business intelligence, grocers can take labor that might have been tied up with tender operations and checkout and redistribute those resources to restocking shelves, curbside pickup, or improving customer experiences.
Automation and access to real-time in-store inventory and supply chain data can also help grocers avoid the supply chain challenges seen in the early days of COVID-19. Grocers will need to find ways to leverage automation to ingest, organize, and analyze data from physical store networks, digital channels, and distribution centers to better forecast demand and manage future fluctuations.

How NCR and Google Cloud are helping grocers adapt to disruption with operational agility

Helping grocers improve operational agility to address changing consumer shopping habits and to thrive during times of disruption is something that NCR and Google Cloud have teamed up to do. NCR has over 135 years of experience in retail, having invented the cash register, and continues to help grocers innovate. NCR Emerald builds upon the company’s leadership in POS software and turns it into a unified platform that helps grocers operate the entire store from front to back. The solution supports cashier-led checkout, self-checkout, integrated payments, and merchandising, and gives regional managers and corporate employees access to the analytics and tools needed to optimize loyalty programs and promotions.

NCR has invested in a comprehensive, agile, and API-led retail architecture that lets grocers continually innovate and design new experiences as customers and the industry evolve. By running Emerald on Google Cloud, NCR can offer the solution on a subscription basis, helping grocers lower upfront capital expenditures and ensuring scalability. What’s more, NCR can tap into Google Cloud’s strength in data, analytics, and openness to deliver three key imperatives. Let’s take a look at each of these below.

Run the way grocers need to while leveraging Google Cloud as a single source of logic

Traditionally, the POS system lived in the store. If disaster strikes, people still need access to food and essentials, so the grocery store still needs to operate. It hardly gets more mission-critical than that.
NCR Emerald is built on microservices, leveraging Kubernetes for front-of-house compute, and VMs (see graphic 1 below). This makes it easy to support lightweight clients accessible by store employees via any range of mobile devices, computer terminals, self-service kiosks, and peripheral devices like receipt printers, as well as legacy applications.

What’s unique is that because Emerald runs on Google Cloud, it supports all those in-store and digital touchpoints mentioned above, but also allows grocers to run lean. Emerald leverages Google Cloud as a single source of truth and drives much of what it does from that central logic. Every sales transaction coming from every channel, including e-commerce, can be logged via NCR’s Hosted Service and centralized in BigQuery and Bigtable as a transaction data master. This enables the grocer to handle any transactional use case consistently, whether that’s supporting customers who want to purchase in one store and return in another, offering digital receipts, or letting customers exchange online purchases in store. Emerald on Google Cloud can help retailers extend capabilities through the power of the cloud without needing to live exclusively in the cloud. In other words, the solution allows grocers to run the way they need to.

Enable data-driven and real-time decision making for grocers

Store managers, regional managers, category managers, and others all require different cuts of the data to do their jobs effectively. However, data silos persist, and how data is formatted and arranged can still remain fairly static. As a result, allowing users with different roles to view and analyze that data quickly and in different ways continues to be a challenge. As mentioned above, Emerald leverages Google Cloud data management solutions as the central repository for transactional, behavioral, and merchandising data. Every transaction from every store and every channel can be stored via NCR Hosted Service on BigQuery and Bigtable.
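To illustrate the kind of rollup a centralized transaction data master enables (for example, comparing the productivity of groups of checkout lanes), here is a minimal, self-contained sketch. The records and lane numbers are hypothetical, not NCR’s actual schema; in Emerald this data would come from BigQuery or Bigtable rather than a hard-coded list.

```python
from collections import defaultdict

# Hypothetical transaction records for illustration only.
transactions = [
    {"lane": 1, "total": 42.10}, {"lane": 2, "total": 18.75},
    {"lane": 3, "total": 63.00}, {"lane": 7, "total": 12.40},
    {"lane": 8, "total": 29.95}, {"lane": 10, "total": 8.50},
]

def revenue_by_lane_group(txns, groups):
    """Sum transaction totals per named group of lanes."""
    out = defaultdict(float)
    for t in txns:
        for name, lanes in groups.items():
            if t["lane"] in lanes:
                out[name] += t["total"]
    return dict(out)

groups = {"lanes 1-3": range(1, 4), "lanes 7-10": range(7, 11)}
print(revenue_by_lane_group(transactions, groups))
```

With all channels landing in one store, the same aggregation works identically for e-commerce, curbside, and in-store transactions.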
NCR Analytics then harnesses the advanced analytical and data visualization capabilities of Looker to help grocers get a consolidated view of their business across all channels, and lets employees slice and dice the data the way they need to. NCR Analytics also leverages the power of Google Cloud AI and machine learning to add another level of intelligence to the retailer’s data. For example, store managers can visualize how well they’re using their real estate and see how productive lanes 1-3 are compared with 7-10, or compare self-service versus manned lanes. By mapping to the retailer’s own catalog, they can also break down category-level performance and trends.

NCR Analytics takes advantage of Google Cloud’s data pipeline to reduce processing time, with scaling and resource management provided out of the box. By letting the cloud store and process the data, NCR gives retailers the ability to analyze their data in near real time across all platforms: a real game changer in the grocery business.

Open APIs let grocers continually enrich the retail experience

Finally, Emerald is built on an API-first architecture managed through Apigee. It uses the power of Apigee as an open API platform to expose how Emerald can work with other NCR applications, like loyalty and promotions, and third-party applications, like mobile ordering and order delivery, to enrich the grocery experience for employees and customers. Every API that Emerald uses is available on Apigee, allowing NCR to share code samples and giving developers the ability to run scripts. This approach can allow retailers to innovate in a fraction of the time and cost, speeding up third-party integrations both up front and as businesses grow. Take, for example, Northgate Market, a chain of 40 stores in California, which was able to transform its digital operations and enable experiences that set it apart from competitors, quickly and simply, with Emerald.
It took less than six months to go from contract to live deployment in the first store. Since then, Northgate Market has been able to extend its intelligence by leveraging the power of Looker and NCR Analytics.

Learn more about how NCR has been able to leverage an open, cloud-enabled architecture to help customers innovate across the retail, hospitality, and banking industries in the webinar “Role of APIs in Digital Transformation”. You can also learn more about how Northgate uses e-commerce to transform customer experience and gain consumer insights.
Source: Google Cloud Platform

Multi-Project Cloud Monitoring made easier

Customers need scale and flexibility from their cloud, and this extends into supporting services such as monitoring and logging. Google Cloud’s Monitoring and Logging observability services are built on the same platforms used by all of Google, which handle over 16 million metrics queries per second, 2.5 exabytes of logs per month, and over 14 quadrillion metric points on disk, as of 2020. However, you let us know through consistent feedback that the previous construct of Workspaces for Cloud Monitoring did not provide the flexibility needed for your larger-scale projects.

Cloud Operations’ New Approach to Multi-Project Monitoring

We’re happy to announce a new model for multi-project monitoring, which replaces the concept of Workspaces. This overhaul is geared toward maximizing the flexibility you have to manage your monitoring environments by introducing Metrics Scopes. Starting today, you can associate your Google Cloud projects with multiple Metrics Scopes! Like Workspaces, Metrics Scopes will still be used to store all of the configuration content for dashboards, alerting policies, uptime checks, notification channels, and group definitions. However, there is no limit to the number of Metrics Scopes with which you can associate a project. Prior to this change, a project could only be scoped within a single Workspace. Now there are virtually unlimited possibilities for how you can set up multi-project monitoring. This unlocks a large variety of options, from more granular permissions to mission-focused configurations. At its simplest: operators/SREs can now create org-wide Metrics Scopes with monitoring configurations focused on infrastructure health.
And developers can leverage Metrics Scopes built on a subset of their organization’s projects, allowing them to focus on their application’s performance.

How it works

When you have a collection of projects, Metrics Scopes enable you to view each project’s metrics in isolation as well as in combination with metrics stored by other projects. The Metrics Scope is hosted by a scoping project. This scoping project is the Cloud project that is selected in the Cloud Console project picker.

Example

In this example, Project-SRE is the name of a scoping project used to monitor your fleet. You added two developer teams’ projects, Project-Dev-1 and Project-Dev-2, to Project-SRE’s Metrics Scope. If you select Project-SRE with the Cloud Console project picker and then go to the Monitoring page, you view the metrics for all three projects: metrics from all the projects are visible via the scoping project Project-SRE, a project created specifically to monitor the fleet, whose Metrics Scope contains three projects.

If you select Project-Dev-1 with the Cloud Console project picker and then go to the Monitoring page, you view the Metrics Scope for Project-Dev-1 and can only see the metrics for that project: only metrics from the developer’s project are visible via the scoping project Project-Dev-1, whose Metrics Scope contains one project.

What else is new?

Metrics Scopes can now monitor up to 375 projects (up from 100). New projects automatically start working in Cloud Monitoring without the previous 60-second Workspace creation process. And if you want to monitor more than one project, simply add it to your Metrics Scope.

Navigation

As mentioned earlier, the Project Picker in the Cloud Console can be used to navigate between Metrics Scopes in Cloud Monitoring. This is now consistent with many other services across Google Cloud.
Specifically, you can see how the project picker stays consistent when navigating from Cloud Monitoring to Cloud Logging. Additionally, to make navigation between Metrics Scopes easy, we’ve added a new Metrics Scopes tab and panel in the Cloud Console UI.

Coming Soon

The Metrics Scope API is coming within the next quarter! This API will enable you to programmatically manage your monitoring configurations and Metrics Scopes.

Current Workspaces users

If you are already using Workspaces in Cloud Monitoring, you may have noticed that they were converted to Metrics Scopes weeks ago. No additional action is required, and you can start taking advantage of the additional features of Metrics Scopes today.

Get Started

Companies that are digitally native or in the process of digital transformation have placed an increased operational role on developers, and this often creates overlapping sets of responsibilities with operations and SRE teams. Now multiple developer teams can focus on optimizing the performance of their applications, while operators can take a fleet-wide view when maintaining and improving the performance of all of the infrastructure under their purview.

For information on configuring a Metrics Scope to include metrics for multiple projects, see Viewing metrics for multiple projects.
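The visibility rules from the Project-SRE example can be sketched conceptually as plain data. This is an illustration of the model only, not the forthcoming Metrics Scope API; the project names mirror the example above.

```python
# Each scoping project maps to the set of projects in its Metrics Scope.
metrics_scopes = {
    "Project-SRE":   {"Project-SRE", "Project-Dev-1", "Project-Dev-2"},
    "Project-Dev-1": {"Project-Dev-1"},
    "Project-Dev-2": {"Project-Dev-2"},
}

def visible_projects(scoping_project: str) -> set:
    """Projects whose metrics are visible when this scoping project is selected."""
    return metrics_scopes[scoping_project]

print(sorted(visible_projects("Project-SRE")))    # fleet-wide view: 3 projects
print(sorted(visible_projects("Project-Dev-1")))  # developer view: 1 project
```

Note that a single project (for example, Project-Dev-1) appears in more than one Metrics Scope, which is exactly the flexibility that the move away from Workspaces enables.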
Source: Google Cloud Platform

Build a platform with KRM: Part 1 – What’s in a platform?

This is the first post in a multi-part series on building developer platforms with the Kubernetes Resource Model (KRM).

In today’s digital world, it’s more important than ever for organizations to quickly develop and land features, scale up, recover fast during outages, and do all this in a secure, compliant way. If you’re a developer, system admin, or security admin, you know that it takes a lot to make all that happen, including a culture of collaboration and trust between engineering and ops teams. But building culture isn’t just about communication and shared values; it’s also about tools. When application developers have the tools and agency to code, with enough abstraction to focus on building features, they can build fast without getting bogged down in infrastructure. When security admins have streamlined processes for creating and auditing policies, engineering teams can keep building without waiting for security reviews. And when service operators have powerful, cross-environment automation at their disposal, they can support a growing business with new engineering teams, without having to add more IT staff. Said another way: to deliver high-quality code fast and safely, you need a good developer platform.

What is a platform? It’s the layers of technology that make software delivery possible, from Git repositories and test servers, to firewall rules and CI/CD pipelines, to specialized tools for analytics and monitoring, to the production infrastructure that runs the software itself. An organization’s platform needs depend on a variety of factors, such as industry vertical, size, and security requirements. Some organizations can get by with a fully-managed Platform as a Service (PaaS) like Google App Engine, and others prefer to build their platform in-house.
At Google Cloud, we serve lots of customers who fall somewhere in the middle: they want more customization (and less lock-in) than what’s provided by an all-in-one PaaS, but they have neither the time nor the resources to build their own platform from scratch. These customers may come to Google Cloud with established tech preferences and goals. For example, they may want to adopt Serverless but not Service Mesh, or vice versa. An organization in this category might turn to a provider like Google Cloud to use a combination of hosted infrastructure and services, as shown in the diagram below.

But a platform isn’t just a combination of products. It’s the APIs, UIs, and command-line tools you use to interact with those products, the integrations and glue between them, and the configuration that allows you to create environments in a repeatable way. If you’ve ever tried to interact with lots of resources at once, or manage them on behalf of engineering teams, you know that there’s a lot to keep track of.

So what else goes into a platform? For starters, a platform should be human-friendly, with abstractions depending on the user. In the diagram above, for example, the app developer focuses on writing and committing source code. Any lower-level infrastructure access can be limited to what they care about: for instance, spinning up a development environment. A platform should also be scalable: additional resources should be able to be “stamped out” in an automated, repeatable way. A platform should be extensible, allowing an org to add new products to that diagram as its business and technology needs evolve. Finally, a platform needs to be secure and compliant with industry- and location-specific regulations.

So how do you get from a collection of infrastructure to a well-abstracted, scalable, extensible, secure platform?
You’ll see that one product icon in that diagram is Google Kubernetes Engine (GKE), a container orchestration tool based on the open-source Kubernetes project. While Kubernetes is first and foremost a “compute” tool, that’s not all it can do. Kubernetes is unique because of its declarative design, allowing developers to declare their intent and let the Kubernetes control plane take action to “make it so.” The Kubernetes Resource Model (KRM) is the declarative format you use to talk to the Kubernetes API. Often, KRM is expressed as YAML, like the file shown below.

If you’ve ever run “kubectl apply” on a Deployment resource like the one above, you know that Kubernetes takes care of deploying the containers inside Pods, scheduling them onto Nodes in your cluster. And you know that if you try to manually delete the Pods, the Kubernetes control plane will bring them back up; it still knows about your intent, that you want three copies of your “helloworld” container. The job of Kubernetes is to reconcile your intent with the running state of its resources, not just once, but continuously.

So how does this relate to platforms, and to the other products in that diagram? Because deploying and scaling containers is only the beginning of what the Kubernetes control plane can do. While Kubernetes has a core set of APIs, it is also extensible, allowing developers and providers to build Kubernetes controllers for their own resources, even resources that live outside of the cluster. In fact, nearly every Google Cloud product in the diagram above—from Cloud SQL, to IAM, to Firewall Rules—can be managed with Kubernetes-style YAML. This allows organizations to simplify the management of those different platform pieces, using one configuration language and one reconciliation engine.
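The Deployment manifest the text refers to appears to have been an image in the original post. A minimal sketch consistent with the text (three replicas of a “helloworld” container) might look like the following; the container image path is a placeholder assumption:

```yaml
# Minimal illustrative Deployment; the image path is a placeholder.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 3          # the "three copies" the control plane will reconcile
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: helloworld
        image: gcr.io/example/helloworld:latest
```

Applying this with `kubectl apply -f` records the intent (three replicas) that the control plane then continuously reconciles, as described above.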
And because KRM is based on OpenAPI, developers can build abstractions on top of KRM, including tools and UIs. Further, because KRM is typically expressed in YAML files, users can store their KRM in Git and sync it down to multiple clusters at once, allowing for easy scaling, as well as repeatability, reliability, and increased control. With KRM tools, you can make sure that your security policies are always present on your clusters, even if they get manually deleted.

In short, Kubernetes is not just the “compute” block in a platform diagram; it can also be the powerful declarative control plane that manages large swaths of your platform. Ultimately, KRM can get you several big steps closer to a developer platform that helps you deliver software fast, and securely.

The rest of this series will use concrete examples, with accompanying demos, to show you how to build a platform with the Kubernetes Resource Model. Head over to the GitHub repository to follow Part 1 – Setup, which will spin up a sample GKE environment in your Google Cloud project. And stay tuned for Part 2, where we’ll dive into how the Kubernetes Resource Model works.
Source: Google Cloud Platform

Third Availability Zone in the AWS China (Beijing) Region

Today, a third Availability Zone (AZ) was added to the AWS China (Beijing) Region* to support the high demand of our growing customer base. An Availability Zone (AZ) consists of one or more discrete data centers with redundant power, networking, and connectivity within an AWS Region. These Availability Zones (AZs) let you run production applications and databases that are more available, fault-tolerant, and scalable than would be possible from a single data center.
Source: aws.amazon.com

Introducing On-Demand OpenStack Private Clouds and Initial Use Cases

InMotion Hosting has recently brought to market an automated deployment of OpenStack and Ceph that we sell as on-demand Private Cloud and as part of our Infrastructure as a Service. We believe that making OpenStack more accessible is critical to the health of the OpenStack community as it will allow smaller teams a low-risk and… Read more »
Source: openstack.org