Easy access to stream analytics with SQL, real-time AI, and more

During times of challenge and uncertainty, businesses across the world must think creatively and do more with less in order to maintain reliable and effective systems for customers in need. In terms of data analytics, it’s important to find ways for bootstrapped engineering and ops teams working in unique circumstances to maintain necessary levels of productivity. Balancing the development of modern, high-value streaming pipelines with maintaining and optimizing cost-saving batch workflows is an important goal for a lot of teams. At Google Cloud, we’re launching new capabilities to help developers and ops teams easily access stream analytics. Highlights across these launches include:

- Streaming pipelines developed directly within the BigQuery web UI, with general availability of Dataflow SQL
- Dataflow integrations with AI Platform that allow for simple development of advanced analytics use cases
- Enhanced monitoring capabilities with observability dashboards

Built on the autoscaling infrastructure of Pub/Sub, Dataflow, and BigQuery, Google Cloud’s streaming platform provisions the resources that engineering and operations teams need to ingest, process, and analyze fluctuating volumes of real-time data to get real-time business insights. We are honored that The Forrester Wave™: Streaming Analytics, Q3 2019 report named Google Cloud a Leader in the space. These launches build on and strengthen the capabilities that drove that recognition.

What’s new in stream analytics

The development process for streaming and batch data pipelines is now even easier with these key launches across both Dataflow and Pub/Sub. You can get from idea to pipeline, and from management to iteration, to fulfill customer needs efficiently.

General availability of Dataflow SQL

Dataflow SQL lets data analysts and data engineers use their SQL skills to develop streaming Dataflow pipelines right from the BigQuery web UI.
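Conceptually, a streaming SQL pipeline groups an unbounded stream of events into time-based windows and aggregates within each window. The following is a toy pure-Python sketch of a tumbling-window count; it illustrates the windowing concept only and is not Dataflow SQL syntax or the Apache Beam API.

```python
# Toy sketch of tumbling-window aggregation, the time-based windowing
# concept that streaming SQL pipelines rely on. Events carry an epoch
# timestamp; we bucket them into fixed 60-second windows and count
# events per window. Illustrative only, not Dataflow or Beam code.

from collections import defaultdict

WINDOW_SECONDS = 60

def tumbling_window_counts(events):
    """Count events per fixed-size window, keyed by window start time."""
    counts = defaultdict(int)
    for event in events:
        # Each event falls into exactly one non-overlapping window.
        window_start = (event["ts"] // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[window_start] += 1
    return dict(counts)

if __name__ == "__main__":
    events = [{"ts": 5}, {"ts": 42}, {"ts": 61}, {"ts": 119}, {"ts": 130}]
    print(tumbling_window_counts(events))  # {0: 2, 60: 2, 120: 1}
```

In a real Dataflow SQL pipeline, the equivalent grouping is expressed declaratively in the query and the service handles late or out-of-order data for you.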
Your Dataflow SQL pipelines have full access to autoscaling, time-based windowing, a streaming engine, and parallel data processing. You can join streaming data from Pub/Sub with files in Cloud Storage or tables in BigQuery, write results into BigQuery or Pub/Sub, and build real-time dashboards using Google Sheets or other BI tools. There’s also a recently added command line interface to script your production jobs with full support of query parameters, and you can rely on the Data Catalog integration and a built-in schema editor for schema management.

Iterative pipeline development in Jupyter notebooks

With notebooks, developers can now iteratively build pipelines from the ground up with AI Platform Notebooks and deploy with the Dataflow runner. Author Apache Beam pipelines step by step by inspecting pipeline graphs in a read-eval-print-loop (REPL) workflow. Available through Google’s AI Platform, Notebooks allows you to write pipelines in an intuitive environment with the latest data science and machine learning frameworks so you can develop better customer experiences easily.

Share pipelines and scale with flex templates

Dataflow templates allow you to easily share your pipelines with team members and across your organization, or take advantage of many Google-provided templates to implement simple but useful data processing tasks. With flex templates, you can create a template out of any Dataflow pipeline.

General availability of Pub/Sub dead letter topics

Operating reliable streaming pipelines and event-driven systems has gotten simpler with the general availability of dead letter topics for Pub/Sub. A common problem in these systems is “dead letters,” or messages that cannot be processed by the subscriber application. A dead letter topic allows such messages to be put aside for offline examination and debugging so the rest of the messages can be processed without delays.
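The dead-letter pattern itself is simple: after a maximum number of failed delivery attempts, a message is diverted to a separate topic instead of being retried forever. A minimal sketch of that logic follows; the message shape, retry limit, and function names are illustrative, not the Pub/Sub API.

```python
# Illustrative sketch of the dead-letter pattern: messages that keep
# failing are set aside in a dead-letter queue (DLQ) so healthy
# traffic keeps flowing. Not the actual Pub/Sub client library.

MAX_DELIVERY_ATTEMPTS = 5  # hypothetical retry limit

def process(message):
    """Subscriber callback; raises on messages it cannot handle."""
    if message.get("payload") is None:
        raise ValueError("missing payload")
    return message["payload"].upper()

def deliver(messages):
    """Process each message, retrying up to the limit, then diverting."""
    processed, dead_letters = [], []
    for message in messages:
        for attempt in range(1, MAX_DELIVERY_ATTEMPTS + 1):
            try:
                processed.append(process(message))
                break
            except ValueError:
                if attempt == MAX_DELIVERY_ATTEMPTS:
                    # Retries exhausted: park the message for offline
                    # examination instead of blocking the stream.
                    dead_letters.append(message)
    return processed, dead_letters

if __name__ == "__main__":
    msgs = [{"payload": "order-1"}, {"payload": None}, {"payload": "order-2"}]
    ok, dlq = deliver(msgs)
    print(ok)        # ['ORDER-1', 'ORDER-2']
    print(len(dlq))  # 1
```

With Pub/Sub dead letter topics, this retry-and-divert behavior is handled by the service itself, configured on the subscription rather than written in application code.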
Optimize stream data processing with change data capture (CDC)

One way to optimize stream data processing is to focus on working only with data that has changed instead of all available data. This is where change data capture (CDC) comes in handy. The Dataflow team has developed a sample solution that lets you ingest a stream of changed data coming from any kind of MySQL database on versions 5.6 and above (self-managed, on-prem, etc.) and sync it to a dataset in BigQuery using Dataflow.

Integration with Cloud AI Platform

You can now take advantage of an easy integration with AI Platform APIs and access to libraries for implementing advanced analytics use cases. AI Platform and Dataflow capabilities include video clip classification, image classification, natural text analysis, data loss prevention, and a number of other streaming prediction use cases.

Ease and speed shouldn’t come just to those building and launching data pipelines, but to those managing and maintaining them as well. We’ve also enhanced the monitoring experience for Dataflow, aimed at further empowering operations teams.

Reduce operations complexity with observability dashboards

Observability dashboards and Dataflow inline monitoring let you directly access job metrics to help with troubleshooting batch and streaming pipelines. You can access monitoring charts with both step- and worker-level visibility, and set alerts for conditions such as stale data and high system latency.

Getting started with stream analytics is now easier than ever. The first step to begin testing and experimenting is to move some data onto the platform. Take a look at the Pub/Sub Quickstart docs to get moving with real-time ingestion and messaging with Google Cloud.
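At its core, a CDC sync applies a stream of row-level change events (inserts, updates, deletes) to a replica instead of re-copying the whole source table. The sketch below uses a Python dict to stand in for the destination dataset; the event format is invented for illustration and is not the sample solution's actual wire format.

```python
# Toy sketch of the change-data-capture (CDC) idea: replay a stream of
# row-level change events against a replica keyed by primary key.
# The event shape here is hypothetical, not the Dataflow sample's format.

def apply_change_event(table, event):
    """Apply one CDC event (insert/update/delete) to the replica."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        table[key] = event["row"]   # upsert the changed row
    elif op == "delete":
        table.pop(key, None)        # drop the deleted row
    return table

def sync(table, change_stream):
    """Bring the replica up to date by replaying events in order."""
    for event in change_stream:
        apply_change_event(table, event)
    return table

if __name__ == "__main__":
    stream = [
        {"op": "insert", "key": 1, "row": {"name": "widget", "qty": 3}},
        {"op": "update", "key": 1, "row": {"name": "widget", "qty": 5}},
        {"op": "insert", "key": 2, "row": {"name": "gadget", "qty": 1}},
        {"op": "delete", "key": 2},
    ]
    print(sync({}, stream))  # {1: {'name': 'widget', 'qty': 5}}
```

The payoff is proportional work: the replica is updated with a handful of events rather than a full re-scan of the source table.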
Source: Google Cloud Platform

How Cloud AI is helping during COVID-19

For years, I’ve been challenging people who advocate for the potential of artificial intelligence: Before you turn to AI as a solution, find a specific problem that needs solving. Now that we’re faced with a global pandemic, there is no shortage of immediate, complex problems that need to be solved.

People are coming together to take on the challenges we’re facing due to the novel coronavirus, and AI is proving to be a valuable tool. From sifting through huge research datasets to find potential treatments, to more accurately forecasting the spread of the disease, to powering virtual agents to answer questions about COVID-19, AI is helping all kinds of organizations. In this post, we’ll look at a few ways we’re trying to help out.

Finding answers with the Kaggle community

In March, The White House Office of Science and Technology Policy announced that it had 29,000 articles (a collection that has since grown to more than 59,000) that may contain answers to key questions about the virus. It turned to Kaggle, a Google Cloud subsidiary, to call upon its community of more than 4 million data scientists to use AI to help find these answers. Participants have already developed several text and data mining tools to search through this dataset, named the COVID-19 Open Research Dataset (CORD-19), to help answer critical questions like “What do we know about COVID-19 risk factors?” and “What do we know about the virus’ genetics, origin, and evolution?”

That same week, Kaggle doubled down on its own efforts and challenged its community of data scientists with two forecasting competitions: one focused on forecasting the spread of COVID-19 around the world, the other on forecasting the spread of the disease within California.
Data scientists across the globe are collaborating to help the medical community defeat COVID-19. You can keep up to date with our challenges at kaggle.com/covid19 and see the progress our community is making toward the goals we’ve discussed here at kaggle.com/covid-19-contributions.

Rapid Response Virtual Agent program

In early April, we launched the Rapid Response Virtual Agents program to help organizations that have been inundated with customer questions about the pandemic. The program helps businesses quickly build and implement a customized Contact Center AI virtual agent to respond to customer questions via chat or voice, giving customers 24/7 support.

“The pandemic sparked a number of inquiries from our customers, causing a rush of calls and impossibly long wait times. With the Rapid Response Virtual Agent program, we were able to quickly set up our virtual agent to answer questions and direct traffic at the first inquiry level, saving us time and money while better serving our customers’ needs.” - Cameron Craig, Vice President Digital Product, Design & Experience, Albertsons Companies

PPP Lending AI Solution

Last week, Google developed the PPP Lending AI Solution to help integrate Google’s AI-based document ingestion tools into lenders’ existing underwriting components and lending systems to make them more efficient. The PPP Lending AI Solution has three components, each of which can be used individually or in combination with the others:

- The Loan Processing Portal is a web-based application that lets lending agents and/or loan applicants create, submit, and view the status of their PPP loan applications.
- The Document AI PPP Parser API enables lenders to use AI to extract structured information from PPP loan documents submitted by applicants. This component is available at no cost through June 30, 2020.
- Loan Analytics enables lenders to quickly onboard historical loan data, assist with the de-identification and anonymization of sensitive information, store information securely, and perform data analytics on this historical loan data.

We’ve always known that one of AI’s great strengths is helping solve complex problems, and with the pandemic we’re faced with a particularly challenging one. We’ll continue to build and deploy our AI capabilities to help during this time, and to help customers solve their trickiest problems into the future.
Source: Google Cloud Platform

Windows Server containers on GKE now GA, with ecosystem support

As organizations look to modernize their Windows Server applications to achieve improved scalability and smoother operations, migrating them into Windows containers has become a leading solution. And orchestrating these containers with Kubernetes has become the industry norm, just as it has with Linux. We launched the preview of Windows Server container support in Google Kubernetes Engine (GKE) earlier this year, and today, it’s generally available for production use. Running your Windows apps in containers on Kubernetes can provide significant cost savings, as well as improved reliability, scalability, and security: things that are especially important in times of uncertainty.

Since we launched the preview, many customers have kicked the tires on our Windows Server containers. Thanks to their feedback, we’ve added features like support for private clusters and regional clusters, choice of Long-Term Servicing Channel (LTSC) and Semi-Annual Channel (SAC) versions, integration with Active Directory using group Managed Service Account (gMSA), and much more. This release also includes integration with the Google Cloud Console to simplify creating new GKE clusters or updating existing clusters with Windows Server node pools.

Improving the end-to-end experience with partner solutions

When you modernize your applications, you also want to incorporate them into an end-to-end DevOps management experience that works with your existing tooling and workflows. To that end, we’ve worked with several partners to make sure that your build, test, deploy, config, and monitoring apps work well with Windows containers.
Here are a few solutions from our technology ecosystem ISV partners that we’ve tested to work with Windows containers in GKE:

- Aqua: Aqua’s security platform can be deployed directly on a GKE cluster and allows users of Windows applications to scan images and ensure only trusted images are deployed to production, all while preventing container-related attacks in real time. More here.
- Chef: Chef’s application delivery solution, Habitat, can easily and efficiently package and deploy any Windows application, both modern and legacy, into GKE. More here.
- CircleCI: CircleCI’s orb supports deployment to Windows containers running on GKE, allowing you to deploy applications in minutes from your CI/CD pipeline. More here.
- CloudBees: Speed up your software delivery using CloudBees Core pipelines to test and create Windows-based apps managed on GKE. More here.
- Codefresh: Codefresh provides native support for connecting to GKE clusters, so you can create a deployment pipeline to serve Windows applications on the cluster. More here.
- Datadog: By deploying the Datadog Agent on your Windows node pool, you can monitor all your containerized Windows applications running on a GKE cluster. More here.
- GitLab: Execute a CI/CD pipeline with Windows runners on GitLab (both dotcom and self-hosted) to automatically deploy Windows apps on GKE. More here.
- JFrog: JFrog Artifactory serves as a Kubernetes registry that provides full traceability of all your orchestrated Windows apps. More here.
- New Relic: The New Relic Kubernetes solution for GKE lets you fully observe metrics, events, logs, and traces for the Windows workloads running in your Kubernetes clusters. More here.

We hope you will kick-start your modernization journey using Windows Server containers. You can find detailed documentation on our website. Our partners are eager to help you with any questions related to the published solutions.
You can also reach out to the GCP sales team. If you are new to GKE, the Google Kubernetes Engine page and the Coursera course on Architecting with GKE are good places to start. Please don’t hesitate to reach out to us at gke-windows@google.com if you have any feedback or need help unblocking your use case.
Source: Google Cloud Platform

Anthos in depth: Toward a service-based architecture

Two weeks ago, we announced many new advancements we are making to Anthos, including new capabilities that let you better run and manage loosely coupled microservices anywhere you need them. Today, we’re diving deeper into this world of services, and how we have been helping customers on their journey to this model.

At a high level, the main benefit of a service-based architecture is the speed with which it lets you roll out changes with minimal disruption. With their smaller, independent components, microservices-based architectures enable faster development and deployment, increased choice of technologies, and more autonomous teams, so you can quickly roll out new and upgraded products to your customers. But as your usage of microservices increases, you often face additional challenges, and you may need to adopt more modern deployment and management practices. With Anthos Service Mesh, you can:

- Better understand what is happening with your services
- Set policies to control those services
- Secure the communication between services

All of this is done without changes to your application code. Let’s take a deeper look at how Anthos Service Mesh works, and how you can use it to adopt a more efficient service-based architecture.

Better monitoring and SLOs

Many of you come to us for help implementing Site Reliability Engineering (SRE) principles in your organization. Anthos Service Mesh can help you do this, beginning with monitoring: teams can see which services are communicating, how much traffic is being sent, and response times and error rates. Simply having this initial baseline information is a major improvement for many customers’ operations. For example, the topology graph shows the connections between services, and focusing on the checkout service even shows the pods comprising the service.

Once you have monitoring in place, you can use Anthos Service Mesh to implement service level objectives (SLOs).
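As a rough illustration of what an SLO check boils down to, the sketch below compares measured availability over a window of requests against a target and reports how much of the error budget is left. This is a toy model with invented names, not how Anthos Service Mesh computes SLOs internally.

```python
# Toy SLO/error-budget check, assuming a 99% availability target over
# a rolling window of request outcomes. Illustrative only; not the
# Anthos Service Mesh implementation.

SLO_TARGET = 0.99  # 99% of requests in the window must succeed

def availability(outcomes):
    """Fraction of successful requests; outcomes is a list of bools."""
    return sum(outcomes) / len(outcomes) if outcomes else 1.0

def error_budget_remaining(outcomes):
    """Share of the allowed-failure budget still unspent (can go negative)."""
    allowed = (1 - SLO_TARGET) * len(outcomes)
    failures = len(outcomes) - sum(outcomes)
    if allowed == 0:
        return 1.0
    return 1.0 - failures / allowed

def should_alert(outcomes):
    """Page the team when measured availability drops below the SLO."""
    return availability(outcomes) < SLO_TARGET

if __name__ == "__main__":
    window = [True] * 970 + [False] * 30  # 97% availability over 1,000 requests
    print(should_alert(window))                      # True: below 99%
    print(round(error_budget_remaining(window), 2))  # -2.0: budget overspent
```

A negative remaining budget is exactly the signal the article describes: deployments should be frozen or slowed until reliability is back under control.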
Setting SLOs (for example, 99% availability over a one-week rolling window) and having alerts on those SLOs lets your staff be proactive and catch issues before your customers become aware of them. You can send alerts (e.g., email, pages, and UI warnings) to the team when your SLOs are not being met or you’ve exceeded your error budgets. This is an indicator that deployments should be frozen or slowed until stability and reliability are under control. A service’s dashboard shows the golden signals associated with that service, its health, and links to the infrastructure on which it’s running.

Through monitoring, SLOs, and alerts, your team will have much more information about, and control over, the health and well-being of your services, which in turn makes your products more reliable and performant for your customers. For example, co-location provider and Google Cloud partner Equinix uses Anthos and Anthos Service Mesh to give their customers visibility into their environments, so they can make better deployment decisions.

“At Equinix, giving our customers the best performance is our top priority. With Anthos and network insights from Equinix Cloud Exchange Fabric, we can build a service mesh that gives access to rich information about the performance of our customers’ applications,” said Yun Freund, SVP, Platform Architecture and Engineering. “This provides us with metrics we can use to recommend where customers should run their workloads for the best end-to-end user experience.”

Security with policies, encryption and authentication

For many of you, particularly those in regulated industries like financial services and healthcare, there can be no compromises when it comes to security. Anthos Service Mesh lets you reduce the risk of data breaches by setting policies that ensure that any and all communications to and from your workloads are encrypted, mutually authenticated, and authorized.
This also helps protect against insider threats. But implementing, maintaining, and updating strict policies using traditional rules and IP-based parameters can be difficult. It’s even harder to enforce those policies while your deployments are scaling up and down, especially if they’re based on technologies like containers and serverless that span hybrid and multi-cloud environments. Anthos Service Mesh lets you implement context-aware application security using parameters such as identity, the service in question, and the context of the incoming request. You can do all of this without depending on network primitives such as IP addresses. In this way, Anthos Service Mesh can help you adopt defense-in-depth and zero-trust security strategies, on your way to implementing best practices such as BeyondCorp and BeyondProd.

Anthos Service Mesh also provides Mesh CA, a fully managed certificate authority that issues certificates for your microservices, enabling a “zero trust” security posture based on mTLS. Mesh CA is now generally available for workloads running on Anthos GKE.

Traffic management

Finally, you can deploy Anthos Service Mesh to help you achieve safer, more controlled release processes, as well as gain more control over how traffic flows between your services. Anthos Service Mesh contains a number of traffic capabilities that let you fine-tune the traffic in your mesh. For example, you can use the built-in canary capabilities to route a small percentage of traffic to new versions before rolling them out to all your users. Or you can take advantage of various load-balancing capabilities or location-based routing to control traffic. Other policies, such as retries to enhance reliability or even fault injection to test resilience, can help you roll out new products while ensuring your customers have the best possible experience.
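The canary idea above can be sketched in a few lines: send a small, deterministic fraction of requests to the new version and the rest to the stable one. The hash-based bucketing keeps a given user consistently on one version. This is a toy model of weighted routing, not Istio or Anthos Service Mesh configuration.

```python
# Toy sketch of weighted canary routing. A stable hash of the request
# (or user) ID picks a bucket in [0, 100); requests in the first few
# buckets go to the canary. Illustrative only, not a mesh config.

import hashlib

CANARY_WEIGHT = 5  # percent of traffic routed to the new version

def route(request_id):
    """Return which service version should handle this request."""
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = digest[0] * 100 // 256  # deterministic bucket in [0, 100)
    return "v2-canary" if bucket < CANARY_WEIGHT else "v1-stable"

if __name__ == "__main__":
    routed = [route(f"user-{i}") for i in range(10_000)]
    share = routed.count("v2-canary") / len(routed)
    print(f"canary share: {share:.1%}")  # close to 5%
```

In a service mesh, the same split is expressed declaratively as traffic weights on the route, and the sidecar proxies enforce it without any application code.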
In the second half of this year, Anthos Service Mesh will also integrate with Traffic Director, a managed configuration and traffic control plane for your service mesh. Traffic Director powers the traffic management fundamentals of the service mesh (like service discovery, endpoint registration, health checking, and load balancing) and enables powerful DevOps use cases like blue/green deployments and circuit breaking, while still using declarative, open-source Istio APIs.

Managed by Google

While Anthos Service Mesh is based on the open-source Istio service mesh, it is offered as a managed service. You get all the benefits of a service mesh without having to monitor, manage, and upgrade the underlying software. Included as part of the managed offering, you get service mesh dashboards that bring all of the monitoring and SLO capabilities above, as well as telemetry, logging, and tracing, into a single tool. All these capabilities are generally available (GA) and fully supported. They give your application teams a set of powerful, out-of-the-box operations dashboards without having to depend on multiple open-source projects that you would in turn have to commit to deploying and maintaining. And because all these Anthos Service Mesh components, including Traffic Director, Mesh CA, and the Anthos Service Mesh telemetry dashboards, are managed services, you don’t need to worry about installing, upgrading, or maintaining these components; Google’s SREs are on the job.

What’s next for Anthos Service Mesh?

The next frontier for Anthos Service Mesh is to make it easier for you to join virtual machines to the mesh, not just containers. We are actively working on making it easy to add new and existing VMs to your mesh, so you can use all of the features listed above with your VM-based workloads.
Later this week, we are hosting a webinar where you can learn how the newest Anthos features will help you build resilient applications and follow SRE and security best practices no matter where your applications run. You can register here for the webinar on May 8, 2020.
Source: Google Cloud Platform

Providing transparency into government requests for enterprise data

At Google Cloud, we’re committed to being transparent about when governments request our enterprise customers’ data. Today, to continue our company-wide efforts to build trust through transparency, Google published its semi-annual Requests for user information transparency report. This version of the report represents an important step forward: for the first time, it breaks out the number of government requests we received for Google Cloud Platform and G Suite Enterprise Cloud customer data. Last October, we committed to publishing this information in early 2020, and future transparency reports will continue to include it.

Let’s take a look at some of the data and takeaways from the report before looking at how we’re working to improve your control over, and visibility into, your data.

Key Transparency Report takeaways for customers

Now that we’re breaking out information on the government requests for data we received, we have four initial observations. These observations are based on the total number of government requests for user information (81,785) across all of Google that we received from July 2019 to December 2019.

First, the number of requests targeting enterprises (282) represents a very small percentage (0.3%) of the overall number of requests we received. Second, for requests relating to G Suite Enterprise Cloud customers, we produced data in a very small number of cases (152); in each case, we reviewed the requests to ensure they were consistent with our policies and practices outlined below, and with applicable law. Third, we didn’t produce any Google Cloud Platform Enterprise Cloud customer data in response to government requests. Finally, with regard to public sector customers, we didn’t identify any requests that appeared to be from a national government seeking information about another national government.
If we were to receive such a request in the future, we would redirect the requesting government to the customer and object to the request if necessary. Moving forward, we trust that this enterprise-focused information will help address questions about how often governments are coming to Google to request access to enterprise customer data.

Advocacy in support of customer control

Breaking out Google Cloud Platform and G Suite Enterprise Cloud customer data in our transparency report is part of our larger commitment to advancing customers’ control of their data in the cloud. We also advocate extensively, and litigate when necessary, to protect the interests of our enterprise customers. We continue to advocate for five global principles for governments to follow when making requests for enterprise data stored in the cloud:

- Approach enterprises directly
- Promote transparency
- Protect customer rights
- Support strong security
- Streamline government rules for compelled production

On the litigation side, our legal challenge in the United States Court of Appeals for the Second Circuit to protect a customer’s right to know when its data is accessed has progressed. We recently filed our reply brief to counter the government’s arguments on secrecy and notice.

Improving technical controls in our cloud

We believe that customers should have the strongest levels of control over data stored in the cloud. To support that mission, we’ve developed industry-leading product capabilities that enhance your control over your data and provide expanded visibility into when and how your data is accessed.
Some of our recent product updates in these areas include:

- External Key Manager, which is now generally available, lets customers encrypt data with encryption keys that are stored and managed in a third-party key management system run outside Google.
- Key Access Justifications (in alpha for GCE/PD and BigQuery) provides customers with a justification every time their externally hosted keys are needed to decrypt data, and gives them the opportunity to approve or deny such requests.

These products provide unprecedented levels of control over data in the cloud, and we’ll continue to update them based on customer needs. We are committed to building trust through transparency, and to helping ensure our customers’ control over their data through legal and technical means. To learn more about our efforts, check out our whitepaper, “Government requests for customer data: controlling access to your data in Google Cloud.”
Source: Google Cloud Platform

How we’re supporting retailers across the globe during COVID-19

The shift to digital has been well under way for years in retail, but today, retailers have a new sense of urgency to digitally transform as the industry responds to COVID-19. The pandemic has dramatically impacted the retail industry at large, exposing gaps in omnichannel capabilities, business continuity plans, and supply chain responsiveness.

Today’s environment is uncharted territory for most retailers. Government-imposed lockdowns, social distancing guidelines, rapid changes in consumer behavior, and uneven demand for different product categories have introduced challenges to retailers around the globe. For retailers, crisis planning typically covers events in which a set of stores or warehouses is down due to natural disasters or power outages. However, closing down all stores or, conversely, supporting demand surging to 20 times normal is a new dynamic that many retailers hadn’t contemplated previously.

The pandemic has also had a polarizing effect on retailers. On one hand, grocery and mass merchandisers are experiencing unprecedented surges, while retailers in other categories, like fashion and beauty, are experiencing declines in many product categories. As we look to the future, we know that recovery will take time and will vary by sub-segment. To help retailers tackle these challenges, we’re sharing a number of industry-tailored solutions to support our customers and partners during this time.

G Suite collaboration tools to assist workforce enablement and optimization

As some retailers, particularly those in grocery and mass merchandise, experience a rapid rise in hiring to fulfill unprecedented demand, they’re realizing how critical it is to build and maintain collaboration with their employees. G Suite offers video conferencing, chat, email, and shared documents, allowing teams to efficiently work together remotely and in real time.
As remote work and video conferencing continue to be the norm for many retailers, supermarkets like Schnucks, a family-owned supermarket chain with 100 stores in Missouri, Illinois, Indiana, Iowa, and Wisconsin, are using Google Meet to help keep dispatch running smoothly and as a help desk for in-store clerks. And in the UK, DFS Furniture Company Ltd has been able to transition its entire workforce to working from home using Google Meet.

Rapid Response Virtual Agent program to quickly respond to customers

Our newly launched Rapid Response Virtual Agent solution allows retailers to stand up new chatbot-based services within two weeks to help respond to their customers more quickly and efficiently, especially as it relates to critical information around COVID-19. This new chatbot can help with store hours inquiries, inventory questions, pick-up options, and more, offloading an immense volume of calls from human agents so they can focus on more complex service needs. Retailers like Albertsons faced call volume to their stores that increased to five times the norm during COVID-19. To get customers faster responses, Albertsons enlisted the Rapid Response Virtual Agent to manage inbound call volumes and address customers’ more basic questions, such as hours of operation, pickup and delivery options, and order status.

Accelerated migration and PSO migration factory solutions to reduce operational overhead

Rapid changes in customer demand are causing significant capital and expense constraints. Accelerating the migration of IT systems to Google Cloud can help retailers quickly cut fixed costs, reduce their operational overhead, and set up the right infrastructure to map to their changing business needs, while ensuring business continuity during unexpected business disruptions. Lush, the UK-based beauty retailer, migrated its global ecommerce sites to Google Cloud in 2017 to help run its online channels smoothly, especially during peak seasons like Boxing Day.
Migrating to the cloud has allowed Lush to control costs and, in turn, develop innovative projects that will help drive its business forward, especially in light of COVID-19.

Capacity Management and Black Friday/Cyber Monday (BFCM) Assistance solutions to quickly rightsize cloud deployments

Buying behavior has changed drastically, with atypical demand for some retail sub-segments and extreme declines for others. This, paired with sudden shifts from in-store to online, has strained omnichannel capabilities. Through early capacity planning, reliability testing, and operational war rooms, we can help retailers quickly rightsize cloud deployments to reflect the changing needs of their businesses. We’ve also activated our special peak-season support protocols for retailers seeing ecommerce traffic surges.

Ecommerce modernization to assist the shift from offline to online

As customer expectations shift during this time, providing a top-tier digital experience has become increasingly important. Having a flexible and agile ecommerce platform is crucial to enable retail teams to quickly introduce new shopping experiences to keep up with ever-changing demands. Google Cloud can help untangle legacy ecommerce systems, introducing a containerized architecture that provides flexibility and agility for businesses. Retailers like Ulta Beauty, whose stores are temporarily closed in response to government-imposed lockdowns, are leaning into their online channels to stay connected with shoppers, providing them with a remote digital experience. While some shoppers would normally roam the aisles of a store to find exactly what they’re looking for, Ulta Beauty’s Virtual Beauty Advisor tool, built on Google Cloud, is proving particularly useful, providing consumers with data-driven product recommendations.
Google Cloud’s demand forecasting and modeling capabilities to help respond to significant consumer behavior changes

For retailers, shifts in consumer spending present both a near-term need for a responsive supply chain and a longer-term need to predict how changes in customer behavior will affect demand. With Google Cloud’s AI/ML capabilities, in partnership with o9 Solutions, retailers can develop custom models based on their own data and signals to accurately forecast future demand for products at any given location—reducing revenue lost to stockouts, excessive discounting and markdowns, inventory holding, and spoilage costs. Retailers can also use Google Cloud’s broad range of public datasets, including weather, traffic, and more, to forecast demand down to the store level.

Google Cloud/Looker solutions for a 360-degree view of the customer

With substantial business disruption due to COVID-19, it’s imperative for retailers to rely even more on data that is real-time and reliable. Google Cloud and Looker provide pre-built data models and analytics packages (called “Blocks”) that are tailored to retail needs. With these pre-built resources, Looker can help quickly deliver solutions that go beyond traditional business intelligence offerings such as reports and dashboards. By bringing multiple datasets from across an organization together, retailers can create data experiences that help optimize in-store operations, increase retail margins, and improve customer lifetime value. Looker is also fully integrated with Google Cloud for Marketing solutions, allowing retailers to make informed marketing decisions in real time. Through this integration, they can bring all of their marketing data from Google together for analysis to see how changes on Google Ads, YouTube, and Google Analytics affect one another.
Retailers across all sub-segments can discover real-time insights and immediately implement changes within ongoing marketing programs.

Now more than ever, we’re committed to bringing forward the technologies the retail industry needs to adapt to this new era. Our goal is to eliminate the stress tied to keeping the lights on in IT and instead allow retailers to focus on what matters most: their customers and employees. Read more about our work with the retail industry here.
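To make the demand-forecasting idea above concrete, here is a deliberately tiny sketch of single exponential smoothing in Python. The function name and numbers are illustrative only; the production solutions described here (built with o9 Solutions on Google Cloud) combine many more signals and far richer models.

```python
def smooth_forecast(demand, alpha=0.5):
    """Single exponential smoothing: forecast the next period from a
    demand history. A toy baseline, not a production forecasting model."""
    level = demand[0]
    for observed in demand[1:]:
        # Blend the newest observation with the running smoothed level.
        level = alpha * observed + (1 - alpha) * level
    return level

# Hypothetical weekly unit sales for one store/SKU (made-up numbers).
history = [120, 130, 150, 190]
# smooth_forecast(history) -> 163.75
```

Even a baseline like this makes the trade-off visible: a higher `alpha` reacts faster to the kind of sudden demand shifts described above, at the cost of chasing noise.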
Source: Google Cloud Platform

How Google Cloud helped scale out one person's AI service—and his life

Editor’s note: This is a post by Kaz Sato from Google, based on an interview with Sato (@sato_neet), an individual developer. It’s confusing that we have similar names, but we are not the same person.

AI Gahaku (AI master painter): built by Sato with Firebase, Cloud Run, and Google Colab. One million users are enjoying the tool every day.

When Sato (@sato_neet) quit college in Tokyo 10 years ago, he didn’t know he had Asperger syndrome. After spending some time unemployed, he tried a couple of different career paths, including attending nursing school and learning to become a baker. When he realized that Asperger’s could be the reason he wasn’t able to fit well in those environments, he tried something else entirely: artificial intelligence (AI).

Sato started learning AI two years ago. He had taken some basic programming classes in college, but wanted to learn Python and JavaScript to create something fun with emerging technology and share it with the community. He also conquered the basics of deep learning with TensorFlow and Colaboratory. “As I have been earning so little money these days, it was very helpful for me that TensorFlow and Colab are freely available,” Sato explains. “I could get a great learning environment at no cost.”

Developing AI Gahaku

In March 2020, Sato released AI Gahaku (“AI master painter”), which he developed alone. It generates classical-painting-style portraits from portrait photos that you upload to the site.

A classical-painting-style portrait generated with AI Gahaku.

The site uses a pix2pix-based ML model for the style transformation. Pix2pix is a kind of conditional generative adversarial network (cGAN) model designed to generate a realistic image from a specified image that serves as a condition. (Check out the pix2pix example with TensorFlow to try it for yourself.)

A pix2pix-generated image.
The left image works as the condition, and the model generates the image at right.

In the case of AI Gahaku, Sato trained a pix2pix-based model that takes the uploaded photo as the condition and generates a realistic classical-painting portrait. The site instantly created a buzz when he shared it on Twitter, first in Japan and then in the US and other countries. Now AI Gahaku is being enjoyed by one million users worldwide, every day.

The number of AI Gahaku users spiked from 0 to 1 million in 10 days.

Sato has also released another fun project called pixel-me, a tool for generating 8-bit-style portraits with the same pix2pix technology—the difference is that he used pixelated images to train this model.

An 8-bit-style portrait generated with pixel-me.

From 0 to 1,000,000 users in 10 days

When he was building the sites, Sato relied on the Google Cloud Platform Free Tier—specifically Firebase, Cloud Run, and Colab. This allowed him to develop both AI Gahaku and pixel-me while keeping costs low.

The systems architecture of AI Gahaku.

“In addition to those free tiers, the $300 free trial program helped me learn how to use Google Cloud tools,” Sato says. “And, since the UX design clearly says which resource is free and which is not, I was very comfortable using it.”

Although AI Gahaku was built by one person, it scales automatically thanks to Cloud Run’s serverless autoscaling. Because Sato packaged the pix2pix-based ML model as a container and deployed it to Cloud Run, he doesn’t have to manually start up or shut down server instances based on traffic load. Instead, instances start up in seconds as the number of requests grows—if there’s a sudden traffic spike, tens or hundreds of instances start almost instantly, while staying under a controlled budget.
This all means that Sato didn’t have to change anything in the system architecture as he watched the traffic load spike from zero to one million users per day within just 10 days of the release. Now? The Cloud Run backend of AI Gahaku is using the maximum of 200 containers. “I’m so surprised that Cloud Run and Firebase are naturally scalable as serverless environments,” Sato says. “The site is keeping a fast and steady response time for millions of users, without any design changes for handling the boom in global traffic.”

Scalable AI for everyone

Under the current load, the operational cost of AI Gahaku is around $20 USD per day, Sato says. But he doesn’t have any plans to monetize the site. “I’m just not interested in those things, like starting up a business and extending the site. I just want to keep creating something truly interesting to me,” Sato explains. “I like Google Cloud serverless services because the platform allows me to explore those fun ideas easily, without worrying much about the initial cost, scalability, and ongoing operation.”

Sato continues, “In the last week, I got so many responses and great feedback from all over the world. It has been the most valuable and meaningful time in my entire life. I really thank all the users, donors, and people who made this happen.”

We tend to think of Google Cloud’s scalable AI features as something businesses deploy to help them scale and become more efficient. But Sato’s story shows that sometimes this technology can also scale out your creativity, your reach, and your connections with others. Do you have an idea to explore? Check out the Google Cloud Platform Free Tier page to learn more and try out the technology for yourself.
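Part of what makes this setup simple is Cloud Run’s minimal contract with the container: listen for HTTP on the port given in the `PORT` environment variable, and the platform handles scaling. Here is a stdlib-only Python sketch of such a service; the handler is a placeholder, not Sato’s actual inference code.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # Placeholder endpoint: a real service would run the pix2pix model here
    # and return the generated portrait instead of plain text.
    def do_GET(self):
        body = b"portrait-placeholder"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server():
    # Cloud Run injects the listening port via the PORT env var (8080 by default).
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("", port), Handler)

# make_server().serve_forever() would run at container start-up; Cloud Run then
# adds or removes container instances as request volume changes.
```

Because scaling happens at the container level, the application code needs no pooling, threading, or capacity logic of its own.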
Source: Google Cloud Platform

Business continuity planning and resilience in financial services during COVID-19 and beyond

As COVID-19 continues to affect our world, financial services organizations are working hard to ensure their services are available to all who need them. In many countries, financial services are deemed “essential” and must remain available throughout COVID-19 for important reasons, including national economic security. Moreover, in the U.S. and many other countries, governments are leaning on financial services institutions to deliver additional citizen services, like the disbursement of stimulus funds.

At the same time, security is as important as ever, especially as we see increased targeting of organizations with COVID-related scams. Many more people are using mobile devices and remote access to perform critical functions, and it’s important that security teams maintain defenses and focus on protecting remote workforces.

As the global financial services industry responds to COVID-19, many organizations will rely more heavily on remote capabilities to meet rapidly changing government, consumer, and security demands. Access to secure, reliable, and flexible systems and applications will become increasingly essential for the foreseeable future. Below are a few ways Google Cloud is supporting our financial services industry customers to ensure business continuity and reliability.

Helping financial services organizations disburse loans and other funds faster to people who need them

The U.S. Small Business Administration (SBA)’s Paycheck Protection Program (PPP) aims to help the many businesses facing unprecedented challenges keep their workers employed during the COVID-19 pandemic. But lenders, servicers, and processors are struggling to handle the current volume of PPP loan requests. To help lenders accelerate and automate the processing of loan applications, Google Cloud developed the PPP Lending AI Solution, which integrates Google’s AI-based document ingestion tools with lenders’ existing underwriting components and lending systems.
The PPP Lending AI Solution has three components, each of which can be used individually or in combination:

- The Loan Processing Portal is a web-based application that lets lending agents and/or loan applicants create, submit, and view the status of their PPP loan applications.
- The Document AI PPP Parser API enables lenders to use AI to extract structured information from PPP loan documents submitted by applicants. This component is available at no cost through June 30, 2020.
- Loan Analytics enables lenders to quickly onboard historical loan data, assist with the de-identification and anonymization of sensitive information, store information securely, and perform analytics on this historical loan data.

Leveraging artificial intelligence, we’ve created an end-to-end solution that speeds up time-to-decision on loans and helps inform lenders’ liquidity analysis—from the initial application submission through the underwriting process and SBA validation. The solution is also equipped with Google’s security capabilities, enabling lenders to meet policy requirements and protect critical assets.

Providing immediate burst capacity for banks, trading organizations, and insurers

Providing remote work capabilities and workstations for employees is increasingly necessary for many financial institutions. Extending secure and reliable access to applications and systems without incurring significant capital investments has become a key priority. Our Burst Capacity solution provides additional compute and analytics capabilities that can handle some of the most compute-intensive workloads. It can help financial institutions quickly glean more from data, virtualize productivity tools, and scale up and out across hundreds of cores and terabytes of memory. It leverages core AI components that can hear, see, and understand various forms of structured and unstructured data without requiring data science expertise.
Just as importantly, it includes multiple layers of physical and logical protection, encrypts data at rest by default, and has a dedicated team of Site Reliability Engineers (SREs) providing continuous monitoring 24x7x365. Our goal is to ensure your infrastructure can handle significant traffic spikes and support the most demanding workloads securely, efficiently, and at scale. Here are some ways we have provided burst capacity for financial services firms:

- Banks: As banks see increased demand for mobile and online banking applications, our Burst Capacity solution can help them make timely and accurate predictions of credit and loan defaults and margin calls. Banks can also comply with liquidity stress testing and capital planning requirements, such as the Comprehensive Capital Analysis and Review (CCAR).
- Institutional and wholesale trading organizations: The Burst Capacity solution lets institutional and wholesale trading organizations calculate and simulate increased risks—such as market, value, counterparty, credit, liquidity, and redemption—in order to identify market opportunities and mitigate losses.
- Life and casualty insurers: The Burst Capacity solution can support increased demand for online support, video brokerage, and advisory services for policies and claims such as life insurance. It can also help insurers conduct more comprehensive actuarial modeling.

Burst Capacity solution provisioning takes approximately one week, and the solution is available to scale up when necessary. Its pay-for-used-compute-seconds structure adds a level of flexibility.

Helping financial services organizations modernize their infrastructure

As financial services organizations adjust to new realities, modernizing IT infrastructure will become more critical than ever.
Cloud-based infrastructures offer more flexible computational capacity, and offloading certain workloads from the mainframe to new cloud architectures can support flexible traffic and access patterns, suggesting a new way of thinking about network design. Data and AI tools will be integral to the new model, too, as they can improve insights, risk management, and cybersecurity. And finally, reliance on hosted data centers may diminish, suggesting more collaboration between firms and technology partners for business continuity.

One way we’re helping financial institutions modernize their infrastructure is by assisting with migrating mainframe workloads to the cloud as quickly as possible through mainframe app automation. While many mission-critical workloads run on mainframe architecture, moving to the cloud offers access to new technologies that foster faster innovation. Through Google Cloud’s acquisition of Cornerstone earlier this year, we’re now helping customers like Boa Vista by offering migration roadmap development, conversion flexibility, and automated data migration. Cornerstone can help solve immediate mainframe modernization needs by offering automated tools that migrate applications without requiring COBOL or PL/1 expertise. Firms will still need a more holistic mainframe modernization strategy in the post-COVID-19 world, however.

Another way we’re facilitating infrastructure modernization is through our managed, cloud-native platform Anthos. This application platform lets enterprises modernize how they develop, secure, and operate hybrid cloud environments. By providing an agnostic, Kubernetes-based environment, customers can build once and run anywhere, across clouds and on-premises. It’s already used by leading financial institutions including DenizBank and KeyBank. For “essential” industries such as financial services, having a reliable, resilient infrastructure has never been more important.
Helping financial services firms make valuable connections in real time

Online platforms are key to supporting remote workers. AI-based agents and video conferencing can be used to assist customers and deliver financial advice in real time. AI and robotic process automation (RPA) can also bring efficiency to tasks such as loan modifications, mortgage refinancing, ratings actions, and credit extensions, freeing valuable staff to focus on more complex tasks and ensuring timely customer support.

Working with Google Cloud and partner NubeliU, Banco Santander in Argentina developed a solution in less than 24 hours to expedite low-interest loans for companies suffering the economic effects of the COVID-19 crisis. Using Google Vision AI, they could automatically validate documents and forms in PDF format—helping to meet a surge in demand and deliver loans in record time to support small and medium-sized companies.

Meeting face-to-face is an important way financial services organizations serve their customers, now made more challenging as a result of COVID-19. To help support remote interactions, Google Meet enables effortless video conferencing with enterprise-grade security and reliability, built on Google’s global infrastructure. Firms can safely cultivate client relationships through virtual advisory services, such as financial planning, and engage in video brokerage for policies such as life insurance.

Finally, as financial services firms handle extraordinary spikes in customer inquiries over digital channels, we developed the Contact Center AI Rapid Response Virtual Agent program to help automate simple customer service interactions so call center agents can focus on more complex cases.
The program provides contact center customers with immediate self-service to address general questions and concerns about COVID-19, letting employees focus on providing higher-value, more personalized responses to the customers who need them.

Continuing to support financial services organizations in this uncertain time and beyond

We are committed to maintaining the health of the systems that power the financial services industry, and will do everything we can to support our customers’ business continuity planning and resilience. We’ll continue to look for ways to leverage the latest technologies to improve the current situation.
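Several of the solutions above, from the PPP Document AI parser to Santander’s Vision AI document validation, come down to turning unstructured documents into structured fields. As a rough illustration of that idea only (the real products use trained models rather than regexes, and these field names are invented), a stdlib Python sketch:

```python
import re

# Hypothetical fields a lender might want from the raw text of a loan
# application; a stand-in for the typed entities a Document AI parser returns.
FIELD_PATTERNS = {
    "business_name": re.compile(r"Business Name:\s*(.+)"),
    "payroll_usd": re.compile(r"Average Monthly Payroll:\s*\$([\d,]+)"),
}

def extract_fields(document_text):
    """Extract whichever known fields appear in the document text."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(document_text)
        if match:
            fields[name] = match.group(1).strip()
    return fields

sample = """Business Name: Acme Bakery LLC
Average Monthly Payroll: $42,000"""
# extract_fields(sample) -> {"business_name": "Acme Bakery LLC",
#                            "payroll_usd": "42,000"}
```

The value of the managed services is precisely that they replace brittle per-form rules like these with models that generalize across scanned, photographed, and reformatted documents.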
Source: Google Cloud Platform

Understanding forwarding, peering, and private zones in Cloud DNS

The Domain Name System, or DNS, is one of the most foundational services of the Internet, turning human-friendly domain names into IP addresses. Often handled by specialized network engineers within an organization, DNS can feel like a black box to people who don’t deal with it often. For one, DNS terminology can be confusing, and some terms have different meanings in different parts of the cloud network (e.g., peering). But understanding how DNS works is critical, especially in a cloud environment, where you need DNS to make your applications available to enterprise users.

If you’re running on Google Cloud, chances are you use Cloud DNS, a scalable, reliable, managed authoritative DNS service running on the same infrastructure as Google. It has low latency, high availability, and scalability, and is a cost-effective way to make your applications and services available to your users.

One of the more complex DNS setups that customers struggle with involves multiple projects and VPCs, all of which need to establish connectivity back to an on-prem DNS resource. Unless there’s outside connectivity to an on-prem network or another cloud, VPCs logically look like “islands,” or self-contained networks. As such, a logical assumption would be that each VPC should use its own forwarding zone and individually forward DNS queries back to on-prem for resolution. However, isolating your VPCs from one another this way leads to challenges, and the more VPCs you have, the harder it becomes. First, let’s unpack why this is challenging, and then look at how to solve it.

The trouble with handling DNS forwarding requests in multiple VPCs

The challenge is fundamentally a routing one. Google uses an egress proxy for all outbound DNS requests sent to the on-prem environment from an outbound forwarding zone. This highly available and scalable pool of proxies uses the same IP address block for all VPCs.
If multiple VPCs forward DNS requests to the same on-premises network, it is not possible to create a route that sends the response specifically to the originating VPC, because all the VPCs use the same IP block for their proxies. The more VPCs share the pool of proxies, the greater the chance of a response landing in the wrong VPC. In the drawing below, two VPCs, A and B, are both set up with outbound forwarding zones to on-prem, and both cloud routers A and B are advertising the DNS proxy range of 35.199.192.0/19. To the on-prem network, all traffic appears to originate from 35.199.192.0/19, so when a response is generated on-prem, the return traffic could end up in the wrong VPC network. In this scenario, the on-prem network has a 50/50 chance of guessing which VPC originated the request, and as more VPCs are introduced into the model, the chances of reaching the right source diminish rapidly.

Outbound forwarding zones and DNS peering for connecting multiple VPCs

To address the challenge of connecting multiple VPCs to an on-prem network, you can use a combination of outbound forwarding zones and DNS peering in a hub-and-spoke model. The hub VPC uses DNS forwarding for the hybrid connection to the on-prem network, and the spoke VPCs use DNS peering to connect to the hub VPC. In the drawing below, a single outbound forwarding zone is set up in VPC H, and all other VPCs peer with VPC H. Any query that needs to be resolved on-prem now goes from the originating VPC (A, B, or C in this example) to VPC H. VPC H identifies the query as part of the outbound forwarding zone and forwards the request to on-prem through the established network connectivity. In this case, the 35.199.192.0/19 range is advertised only from VPC H’s cloud router, so when the response is routed back to Google Cloud, there is only a single VPC network path for that route.
VPC H then cascades the appropriate information back to the originating VPC (A, B, or C), and everything functions as expected.

Keeping up with Cloud DNS

Managing DNS might not be your day job, but understanding how it works can be critical when configuring enterprise cloud environments. In this post, we’ve shown you how to use some of Google Cloud’s DNS constructs to connect multiple VPCs to your on-premises DNS infrastructure, using a combination of zones, peering, and forwarding. You can learn more about Google Cloud’s networking portfolio, including our DNS services, online, and reach us at gcp-networking@google.com.
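The hub-and-spoke resolution path described above can be sketched as a toy Python model. This is purely illustrative (it is not the Cloud DNS API, and the zone and record names are invented): the point is that spokes delegate via peering and only the hub forwards to on-prem, so only the hub’s Cloud Router ever needs to advertise the 35.199.192.0/19 proxy range.

```python
# Hypothetical on-prem private zone served by the on-prem DNS server.
ON_PREM_RECORDS = {"db1.corp.example.": "10.0.0.5"}

def on_prem_resolve(name):
    # The on-prem name server, reached through the 35.199.192.0/19 egress
    # proxies; responses must be routable back to the querying VPC.
    return ON_PREM_RECORDS.get(name)

def hub_resolve(name):
    # Hub VPC H: its outbound forwarding zone sends the query to on-prem.
    return on_prem_resolve(name)

def spoke_resolve(name):
    # Spoke VPCs A/B/C: a DNS peering zone hands the query to the hub, so
    # the spokes never source forwarded traffic from the shared proxy range.
    return hub_resolve(name)
```

In real Cloud DNS terms, the hub would hold a forwarding zone for the private domain targeting the on-prem server, and each spoke would hold a peering zone for the same domain targeting the hub’s VPC network.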
Source: Google Cloud Platform

Learn 3 in-demand cloud skills in 30 days at no cost during the month of May

In April, we announced we were expanding our Google Cloud learning resources to support the growing number of people working and learning from home. Today, we are excited to announce that if you sign up by May 31, 2020,[1] you can still enroll in Google Cloud training on both Pluralsight and Qwiklabs at no cost for 30 days—here’s where you can get started.

If you’re new to Google Cloud, we recommend our five-hour introductory-level series of labs, Google Cloud Essentials. These labs will give you a tour of Google Cloud and help you familiarize yourself with basic cloud concepts such as virtual machines on Google Compute Engine, containerized applications with Kubernetes Engine, network load balancers, and HTTP load balancers. To get started, register here and select Qwiklabs.

If you’re ready to dive even deeper into the cloud, Pluralsight offers a full breadth of video-based Google Cloud learning paths, courses, and skills assessments. To help you pick your path, we’ve highlighted below three in-demand skills you can start learning over the next 30 days at no cost on Pluralsight. To get started, register and select Pluralsight here to receive a special access link via email. Once you set up your Pluralsight account, you can search for any of the learning paths mentioned in this post in the Pluralsight catalog.

Build your data analytics expertise

According to McKinsey research, almost 60% of businesses find it harder to source talent for data and analytics positions than for any other roles. The Data Analytics on Google Cloud learning path helps you build these much-needed skills, teaching you to explore, mine, load, visualize, and extract insights from diverse Google BigQuery datasets. You’ll also dig deeper into data loading, querying, schema modeling, optimizing performance, query pricing, and data visualization. This 13-hour learning path includes four courses that combine hands-on labs and lectures.
We recommend that you have some prior experience with ANSI SQL before taking this learning path.

Do some learning on machine learning

The global machine learning market is expected to grow almost 44% from 2019 to 2025, making these skills relevant for any technical professional. The Machine Learning on Google Cloud learning path will let you experiment with end-to-end machine learning, starting from building a machine learning-focused strategy and progressing into model training, optimization, and production, with hands-on labs using Google Cloud. This 17-hour learning path includes five courses with interactive hands-on labs and lectures. Anyone with knowledge of querying with SQL and programming in Python can take this learning path.

Sharpen your Kubernetes skills

Google Kubernetes Engine (GKE) is a managed, production-ready environment for running containerized applications that’s trusted by businesses all over the world. The Architecting with Google Kubernetes Engine learning path will teach you how to implement solutions using GKE, including building, scheduling, load balancing, and monitoring workloads. You’ll also learn to manage role-based access control and security, and to provide persistent storage to these applications. This 10-hour learning path consists of four courses that mix presentations, demos, and hands-on labs. To get the most from this training, we recommend you have experience with virtual machines, networks, and storage in the cloud, as well as with developing, deploying, and monitoring in the cloud. If you’d like to gain more experience before taking Architecting with Google Kubernetes Engine, you can take the three-hour Google Cloud Platform Fundamentals: Core Infrastructure course on Pluralsight.

Ready to strengthen your cloud skills with Google Cloud training? Register here and claim your training offers by May 31 to get your free 30 days of access on Qwiklabs and Pluralsight.

[1] Your 30 days of access to Google Cloud training at no cost start when you enroll in your courses. These offers are valid until May 31, 2020. After your 30 days, you will incur charges on Pluralsight; for Qwiklabs, you will need to purchase credits to continue taking labs.
Source: Google Cloud Platform