Liberating your mainframe data with Confluent and Google Cloud

Are you looking for the best way to migrate and replicate your mainframe data? Google Cloud and Confluent have teamed up to provide an end-to-end solution for connecting your mainframe application data with the advanced analytics capabilities of Google Cloud.

In this article, we will discuss how you can use Confluent Connect to replicate messages from IBM MQ and Db2 to Google Cloud. This lets you work with your mainframe data in the cloud, and enables you to build new applications and analytical capabilities using Google Cloud's machine learning solutions. You also benefit from reduced impact on your production mainframe workloads and lower general-purpose compute costs. In other words, you can continue using your mainframe to run your mission-critical business workloads while setting your data in motion for innovation.

Here's an example use case that demonstrates how using the Confluent MQ connector with Google Cloud can impact your bottom line. One of our customers is saving millions of dollars per year on mainframe cycles by leveraging z Integrated Information Processor (zIIP) engines for data processing. Moving these workloads to zIIP, off of GP (general purpose) compute and away from CHINIT (Channel Initiator) routes, directly reduces MSU licensing costs. As an example, a customer in the financial services industry saw a 50% reduction in CPU usage per message. These cost savings can enable you to direct budget toward differentiating activities, such as commercializing your valuable mainframe data to open up new revenue streams and improve customer service.

On the technical side, Confluent guarantees exactly-once message semantics, preserves message order, and unleashes that data to be accessed by existing and new applications that need a high-throughput, low-latency, event-driven architecture. This means that you can rely on the accuracy and consistency of your data in Google Cloud as if you were querying it directly from your mainframe database.

Once you have this data in your Confluent cluster, you can leverage the combined capabilities of Confluent and Google Cloud. You can modernize the way your consumers access your data by providing a single, standard source of truth without impacting production services. Confluent integrates directly with Apigee, Google Cloud's API platform for developing and managing APIs. Because Confluent integrates with BigQuery, you can also leverage the advanced analytical capabilities of BigQuery ML and Vertex AI to realize value from your latent mainframe data, and build new systems of insight that were not possible on the mainframe. Most of all, you can open up new avenues for innovation by allowing consumers to access the data when they need it, speeding up time to value and enabling faster business decisions.

You now have a bridge to the cloud for your mainframe application data. Get started by deploying Confluent from the Google Cloud Marketplace.
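To make the first leg of that pipeline concrete, here is a hedged sketch of an IBM MQ source connector configuration submitted to Kafka Connect. The connector class follows Confluent's documented IBM MQ Source Connector, but property names and required settings vary by connector version, so treat the hostnames, queue, and topic names as placeholders and check the connector's documentation before use:

```json
{
  "name": "mainframe-mq-source",
  "config": {
    "connector.class": "io.confluent.connect.ibm.mq.IbmMQSourceConnector",
    "tasks.max": "1",
    "mq.hostname": "mainframe-mq.example.com",
    "mq.port": "1414",
    "mq.queue.manager": "QM1",
    "mq.channel": "DEV.APP.SVRCONN",
    "jms.destination.name": "APP.EVENTS",
    "jms.destination.type": "queue",
    "kafka.topic": "mainframe.events"
  }
}
```

From a topic like mainframe.events, sink connectors (for example, the BigQuery sink connector) can then move the records onward into Google Cloud for analytics.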
Source: Google Cloud Platform

Your guide to all things AI & ML at Google Cloud Next

Mark your calendars—Google Cloud Next is coming October 12–14, 2021.

We'll kick off each day with a live broadcast and keynote to showcase our latest launches and share how customers and partners are tackling today's greatest business challenges. You can also participate in online interactive experiences and attend on-demand sessions that align with your interests and schedule. Overall, we've designed Next '21 as a customizable digital adventure to allow for a more personalized experience.

If you're anything like me, you'll be tuning in to learn more about AI and ML. And good news—we have a lot to share! Below is your guide to my top recommendations for AI and ML content. Whether you want to hear directly from our customers about applying AI to real challenges, learn more about our newest products and announcements, or get hands-on with training opportunities, we've got you covered. Plus, Google Cloud Next is free this year, so the experience is inclusive and accessible to everyone. Register now!

Level Up Data Analytics with AI

With organizations investing rapidly in their AI/ML strategies, Google Cloud is doubling down on our commitment to meet customers where they are. Whether that means making it easier to build custom ML models or providing out-of-box AI capabilities built for specific use cases, our goal is to accelerate our customers' paths from AI investment to improved business outcomes. Within the Data Analytics and AI sessions at Next, learn how customers are finding success with Vertex AI, and discover new product announcements that will transform the way your data end users ingest and analyze data and deploy ML models.

Sessions
AI102: Why Vertex AI is even easier for developers
DA200: Build an interactive machine learning application with Looker and Vertex AI
GCD118: The State of Data Science and Machine Learning in 2021

Learning Labs
HOL103: Vertex AI: Qwik Start
HOL104: Build an AutoML forecasting model with Vertex AI
GCD103: Data Science with Vertex AI

Speaker Spotlight

Jaime Espinosa, Senior PM, Machine Learning, Twitter
Jaime's career as a Product Manager is focused on ML and platforms, and he's educated as a technology generalist, with multiple engineering degrees focused on end-to-end systems. He has delivered ML-based products for scientific computing, sales, real estate, robotics, business intelligence, HPC, and CPU design. Often working from ideation to delivery, he has worked on eight platforms, including the first serverless product. Jaime has led six ML platforms at Microsoft, Intel, Algorithmia, and Twitter. Jaime joins the panel discussion AI102: Why Vertex AI is even easier for developers.

Matt Ferrari, Head of Ad Tech, Customer Intelligence, and Machine Learning, Wayfair
Matt Ferrari has more than 15 years of executive experience, with the last 10 as a C-suite executive in the cloud managed services and healthcare SaaS ecosystem spaces. Presently, Matt is the Head of Martech and Data/Machine Learning Platforms, where he leads both the product and engineering strategy for the Adtech, Customer Intelligence, and Machine Learning organizations. Previously, Matt was CTO and co-founder of ClearDATA, where he was responsible for the strategy and execution of the company's healthcare technology platform and services. He also oversaw strategy and corporate development, ensuring the differentiation of ClearDATA's vision and strategic roadmap. Matt has also led product management, product development, engineering, and solution architecture.
Matt joins the panel discussion AI102: Why Vertex AI is even easier for developers.

Reimagine Customer Experience with Conversational AI

Providing great customer experience is a pivotal part of gaining and retaining customers. From virtual customer service agents to AI designed to extract insights from call center transcripts, learn how our Conversational AI products can help your business serve more customers in a more personalized and efficient manner. Hear how our customers have used products like Contact Center AI to improve their customer satisfaction scores, reduce time-to-resolution for call center inquiries, and reduce costs.

Sessions
AI103: Using CCAI insights to better understand your customers
AI104: Customer impact with Conversational AI
GWS105: Drive results by transforming the customer experience with AI-powered Business Messages

Learning Labs
HOL102: Design Conversational Flows for your Agent

Speaker Spotlight

Dinesh Mahtani, Director, Digital Analytics and Insights, TELUS
Dinesh leads the Data Analytics team at TELUS Digital. The team's mandate is to create a platform of integrated capabilities across data, machine learning, and targeting that allows TELUS to deliver personalized experiences across channels, including TELUS.com and the MyTELUS app. The team's goal is to leverage data to simplify their customers' experience and unlock key outcomes for TELUS. View Dinesh's session here.

Unlock the Power of Unstructured Data with Document AI

Every enterprise is focused on turning unstructured data, which lacks the metadata and organization required for analysis, into structured data from which insights can be extracted. The Google Cloud Document AI platform is built on decades of AI innovation at Google and gives customers the power to digitize document workflows and analyze the information that documents contain. Join us at Next 2021 to gain a holistic understanding of our AI strategy around helping enterprises unlock value from unstructured data.

Sessions
AI100: Google Cloud and Ironclad partner to accelerate document workflows
AI200: Process billions of pages and cut operational costs with DocAI
DEV202: AI-powered applications with Google Cloud

Learning Labs
HOL100: Form Parsing Using Document AI
HOL101: Process Procurement Documents Asynchronously using the Document AI API

Speaker Spotlight

Nadia Aqsa, Sr. Outbound Product Manager, Workday
Nadia Aqsa is the face of Workday Expenses to customers, partners, and go-to-market teams such as Sales, Services, Marketing, Support, and Education. She is passionate about solving customer pain points and helping them get the most out of their Workday experience. She also leads product adoption initiatives, chairs the product advisory council, and regularly hosts live webinars to engage with customers. View Nadia's session here.

More AI Content

Still hungry for more AI? We aim to please! Here are some of the other sessions at Next 2021 that cover critical AI topics like Translation, Sustainability, AI & ML next-gen infrastructure, Vision, Voice, Text, Speech, and more. Take a look!
Sessions
AI201: Eli Lilly uses Cloud Translation to translate content globally
AI300: Achieve system-level sustainable change with AI and ML
AI105: What's new and what's next with infrastructure for AI and ML
SPTL102: Data Cloud: Simply transform with a universal data platform [Spotlight]
DEVKEY1: Developer Keynote with Urs Hölzle, SVP, Technical Infrastructure, Google Cloud [Keynote]

Learning Labs
GCD104: Bring the power of senses to your applications with Cloud AI
LP106: Machine learning and AI learning path

Speaker Spotlight

Thomas Griffin, Translation Tech Lead & Global Regulatory Architect, Eli Lilly
Thomas is a technologist and software engineer who is currently focused on cloud enablement and adoption within Eli Lilly's Medicines Development organisation. See Thomas' session here.

Check out all our AI & ML sessions and stay tuned for some exciting announcements. We can't wait for you to join us at Google Cloud Next.
Source: Google Cloud Platform

Speaker ID unlocks Machine Learning Speech Identification capability for Contact Centers

We launched Contact Center AI (CCAI) to kick off our work with the world's largest call centers to reimagine customer experiences through the power of our Conversational AI. Our approach has shown its value over the last few years: a 2020 study based on CCAI customer interviews projected 20-35% call deflection away from agents, $1.3 million to $3.7 million in agent productivity gains through reduced average call times, and up to 75% reduced effort to manage contact center solutions (New Technology: The Projected Total Economic Impact™ Of Google Cloud Contact Center AI, a commissioned study conducted by Forrester Consulting, August 2020).

We continue our investment in reimagining customer experience with the launch of Speaker ID today. Speaker ID brings Google's speech identification technology directly to Google Cloud customers and our contact center partners. Instead of subjecting callers to tedious and circuitous authentication processes, such as phone trees or menus, with Speaker ID our customers and partners can now initiate service using the most intuitive interface of all: voice.

"We are excited to see Google bring ML-based speaker identification to CCAI. As a launch partner for Google Cloud CCAI, we have seen the impact that Google's AI technology can have on contact centers. Our customers are excited by the opportunity to add ML-based verification on top of their existing Dialogflow agents with no additional telephony or technology integrations necessary," said Eric Rossman, VP of Technology Partners and Alliances at Avaya.

Contact Centers and their need for Speed and Convenience

Calling into a contact center is still the preferred method of support or interaction for many customers. In fact, in 2020, customer preference for voice calls increased by 10 percentage points, to 43%, and was by far the most preferred service channel, with text interactions ranking second at 22% (Aspect Consumer Index Annual Report 2020). However, in order to receive the most personalized service available, callers still need to pass through archaic authentication processes, requiring them to remember passwords or provide personal information for verification. Not only is this highly inconvenient for callers, it also significantly slows down the time to query resolution and burns through valuable agent time. While applying AI to these outdated policies can help to an extent, what is required is a complete reimagination of customer interaction that leverages the power of Conversational AI.

Introducing Speaker ID

Speaker ID lets your callers authenticate over the phone, using their own voice. Using machine learning (ML)-based voice identification, Speaker ID identifies and verifies the caller. There are two primary components to using Speaker ID: enrollment and verification.

The first time a caller encounters this new system, they will have to enroll. Typically, enrollment starts with authentication using the existing process, after which callers provide a brief voice sample that can be used in lieu of the old process to fast-track verification in the future. Google Cloud Speaker ID is "text independent," meaning that once a user is enrolled, verification can be performed with any audio snippet as short as three seconds. No password phrase is required; the user can say anything they want to authenticate their voice and thus their identity.

Contact Center AI Integration

Speaker ID is directly integrated with Google Cloud's Contact Center AI platform.
Dialogflow CX, within CCAI, allows businesses to architect conversational experiences for their callers. With the Speaker ID integration, Dialogflow CX can now leverage audio sent for intent detection and entity extraction. Along with full audio integration, Speaker ID also has pre-built Dialogflow CX components available to easily add enrollment and verification flows to existing or new Dialogflow CX bots. Customers can also configure Dialogflow CX to perform passive verification, which provides a speaker validation check based on the caller's usual interaction with the voice bot; users don't need to be prompted or speak a specific phrase.

Get Started Today

Speaker ID is generally available today. At Google Cloud, we are committed to ensuring that our products and features are aligned with our AI Principles. If you are interested in Speaker ID, there is a review process to help ensure that your use case is aligned with our AI Principles and best practices for using Speaker ID for authentication and security. Contact your GCP seller to get started. For more information on Speaker ID, visit cloud.google.com/speaker-id. We also encourage you to register for Google Cloud Next 2021 to learn more about our Conversational AI offerings.
Source: Google Cloud Platform

Introducing Workflows callbacks

With Workflows, developers can easily orchestrate various services together, on Google Cloud or third-party APIs. Workflows connectors handle long-running operations of Google Cloud services until completion, and workflow executions can also wait for time to pass with the built-in sys.sleep function, until some computation finishes or some event takes place. But what if you need user input or an approval in the middle of the workflow execution, like validating an automatic translation? Or what if an external system, like a fulfillment center or an inventory system, needs to notify you that products are back in stock? Instead of using a combination of "sleep" instructions and API polling, you can now use Workflows callbacks! With callbacks, the execution of a workflow can wait until it receives a call to a specific callback endpoint.

Case study: human validation of automated translation

Let's have a look at a concrete example. Machine learning based translations have reached an incredible level of quality, but sometimes you want a human being to validate the translations produced. Thanks to Workflows callbacks, we can add a human, or an autonomous system, into the loop. The following diagram shows a possible implementation of the whole process:

1. The user visits a translation web page. They fill a text area with the text they want to translate, and click the translate button.
2. Clicking the button calls a Cloud Function that launches an execution of the workflow. The text to translate is passed as a parameter of the function, and as a parameter of the workflow too.
3. The text is saved in Cloud Firestore, and the Translation API is called with the input text. It returns the translation, which is stored in Firestore as well. The translation appears on the web page in real time thanks to the Firebase SDK.
4. A step in the workflow creates a callback endpoint (also saved in Firestore), so that it can be called to validate or reject the automatic translation. When the callback endpoint is saved in Firestore, the web page displays validation and rejection buttons.
5. The workflow now explicitly awaits the callback endpoint to be called. This pauses the workflow execution.
6. The user decides to either validate or reject the translation. When one of the two buttons is clicked, a Cloud Function is called with the approval status as a parameter, which in turn calls the callback endpoint created by the workflow, also passing the approval status. The workflow resumes its execution and saves the approval in Firestore. And this is the end of our workflow.

Creating a callback and awaiting incoming calls

Two new built-in functions are introduced in the standard Workflows library:

- events.create_callback_endpoint — to create and set up the callback endpoint
- events.await_callback — to wait for the callback endpoint to be called

With events.create_callback_endpoint, you specify the HTTP method that should be used for invoking the callback endpoint, and you get back a dictionary with the URL of that endpoint that you can pass to other systems. With events.await_callback, you pass the callback endpoint to wait on and a timeout defining how long you want to wait; when the endpoint is called, you get access to the body that was sent to the endpoint. Let's have a look at the YAML definition of our workflow, where we apply those two new functions.
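Here is a minimal sketch of both steps, following the documented built-ins (the step names and the one-hour timeout are illustrative):

```yaml
main:
  steps:
    - create_callback:
        call: events.create_callback_endpoint
        args:
          http_callback_method: "POST"
        result: callback_details
    - log_callback_url:
        call: sys.log
        args:
          # Surface the URL so an external system (in our case, Firestore) can store it.
          text: ${"Callback URL: " + callback_details.url}
    - await_callback:
        call: events.await_callback
        args:
          callback: ${callback_details}
          timeout: 3600
        result: callback_request
    - log_approval:
        call: sys.log
        args:
          text: ${json.encode_to_string(callback_request.http_request.body)}
```

Once the execution is paused on await_callback, an authorized caller can resume it by POSTing to the URL; in our case study, the second Cloud Function does this. Roughly, and assuming the caller holds an OAuth token with permission to call the endpoint:

```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"approved": true}' \
  "${CALLBACK_URL}"
```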
The first step creates the callback endpoint, which is then ready to receive incoming requests via a POST HTTP method; the details of that endpoint are stored in the callback_details dictionary (in particular, the url key is associated with the URL of the endpoint). The second step pauses the workflow and awaits the callback: the callback_details from earlier is passed as an argument, as well as a timeout in seconds to wait for the callback to be made. When the call is received, all the details of the request are stored in the callback_request dictionary. You then have access to the full HTTP request, including its headers and its body. In case the timeout is reached, a TimeoutError is raised, which can be caught by a try/except block.

Going further and calling us back!

If you want to have a closer look at the above example, all the code for this workflow can be found in the Workflows samples GitHub repository, and you can follow the details of this tutorial to replicate this workflow in your own project. As this is still a preview feature for now, please be sure to request access to this feature if you want to try it on your own. For more information on callbacks, be sure to read the documentation. To dive deeper into the example above, please check out the GitHub repository of this translation validation sample. Don't hesitate to let us know via Twitter at @glaforge what you think of this feature, and how you intend to take advantage of it in your own workflows!
Source: Google Cloud Platform

Google named a leader in the 2021 Gartner® Magic Quadrant® for Full Life Cycle API Management

We're excited to share that Gartner has recognized Google Cloud's Apigee as a Leader in the 2021 Magic Quadrant for Full Life Cycle API Management, marking the sixth time in a row we've earned this recognition. We believe this achievement is a testament to Google Cloud's differentiated vision for API management and strong track record of delivering continuous product innovation. In this year's report, Apigee is placed highest among all the vendors for ability to execute. We want to take this opportunity to thank our thriving community of customers, developers, and partners for voicing their opinion.

APIs are one of the key mechanisms through which organizations bring digital-first strategies to life in the form of new experiences and applications for partners and customers. To get the most out of these APIs in an efficient and scalable manner, API management is a must, and partnering with the right API management vendor is critical to building and scaling a successful API program. Research from industry analyst firms like Gartner can help enterprises evaluate and choose the right solution.

Many enterprise customers like Nationwide Insurance, ABN Amro, Bed Bath & Beyond, and Pizza Hut chose Google Cloud's Apigee as their API management partner. Our mission is to help our customers make the leap to digital excellence: the ability to rapidly and repeatedly deploy and scale, and to consistently deliver on digital programs. We want to support our customers in building profitable API-based platforms and delivering measurable business outcomes.

"APIs allow Veolia to access new ecosystems and partners that will bring new innovation opportunities for us. Apigee helps us quickly and easily deliver great customer experiences. It abstracts away the backend IT complexity, and helps us provide information and data to our customers quickly, consistently and securely," said Pascal Dalla-Torre, Group CTO at Veolia.

As part of this vision, we are focused on delivering continuous innovation.

Achieve hyperscale – To help customers scale globally, connect their distributed workforce, and collaborate with regional partners using APIs, we announced Apigee X earlier this year. It's a major release of our API management platform that seamlessly weaves together Google Cloud's expertise in AI, security, and networking to help enterprises efficiently manage the assets on which digital transformation initiatives are built.

Developer efficiency – Developers today are using multiple tools to address their API and integration needs. Adding to the complexity is the proliferation of newer API styles and software development tools. Therefore, we recently announced Apigee Integration, a unified platform for API and integration needs; Apigee Adapter for Envoy to support microservices needs; support for new API styles like GraphQL; the flexibility of using existing SDLC tools to manage APIs; and fulfilment solutions for conversational AI.

Democratizing application development – Across industries, tech-savvy employees outside of IT teams are increasingly using no-code development tools like AppSheet to build internal applications. To help organizations extend their API management investments to these tech-savvy employees, we continue to invest in integrations between the Apigee and AppSheet platforms.

AIOps for APIs – In a digital excellence strategy, APIs take center stage, acting as the central nervous system for connecting various customer- and employee-facing applications.
Therefore, ensuring APIs are always available and performing as expected is critical. To overcome the monitoring challenges of hyperscale API programs, we harness the power of Google's industry-leading machine learning capabilities to equip API operators with capabilities such as anomaly detection.

Industry-specific solutions – To reduce the time to market of new digital programs and address specific industry requirements, we delivered a robust set of API industry accelerators such as Open Banking, Health APIx, eCommerce modernization, and Contact Center AI fulfilment for Telco.

We are honored to be a Leader in the 2021 Gartner Magic Quadrant for Full Life Cycle API Management and look forward to continuing to innovate and partner with you on your digital transformation journey. Download the full report here (requires an email address). To learn more about Apigee, visit the website here.

Gartner Magic Quadrant for Full Life Cycle API Management, Shameen Pillai, Kimihiko Iijima, Mark O'Neill, John Santoro, Akash Jain, Fintan Ryan, 28 September 2021.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's Research & Advisory organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner and Magic Quadrant are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Google (Apigee).
Source: Google Cloud Platform

Cloud NAT explained!

For security, it is a best practice to limit the number of public IP addresses in your network. In Google Cloud, Cloud NAT (network address translation) lets certain resources without external IP addresses create outbound connections to the internet.

Cloud NAT provides outgoing connectivity for the following resources:

- Compute Engine virtual machine (VM) instances without external IP addresses
- Private Google Kubernetes Engine (GKE) clusters
- Cloud Run instances, through Serverless VPC Access
- Cloud Functions instances, through Serverless VPC Access
- App Engine standard environment instances, through Serverless VPC Access

How is Cloud NAT different from typical NAT proxies?

Cloud NAT is a distributed, software-defined managed service, not based on proxy VMs or appliances. This proxyless architecture means higher scalability (no single choke point) and lower latency. Cloud NAT configures the Andromeda software that powers your Virtual Private Cloud (VPC) network so that it provides source network address translation (SNAT) for VMs without external IP addresses. It also provides destination network address translation (DNAT) for established inbound response packets only.

Benefits of using Cloud NAT

- Security: Helps you reduce the need for individual VMs to each have external IP addresses. Subject to egress firewall rules, VMs without external IP addresses can access destinations on the internet.
- Availability: Since Cloud NAT is a distributed, software-defined managed service, it doesn't depend on any VMs in your project or a single physical gateway device. You configure a NAT gateway on a Cloud Router, which provides the control plane for NAT, holding the configuration parameters that you specify.
- Scalability: Cloud NAT can be configured to automatically scale the number of NAT IP addresses that it uses, and it supports VMs that belong to managed instance groups, including those with autoscaling enabled.
- Performance: Cloud NAT does not reduce network bandwidth per VM, because it is implemented by Google's Andromeda software-defined networking.

NAT rules

In Cloud NAT, the NAT rules feature lets you create access rules that define how Cloud NAT is used to connect to the internet. NAT rules support source NAT based on destination address. When you configure a NAT gateway without NAT rules, the VMs using that NAT gateway use the same set of NAT IP addresses to reach all internet addresses. If you need more control over packets that pass through Cloud NAT, you can add NAT rules. A NAT rule defines a match condition and a corresponding action. After you specify NAT rules, each packet is matched against each rule; if a packet matches the condition set in a rule, then the action corresponding to that match occurs.

Basic Cloud NAT configuration examples

In the example pictured in the sketchnote, the NAT gateway in the east is configured to let the VMs with no external IPs in subnet-1 access the internet. These VMs can send traffic to the internet by using either their primary internal IP address or an alias IP range from the primary IP address range of subnet-1, 10.240.0.0/16. A VM whose network interface does not have an external IP address and whose primary internal IP address is located in subnet-2 cannot access the internet.
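Concretely, a gateway like the one in the east could be created with two gcloud commands: one for the Cloud Router that holds the NAT control plane, and one for the NAT gateway scoped to subnet-1 only (a minimal sketch; the router, gateway, network, and region names are illustrative):

```
# Cloud Router: the control plane that holds the NAT configuration
gcloud compute routers create nat-router \
    --network=my-vpc \
    --region=us-east4

# NAT gateway scoped to subnet-1, so subnet-2 keeps no internet access
gcloud compute routers nats create nat-gateway \
    --router=nat-router \
    --region=us-east4 \
    --nat-custom-subnet-ip-ranges=subnet-1 \
    --auto-allocate-nat-external-ips
```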
Similarly, the NAT gateway in Europe is configured to apply to the primary IP address range of subnet-3 in the west region, allowing the VM whose network interface does not have an external IP address to send traffic to the internet by using either its primary internal IP address or an alias IP range from the primary IP address range of subnet-3, 192.168.1.0/24.

To enable NAT for all the containers and the GKE node, you must choose all the IP address ranges of a subnet as the NAT candidates. It is not possible to enable NAT for specific containers in a subnet.

For a more in-depth look into Cloud NAT, check out the documentation. For more #GCPSketchnote, follow the GitHub repo. For similar cloud content, follow me on Twitter @pvergadia and keep an eye out on thecloudgirl.dev.
Source: Google Cloud Platform

VMware and Google Cloud: The next chapter

Google Cloud Next and VMworld 2021 are less than two weeks away, and the partnership between Google Cloud and VMware is entering a new chapter. Over the past year, our close partnership with VMware and mutual dedication to customer success have inspired us to deliver several innovative capabilities, including expanding the service to 12 regions worldwide along with our industry-leading 99.99% availability, multi-region networking, and improved scalability to make it easy for customers to rapidly migrate to the cloud.

"Our collaboration with Google is noteworthy because of the value it brings to our joint customers. The mutual success we have had partnering with Google on solutions that enable VMware workloads to run natively in the cloud with Google Cloud VMware Engine, and digital workspace with Android and Chrome Enterprise, is a testament to the quality of our joint offerings," said Gregory Lehrer, VP Strategic Technology Partnerships, VMware. "As a result of our ongoing collaboration, we continue to see customers adopting our joint solution, indicating a strong and effective partnership. Our joint roadmap for the future points to an upward trajectory as we scale to meet anticipated demand."

Across industries, customers are increasingly looking to accelerate their digital transformation due to app modernization needs, aging infrastructure on-premises, and the need to meet customer expectations in an always-on, digital environment. Customers such as Carrefour, a global retailer across 30 countries, quickly moved their on-premises environment to Google Cloud VMware Engine while reducing operating costs by 40% and energy consumption by 45%. Furthermore, they were able to simultaneously improve the experience for shoppers and employees, and bolstered sales and shopper engagement with personalized offers. Companies are also looking to migrate and modernize their business with Google Cloud VMware Engine. LIQ, a CRM software company, migrated 80% of business applications and 50% of databases in just three months, and now plans to modernize their applications with microservices to lower maintenance time and costs.

Looking forward, we're focused on helping customers derive greater ROI from their investments in three ways:

- Flexibility – Single-node Private Cloud SDDC to enable trials or proof-of-concept validations at a much lower cost.
- Availability – New geographic zones and expanded capacity within zones to better serve local business needs and continue to maintain data sovereignty within local regions.
- Ecosystem integrations – Building on our leading open platform, we've developed even more integrations with solutions across the ecosystem. VMware has validated its disaster recovery tool (Site Recovery Manager), virtualization management tool (vRealize Cloud Management), and Virtual Desktop Infrastructure tool (Horizon Desktop) to ensure you can bring your mission-critical applications to the cloud without disruption.

We continue to focus on making migrations simpler as well. The recently announced Catalyst Program now provides even greater financial flexibility, as eligible customers can get one-time Google Cloud credits to help offset existing VMware license investments. The program is consumption-based and designed to provide even more value as you accelerate your migration to the cloud.
Furthermore, programs such as our Rapid Assessment and Migration Program (RAMP) provide free assessment and planning tools to reduce complexity, enable choice, and increase flexibility throughout the migration process.

There's much more to come from VMware and Google Cloud. We're proud to be a Platinum Sponsor at VMworld 2021 and invite you to join us to learn more about our commitment to enabling digital transformation. Be sure to catch the fireside chat with Google Cloud CEO Thomas Kurian and VMware CEO Raghu Raghuram as they discuss industry trends and customer success. You'll also hear more from our joint customers and our product leaders about what's to come. We can't wait to connect with you virtually at the event.
Source: Google Cloud Platform

N2D VMs with latest AMD EPYC CPUs enable on average over 30% better price-performance

Last year, we announced the general-purpose N2D Compute Engine machine type based on the 2nd Generation AMD EPYC™ processor. Today, we are excited to announce that the N2D family now supports the latest 3rd Generation AMD EPYC processor.

N2D VMs powered by 3rd Generation AMD EPYC processors deliver, on average, over 30% price-performance improvement across a variety of workloads compared to prior 2nd Generation AMD EPYC processors. If you already use N2D machines, you can use the new hardware simply by selecting "AMD Milan or later" as the CPU platform for your N2D VMs. Further, if you're using our first-generation N1 VM family, you'll see a substantial price-performance improvement1 with the new N2D family.

N2D VMs based on 3rd Generation AMD EPYC processors offer a broad set of features and options. N2D supports VMs with up to 224 vCPUs and up to 896 GB of memory, for workloads that require a higher number of threads. Google Cloud offers the highest number of vCPUs per VM across all general-purpose machine types available from a public cloud provider. N2D also includes a wide array of VM shapes (spanning standard, high-CPU, and high-memory options) and Custom Machine Types, allowing you to pick custom sizes based on your workload needs. N2D VMs also support our recently introduced 100 Gbps high-bandwidth network to meet the demands of high-throughput workloads. In addition, N2D supports high storage performance with persistent disk and up to 9 TB of local SSD; combining high-throughput VMs with high-performance local SSD is beneficial for I/O-intensive workloads. N2D is also available as a sole-tenant node for workloads that require isolation to meet regulatory requirements or dedicated hardware for licensing requirements.

New innovation

Customers using N2D VMs powered by 3rd Generation AMD EPYC processors get access to the latest features in the AMD EPYC processor family, including up to 256 MB of L3 cache and 'Zen 3' cores, which provide higher instructions per clock (IPC) compared to 'Zen 2'. These processors include the same features offered in 2nd Generation AMD EPYC processors, including PCIe 4 support, high levels of memory bandwidth, and access to AMD Infinity Guard for advanced security features. All of this means customers using the latest version of N2D with 3rd Generation AMD EPYC can take advantage of its high performance for a variety of general-purpose workloads.

"With exceptional performance and features, our AMD EPYC processors will provide future and existing Google Cloud N2D customers high performance capabilities for a variety of workloads," said Lynn Comp, corporate vice president, Cloud Business Group, AMD. "This is another exciting extension of our relationship with Google, adding to the existing 2nd Gen EPYC based N2D VMs and the new T2D VMs, and our team is proud to continue to work together with Google Cloud on this."

What customers are saying

FullStory, a digital experience intelligence platform provider, was an early user of the new N2D VM family. "At FullStory, we are constantly looking for ways to improve database performance and reduce query latencies, especially as data sizes are always increasing," said Jaime Yap, Director of Engineering at FullStory. "In our testing with Google Cloud's latest N2D instances based on the AMD Milan CPU, we were pleased to see some query workloads achieve ~29% performance gains on average when compared to previous generation N2D VMs. We expect this to translate to dramatically improved utilization and better experiences for our customers."

Vimeo, the world's leading all-in-one video software solution, tested the new N2D VMs. "At Vimeo, we have always believed in providing a best-in-class video quality experience to our users," said Joe Peled, Director, Hosting and Delivery Operations at Vimeo. "The bulk of our video content is CPU-processed, running encoding workloads such as x264 (H.264), x265 (HEVC), and rav1e (AV1) to achieve optimal video fidelity with minimum artifacts. Google Cloud's new AMD Milan based N2D VMs unlock a major improvement for our users by significantly reducing time spent in our transcoding pipelines, on the order of 20%, and allow us to reduce costs by a similar factor."

Google Kubernetes Engine support

Google Kubernetes Engine (GKE) is the leading platform for organizations looking for advanced container orchestration, delivering the highest levels of reliability, security, and scalability. GKE supports N2D nodes based on 3rd Generation AMD EPYC processors, helping you get the most out of your containerized workloads. You can add nodes based on N2D 3rd Gen EPYC VMs to your GKE clusters by choosing the N2D machine type in your GKE node pools and specifying the minimum CPU platform "AMD Milan", as in the sketch below.
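Here is a minimal gcloud sketch of both paths, creating a VM on the new hardware and adding a GKE node pool, with illustrative names, zone, and machine size:

```
# Create an N2D VM pinned to 3rd Gen AMD EPYC (Milan) or newer
gcloud compute instances create my-n2d-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-8 \
    --min-cpu-platform="AMD Milan"

# Add a GKE node pool on the same CPU platform
gcloud container node-pools create n2d-milan-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n2d-standard-8 \
    --min-cpu-platform="AMD Milan"
```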
100 Gbps networking

We've optimized Google Cloud's unique Andromeda network to support hardware offloads such as zero-copy, TSO, and encryption, and are able to offer N2D VMs with 100 Gbps networking out of the box. N2D VMs will be able to take full advantage of Google Cloud's high-performance network infrastructure, with bandwidth configurations that enable 100 or 50 Gbps speeds for VM shapes with 48 or more vCPUs. These networking configurations are offered as an add-on feature for N2D VMs and impose no additional inventory constraints on N2D deployments: you'll be able to upgrade your N2D VMs' network bandwidth in any zone with N2D availability.

Confidential Computing (coming soon)

Confidential Computing is an industry-wide effort to protect data in use, including encryption of data in memory while it's being processed. With Confidential Computing, you can run your most sensitive applications and services on N2D VMs. We're committed to delivering a portfolio of Confidential Computing VM instances and services such as GKE and Dataproc using the Secure Encrypted Virtualization (SEV) extension. You'll be pleased to know that we'll support SEV using this latest generation of AMD EPYC™ processors in the near term, and more advanced capabilities in the future.

Target workloads

N2D VMs are suitable for a wide variety of general-purpose workloads such as web serving, app serving, databases, and enterprise applications. With machine types that include up to 224 vCPUs, N2D machines are ideal for high-throughput workloads that can benefit from having a large number of threads. N2D machines are also ideal for workloads that may benefit from Confidential Computing features. N2D machines based on 3rd Generation AMD EPYC processors provide significant performance gains over current-generation N2D VMs for various benchmarks and general-purpose workloads, as shown in the graph below.

Pricing

N2D VMs with 3rd Generation AMD EPYC processors are offered at the same price as the previous generation N2D VMs.

Availability

N2D VMs with 3rd Generation AMD EPYC processors are currently in preview in several Google Cloud regions: us-central1 (Iowa), us-east1 (S. Carolina), europe-west4 (Netherlands), and asia-southeast1 (Singapore), and will be available in other Google Cloud regions globally in the coming months. Please sign up here and contact your Google Cloud sales representative if you are interested in the preview.

1. Based on price-performance improvements measured on the new N2D Milan VMs vs. N1 VMs for the following: VP9 Transcode (51%), Nginx (72%), server-side Java throughput under SLA (72%), AES-256 Encryption (273%).
Source: Google Cloud Platform

People and planet AI: How to build a Time Series Model to classify fishing activities in the sea

Who would have known that technology would one day let us use machine learning to track vessel activity and make pattern inferences that help address IUU (illegal, unreported, and unregulated) fishing. What's even more noteworthy is that we now have the computing power to share this information publicly, in order to enable fair and sustainable use of our ocean. An amazing group of humans at the nonprofit Global Fishing Watch took on this massive big-data challenge and succeeded. You can access their dynamic map on their website, globalfishingwatch.org/map, which is bringing greater transparency to fishing activity and supporting the creation and management of marine protected areas throughout the world.

[Image: Time lapse of Global Fishing Watch's global fishing map powered by ML]

In the second episode of our People and Planet AI series, we were inspired by their ML solution to this challenge. We built a short video and a sample with all the relevant code you need to get started with building a basic time-series classification model in Google Cloud and visualizing it on an interactive map. The model predicts whether a vessel is fishing or not.

Architecture

These are the components used to build the model for this sample:

[Image: Architectural diagram for creating our time-series classification model]

- Global Fishing Watch GitHub: where we got the data
- Apache Beam: an open source library that runs on Dataflow
- Dataflow: Google's data processing service; it creates two datasets, one for training a model and the other to evaluate its results
- TensorFlow Keras: a high-level API library used to define the machine learning model, which we then train in Vertex AI
- Vertex AI: a platform to build, deploy, and scale ML models; this is where we train and output the model

Pricing and steps

The total cost to run this solution was less than $5 in compute resources. There are seven steps we went through, each with an approximate time and cost.

Why do we use a time-series classification model?

Vessels in the ocean are constantly moving, which creates distinctive patterns from a satellite view. Different fishing gear moves in distinct spatial patterns and carries varying regulations and environmental impacts. We can train a model to recognize the shapes of a vessel's trajectory.

Large vessels are required to use the automatic identification system, or AIS. These GPS-like transponders regularly broadcast a vessel's maritime mobile service identity, or MMSI, and other critical information to nearby ships, as well as to terrestrial and satellite receivers. While AIS is designed to prevent collisions and boost overall safety at sea, it has turned out to be an invaluable system for monitoring vessels and detecting suspicious fishing behavior globally.

[Image: GPS-like device called the automatic identification system transmitting positions of vessels]

One tricky part is that the AIS location data (which includes a timestamp, latitude, longitude, distance from port, and more) is not emitted at regular intervals. AIS broadcast frequency changes with vessel speed (faster at higher speeds), and not all AIS messages that are broadcast are received: terrestrial receivers require line of sight, satellites must be overhead, and high vessel density can cause signal interference.
For example, AIS messages might be received frequently as a vessel leaves the docks and operates near shore, then less frequently as it moves further offshore, until satellite reception improves. This is challenging for a machine learning model to interpret: there are too many gaps in the data, which makes it hard to predict. A way to solve this is to normalize the data and generate fixed-size hourly windows. Then the model can predict whether the vessel is fishing or not fishing for each hour.

[Image: Split panel where the left side shows the irregular GPS signals collected, and the right side shows how we normalize the data into hourly windows]

It can be hard to know if a ship is fishing by just looking at its current position, speed, and direction, so we look at data from the past as well (looking at the future could also be an option if we don't need to do real-time predictions). For this sample, it seemed reasonable to look 24 hours into the past to make a prediction. This means we need at least 25 hours of data to make a prediction for a single hour (24 hours in the past + 1 current hour). But we could predict longer time sequences as well. In general, to get hourly predictions, we need (n+24) hours of data.

Options to deploy and access the model

For this sample, we used Cloud Run to host the model as a web app so that other apps can call it to make predictions on an ongoing basis. This is our favorite option in terms of pricing if you need to access your model from the internet over an extended period of time, since you are charged per prediction request. You can also host it directly from Vertex AI, where you trained and built the model; just note that there is an hourly cost for those VMs even when they are idle. If you do not need to access the model over the internet, you can make predictions locally, or download the model onto a microcontroller if you have an IoT sensor strategy.

[Image: 3 options for hosting the model]

Want to go deeper?

If you found this project interesting and would like to dive deeper, either into the specifics of the thought process behind each step of this solution or to run through the code in your own project (or a test project), we invite you to check out our interactive sample hosted on Colab, a free Jupyter notebook environment. It serves as a guide with all the steps to run the sample, including visualizing the predictions on a dynamically moving map using an open source Python library called Folium. There's no prior experience required! Just click "Open in Colab," which is linked at the bottom of the GitHub page.

You will need a Google Cloud Platform project. If you do not have one, you can create it with the free $300 Google Cloud credit; just be sure to set up billing, and later delete the project after testing the desired sample.

[Image: Screenshot of the interactive notebook in Colab]
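If you'd like a feel for what such a model can look like before opening the notebook, here's a minimal Keras sketch. It is an illustrative stand-in rather than the sample's exact architecture: it takes one normalized 25-hour window of per-hour features and outputs the probability that the vessel is fishing in the final hour.

```python
from tensorflow import keras

WINDOW_HOURS = 25   # 24 hours of history + 1 current hour
NUM_FEATURES = 5    # e.g. lat, lon, speed, course, distance from port

def build_model() -> keras.Model:
    # One normalized hourly window per example.
    inputs = keras.Input(shape=(WINDOW_HOURS, NUM_FEATURES))
    # 1D convolutions pick up short-term movement patterns along the track.
    x = keras.layers.Conv1D(32, kernel_size=3, activation="relu")(inputs)
    x = keras.layers.Conv1D(64, kernel_size=3, activation="relu")(x)
    x = keras.layers.GlobalAveragePooling1D()(x)
    # Binary head: probability that the vessel is fishing in the last hour.
    outputs = keras.layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```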

Stephanie Wong’s guide to #GoogleCloudNext 2021

You might be thinking, "Wait, Google Cloud Next is in October this year?" and "Wait, it's already October?" Well, if the past year has taught us anything, it's that odd schedules and wearing sweats to work are the new norm – all bets are off. So get ready, because #GoogleCloudNext is coming up from October 12–14 on a virtual screen near you (yes, you can keep your sweats on!). We'll start each morning with a live, must-see broadcast and keynote. Then, as the day rolls on, it's up to you to explore our live, interactive expert Q&As, breakout sessions, and demos.

While the session catalog is up, not everyone has the time to explore it in full. So I'll share my top sessions in each product category so you can get to the heart of our launches, best customer stories, and hands-on learning. You can also check out my playlist on the Next website to easily access sessions and add them to your own playlist.

Keynotes

GENKEY1 Opening keynote
How could the opening keynote not be at the top of my list? Google Cloud CEO Thomas Kurian will share his insights on how businesses can leverage cloud technology to build for the future. He'll talk about how our products have helped customers adapt to complexities and challenges, and turn them into opportunities. I can't reveal too much just yet, but expect to hear about our biggest launches, captivating customer stories, and TK's view on Google Cloud's future. Keep an eye out for some exciting cameos as well!

DEVKEY1 Developer keynote
I'm excited to see Urs Hölzle and Aparna Sinha take the stage. Urs has been at Google for more than 25 years and oversees the design and operation of the servers, networks, and data centers that power our global services. Aparna is the Director of Product at Google Cloud for App Development Platforms. Both are incredibly experienced at building cloud technologies and the very infrastructure that supports our complex operations. You'll hear from the developers who are bringing these innovations to life, Google Cloud's vision for the top three technology trends, and how cloud computing will evolve over the next decade. And don't miss the live Q&A hosted by my teammates Aja Hammerly and Priyanka Vergadia.

Data analytics and databases

SPTL102 Data Cloud: Simply transform with a universal data platform
Welcome to my spotlight session! Gerrit Kazmaier, VP & GM of Databases & Analytics, joins me as we introduce the future of data, analytics, and AI at Google Cloud. In this keynote, we'll focus on how organizations are simplifying and unifying data across transactional, processing, and open source tools. We'll also highlight our latest technology innovations for products including BigQuery, Spanner, Looker, and Vertex AI. Don't miss it, because there will be a live demo. Check out the live Q&A afterwards to interact with the speakers.

DA202 Take me into the ballgame
162 games a year and a treasure trove of player statistics – Major League Baseball is chock-full of data. MLB is reimagining the fan experience with their data cloud. MLB consolidated its infrastructure and migrated to Google Cloud's Anthos, Google Kubernetes Engine, Cloud SQL, BigQuery, and Looker. MLB tracks every moment of every game for an audience on seven continents with Cloud SQL, allowing them to use this valuable data to drive deeper engagement with fans today and in the future. Don't miss this session to hear from the Sr. Director of Software Engineering for Statcast Data, Rob Engel, and Product Marketing for Data Analytics, Aditi Mishra.
LH100 10 billion games, play-by-play insights
This session stood out to me – I, along with many of you, was glued to Netflix's series The Queen's Gambit last year. Few of us realize, however, the surge in online chess playing the show caused. Chess.com saw a rapid spike in traffic, and worked with Google Cloud to create innovative experiences for 70M+ players. This session will dive into how Chess.com provides planet-scale gaming with Google Cloud to support over 10 million live games per day – and draws on its database of 10 billion matches to power real-time, in-game insights for players, and data-driven innovations for its platform.

If you want to hear more about the inner workings and latest for analytics and managed database offerings (like BigQuery, Cloud SQL, and Spanner), check out DA301, DBS205, and DBS203.

AI/ML

AI105 What's new and what's next with infra for AI and ML
You'll learn from Product Managers Chelsie Czop (Peterson) and Omkar Pathak about the new cloud machine learning (ML) infrastructure and accelerator innovations this year. They'll paint a picture of the realm of possibilities that opens up with Google Cloud's ML and GPU capabilities.

DA200 Build an interactive ML app with Looker and Vertex AI
My talented teammates Sara Robinson and Leigha Jarett are known for their work building creative applications for ML and data analytics, like baking recipes and BigQuery reference guides. In this session, they'll go through how to leverage Looker's extension framework to build a custom user experience that empowers business users to explore the results of data science applications built in Google Cloud. Using Vertex AI, they'll deploy a demand forecasting model that exports results into your BigQuery data warehouse. With Looker's prebuilt UI components, data practitioners can easily construct a purpose-built application where end users can enter parameters, run an ML model, visualize the output, and make informed business decisions that impact enterprise strategy and accelerate time to value.

Plus, check out LP106, the ML and AI learning path, to get hands-on with BigQuery, TensorFlow, Cloud Vision, Natural Language API, and more.

Security

SPTL100 The path to invisible security
Here's your chance to listen to our very own Chief Information Security Officer, Phil Venables. He's just been appointed to the US Council of Advisors on Science and Technology for his work in security, and he's teaming up with VP of Security at Google Cloud, Sunil Potti, for this session. They'll discuss how to engineer security directly into your applications and platforms to eliminate entire classes of threats without friction. They'll introduce the invisible security vision, and discuss the key capabilities that can make it a reality.

SEC212 6 layers of GCP data center security
You might remember I made a video about Google's data center security. If not, you might imagine it'd be about lasers, night vision cameras, and military-grade fences. You're not too far off, because security is one of the most critical elements of our data centers' DNA. I am one of the rare Googlers to have visited one. During this session, I'll take you on a journey to the core of a data center, to show you the six layers of physical security designed to thwart unauthorized access.
You'll also hear about how those layers extend into Google Cloud's logical security controls that make Google Cloud one of the most robust enterprise risk management platforms.

If you want to hear more about protected storage, securing the software supply chain, and data encryption, I recommend SEC101, SEC204, SEC207, and SEC300.

Application development

SPTL101 Extend the value of cloud investments anywhere with Google Cloud
Our GM and VP of Product for IaaS, Sachin Gupta, and VP of Product, Application Modernization Platform, Jeff Reed, have been working together to deliver the leading public cloud offering. Based on the growing needs of organizations around the world, Google Cloud is extending cloud services to address more complex and unique use cases – from the data center to the edge. They're going to dive into strategies that can help you modernize your people, processes, and applications to take full advantage of the distributed cloud from Google. Bring your questions for the live Q&A afterwards, SPTL101QA.

SPTL121 Unleash the Innovators!
SVP Urs Hölzle and Sr. Director Jeana Jorgenson are bringing exciting announcements just for developers. This session is a quick recap of the week's developer news – with some "new news" as well. Hear a few candid thoughts from Urs Hölzle on the importance of the developer community, and get a quick rundown of what to expect during Day 3 of Next, Community Day.

LD101 Extend Google Cloud's infrastructure and services anywhere
Show, don't just tell, right? You've probably heard about the multicloud value of Anthos, but let's see it in practice. I'm looking forward to this session because it will be an interactive demo highlighting Google's latest innovations in application modernization and infrastructure in a distributed cloud environment. It'll show you how to modernize apps with cloud-native services (like Google Kubernetes Engine (GKE) Autopilot, Anthos, Cloud Run, and developer tools). And the demo will show how using Google's AI/ML and analytics capabilities at the edge brings data processing closer to the data, creating real-time insights across environments while maintaining security and privacy.

APP202 What's next in Kubernetes
In the beginning, Kubernetes aimed to provide users around the world with the tools to run their applications at scale. Google and the Kubernetes community created a shared vision for a platform with the flexibility to grow and shift, serving the needs of many different business types. While our engineers work within the contributor community to develop new capabilities, GKE has grown accordingly in the areas of multicluster deployments, improvements to support batch or AI and machine learning workloads, and much more. You'll hear the latest features coming to the Kubernetes project that can help you scale your operations.

Plus, check out LP102, the Kubernetes, hybrid, and multicloud learning path. If you're interested in our work with the open source community, check out our interactive session GCD113.

Infrastructure

INF300 Choosing the right VM family type for your workload
I always love sessions that guide you through the best compute option based on your workload – it's a common question we get from the cloud community. With the addition of new VM types like our Tau family, there are additional ways to optimize for both compute power and cost.
If you're wondering whether you're using the best possible cloud compute resource for your workloads, they'll discuss the different Compute Engine machine families in detail and provide guidance on what factors to consider when choosing your virtual machine type.

GCD101 Making infrastructure invisible for application developers
Check out this interactive session to speak with our experts about how storage can take care of failover and disaster recovery with automated data replication, and how networking can remove the complexity of traffic management and load balancing as your user base grows. You'll leave the conversation with answers to your most critical infrastructure questions and a list of capabilities you can leverage right away.

Plus, check out LP100, the cloud infrastructure learning path, to learn Google Cloud's approach to infrastructure and implementing, deploying, migrating, and maintaining applications.

Diversity, equity, and inclusion

Diversity is a critical component of innovation, so we've again included a series of DEI-focused sessions at Next. When we bring people together with a variety of experiences, backgrounds, and beliefs, we can achieve more. So check out sessions like:

SPT106 How to build a diverse workforce that thrives and innovates with psychological safety
SPTL120 A conversation with Vint Cerf and Jim Hogan on disability in tech
Leaders from Accenture, and even one of the founders of the internet, Vint Cerf, will talk about the importance of psychological safety and neurodiversity in the workplace.

Check out my playlist along with other expert-curated playlists on the Next website, and don't forget to build your own. You can also explore the session catalog and click the Filter option at the top right to filter by category, learning level, track, industry/interests, and job role.

See you at #GoogleCloudNext 2021. I'll be active on social media throughout the event, so be sure to share your experience with me @stephr_wong.
Source: Google Cloud Platform