Anthos rising—now easier to use, for more workloads

Today more than ever, customers ask for help addressing two critical business needs: reimagining their application portfolios and driving cost savings. Earlier today, we announced the Google Cloud App Modernization Program, or Google CAMP. We built this program to help you innovate faster, so you can reach your customers with world-class, secure, reliable applications, all while saving on costs. Google CAMP does this with a consistent development and operations experience, tools, best practices, and industry-leading guidance on how to develop, run, operate and secure applications. A key component of Google CAMP is Anthos, our hybrid and multi-cloud modernization platform. In fact, we recently announced BigQuery Omni, a multi-cloud analytics solution, powered by Anthos. And today, building on that momentum, we're excited to share several new Anthos capabilities with you.

Bring AI to hybrid environments

Whether it's image recognition, pattern detection, conversational chatbots, or any number of other emerging use cases for artificial intelligence (AI), organizations are eager to incorporate AI functionality into their offerings. AI models require a lot of data, which more often than not resides in an organization's data center, not in the cloud. Further, many organizations' data is sensitive and must stay on-prem. As a result, you're often forced to rely on fragmented solutions across on-prem and cloud deployments, or to minimize your use of AI entirely. With Anthos, you don't have to make those types of compromises.

Today we're pleased to announce hybrid AI capabilities for Anthos, designed to let you use our differentiated AI technologies wherever your workloads reside. By bringing AI on-prem, you can now run your AI workloads near your data, all while keeping them safe. In addition, hybrid AI simplifies the development process by providing easy access to best-in-class AI technology on-prem.

The first of our hybrid AI offerings, Speech-to-Text On-Prem, is now generally available on Anthos through the Google Cloud Marketplace. Speech-to-Text On-Prem gives you full control over speech data that is protected by data residency and compliance requirements, from within your own data center. At the same time, Speech-to-Text On-Prem leverages state-of-the-art speech recognition models developed by Google's research teams that are more accurate, smaller, and require fewer computing resources to run than existing solutions.

We collaborated with organizations across many industries to design Anthos' hybrid AI capabilities. One customer in particular is Iron Mountain, a global leader in storage and information management services. "Iron Mountain built its InSight product on Google Cloud's AI technology because it was by far the best AI service available. Now with Anthos hybrid AI, we can bring Google's AI technology on site," said Adam Williams, Director, Software Engineering at Iron Mountain. "Anthos is hybrid done right, allowing us to build software quickly in the cloud, and seamlessly deploy it on-premises for applications that have data residency and compliance requirements. Thanks to Anthos we have been able to meet our customers where they are and open up millions of dollars of new opportunities."

You can get started today with Speech-to-Text On-Prem in five supported languages, with more coming soon.
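
If you're curious what calling the on-prem service looks like for a developer, here is a minimal Python sketch that assumes the standard Cloud Speech-to-Text client library pointed at an in-cluster endpoint; the endpoint address, audio file, and exact API version are hypothetical placeholders, not part of the announcement, so check the product documentation for your deployment.

```python
# Hypothetical sketch: transcribe a local audio file against a
# Speech-to-Text On-Prem endpoint running in your Anthos cluster.
# The endpoint address and file name below are placeholders.
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient(
    client_options={"api_endpoint": "speech.example.internal:443"}  # assumed in-cluster endpoint
)

with open("meeting_audio.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```

If the on-prem deployment exposes the same recognition API as the cloud service, as the Marketplace offering is intended to, moving between the two is largely a matter of where the client points.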

Think services-first for more workloads

Many of our customers choose Anthos because of its services-first approach (versus infrastructure-first). Anthos lets you automate those services, allowing you to proactively monitor and catch issues early. It does so with declarative policies that treat "configuration as data," so you can minimize manual errors while maintaining your desired configuration state.

These are some of the reasons that leading global financial services provider Macquarie Bank chose Anthos as its application modernization platform. "Embracing Anthos enables us to move at the speed of now, by absorbing the complexity of building secure and efficient distributed systems," said Richard Heeley, CIO, Banking and Financial Services, Macquarie Bank. "This means we can focus on driving innovation and delivering leading banking experiences for our customers, now and into the future."

We've also been doing more to bring the benefits of this services-first approach to a wider range of workloads. Today we are introducing Anthos attached clusters, which let you manage any Kubernetes cluster with the Anthos control plane, including centralized management for configuration and service mesh capabilities.

We are also excited to share that Anthos for bare metal is now in beta, letting Anthos run on-prem and at edge locations without a hypervisor. Anthos for bare metal provides a lightweight, cost-effective platform that minimizes unnecessary overhead and opens up new cloud and edge use cases. In fact, Google itself is an early adopter of Anthos for bare metal, working towards using it as a platform to run containers internally for our production workloads.

Faster development cycles

Writing and managing production workloads can be labor-intensive. There are many ways Anthos can help your developers, security teams and operators be more productive. Let's take a look at the newest capabilities.

First, we've united our Cloud Code integrated development environment (IDE) plugins with Cloud Run for Anthos. This allows you to build serverless applications directly from IDEs like VS Code and IntelliJ. Supported languages include Java, Node.js, Python and Go. Once you've written your code, the new Cloud Code-Cloud Run emulator lets you quickly validate local changes on your own machine, with automated re-deploys on every saved code change. You can even use this emulator to locally debug your Cloud Run apps. When your code is ready, you can push changes directly to a remote dev environment in the cloud, right from the IDE.

Additionally, Cloud Code now lets you create Kubernetes clusters with Cloud Run for Anthos enabled, right from within your IDE, pre-populating key details like project ID, zone/region, and number of nodes.
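
To make that workflow concrete, here is a minimal Python sketch of the kind of service you might iterate on with the Cloud Code-Cloud Run emulator; the use of Flask and the sample handler are illustrative choices, not prescribed by Cloud Run.

```python
# Minimal illustrative service following the Cloud Run container contract:
# listen on the port provided in the PORT environment variable.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Placeholder handler; replace with your application logic.
    return "Hello from Cloud Run for Anthos!"

if __name__ == "__main__":
    # Cloud Run and the Cloud Code-Cloud Run emulator set PORT;
    # default to 8080 for plain local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

The only contract this sketch relies on is listening on the port supplied in the PORT environment variable, which both Cloud Run and the local emulator provide.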

Expand your security options

We built Anthos with a security-first approach from day one, following principles of least privilege and extending defense-in-depth to your deployments. This simplifies everything from release management to updating and patching. In particular, identity and authentication play a key role in securing your deployments, all the more so in Anthos environments that can span a variety of cloud and on-prem environments. Today, we're announcing Anthos Identity Service, which extends your existing identity solutions to work seamlessly with Anthos workloads. With support for OpenID Connect (generally available on-prem and in beta for Anthos on AWS), you can leverage your existing identity investments and enable consistency across environments. We will be adding support for additional protocols in the coming months.

Then, with the new Anthos security blueprints, you get best practices in a templated format, making it easy to quickly adopt practices like auditing and monitoring, policy enforcement, and locality restrictions. Anthos security blueprints also give you purpose-built solutions to automate governance, compliance, and data residency for regulated industries such as financial services, retail, and the public sector.

Finally, through Google Cloud Marketplace, we've made containerized applications for use cases such as security, analytics, and developer tools easier to access than ever before. Because of this, sales of partner SaaS offerings through the Google Cloud Marketplace have increased 3x since the beginning of 2020.

Take the first step with easier migration

As you look to modernize, the first step is often to migrate specific workloads before you can build on top of them. But moving VM-based workloads to containers can be very complex. You may not even have access to the source code, especially for third-party software, making manual containerization impossible. Today we're also announcing new capabilities to make migrating your workloads to Anthos easier, even ones for which you don't have the source code.

Migrate for Anthos, widely used today as a low-friction path for migrating workloads to GKE, now provides build migration automation using the new CRD-based API to integrate with your custom processes and tooling. This enables several new features:

Support for Anthos deployed on-prem, so that you can convert VMs running on-prem, and keep them there, if you need that flexibility.
Support for Windows containers, now in beta, for anyone looking to start converting their Windows workloads.
Integration into the Google Cloud Console web admin UI, making it easier to monitor ongoing migrations or perform multiple migrations at once.

One of our customers, the national British newspaper The Telegraph, uses Migrate for Anthos to accelerate its modernization. "The Telegraph was running a legacy content management system (CMS) in another public cloud on several instances. Upgrading the actual system or migrating the content to our main website CMS was problematic, but we wanted to migrate it from the public cloud it was on," said Lucian Craciun, Head of Technology, Platforms, The Telegraph. "We found out about Migrate for Anthos and gave it a try, and in about one month we were able to containerize and migrate all of those CMS workloads to GKE. We are already seeing significant savings on infrastructure and reduced day-to-day operational costs."

In addition, we're making it easier for you to migrate workloads from Cloud Foundry, a first-generation cloud application platform. This new migration feature uses Kf on Anthos, which presents developers with a Cloud Foundry-like interface on top of Anthos. With this approach, you get Anthos' operational benefits (such as declarative operations and service mesh) while minimizing disruption for your developers.

More workloads from more places, with more ease

No matter where you run your workloads, whether in Google Cloud, on-prem, in other clouds or at the edge, Anthos provides a consistent platform on which your teams can quickly build great applications that adapt to an ever-changing world.

Over the coming weeks, we will publish deep dives into each of these areas with more detailed information. In the meantime, to learn more about these launches, as well as how to get the most out of Anthos, check out these great sessions going live at Google Cloud Next '20: OnAir this week:
Anthos deep dive: part one
Anthos deep dive: part two
Source: Google Cloud Platform

From the ballpark to the cloud: How MLB is using Anthos

Whether it's calculating batting averages or hot dog sales, data is at the heart of baseball. For Major League Baseball (MLB), the governing body of the sport known as America's National Pastime, processing, analyzing, and ultimately making decisions based on that data is key to running a successful organization, and they've increasingly turned to Google Cloud to help them do it.

MLB supports 30 teams spread across the US and Canada, running workloads in the cloud as well as at the edge with on-premises data centers at each of their ballparks. By using Anthos, they can containerize those workloads and run them in the location that makes the most sense for the application. We sat down with Kris Amy, VP of Technology Infrastructure at Major League Baseball, to learn more.

Eyal Manor: Can you tell us a little bit about MLB and why you chose Google Cloud?

Kris Amy: Major League Baseball is America's pastime. We have millions of fans around the world, and we process and analyze extreme amounts of data. We know Google Cloud has tremendous expertise in containerization, AI and big data. Anthos enables us to take advantage of that expertise whether we're running in Google Cloud, or running on-prem in our stadiums.

Eyal: Why did you choose Anthos, and how is it helping you?

Kris: Anthos is the vehicle we're using to run our applications anywhere, whether that's in a ballpark or in the cloud. We have situations where we have to do computing in the park for latency reasons, such as delivering stats in the stadium, to fans, or to broadcast, or to the scoreboard. Anthos helps us process all that data and get it back to whoever is consuming it. Uniformity across this deployment environment is especially key for our developers. They don't want to know the differences between running in the cloud, or running on-prem in a data center or in one of our stadiums.

To give you an example, if something were to happen during a broadcast at Yankee Stadium, we could run our code across the city at Citi Field where the Mets play and continue broadcasting without interruption. And if we had any issue in any stadium, we can shoot that data up to Google Cloud and process it there.

Eyal: That is really amazing. Can you tell us what this journey looked like for you?

Kris: We started our journey of modernizing our application stack 18 months ago. We previously had various siloed applications, and we were eager to move down this path of containerizing everything and using that as our path forward for deploying applications. From there, we had uniformity across all of our environments, whether that's a miniature data center that we have running in the stadium, or a true data center, or Google Cloud. So we had chosen containers, and we were well down the path, and then we came to the problem of "what do we do once we want to run this in the stadium?"

We saw Google and noticed that Anthos was coming. We got excited because it seemed like the simplest and easiest solution for managing these applications and deploying them regardless of whether they're in the stadium or in the cloud. That journey took us about 12 months, and we're happy to say that as of opening day this year, we've been running applications in our stadiums on Anthos.

Learn more about how MLB is using Google Cloud

There's more to learn from how MLB is using Google Cloud.
Check out their Next OnAir sessions on running Anthos on bare metal and at the edge and their data warehouse modernization journey, read their recent Google Cloud blog post, or see a live demo of how they’re using BigQuery to share subsets of fan data with MLB Clubs.
Source: Google Cloud Platform

Empowering remote learning with Azure Cognitive Services

This blog post was co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.

As schools and organizations around the world prepare for a new school year, remote learning tools have never been more critical. Educational technology, and especially AI, has a huge opportunity to facilitate new ways for educators and students to connect and learn.

Today, we are excited to announce the general availability of Immersive Reader, and shine a light on how new improvements to Azure Cognitive Services can help developers build AI apps for remote education that empower everyone.

Make content more accessible with Immersive Reader, now generally available

Immersive Reader is an Azure Cognitive Service within the Azure AI platform that helps readers read and comprehend text. With today's general availability, developers and partners can add Immersive Reader right into their products, enabling students of all abilities to translate text in over 70 languages, have text read aloud, focus attention through highlighting and other design elements, and more.

Immersive Reader has become a critical resource for distance learning, with more than 23 million people every month using the tool to improve their reading and writing comprehension. Between February and May 2020, when many schools moved to a distance learning model, we saw a 560 percent increase in Immersive Reader usage. As the education community embarks on a new school year in the Fall, we expect to see continued momentum for Immersive Reader as a tool for educators, parents, and students.

With the general availability of Immersive Reader, we are also rolling out the following enhancements:

Immersive Reader SDK 1.1: Updates include support to have a page read aloud automatically, pre-translating content, and more. Learn about SDK updates.
New Neural Text-to-Speech (TTS) languages: Immersive Reader is adding 15 new Neural Text to Speech voices, enabling students to have content read aloud in even more languages. Learn about the new Neural Text to Speech languages.
New Translator languages: Translator is adding five new languages that will also be available in Immersive Reader: Odia, Kurdish (Northern), Kurdish (Central), Pashto, and Dari. Learn about the latest Translator languages, and see the sketch after this list for how a call to one of them might look.
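
As an illustration of how a developer might reach one of these new languages directly, here is a minimal Python sketch against the Translator v3 REST API; the subscription key, resource region, and the "ps" (Pashto) language code are assumptions to verify against the current documentation rather than values from this announcement.

```python
# Hypothetical sketch: translate a short instruction into Pashto ("ps")
# using the Translator v3 REST API. Key, region, and language code are
# placeholders; confirm language codes in the Translator documentation.
import requests

TRANSLATOR_KEY = "YOUR_TRANSLATOR_KEY"
TRANSLATOR_REGION = "YOUR_RESOURCE_REGION"
ENDPOINT = "https://api.cognitive.microsofttranslator.com"

response = requests.post(
    f"{ENDPOINT}/translate",
    params={"api-version": "3.0", "to": "ps"},
    headers={
        "Ocp-Apim-Subscription-Key": TRANSLATOR_KEY,
        "Ocp-Apim-Subscription-Region": TRANSLATOR_REGION,
        "Content-Type": "application/json",
    },
    json=[{"text": "Please open your workbook to page twelve."}],
)
response.raise_for_status()
print(response.json()[0]["translations"][0]["text"])
```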

Today, we're adding new partners who are integrating Immersive Reader to make content more accessible: Code.org and SAFARI Montage.

Code.org is a nonprofit dedicated to expanding access to computer science in schools. To ensure that students of all backgrounds and abilities can access their resources and course content, Code.org is integrating Immersive Reader into their platform.

“We’re thrilled to partner with Microsoft to bring Immersive Reader to the Code.org community. The inclusive capabilities of Immersive Reader to improve reading fluency and comprehension in learners of varied backgrounds, abilities, and learning styles directly aligns with our mission to ensure every student in every school has the opportunity to learn computer science.” – Hadi Partovi, Founder and CEO of Code.org

SAFARI Montage, a leading learning object repository, is integrating Immersive Reader to make it possible for students of any language background or accessibility needs to engage with content, and enable families who don’t speak the language of instruction to be more involved in their students’ learning journeys.  

"Immersive Reader is a crucial support for CPS students and families. During remote learning, particularly for our younger learners, student learning is often supported by parents, guardians, or other caregivers. Since Immersive Reader can be used to translate the student-facing instructions in our digital curriculum, families can support student learning in over 80 languages, making digital learning far more equitable and accessible than ever before! In addition, read-aloud and readability supports are game-changers for diverse learners" – Giovanni Benincasa, UX Manager, Department of Curriculum, Instruction, and Digital Learning, Chicago Public Schools  

With Immersive Reader, all it takes is a single API call to help users boost literacy. To start exploring how to integrate Immersive Reader into your app or service, check out these resources; a short token-acquisition sketch follows the list below:

Software Development Kit (SDK): Immersive Reader SDK. 
Documentation: Immersive Reader documentation. 
Getting started videos: Immersive Reader videos.
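
Server-side, the main integration step is obtaining an Azure AD token that the client-side Immersive Reader SDK uses to launch the reader over your content. Here is a minimal Python sketch of that step, assuming the client-credentials flow described in the quickstarts; the tenant, client ID, and secret are placeholders, and the token endpoint should be confirmed against the current documentation.

```python
# Hypothetical sketch: acquire an Azure AD token for the Immersive Reader
# resource with client credentials. The returned token (plus your Immersive
# Reader resource subdomain) is handed to the front end, which launches the
# reader via the JavaScript SDK. Tenant/client values are placeholders.
import requests

TENANT_ID = "your-azure-ad-tenant-id"
CLIENT_ID = "your-app-client-id"
CLIENT_SECRET = "your-app-client-secret"

def get_immersive_reader_token() -> str:
    resp = requests.post(
        f"https://login.windows.net/{TENANT_ID}/oauth2/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "resource": "https://cognitiveservices.azure.com/",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```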

To see the growing list of Immersive Reader partners and learn more, check out our partners page and Immersive Reader education blog.

Bring online courses to life with speech-enabled apps

With the shift to remote learning, another challenge that educators may face is continuing to drive student engagement.

Text to Speech, a Speech service feature that allows users to convert text to lifelike audio, can facilitate new ways for students to interact with content. In addition to powering features like Read Aloud in Immersive Reader and the Microsoft Edge browser, Text to Speech enables developers to build apps that speak naturally in over 110 voices across more than 45 languages and variants.
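
For example, here is a minimal Python sketch that synthesizes a sentence with a neural voice using the Speech SDK (the azure-cognitiveservices-speech package); the subscription key, region, and voice name are placeholder values, not specific to this announcement.

```python
# Minimal sketch: synthesize speech with a neural voice via the Speech SDK.
# Subscription key, region, and voice name below are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY", region="YOUR_REGION"
)
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"

# With no audio config specified, output plays through the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async(
    "Welcome to today's lesson. Let's get started."
).get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis finished and audio was played.")
```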

With the Audio Content Creation tool, users can more easily bring audiobooks to life and fine-tune audio characteristics like voice style, rate, pitch, and pronunciation to fit their scenarios, no code required. Voices can even be customized for specific characters or personas; the Custom Neural Voice capability makes it possible to build one-of-a-kind voices, starting with 30 minutes of audio. Duolingo, for example, is using the Custom Neural Voice capability to create unique voices to represent different characters in its language courses.

To learn more about how to start creating speech-enabled apps for remote learning, check out the technical Text to Speech blog and other resources:

Demos: Text to Speech. 
Documentation: Text to Speech.
Software Development Kit (SDK): Speech SDK.

Improve productivity and accessibility with transcription and voice commands 

AI can also be a useful tool for more seamless note-taking, making it possible for students and teachers to type with their voice. Transcribe in Word uses Speech to Text in Azure Cognitive Services to automatically transcribe your conversations. Now with speaker diarization, you can get a transcript that identifies who said what, when. 
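
To show what the underlying Speech to Text call looks like for a developer, here is a minimal Python sketch that transcribes a short audio file with the Speech SDK; it demonstrates basic one-shot recognition only, not the speaker diarization used by Transcribe in Word, and the key, region, and file name are placeholders.

```python
# Minimal sketch: transcribe a short WAV file with Speech to Text.
# Subscription key, region, and file path are placeholders; this is
# basic one-shot recognition, not conversation transcription/diarization.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY", region="YOUR_REGION"
)
audio_config = speechsdk.audio.AudioConfig(filename="lecture_clip.wav")

recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config, audio_config=audio_config
)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Transcript:", result.text)
```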

In addition, adding voice enables more seamless experiences in Microsoft 365. Students who have difficulty writing things down can use AI-powered tools in Office not just for dictation but also for voice commands such as adding, formatting, editing, and organizing text. Word uses Language Understanding, an Azure Cognitive Service that lets you add custom natural language understanding to your apps, to make it possible to capture ideas easily. To learn more about Language Understanding and how it is powering voice commands, check out our Language Understanding blog.

For more details on how AI is powering experiences in Microsoft 365, read the Microsoft 365 blog.

Get started today

We can’t wait to see what you’ll build. Get started today with Azure Cognitive Services and an Azure free account.
Source: Azure

Advancing telehealth with Amwell

Today, healthcare organizations are reimagining how care is delivered. While many organizations were already beginning to embrace telehealth at the start of the year, the global COVID-19 pandemic has accelerated this trend virtually overnight; for instance, according to HHS data, Medicare primary care visits delivered virtually grew from less than one percent in February of 2020 to more than 40 percent in April. And telehealth is here to stay: a study from Frost & Sullivan analysts forecasts a "sevenfold growth in telehealth by 2025." At Google Cloud, we are committed to helping the healthcare industry transform to meet today's extraordinary challenges and to build a platform for the future that enables high-quality, efficient, and cost-effective care from anywhere.

A comprehensive, patient-friendly telehealth system is critical to providing high-quality virtual care. Imagine a not-too-distant future in which your visit begins with a customized greeting and relevant information in a digital waiting room. A conversational chatbot agent is immediately available to assist you, in your preferred language, by asking about your symptoms and the reason for your visit, and provides this information to your physician before she enters your virtual exam room. During your appointment, you continue to speak in your preferred language to your physician, while cloud-based artificial intelligence (AI) provides live, translated captioning of the conversation. Before, during, and after the appointment, AI and conversational agents simplify, automate, or offload your providers' routine tasks, such as filling out common intake forms or collecting insurance information, so they are free to focus on you. Your health information, like medications, symptoms, and records from your past visits, is immediately available during your telehealth visit, and afterwards your medical records are updated immediately, privately and securely. Your doctor can quickly share notes, fill prescriptions, send relevant information and schedule a follow-up visit via email.

The same technology that powers this telehealth platform can also enable providers to have better, ongoing monitoring of patients in home health situations, as well as of those managing chronic conditions, by leveraging sophisticated data analytics tools in the cloud to help providers monitor and flag interventions at the right time.

Today, we announced a new partnership with Amwell to help the healthcare industry transform for a world that is more reliant on telehealth, and to ensure that healthcare organizations and providers are equipped with telehealth solutions that provide holistic and secure experiences, support HIPAA compliance, are fully integrated, and enable cohesive, patient-friendly journeys through the healthcare system.

Google Cloud and Amwell will closely partner to bring telehealth solutions to healthcare organizations around the world, leveraging Amwell's telehealth platform running on Google Cloud and integrating Google Cloud's capabilities in areas including artificial intelligence (especially natural language processing and translation services), services aimed at secure handling of healthcare data in the cloud and enabling healthcare data interoperability, as well as collaboration tools like G Suite. We'll work together to bring these solutions to market, helping expand access to virtual care among our mutual customers and the global healthcare industry.
As part of this strategic partnership, Google Cloud will invest $100 million in Amwell to evolve and scale its telehealth portfolio to serve the needs of providers, insurers, and patients. You can read more about our partnership with Amwell here.

Over the coming months and years, patients will expect healthcare organizations to offer a comprehensive, seamless, and friendly virtual care experience. It's critical that organizations are thinking today about building this platform for the future. We're committed to partnering with the healthcare industry to adapt, prepare, and thrive in this new future.
Source: Google Cloud Platform