Coronavirus: Google postpones Android 11 launch
The first beta of Android 11 was originally due in May 2020; the launch event will now take place on June 3. (Android 11, Google)
Source: Golem
The focus of the Qualcomm SoC and 5G modem is on efficiency. (Snapdragon, Smartphone)
Source: Golem
New betas of iOS and watchOS show that, in the future, health data can also be sent to fire departments. (Medicine, API)
Source: Golem
This month brought spring flowers and plenty of adaptations to a work-from-home, virtually powered routine. At Google Cloud, we welcomed news of new certifications, lots of updates and news on conferencing with Google Meet, and security additions. Here's a look at the top stories.
Meet online, securely and at no cost
We announced that Google Meet, our premium video conferencing product, is now free for everyone. Meet's availability is expanding gradually over the next few weeks, and anyone with an email address will be able to use it.
Also last month, we announced that we're extending our offer for all G Suite customers to use advanced Google Meet features for free until Sept. 30. This includes larger meetings for up to 250 participants, live streaming to 100,000 people within your domain, and meeting recording. Along with that news, we heard from customers about how they're using Meet to adapt to work-from-home environments, speed up product launches, and more.
In addition, Meet's new features, launched last month, include some top-requested items: a tiled layout to see up to 16 meeting participants at once, the ability to present a Chrome tab for higher-quality video content with audio, and noise cancellation.
Securing all-virtual meetings
In an almost-entirely-virtual world, securing online interactions is more important than ever. Our approach to security is simple: make products safe by default. We designed Meet to operate on a secure foundation, providing the protections needed to keep our users safe, their data secure, and their information private. Meet video meetings are encrypted in transit, and we're regularly updating safety measures and features to help prevent abuse. Learn more.
To meet new requirements for remote work, businesses can now use BeyondCorp Remote Access. This is a cloud solution based on the zero-trust approach used within Google for almost a decade. BeyondCorp Remote Access lets your employees and extended workforce access internal web apps from just about any device, anywhere—without needing a VPN.
New phishing and malware threats related to COVID-19 have emerged, and our ML models have evolved to understand and filter these threats. We continue to block more than 99.9% of spam, phishing, and malware from reaching our users. Learn more.
Multi-cloud capabilities expand, bring flexibility
Multi-cloud platform Anthos can now support AWS applications, so you can consolidate your ops across on-prem, Google Cloud, and AWS, with Microsoft Azure support coming soon. This brings flexibility to run apps where you want them without adding complexity. Additionally, Anthos now offers deeper support for virtual machines to make cloud management easier.
Understanding API design models
RPC and REST are the two primary models for API design, and there are varying options for implementing modern APIs. This post explores some of the differences and when to choose one or the other, and offers tips on using HTTP for APIs, specifications like OpenAPI, and the benefits of gRPC.
Keep learning from home
All this month, you can explore free cloud learning resources from Qwiklabs and Pluralsight. You'll find cloud basics and courses in in-demand skill areas like data analytics, machine learning, and Kubernetes. Once you sign up, you'll get 30 days of free access.
That's a wrap for April. Stay well and keep in touch on Twitter.
Source: Google Cloud Platform
During times of challenge and uncertainty, businesses across the world must think creatively and do more with less in order to maintain reliable and effective systems for customers in need. In terms of data analytics, it's important to find ways for bootstrapped engineering and ops teams working in unique circumstances to maintain necessary levels of productivity. Balancing the development of modern, high-value streaming pipelines with maintaining and optimizing cost-saving batch workflows is an important goal for a lot of teams. At Google Cloud, we're launching new capabilities to help developers and ops teams easily access stream analytics.
Highlights across these launches include:
Streaming pipelines developed directly within the BigQuery web UI with general availability of Dataflow SQL
Dataflow integrations with AI Platform that allow for simple development of advanced analytics use cases
Enhanced monitoring capabilities with observability dashboards
Built on the autoscaling infrastructure of Pub/Sub, Dataflow, and BigQuery, Google Cloud's streaming platform provisions the resources that engineering and operations teams need to ingest, process, and analyze fluctuating volumes of real-time data to get real-time business insights. We are honored that The Forrester Wave™: Streaming Analytics, Q3 2019 report named Google Cloud a Leader in the space. These launches build on and strengthen the capabilities that drove that recognition.
What's new in stream analytics
The development process for streaming and batch data pipelines is now even easier with these key launches across both Dataflow and Pub/Sub. You can move from idea to pipeline, and from management to iteration, to meet customer needs efficiently.
General availability of Dataflow SQL
Dataflow SQL lets data analysts and data engineers use their SQL skills to develop streaming Dataflow pipelines right from the BigQuery web UI. Your Dataflow SQL pipelines have full access to autoscaling, time-based windowing, a streaming engine, and parallel data processing. You can join streaming data from Pub/Sub with files in Cloud Storage or tables in BigQuery, write results into BigQuery or Pub/Sub, and build real-time dashboards using Google Sheets or other BI tools. There's also a recently added command line interface to script your production jobs with full support for query parameters, and you can rely on the Data Catalog integration and a built-in schema editor for schema management.
Iterative pipeline development in Jupyter notebooks
With notebooks, developers can now iteratively build pipelines from the ground up with AI Platform Notebooks and deploy with the Dataflow runner. Author Apache Beam pipelines step by step by inspecting pipeline graphs in a read-eval-print-loop (REPL) workflow. Available through Google's AI Platform, Notebooks allows you to write pipelines in an intuitive environment with the latest data science and machine learning frameworks so you can develop better customer experiences easily. A short sketch of this workflow appears after the next section.
Share pipelines and scale with flex templates
Dataflow templates allow you to easily share your pipelines with team members and across your organization, or take advantage of many Google-provided templates to implement simple but useful data processing tasks. With flex templates, you can create a template out of any Dataflow pipeline.
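To make the notebook workflow described above concrete, here is a minimal sketch of building an Apache Beam pipeline interactively with Beam's interactive runner. The data and names are hypothetical; the same pipeline code can later be constructed with Dataflow pipeline options and submitted as a Dataflow job.

```python
import apache_beam as beam
import apache_beam.runners.interactive.interactive_beam as ib
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner

# Build the pipeline step by step with the interactive runner (REPL-style).
p = beam.Pipeline(InteractiveRunner())

# Hypothetical in-memory source; in a real notebook this could be a bounded
# sample read from Pub/Sub or Cloud Storage.
events = p | "Create" >> beam.Create(["view", "click", "view", "purchase"])
counts = events | "CountPerEvent" >> beam.combiners.Count.PerElement()

# Materialize and inspect the intermediate PCollection right in the notebook.
ib.show(counts)
```

ib.show() renders the PCollection inline so you can inspect results and iterate before deploying; once the pipeline looks right, constructing it with Dataflow pipeline options runs the same code as a managed Dataflow job.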
General availability of Pub/Sub dead letter topics
Operating reliable streaming pipelines and event-driven systems has gotten simpler with general availability of dead letter topics for Pub/Sub. A common problem in these systems is "dead letters," or messages that cannot be processed by the subscriber application. A dead letter topic allows such messages to be put aside for offline examination and debugging so the rest of the messages can be processed without delays. (A minimal configuration sketch appears at the end of this post.)
Optimize stream data processing with change data capture (CDC)
One way to optimize stream data processing is to focus on working only with data that has changed instead of all available data. This is where change data capture (CDC) comes in handy. The Dataflow team has developed a sample solution that lets you ingest a stream of changed data coming from any kind of MySQL database on versions 5.6 and above (self-managed, on-prem, etc.) and sync it to a dataset in BigQuery using Dataflow.
Integration with Cloud AI Platform
You can now take advantage of an easy integration with AI Platform APIs and access to libraries for implementing advanced analytics use cases. AI Platform and Dataflow capabilities include video clip classification, image classification, natural text analysis, data loss prevention, and a number of other streaming prediction use cases.
Ease and speed shouldn't come just to those building and launching data pipelines, but also to those managing and maintaining them. We've also enhanced the monitoring experience for Dataflow, aimed at further empowering operations teams.
Reduce operations complexity with observability dashboards
Observability dashboards and Dataflow inline monitoring let you directly access job metrics to help with troubleshooting batch and streaming pipelines. You can access monitoring charts with both step-level and worker-level visibility, and set alerts for conditions such as stale data and high system latency.
Getting started with stream analytics is now easier than ever. The first step to begin testing and experimenting is to move some data onto the platform. Take a look at the Pub/Sub Quickstart docs to get moving with real-time ingestion and messaging with Google Cloud.
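For reference, here is a rough sketch of attaching a dead letter topic to a Pub/Sub subscription with a recent version of the google-cloud-pubsub Python client. The project, topic, and subscription names are hypothetical, and the Pub/Sub service account still needs publish permission on the dead letter topic and subscribe permission on the source subscription.

```python
from google.cloud import pubsub_v1

project_id = "my-project"  # hypothetical project
publisher = pubsub_v1.PublisherClient()
subscriber = pubsub_v1.SubscriberClient()

topic_path = publisher.topic_path(project_id, "orders")
dead_letter_topic_path = publisher.topic_path(project_id, "orders-dead-letter")
subscription_path = subscriber.subscription_path(project_id, "orders-sub")

# Messages that fail delivery max_delivery_attempts times are forwarded to the
# dead letter topic instead of being redelivered indefinitely.
dead_letter_policy = pubsub_v1.types.DeadLetterPolicy(
    dead_letter_topic=dead_letter_topic_path,
    max_delivery_attempts=5,
)

with subscriber:
    subscriber.create_subscription(
        request={
            "name": subscription_path,
            "topic": topic_path,
            "dead_letter_policy": dead_letter_policy,
        }
    )
```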
Source: Google Cloud Platform
For years, I've been challenging people who advocate for the potential of artificial intelligence: before you turn to AI as a solution, find a specific problem that needs solving. Now that we're faced with a global pandemic, there is no shortage of immediate, complex problems that need to be solved.
People are coming together to take on the challenges we're facing due to the novel coronavirus, and AI is proving to be a valuable tool. From sifting through huge research datasets to find potential treatments, to more accurately forecasting the spread of the disease, to powering virtual agents that answer questions about COVID-19, AI is helping all kinds of organizations. In this post, we'll look at a few ways we're trying to help out.
Finding answers with the Kaggle Community
In March, the White House Office of Science and Technology Policy announced a collection of 29,000 articles, which has since grown to more than 59,000, that may contain answers to key questions about the virus. It turned to Kaggle, a Google Cloud subsidiary, to call upon its community of more than 4 million data scientists to use AI to help find those answers. Participants have already developed several text and data mining tools to search through this dataset, named COVID-19 Open Research Dataset (CORD-19), to help answer critical questions like "What do we know about COVID-19 risk factors?", "What do we know about the virus' genetics, origin, and evolution?", and more.
That same week, Kaggle doubled down on its own efforts and challenged its community of data scientists in two forecasting competitions: one focused on forecasting the spread of COVID-19 around the world, the other on the spread of the disease within California. Data scientists across the globe are collaborating to help the medical community defeat COVID-19. You can keep up to date with our challenges at kaggle.com/covid19, and see the progress our community is making toward the goals we've discussed here at kaggle.com/covid-19-contributions.
Rapid Response Virtual Agent program
In early April, we launched the Rapid Response Virtual Agents program to help organizations that have been inundated with customer questions about the pandemic. The program helps businesses quickly build and implement a customized Contact Center AI virtual agent to respond to customer questions via chat or voice, giving customers 24/7 support. (A minimal sketch of querying such an agent appears at the end of this post.)
Albertsons Companies
"The pandemic sparked a number of inquiries from our customers, causing a rush of calls and impossibly long wait times. With the Rapid Response Virtual Agent program we were able to quickly set up our virtual agent, answering questions and directing traffic at the first inquiry level, saving us time and money while better servicing our customers' needs."
Cameron Craig, Vice President, Digital Product, Design & Experience
PPP Lending AI Solution
Last week, Google developed the PPP Lending AI Solution to help integrate Google's AI-based document ingestion tools into lenders' existing underwriting components and lending systems to make them more efficient. The PPP Lending AI Solution has three components, each of which can be used individually or in combination:
The Loan Processing Portal is a web-based application that lets lending agents and/or loan applicants create, submit, and view the status of their PPP loan application.
The Document AI PPP Parser API enables lenders to use AI to extract structured information from PPP loan documents submitted by applicants.
This component is available at no cost through June 30, 2020.
Loan Analytics enables lenders to quickly onboard historical loan data, assist with the de-identification and anonymization of sensitive information, store information securely, and perform data analytics on this historical loan data.
We've always known that one of AI's great strengths is helping solve complex problems, and with the pandemic we're faced with a particularly challenging one. We'll continue to build and deploy our AI capabilities to help during this time, and to help customers solve their trickiest problems into the future.
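To illustrate the Rapid Response Virtual Agent program described above: Contact Center AI virtual agents are built on Dialogflow, so a basic chat integration amounts to sending user text to the agent and returning its reply. The sketch below is only an assumption-laden example using the Dialogflow Python client; the project ID, session ID, and question are hypothetical.

```python
from google.cloud import dialogflow


def ask_agent(project_id: str, session_id: str, text: str, language_code: str = "en-US") -> str:
    """Send one user utterance to a Dialogflow agent and return its text reply."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text


# Hypothetical usage:
# print(ask_agent("my-covid19-agent-project", "web-session-123", "What are your store hours?"))
```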
Source: Google Cloud Platform
This post is part 2 of a two-part series about how organizations are using Azure Cosmos DB to meet real world needs and the difference it’s making to them. In part 1, we explored the challenges that led service developers for Minecraft Earth to choose Azure Cosmos DB and how they’re using it to capture almost every action taken by every player around the globe—with ultra-low latency. In part 2, we examine the solution’s workload and how Minecraft Earth service developers have benefited from building it on Azure Cosmos DB.
Geographic distribution and multi-region writes
Minecraft Earth service developers used the turnkey geographic distribution feature in Azure Cosmos DB to achieve three goals: fault tolerance, disaster recovery, and minimal latency—the latter achieved by also using the multi-master capabilities of Azure Cosmos DB to enable multi-region writes. Each supported geography has at least two service instances. For example, in North America, the Minecraft Earth service runs in the West US and East US Azure regions, with other components of Azure used to determine which is closer to the user and route traffic accordingly.
Nathan Sosnovske, a Senior Software Engineer on the Minecraft Earth services development team, explains:
“With Azure available in so many global regions, we were able to easily establish a worldwide footprint that ensures a low-latency gaming experience on a global scale. That said, people mostly travel within one geography, which is why we have multi-master writes set up between all of the service instances in each geography. That’s not to say that a player who lives in San Francisco can’t travel to Europe and still play Minecraft Earth—it’s just that we’re using a different mechanism to minimize round-trip latency in such cases.”
Request units per second (RU/s) consumption
In Azure Cosmos DB, request units per second (RU/s) is the “currency” used to reserve guaranteed database throughput. For Minecraft Earth, a typical write request consumes about 10 RUs, with an additional 2-3 RUs used for background processing of the append-only event log, which is driven by Azure Service Bus.
“We’ve found that our RU/s usage scales quite linearly; we only need to increase capacity when we have a commensurate increase in write requests per second. At first, we thought we would need more throughput, but it turned out there was a lot of optimization to be done,” says Sosnovske. “Our original design handled request volumes and complexity relatively well, but it didn’t handle the case where the system would shard—that is, physically repartition itself internally—because of overall data volumes.”
The reason was that allocated RU/s are distributed equally across physical partitions, and the physical partition holding the most current data was running much hotter than the rest.
“Fortunately, because our system is modeled as an append-only log that gets materialized into views for the client, we very rarely read old data directly from Azure Cosmos DB,” explains Sosnovske. “Our data model was flexible enough to allow us to archive events to cold storage after they were processed into views, and then delete them from Azure Cosmos DB using its Time to Live feature.”
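The Time to Live cleanup described here is a container-level setting. As a rough sketch, shown with the Azure Cosmos DB Python SDK for brevity (the Minecraft Earth service itself uses the .NET SDK), with hypothetical account, container, and partition key names:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Hypothetical account, database, and container names.
client = CosmosClient(url="https://my-account.documents.azure.com:443/", credential="<account-key>")
database = client.create_database_if_not_exists(id="minecraft-earth")

# default_ttl is in seconds: documents older than 7 days are deleted
# automatically, so processed events age out without explicit cleanup jobs.
events = database.create_container_if_not_exists(
    id="player-events",
    partition_key=PartitionKey(path="/playerId"),
    default_ttl=7 * 24 * 60 * 60,
)
```

Individual documents can also override this with their own ttl property, which is the mechanism behind the compliance workflow described further below.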
Today, with the service’s current architecture, Sosnovske isn’t worried about scalability at all.
“During development, we tested the scalability of Azure Cosmos DB up to one million RU/s, and it delivered that throughput without a problem,” Sosnovske says.
Initial launch of Minecraft Earth
Minecraft Earth was formally released in one geography in October 2019, with its global rollout across all other geographies completed over the following weeks. For Minecraft fans, Minecraft Earth provides a means of experiencing the game they know and love at an entirely new level, in the world of augmented reality.
And for Sosnovske and all the other developers who helped bring Minecraft Earth to life, the opportunity to extend one of the most popular games of all time into the realm of augmented reality has been equally rewarding.
“A lot of us are gamers ourselves and jumped on the opportunity to be a part of it all,” Sosnovske recalls. “Looking back, everything went pretty well—and we’re all quite satisfied with the results.”
Benefits of using Azure Cosmos DB
Although Azure Cosmos DB is just one of several Azure services that support Minecraft Earth, it plays a pivotal role.
“I can’t think of another way we could have delivered what we did without building something incredibly complex completely from scratch,” says Sosnovske. “Azure Cosmos DB provided all the functionality we needed, including low latency, global distribution, multi-master writes, and more. All we had to do was properly put it to use.”
Specific benefits of using Azure Cosmos DB to build the Minecraft Earth service included the following:
Easy adoption and implementation. According to Sosnovske, Azure Cosmos DB was easy to adopt.
“Getting started with Azure Cosmos DB was incredibly easy, especially within the context of the .NET ecosystem,” Sosnovske says. “We simply had to install the NuGet package and point it at the proper endpoint. Documentation for the service is very thorough; we haven’t had any major issues due to misunderstanding how the SDK works.”
Zero maintenance. As part of Microsoft Azure, Azure Cosmos DB is a fully managed service, which means that nobody on the Minecraft Earth services team needs to worry about patching servers, maintaining backups, data center failures, and so on.
“Not having to deal with day-to-day operations is a huge bonus,” says Sosnovske. “However, this is really a benefit of building on Azure in general.”
Guaranteed low latency. A big reason developers chose Azure Cosmos DB was because it provides a guaranteed single-digit (<10ms) latency SLA for reads and writes at the 99th percentile, at any scale, anywhere in the world. In comparison, Table storage latency would have been higher—with no guaranteed upper bound.
“Azure Cosmos DB is delivering as promised, in that we’re seeing an average latency of 7 milliseconds for reads,” says Sosnovske.
Elastic scalability. Thanks to the elastic scalability provided by Azure Cosmos DB, the game enjoyed a frictionless launch.
“At no point was Azure Cosmos DB the bottleneck in scaling our service,” says Sosnovske. “We’ve done a lot of work to optimize performance since initial release and knowing that we wouldn’t hit any scalability limits as we did that work was a huge benefit. We may have paid a bit more for throughput than we had to at first, but that’s a lot better than having a service that can’t keep up with growth in user demand.”
Turnkey geographic distribution. With Azure Cosmos DB, geographic distribution was a trivial task for Minecraft Earth service developers. Adjustments to provisioned throughput (in RU/s) are just as easy because Azure Cosmos DB transparently performs the necessary internal operations across all the regions, continuing to provide a single system image.
“Turnkey geo-distribution was a huge benefit,” says Sosnovske. “We did have to think a bit more carefully about how to model our system when turning on multi-master support, but it was orders of magnitude less work than solving the problem ourselves.”
Compliance. Through their use of Time-to-Live within Azure Cosmos DB, developers can safely store location-based gameplay data for short periods of time without having to worry about violating compliance mandates like Europe’s General Data Protection Regulation (GDPR).
“It lets us drive workflows like ‘This player should only be able to redeem this location once in a given period of time,’ after which Azure Cosmos DB automatically cleans up the data within our set TTL,” explains Sosnovske.
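Continuing the container sketch from earlier in this post (hypothetical ids and field names), a per-item ttl property implements exactly this kind of short-lived record; it requires Time to Live to be enabled on the container, as shown above.

```python
# Record a location redemption that Azure Cosmos DB deletes automatically
# after 24 hours; while the document exists, game logic can reject a second
# redemption of the same location by the same player.
events.upsert_item({
    "id": "redeem:player-123:location-456",  # hypothetical ids
    "playerId": "player-123",
    "type": "locationRedemption",
    "ttl": 24 * 60 * 60,                     # per-item TTL override, in seconds
})
```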
In summarizing his experience with Azure Cosmos DB, Sosnovske says it was quite positive.
“Azure Cosmos DB is highly reliable, easy to use after you take the time to understand the basic concepts, and, best of all, it stays out of the way when you’re writing code. When junior developers on my team are working on features, they don’t need to think about the database or how data is stored; they can simply write code for a domain and have it just work.”
Get started with Azure Cosmos DB
Visit Azure Cosmos DB.
Learn more about Azure for Gaming.
Source: Azure
This post is part 1 of a two-part series about how organizations use Azure Cosmos DB to meet real world needs and the difference it’s making to them. In part 1, we explore the challenges that led service developers for Minecraft Earth to choose Azure Cosmos DB and how they’re using it to capture almost every action taken by every player around the globe—with ultra-low latency. In part 2, we examine the solution’s workload and how Minecraft Earth service developers have benefited from building it on Azure Cosmos DB.
Extending the world of Minecraft into our real world
You’ve probably heard of the game Minecraft, even if you haven’t played it yourself. It’s the best-selling video game of all time, having sold more than 176 million copies since 2011. Today, Minecraft has more than 112 million monthly players, who can discover and collect raw materials, craft tools, and build structures or earthworks in the game’s immersive, procedurally generated 3D world. Depending on game mode, players can also fight computer-controlled foes and cooperate with—or compete against—other players.
In May 2019, Microsoft announced the upcoming release of Minecraft Earth, which began its worldwide rollout in December 2019. Unlike preceding games in the Minecraft franchise, Minecraft Earth takes things to an entirely new level by enabling players to experience the world of Minecraft within our real world through the power of augmented reality (AR).
For Minecraft Earth players, the experience is immediately familiar—albeit deeply integrated with the world around them. For developers on the Minecraft team at Microsoft, however, the delivery of Minecraft Earth—especially the authoritative backend services required to support the game—would require building something entirely new.
Nathan Sosnovske, a Senior Software Engineer on the Minecraft Earth services development team, explains:
“With vanilla Minecraft, while you could host your own server, there was no centralized service authority. Minecraft Earth is based on a centralized, authoritative service—the first ‘heavy’ service we’ve ever had to build for the Minecraft franchise.”
In this case study, we’ll look at some of the challenges that Minecraft Earth service developers faced in delivering what was required of them—and how they used Azure Cosmos DB to meet those needs.
The technical challenge: Avoiding in-game lag
Within the Minecraft Earth client, which runs on iOS-based and Android-based AR-capable devices, almost every action a player takes results in a write to the core Minecraft Earth service. Each write is a REST POST that must be immediately accepted and acknowledged to avoid any noticeable in-game lag.
“From a services perspective, Minecraft Earth requires low-latency writes and medium-latency reads,” explains Sosnovske. “Writes need to be fast because the client requires confirmation on each one, such as might be needed for the client to render—for example, when a player taps on a resource to see what’s in it, we don’t want the visuals to hang while the corresponding REST request is processed. Medium-latency reads are acceptable because we can use client-side simulation until the backing model behind the service can be updated for reading.”
To complicate the challenge, Minecraft Earth service developers needed to ensure low-latency writes regardless of a player’s location. This required running copies of the service in multiple locations within each geography where Minecraft Earth would be offered, along with built-in intelligence to route the Minecraft Earth client to the nearest location where the service is deployed.
“Typical network latency between the east and west coasts of the US is 70 to 80 milliseconds,” says Sosnovske. “If a player in New York had to rely on a service running in San Francisco, or vice versa, the in-game lag would be unacceptable. At the same time, the game is called Minecraft Earth—meaning we need to enable players in San Francisco and New York to share the same in-game experience. To deliver all this, we need to replicate the service—and its data—in multiple, geographically distributed datacenters within each geography.”
The solution: An event sourcing pattern based on Azure Cosmos DB
To satisfy their technical requirements, Minecraft Earth service developers implemented an event sourcing pattern based on Azure Cosmos DB.
“We originally considered using Azure Table storage to store our append-only event log, but its lack of any SLAs for read and write latencies made that unfeasible,” says Sosnovske. “Ultimately, we chose Azure Cosmos DB because it provides 10 millisecond SLAs for both reads and writes, along with the global distribution and multi-master capabilities needed to replicate the service in multiple locations within each geography.”
With an event sourcing pattern, instead of just storing the current state of the data, the Minecraft Earth service uses an append-only data store that’s based on Azure Cosmos DB to record the full series of actions taken on the data—in this case, mapping to each in-game action taken by the player. After immediate acknowledgement of a successful write is returned to the client, queues that subscribe to the append-only event store handle postprocessing and asynchronously apply the collected events to a domain state maintained in Azure Blob storage. To optimize things further, Minecraft Earth developers combined the event sourcing pattern with domain-driven design, in which each app domain—such as inventory items, character profiles, or achievements—has its own event stream.
“We modeled our data as streams of events that are stored in an append-only log and mutate an in-memory model state, which is used to drive various client views,” says Sosnovske. “That cached state is maintained in Azure Blob storage, which is fast enough for reads and helps to keep our request unit costs for Azure Cosmos DB to a minimum. In many ways, what we’ve done with Azure Cosmos DB is like building a write cache that’s really, really resilient.”
Diagram: the event sourcing pattern based on Azure Cosmos DB.
Putting Azure Cosmos DB in place
In putting Azure Cosmos DB to use, developers had to make a few design decisions:
Azure Cosmos DB API. Developers chose to use the Azure Cosmos DB Core (SQL) API because it offered the best performance and the greatest ease of use, along with other needed capabilities.
“We were building a system from scratch, so there was no need for a compatibility layer to help us migrate existing code,” Sosnovske explains. “In addition, some Azure Cosmos DB features that we depend on—such as TransactionalBatch—are only supported with the Core (SQL) API. As an added advantage, the Core (SQL) API was really intuitive, as our team was already familiar with SQL in general.”
Read Introducing TransactionalBatch in the .NET SDK to learn more.
Partition key. Developers ultimately decided to logically partition the data within Azure Cosmos DB based on users.
“We originally partitioned data on users and domains—again, examples being inventory items or achievements—but found that this breakdown was too granular and prevented us from using database transactions within Azure Cosmos DB to their full potential,” says Sosnovske.
Consistency level. Of the five consistency levels supported by Azure Cosmos DB, developers chose session consistency, which they combined with heavy etag checking to ensure that data is properly written.
“This works for us because of how we store data, which is modeled as an append-only log with a head document that serves as a pointer to the tail of the log,” explains Sosnovske. “Writing to the database involves reading the head document and its etag, deriving the N+1 log ID, and then constructing a transactional batch operation that overwrites the head pointer using the previously read etag and creates a new document for the log entry. In the unlikely case that the log has already been written, the etag check and the attempt to create a document that already exists will result in a failed transaction. This happens regardless of whether another request ‘beats’ us to writing or our request reads slightly out-of-date data.”
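The service itself does this with TransactionalBatch in the .NET SDK so that both operations commit atomically. As a rough, non-authoritative Python sketch of the same idea (hypothetical document shapes; the two writes are shown as separate calls here, without the batch):

```python
from azure.core import MatchConditions
from azure.cosmos import exceptions


def append_event(container, player_id, payload):
    # The head document points at the current tail of the player's event log.
    head = container.read_item(item=f"head:{player_id}", partition_key=player_id)
    next_seq = head["tail"] + 1

    try:
        # Create the new log entry; this fails if another writer already created it.
        container.create_item({
            "id": f"event:{player_id}:{next_seq}",
            "playerId": player_id,
            "seq": next_seq,
            "payload": payload,
        })
        # Advance the head pointer only if nobody else modified it (etag check).
        head["tail"] = next_seq
        container.replace_item(
            item=head["id"],
            body=head,
            etag=head["_etag"],
            match_condition=MatchConditions.IfNotModified,
        )
    except exceptions.CosmosHttpResponseError:
        # Conflict (409) or failed precondition (412): another request won the
        # race or we read stale data, so the caller should retry from the read.
        raise
```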
In part 2 of this series, we examine the solution’s current workload and how Minecraft Earth service developers have benefited from building it on Azure Cosmos DB.
Get started with Azure Cosmos DB
Visit Azure Cosmos DB.
Learn more about Azure for Gaming.
Source: Azure
Next-gen games such as Godfall and Hellblade 2 are already based on the Unreal Engine, and more titles are likely to follow soon. (Unreal Engine, Sony)
Source: Golem
More and more US users are exceeding the 1 TB monthly data cap. The providers don't dare to charge extra fees for it. (FCC, FTC)
Source: Golem