Google’s Virtual Desktop of the Future

Did you know that most Google employees rely on virtual desktops to get their work done? This represents a paradigm shift in client computing at Google, and it was especially critical during the pandemic and the remote work revolution. We’re excited to continue enabling our employees to be productive, anywhere! This post covers the history of virtual desktops at Google and details the numerous benefits we have seen from their implementation.

Background

In 2018, Google began developing virtual desktops in the cloud. A whitepaper was published detailing how virtual desktops, running on Google Compute Engine, were created with Google Cloud as an alternative to physical workstations. Further research showed that it was feasible to move our physical workstation fleet to these virtual desktops in the cloud. The research began with user experience analysis, comparing employee satisfaction with cloud workstations against physical desktops. Researchers found that satisfaction with cloud desktops was higher than with their physical counterparts! This was a monumental moment for cloud-based client computing at Google, and the discovery led to additional analyses of Compute Engine to determine whether it could become our preferred (virtual) workstation platform of the future.

Today, Google’s internal use of virtual desktops has increased dramatically. Employees all over the globe use a mix of virtual Linux and Windows desktops on Compute Engine to complete their work. Whether an employee is writing code, accessing production systems, troubleshooting issues, or driving productivity initiatives, virtual desktops provide the compute they need to get their work done. Access is simple: some employees connect to their virtual desktop instances via Secure Shell (SSH), while others use Chrome Remote Desktop, a graphical access tool.
In addition to simplicity and accessibility, Google has realized a number of benefits from virtual desktops. We’ve seen an enhanced security posture, a boost to our sustainability initiatives, and a reduction in the maintenance effort associated with our IT infrastructure. All of these improvements were achieved while improving the user experience compared to our physical workstation fleet.

[Image: Example of a Google data center]

Analyzing Cloud vs. Physical Desktops

Let’s look deeper into the analysis Google performed to compare cloud virtual desktops and physical desktops. Researchers compared the two on five core pillars: user experience, performance, sustainability, security, and efficiency.

User Experience

Before the transition to virtual desktops got underway, user experience researchers wanted to know more about how they would affect employee happiness. They discovered that employees embraced the benefits virtual desktops offered: freeing up valuable desk space, providing an always-on, always-available compute experience accessible from anywhere in the world, and reducing maintenance overhead compared to physical desktops.

Performance

From a performance perspective, cloud desktops are simply better than physical desktops. For example, running on Compute Engine makes it easy to spin up on-demand virtual instances with predictable compute and performance, a task that is significantly more difficult with a physical workstation vendor. Virtual desktops rely on a mix of virtual machine (VM) families that Google selected based on the performance needs of our users. These range from Compute Engine E2 high-efficiency instances, which employees might use for day-to-day tasks, to higher-performance N2/N2D instances, which employees might use for more demanding machine learning jobs. Compute Engine offers a VM shape for practically any computing workflow.
Additionally, employees no longer have to worry about machine upgrades (to increase performance, for example) because our entire fleet of virtual desktops can be upgraded to new shapes (with more CPU and RAM) with a single config change and a simple reboot, all within a matter of minutes. Plus, Compute Engine continues to add features and new machine types, which means our capabilities only continue to grow in this space.

Sustainability

Google cares deeply about sustainability and has been carbon neutral since 2007. Moving from physical desktops to virtual desktops on Compute Engine brings us closer to Google’s sustainability goal of a net-neutral desktop computing fleet. Our internal facilities team has praised virtual desktops as a win for future workspace planning, because a reduction in physical workstations could also mean a reduction in first-time construction costs for new buildings, significant (up to 30%) campus energy reductions, and further reductions in the HVAC and circuit-size requirements at our campuses. Lastly, a reduction in physical workstations also reduces physical e-waste and the carbon associated with transporting workstations from their factory of origin to office locations. At Google’s scale, these changes add up to an immense sustainability win.

Security

By their very nature, virtual desktops remove the opportunity for a bad actor to exfiltrate data from, or otherwise compromise, physical desktop hardware, since there is no desktop hardware to compromise in the first place. Attacks such as USB attacks, evil maid attacks, and similar techniques that require direct hardware access become worries of the past. Additionally, the transition to cloud-based virtual desktops brings an enhanced security posture through Google Cloud’s myriad security features, including Confidential Computing, vTPMs, and more.
Efficiency

In the past, it was not uncommon for employees to spend days waiting for IT to deliver new machines or fix physical workstations. Today, cloud-based desktops can be created and resized on demand. They are always accessible and virtually immune to maintenance-related issues. IT no longer has to deal with concerns like warranty claims, break-fix issues, or recycling. These time savings let IT focus on higher-priority initiatives while reducing their workload. In an enterprise the size of Google, these efficiency wins add up quickly.

Considerations to Keep in Mind

Although Google has seen significant benefits from virtual desktops, there are some considerations to keep in mind before deciding whether they are right for your enterprise. First, migrating to a virtual fleet requires a consistently reliable and performant client internet connection. For remote and global employees, it’s important that they’re located geographically near a Google Cloud region (to minimize latency). Additionally, there are cases where physical workstations are still considered vital: users who need USB or other direct I/O access for testing and debugging hardware, and users with ultra-low-latency graphics, video editing, or CAD simulation needs. Finally, to ensure interoperability between these virtual desktops and the rest of our computing fleet, we did have to perform additional engineering work to integrate our asset management and other IT systems with the virtual desktops. Whether your enterprise needs such features and integrations should be carefully analyzed before considering a solution like this.
However, should you ultimately conclude that cloud-based desktops are the solution for your enterprise, we’re confident you’ll realize many of the benefits we have!

Tying It All Together

Although moving Google employees to virtual desktops in the cloud was a significant engineering undertaking, the benefits have been just as significant. Making this switch has boosted employee productivity and satisfaction, enhanced security, increased efficiency, and provided noticeable improvements in performance and user experience. In short, cloud-based desktops are helping us transform how Googlers get their work done.

During the pandemic, we saw the benefits of virtual desktops at a critical time. Employees had access to their virtual desktops from anywhere in the world, which kept our workforce safer and reduced transmission vectors for COVID-19. We’re excited for a future where more and more of our employees are computing in the cloud as we continue to embrace the work-from-anywhere model and continue to add new features and enhanced capabilities to Compute Engine!
Source: Google Cloud Platform

Azure Storage Mover: A managed migration service for Azure Storage

File storage is a critical part of any organization’s on-premises IT infrastructure. As organizations migrate more of their applications and user shares to the cloud, they often face challenges in migrating the associated file data. Having the right tools and services is essential to successful migrations.

Across workloads, there can be a wide range of file sizes, counts, types, and access patterns. In addition to supporting a variety of file data, migration services must minimize downtime, especially on mission-critical file shares.

In February 2022, we launched the Azure file migration program, which provides no-cost migrations to our customers via a choice of storage migration partners.

Today, we are adding another choice for file migration with the preview launch of Azure Storage Mover, which is a fully managed, hybrid migration service that makes migrating files and folders into Azure a breeze.

The key capabilities of the Azure Storage Mover preview are:

NFS share to Azure blob container

With this preview release, we focus on the migration of an on-premises network file system (NFS) share to an Azure blob container. Storage Mover will support many additional source and target combinations over the coming months.

Cloud-driven migrations

Managing copy jobs at scale without a coordinating service can be time consuming and error-prone. Individual jobs have to be monitored and any errors resolved. It’s hard to maintain comprehensive oversight to ensure a complete and successful migration of your data.

With Azure Storage Mover, you can express your migration plan in Azure and, when you are ready, conveniently start and track migrations right from the Azure portal, PowerShell, or CLI. This allows you to use Azure Storage Mover for a one-time migration project or for any repeated data movement needs.

Azure Storage Mover is a hybrid service with migration agents that you’ll deploy close to your source storage. All agents can be managed from the same place in Azure, even if they are deployed across the globe.

Scale and performance

Many aspects contribute to a high-performance migration service. Fast data movement through the Azure Storage REST protocol and a clear separation of the management path from the data path are among the most important. Each agent will send your files and folders directly to the target storage in Azure.

Sending the data directly to the target optimizes the performance of your migration, because the data doesn’t need to be processed through a cloud service or routed through a different Azure region from the one where the target storage is deployed. For example, this optimization is key for migrations from geographically diverse branch offices, which will likely target Azure Storage in their own region.

What’s next for Storage Mover?

There are many steps in a cloud migration that need to happen before the first byte can be copied. A deep understanding of your data estate is essential to a balanced cloud solution design for your workloads.

When we combine that with a strategy to minimize downtime, and to manage and monitor migration jobs at scale, then we’ve arrived at our vision for the Storage Mover service. The roadmap for this vision includes:

Support for more sources and Azure Storage targets.
More options to tailor a migration to your needs.
Automatically loading possible sources into the service. That’s more than just convenience; it enables large-scale migrations and reduces mistakes from manual input.
Deep insights about selected sources for a sound cloud solution design.
Provisioning target storage automatically based on your migration plan.
Running post-migration tasks such as data validation, enabling data protection, and completing the migration of the rest of the workload.

Learn more

Find out more with our service overview.
Learn how to deploy Azure Storage Mover.
Explore Storage Mover in the Azure portal.
Learn about Storage Mover PowerShell.

Source: Azure

Implement User Authentication Into Your Web Application Using SuperTokens

This article was co-authored by Advait Ruia, CEO at SuperTokens.

Authentication directly affects the UX, dev experience, and security of any app. Authentication solutions ensure that sensitive user data is protected and only owners of this data have access to it. Although authentication is a vital part of web services, building it correctly can be time-consuming and expensive. For a personal project, a simple email/password solution can be built in a day, but the security and reliability requirements of production-ready applications add additional complexities. 

While there are a lot of resources available online, it takes time to go through all the content for every aspect of authentication (and even if you do, you may miss important information). And it takes even more effort to make sure your application is up to date with security best practices. If you’re going to move quickly while still meeting high standards, you need a solution that has the right level of abstraction, gives you maximum control, is secure, and is simple to use — just like if you build it from scratch, but without spending the time to learn, build, and maintain it. 

Meet SuperTokens

SuperTokens is an open-source authentication solution. It provides an end-to-end solution to easily implement the following features:

Support for popular login methods:

Email/password

Passwordless (OTP or magic link based)

Social login through OAuth 2.0

Role-based access control

Session management

User management

Option to self-host the SuperTokens core or use the managed service

SDKs are available for popular languages and front-end frameworks such as Node.js, React.js, React Native, Vanilla JS, and more.

The architecture of SuperTokens

SuperTokens’ architecture is optimized to add secure authentication for your users without compromising on user and developer experience. It consists of three building blocks:

Frontend SDK: The frontend SDK is responsible for rendering the login UI and managing authentication flows and user sessions. There are SDKs for Vanilla JS (Vue / Angular / JS), ReactJS, and React Native.

Backend SDK: The backend SDK provides APIs for sign-up, sign-in, sign-out, session refreshing, etc. Your frontend will talk to these APIs, which are exposed on the same domain as your application’s APIs. Available SDKs: Node.js, Python, and GoLang.

SuperTokens Core: The HTTP service that implements the core authentication logic and database operations. It is responsible for interfacing with the database and is queried by the backend SDK for any operation that requires it.

Architecture diagram of a self-hosted core.

To learn more about the SuperTokens architecture, watch this video.

What’s unique about SuperTokens?

Here are some features that set SuperTokens apart from other user-authentication solutions:

SuperTokens is easy to set up and offers quick-start guides specific to your use case.

It’s open source, which means you can self-host the SuperTokens core and have control over user data. When you self-host the SuperTokens core, there are no usage limits — it can be used for free, forever.

It has low vendor lock-in since users have complete control over how SuperTokens works and where their data is stored.

The frontend of SuperTokens is highly customizable. The authentication UI and authentication flows can be tailored to your use case, and the SuperTokens frontend SDK also offers helper functions for users who are looking to build their own custom UI.

SuperTokens integrates natively into your frontend and API layer. This means you have complete control over authentication flows. Through overrides, you can add analytics, add custom logic, or completely change authentication flows to fit your use case.

Why run SuperTokens in Docker Desktop?

Docker Extensions help you build and integrate software applications into your daily workflows. With the SuperTokens extension, you get a simple way to quickly deploy SuperTokens.

Once the extension is installed and started, you’ll have a running SuperTokens core application. The extension allows you to connect to your preferred database, set environment variables, and get your core connected to your backend.

The SuperTokens extension speeds up the process of getting started with SuperTokens and, over time, we hope to make it the best place to manage the SuperTokens core.

Getting started with SuperTokens 

Step 1: Pick your authentication method

Your first step is picking the authentication strategy, or recipe, you want to implement in your applications:

Email Password

Social Login & Enterprise SSO

Passwordless (with SMS or email)

You can find user guides for all supported recipes here.

Step 2: Integrate with the SuperTokens frontend and backend SDKs

After picking your recipe, you can start integrating the SuperTokens frontend and backend SDKs into your tech stack.

For example, if you want both email password and social authentication methods in your application, you can use this guide to initialize SuperTokens in your frontend and backend.

Step 3: Connect to the SuperTokens Core

The final step is setting up the SuperTokens core. SuperTokens offers a managed service to get started quickly, but today we’re going to take a look at how you can self-host and manage the SuperTokens core using the SuperTokens Docker extension.

Running the SuperTokens core from Docker Desktop

Prerequisites: Docker Desktop 4.8 or later

Hop into Docker Desktop and confirm that the Docker Extensions feature is enabled: go to Settings > Extensions and check the “Enable Docker Extensions” box.

Setting up the extension

Step 1: Clone the SuperTokens extension

Run this command to clone the extension:

git clone git@github.com:supertokens/supertokens-docker-extension.git

Step 2: Follow the instructions in the README.md to set up the SuperTokens Extension

Build the extension:

make build-extension

Add the extension to Docker Desktop:

docker extension install supertokens/supertokens-docker-extension:latest

Once the extension is added to Docker Desktop, you can run the SuperTokens core.

Step 3: Select which database you want to use to persist user data.

SuperTokens currently supports MySQL and PostgreSQL. Choose which Docker image to load.

Step 4: Add your database connection URI

You’ll need to create a database SuperTokens can write to. Follow this guide to see how to do this. If you don’t provide a connection URI, SuperTokens will run with an in-memory database.

In addition to the connection URI, you can add environment variables to the Docker container to customize the core.

Step 5: Run the Docker container

Select “Start docker container” to start the SuperTokens core. This will start the core on port 3567. You can ping “http://localhost:3567/hello” to check whether the core is running successfully.

Step 6: Update the connection URI in your backend to “http://localhost:3567”

(Note: This example code snippet is for Node.js, but if you’re using Python or Golang, a similar change should be made. You can find the guide on how to do that here.)
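For reference, the Node.js change looks roughly like the following. This is a configuration sketch of the supertokens-node SDK’s init call; the appInfo values and recipe list are placeholders you would replace with your own:

```javascript
// Sketch: pointing the supertokens-node backend SDK at a self-hosted core.
// The appInfo values below are placeholders for your own application.
const supertokens = require("supertokens-node");
const Session = require("supertokens-node/recipe/session");

supertokens.init({
  supertokens: {
    // Point the backend SDK at the core you just started in Docker.
    connectionURI: "http://localhost:3567",
  },
  appInfo: {
    appName: "my-app",                      // placeholder
    apiDomain: "http://localhost:3001",     // placeholder
    websiteDomain: "http://localhost:3000", // placeholder
  },
  recipeList: [Session.init()], // your recipes from Step 1 go here
});
```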

Now that you’ve set up your core and connected it to your backend, your application should be up and ready to authenticate users!

Try SuperTokens for yourself!

To learn more about SuperTokens, you can visit our website or join our Discord community.

We’re committed to making SuperTokens a more powerful user-authentication solution for our developers and users — and we need help! We’re looking for active contributors to the SuperTokens Docker extension project. The current code is simple and easy to get started with, and we’re always around to give potential contributors a hand.

If you like SuperTokens, you can help us spread the word by adding a star to the repo.
Source: https://blog.docker.com/feed/