Dell Technologies Cloud OneFS for Google Cloud, now generally available

Storage is the foundation for many enterprises’ tech infrastructures, and it needs to deliver scale and high performance. Today, we’re announcing that Dell Technologies OneFS for Google Cloud is now generally available and ready for production use. This collaboration between Google Cloud and Dell Technologies helps organizations migrate high-scale, business-critical enterprise file-based workloads to Google Cloud. You can now use the power and scale of the OneFS storage technology together with the economics, capabilities, and simplicity of Google Cloud.

OneFS for Google Cloud—powered by the Isilon OneFS file system from Dell Technologies—is a highly versatile scale-out storage solution that speeds up access to large amounts of data while reducing cost and complexity. It is flexible and lets you strike the right balance between large capacity and high-performance storage, enabling enterprise and high-performance file-based workloads. This new offering combines scale, performance, and enterprise-class data management features to support file workloads as large as 50 petabytes in a single file system, all while maintaining data integrity and performance.

Organizations seeking to migrate their demanding file-based workloads can now take advantage of Google Cloud’s analytics and compute services without having to make changes or adjustments to their applications. Applications running in Google Cloud now have high-performance, scalable access to file data, including multi-protocol support via NFS, SMB, and HDFS. To validate this, Enterprise Strategy Group (ESG) performed a performance-based technical review, achieving 200 GB/s on a 2 PB OneFS for Google Cloud file system. Furthermore, data migration is simplified with the built-in SyncIQ data replication capabilities, facilitating migration between cloud and on-premises OneFS clusters.

Structured pricing and performance tiers make OneFS accessible for a wide variety of workloads and budgets. Combined with Google Cloud’s core capabilities, it enables key enterprise and commercial HPC applications, such as AI/ML, genomics processing for life sciences, video editing and rendering for media and entertainment, telemetry data processing for the automotive industry, and electronic design automation.

Billing and support integration for OneFS

In addition, management and operations are easier with deep OneFS integration into Google Cloud. Beyond the marketplace and networking integrations, which let you deploy and access OneFS file systems from your applications, OneFS is also integrated directly into the Google Cloud Console, billing, and support. Console integration means that storage admins and operations teams can provision, run, scale, and manage Isilon OneFS clusters from the same user interface used to manage all Google Cloud services. With integrated billing, OneFS storage usage is included directly in your Google Cloud bill. Google Cloud Support integration ensures that you can contact Google through familiar channels and have a single point of contact for issues. These integrations mean that not only do your applications migrate to Google Cloud more easily, but your operations teams can also depend on the workflows they already have in place, bringing easier overall management for your solution.

Dell Technologies and Google Cloud are committed to supporting your most mission-critical file-based workloads and are excited to enable our customers with these new storage options.
OneFS for Google Cloud is available in the us-east4 (Ashburn, Northern Virginia, U.S.), australia-southeast1 (Sydney), and asia-southeast1 (Singapore) regions, with additional regions coming soon. To get started or for more information, visit Dell Technologies Cloud OneFS in the Google Cloud Marketplace.
Quelle: Google Cloud Platform

Predicting the cost of a Dataflow job

The value of streaming analytics comes from the insights a business draws from instantaneous data processing, and the timely responses it can implement to adapt its product or service for a better customer experience. “Instantaneous data insights,” however, is a concept that varies with each use case. Some businesses optimize their data analysis for speed, while others optimize for execution cost. In this post, we’ll offer some tips on estimating the cost of a job in Dataflow, Google Cloud’s fully managed streaming and batch analytics service.

Dataflow provides the ability to optimize a streaming analytics job through its serverless approach to resource provisioning and management. It automatically partitions your data and distributes your worker code to Compute Engine instances for parallel processing, optimizes potentially costly operations such as data aggregations, and provides on-the-fly adjustments with features like autoscaling and dynamic work rebalancing.

The flexibility of Dataflow’s adaptive resource allocation is powerful; it takes away the overhead of estimating workloads to avoid paying for unutilized resources or causing failures due to a lack of processing capacity. Adaptive resource allocation can give the impression that cost estimation is unpredictable too. But it doesn’t have to be. To help you add predictability, our Dataflow team ran some simulations that provide useful mechanisms you can use when estimating the cost of any of your Dataflow jobs.

The main insight we found from the simulations is that the cost of a Dataflow job increases linearly when sufficient resource optimization is achieved. Under this premise, running small load experiments to find your job’s optimal performance provides you with a throughput factor that you can then use to extrapolate your job’s total cost. At a high level, we recommend following these steps to estimate the cost of your Dataflow jobs:

1. Design small load tests that help you reach 80% to 90% of resource utilization.
2. Use the throughput of this pipeline as your throughput factor.
3. Extrapolate your throughput factor to your production data size and calculate the number of workers you’ll need to process it all.
4. Use the Google Cloud Pricing Calculator to estimate your job cost.

This mechanism works well for simple jobs, such as a streaming job that moves data from Pub/Sub to BigQuery or a batch job that moves text from Cloud Storage to BigQuery. In this post, we will walk you through the process we followed to prove that throughput factors can be applied linearly to estimate total job costs for Dataflow.

Finding the throughput factor for a streaming Dataflow job

To calculate the throughput factor of a streaming Dataflow job, we selected one of the most common use cases: ingesting data from Google’s Pub/Sub, transforming it using Dataflow’s streaming engine, then pushing the new data to BigQuery tables. We created a simulated Dataflow job that mirrored a recent client’s use case: a job that read 10 subscriptions from Pub/Sub as a JSON payload. The 10 pipelines were then flattened and pushed to 10 different BigQuery tables using dynamic destinations and BigQueryIO, as shown in the image below.

The number of Pub/Sub subscriptions doesn’t affect Dataflow performance, since Pub/Sub would scale to meet the demands of the Dataflow job. Tests to find the optimal throughput can be performed with a single Pub/Sub subscription.
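To make the shape of this test pipeline concrete, here is a minimal Apache Beam (Python) sketch of a Pub/Sub-to-BigQuery streaming job that routes records with a callable table destination. The project, subscription, dataset, and routing-field names are placeholders, not the values used in our tests.

```python
# A minimal sketch of a Pub/Sub-to-BigQuery streaming pipeline in Apache Beam (Python).
# Project, subscription, dataset, and routing-field names are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

SUBSCRIPTIONS = [
    f"projects/my-project/subscriptions/load-test-{i}-sub" for i in range(10)
]


def run():
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        # Read each subscription, then flatten the ten streams into one PCollection.
        sources = [
            p | f"Read{i}" >> beam.io.ReadFromPubSub(subscription=sub)
            for i, sub in enumerate(SUBSCRIPTIONS)
        ]
        merged = sources | "FlattenSources" >> beam.Flatten()

        parsed = merged | "ParseJson" >> beam.Map(json.loads)

        # Route each record to a per-source table via a callable table destination.
        parsed | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            table=lambda row: f"my-project:my_dataset.events_{row.get('source_id', 0)}",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )


if __name__ == "__main__":
    run()
```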
The team ran 11 small load tests for this job. The first few tests were focused on finding the job’s optimal throughput and resource allocation to calculate the job’s throughput factor. For the tests, we generated messages in Pub/Sub that were 500 KB on average, and we adjusted the number of messages per topic to obtain the total loads to feed each test. We tested a range of loads from 3MB/s to 250MB/s. The table below shows five of the most representative jobs with their adjusted parameters (all jobs ran on n1-standard-2 machines, with worker count = vCPU count / 2).

In order to ensure maximum resource utilization, we monitored the backlog of each test using the backlog graph in the Dataflow interface. We recommend targeting 80% to 90% utilization so that your pipeline has enough capacity to handle small load increases. We considered 86% to 91% CPU utilization to be our optimal utilization; in this case, it meant a 2.5MB/s load per virtual CPU (vCPU). This is job #4 in the table above. In all tests we used n1-standard-2 machines, which are the recommended type for streaming jobs and have two vCPUs. The rest of the tests were focused on proving that resources scale linearly at the optimal throughput, and we confirmed it.

Using the throughput factor to estimate the approximate total cost of a streaming job

Let’s assume that our full-scale job runs with a throughput of 1GB/s and runs five hours per month. Our throughput factor says that 2.5MB/s is the ideal throughput per vCPU on n1-standard-2 machines, so to support a 1GB/s throughput we’ll need approximately 400 vCPUs, or 200 n1-standard-2 workers. We entered this data in the Google Cloud Pricing Calculator and found that the total cost of our full-scale job is estimated at $166.30/month. In addition to worker costs, there is also the cost of the streaming data processed when you use the streaming engine. This data is priced by volume, measured in gigabytes, and is typically between 30% and 50% of the worker costs. For our use case, we took a conservative approach and estimated 50%, totaling $83.15 per month. The total cost of our use case is therefore $249.45 per month.

Finding the throughput factor for a simple batch Dataflow job

The most common use case in batch analysis using Dataflow is transferring text from Cloud Storage to BigQuery. Our small load experiments read a CSV file from Cloud Storage and transformed it into a TableRow, which was then pushed into BigQuery in batch mode. The source was split into 1GB files, and we ran tests with total input sizes from 10GB to 1TB to demonstrate that optimal resource allocation scales linearly. These tests demonstrated that batch analysis applies autoscaling efficiently: once your job finds an optimized resource utilization, it scales to allocate the resources needed to complete the job with a consistent price per unit of processed data and a similar processing time.

Let’s assume that our real-scale job here processes 10TB of data. Given that our estimated cost using resources in us-central1 is about $0.0017/GB of processed data, the total cost of our real-scale job would be about $18.06. This estimation follows the equation cost(Y) = cost(X) × Y/X, where cost(X) is the cost of your optimized small load test, X is the amount of data processed in your small load test, and Y is the amount of data processed in your real-scale job. The key in this and the previous examples is to design small-load experiments to find your optimized pipeline setup.
This setup will give you the parameters for a throughput factor that you can scale to estimate the resources needed to run your real-scale job. You can then enter these resource estimates in the Pricing Calculator to calculate your total job cost (a small worked sketch of this arithmetic follows below). Learn more in this blog post with best practices for optimizing your cloud costs.
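Here is that worked sketch in Python, using the illustrative numbers from this post (2.5 MB/s per vCPU on n1-standard-2 workers for streaming, and the cost(Y) = cost(X) × Y/X extrapolation for batch). It is not a pricing reference; your own test results should replace these inputs before you go to the Pricing Calculator.

```python
# Worked sketch of the throughput-factor extrapolation described above.
# The throughput factor and per-GB cost are the illustrative figures from this post;
# for a real estimate, plug your own small-load test results into the calculation.

# --- Streaming: size the worker pool from the throughput factor ---
throughput_factor_mb_per_vcpu = 2.5      # MB/s per vCPU, from the optimized small test
target_throughput_mb = 1000.0            # 1 GB/s full-scale target
vcpus_per_worker = 2                     # n1-standard-2

required_vcpus = target_throughput_mb / throughput_factor_mb_per_vcpu
required_workers = required_vcpus / vcpus_per_worker
print(f"Streaming: ~{required_vcpus:.0f} vCPUs -> {required_workers:.0f} n1-standard-2 workers")
# -> ~400 vCPUs -> 200 workers, the figure entered into the Pricing Calculator above


# --- Batch: extrapolate cost linearly from the small load test ---
def extrapolate_cost(small_test_cost, small_test_gb, full_scale_gb):
    """cost(Y) = cost(X) * Y / X, assuming resource usage scales linearly."""
    return small_test_cost * full_scale_gb / small_test_gb


cost_per_gb = 0.0017                     # observed in the optimized small test (us-central1)
small_test_gb = 1000                     # e.g. a 1 TB test run
full_scale_gb = 10_000                   # 10 TB production job

small_test_cost = cost_per_gb * small_test_gb
print(f"Batch: ~${extrapolate_cost(small_test_cost, small_test_gb, full_scale_gb):.2f}")
# -> roughly $17-$18 for 10 TB, in line with the estimate quoted above
```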
Quelle: Google Cloud Platform

Celebrating a decade of data: BigQuery turns 10

Editor’s note: Today we’re hearing from some of the team members involved in building BigQuery over the past decade, and even before. Our thanks go to Jeremy Condit, Dan Delorey, Sudhir Hasbe, Felipe Hoffa, Chad Jennings, Jing Jing Long, Mosha Pasumansky, Tino Tereshko, William Vambenepe, and Alicia Williams.

This month, Google’s cloud data warehouse BigQuery turns 10. From its infancy as an internal Google product to its current status as a petabyte-scale data warehouse helping customers make informed business decisions, it’s been in a class of its own. We got together to reflect on some of the technical milestones and memorable moments along the way. Here are some of those moments through the years.

Applying SQL to big data was a big deal.

When we started developing BigQuery, the ability to perform big data tasks using SQL was a huge step. At that time, either you had a small database that used SQL, or you used MapReduce. Hadoop was just emerging then, so for large queries, you had to put on your spelunking hat and use MapReduce.

Since MapReduce was too hard to use for complex problems, we developed Sawzall to run on top of MapReduce to simplify and optimize those tasks. But Sawzall still wasn’t interactive. We then built Dremel, BigQuery’s forerunner, to serve Google’s internal data analysis needs. When we were designing it, we aimed for high performance, since users needed fast results, along with richer semantics and more effective execution than MapReduce. At the time, people expected to wait hours to get query results, and we wanted to see how we could get queries processed in seconds. That’s important technically, but it’s really a way to encourage people to get more out of their data. If you can get query results quickly, that engenders more questions and more exploration.

Our internal community cheered us on.

Dremel was something we had developed internally at Google to analyze data faster, in turn improving our Search product. Dremel became BigQuery’s query engine, and by the time we launched BigQuery, Dremel was a popular product that many Google employees relied on. It powered data search beyond server logs, such as for dashboards, reports, emails, spreadsheets, and more. A lot of the value of Dremel was its operating model, where users focused on sending queries and getting results without being responsible for any technical or operational back end. (We call that “serverless” now, though it didn’t have a name back then.) A Dremel user put data on a shared storage platform and could immediately query it, any time. Faster performance was an added bonus.

We built Dremel as a cloud-based data engine for internal users, similar to what we had done with App Engine. We saw how useful it was for Google employees, and we wanted to bring those concepts to a broader external audience. To build BigQuery into an enterprise data warehouse, we kept the focus on the value of serverless, which is a lot more convenient and doesn’t require management overhead.

In those early days of BigQuery, we heard from users frequently on Stack Overflow. We’d see a comment and address it that afternoon. We started out scrappy and stayed closely looped in with the community. Those early fans were the ones who helped us mature and expand our support team. We also worked closely with our first hyperscale customer as they ramped up to using thousands of slots (BigQuery’s unit of computational capacity), then the next customer after that.
This kind of hyperscale has been possible because of Google’s networking infrastructure, which allowed us to build into Dremel a shuffler that uses disaggregated memory. The team also launched two file formats that inspired external emulation by other developers: ColumnIO inspired the column encoding of open-source Parquet, a columnar storage format, and the Capacitor format used a columnar approach that supports semistructured data. The idea of using a columnar format for this type of analytics work was new to the industry back then, but it became popular.

Tech concepts and assumptions changed quickly.

Ten years ago in the data warehouse market, high scalability meant high cost. But BigQuery brought a new way of thinking about big data into a data warehouse format that could scale quickly at low cost. The user can be front and center and doesn’t have to worry about infrastructure—and that’s defined BigQuery from the start. Separating storage and processing was a big shift. The method ten years ago was essentially just to throw compute resources at big data problems, so users often ran out of room in their data warehouse and thus ran out of querying ability. In 2020, it’s become much cheaper to keep a lot of data in a ready-to-query store, even if it isn’t queried often.

Along the way, we’ve added lots of features to BigQuery, making it a mature and scalable data warehousing platform. We’ve also really enjoyed hearing from BigQuery customers about the projects they’ve used it for. BigQuery users have run more than 10,000 concurrent queries across their organizations, and we’ve heard over the years about projects like DNA analysis, astronomical queries, and more. We see businesses across industries using BigQuery today.

We also had our founding engineering team record a video celebrating BigQuery’s decade in data, talking about some of their memorable moments, naming the product, and favorite facts and innovations—plus usage tips. Check it out here.

What’s next for data analytics?

Ten years later, a lot has changed. What we used to call big data is now, essentially, just data. It’s an embedded part of business and IT teams. When we started BigQuery, we asked ourselves, “What if all the world’s data looked like one giant database?” In the last 10 years, we’ve come a lot closer to achieving that goal than we had thought possible. Ten years from now, will we still even need different operational databases, data warehouses, data lakes, and business intelligence tools? Do we still need to treat structured data and unstructured data differently… isn’t it all just “data”? And then, once you have all of your data in one place, why should you even need to figure out on your own what questions to ask? Advances in AI, ML, and NLP will transform our interactions with data to a level that we cannot fully imagine today.

No matter what brave new world of data lies ahead, we’ll be developing and dreaming to help you bring your data to life. We’re looking forward to lots more exploration. And you can join the community monthly in the BigQuery Data Challenge.

We’re also excited to announce the BigQuery Trial Slots promotional offer for new and returning BigQuery customers. This lets you purchase 500 slots for $500 per month for six months, a 95% discount from current monthly pricing. This limited-time offer is subject to available capacity and qualification criteria, and is available only while supplies last. Learn more here.
To express interest in this promotion, fill out this form and we’ll be in touch with the next steps. We’re also hosting a very special BigQuery live event today, May 20, at 12 PM PDT, with hosts Felipe Hoffa and Yufeng Guo. Check it out.
Quelle: Google Cloud Platform

Anthos in depth: exploring a bare-metal deployment option

We recently shared how organizations are modernizing their applications with Anthos, driving business agility and efficiency in exciting new ways. But while some of you want to run Anthos on your existing virtualized infrastructure, others want to eliminate the dependency on a hypervisor layer to modernize applications while reducing costs. A new option to run Anthos on bare metal, coming later this year, will let you do just that.

Anthos on bare metal is a deployment option to run Anthos on physical servers, deployed on an operating system provided by you, without a hypervisor layer. Anthos on bare metal will ship with built-in networking, lifecycle management, diagnostics, health checks, logging, and monitoring. Additionally, it will support CentOS, Red Hat Enterprise Linux (RHEL), and Ubuntu—all validated by Google. With Anthos on bare metal, you can use your company’s standard hardware and operating system images, taking advantage of existing investments, which are automatically checked and validated against Anthos infrastructure requirements.

We are also extending our existing Anthos Ready Partner Initiative to include bare metal solutions. This will include reference architectures showing how to integrate Anthos with many datacenter technologies, such as servers, networking, and storage, using our Anthos Ready partner qualification process.

Reduce cost and complexity

Over the years, virtualization has helped organizations increase the efficiency of their physical servers, but it has also introduced additional cost and management complexity. With containers becoming mainstream, there’s an opportunity to reduce the costs associated with licensing a hypervisor, while also reducing the architecture and management overhead of operating hundreds of VMs. That’s on top of the efficiencies that Anthos already brings to the table, even when it’s installed in a virtual machine. Anthos can simplify your application architecture, reduce costs, and decrease time spent learning new skills. In fact, the recent Forrester Total Economic Impact report found that Anthos enables a 40% to 55% increase in platform operations efficiency.

Run closer to the hardware for better performance

Mission-critical applications often demand the highest levels of performance and the lowest latency from the compute, storage, and networking stack. By removing the latency introduced by the hypervisor layer, Anthos on bare metal lets you run even computationally intensive applications, such as GPU-based video processing and machine learning, in a CAPEX- and OPEX-effective manner. This means that you can access all the benefits of Anthos—centralized management, increased flexibility, and developer agility—for your most demanding applications.

Unlock new use cases for edge computing

In general, running your applications closer to your customers reduces latency and improves their experience. The availability of Anthos on bare metal servers lets you extend Anthos to new locations such as edge locations and telco sites. Our telco and edge partners welcome the advent of a bare metal option, as it allows them to run Anthos on specialized edge hardware. At the same time, you can still manage any applications you deploy to Anthos edge locations through the Google Cloud Console, complete with integrated monitoring and policy enforcement. You can also apply consistent policies and enforce them across all locations of application deployments.
Visit the Anthos at the Edge solutions page to learn more about how Anthos on bare metal is also helping customers with applications deployed at edge locations.

We developed Anthos to help all organizations tackle multi-cloud, taking advantage of modern cloud-native technologies like containers, serverless, service mesh, and consistent policy management, both in the cloud and on-premises. Now, with the option of running Anthos on bare metal, there are even more ways to enjoy the benefits of this modern cloud application stack. Learn more by downloading the Anthos under the hood ebook and get started on your modernization journey today!
Quelle: Google Cloud Platform

Build apps of any size or scale with Azure Cosmos DB

During these uncertain times, building apps with agility and cost-effectiveness is more important than ever before. Azure Cosmos DB, Microsoft’s NoSQL cloud database, is introducing new ways to affordably scale performance, launching features that enable rapid application development across teams, and making enterprise-grade security available to apps of any size or scale. With Azure Synapse Link, Azure Cosmos DB also becomes the first database to break the barriers between transactional and analytical stores.

Run low-cost, high-performance applications with autoscale and serverless

With guaranteed speed and availability, APIs for MongoDB, Cassandra, and Gremlin, and instant and elastic scalability worldwide, Azure Cosmos DB has been a popular choice for building cloud-native applications since it launched three years ago. We’re pleased to introduce new support for spiky and unpredictable workloads that can offer savings of up to 70 percent compared to running these types of workloads with the standard provisioned throughput pricing model.

Autoscale provisioned throughput (called “autopilot mode” in preview) will maintain SLAs while scaling up to a customer-specified maximum to meet the needs of unpredictable, high-throughput workloads. Consumption-based billing (starting at 10 percent of maximum) eliminates the need to monitor capacity, and the first 400 request units per second (RU/s) of throughput and 5 GB storage are free each month when paired with Azure Cosmos DB free tier.

Autoscale provisioned throughput responds to workload demands while maintaining SLAs.
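As a rough illustration of the autoscale model described above, here is a sketch using the Azure Cosmos DB Python SDK (azure-cosmos). The endpoint, key, and names are placeholders, and the ThroughputProperties helper used to express an autoscale maximum is how newer SDK releases surface this setting; it may differ in older versions.

```python
# Rough sketch: provisioning a container with an autoscale maximum using the
# Azure Cosmos DB Python SDK (azure-cosmos). Endpoint, key, and names are
# placeholders; the ThroughputProperties helper for autoscale is how recent
# SDK versions express this and may not exist in older releases.
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
database = client.create_database_if_not_exists("orders-db")

# Autoscale: throughput scales between 10% of the maximum (400 RU/s here)
# and the maximum itself (4,000 RU/s), and you are billed for what is used.
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
)

container.upsert_item({"id": "1", "customerId": "c-42", "total": 19.99})
```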

The serverless pricing model, in preview soon, will handle traffic bursts on demand. This makes it easy to run spiky workloads that don’t have sustained traffic and means dev/test accounts will never again have to be deleted over the weekend. You will only pay for storage and database operations performed, without any resource planning or management.

Enhanced enterprise security and developer productivity at any scale

Regardless of size or scale, security and agility are critical for developing modern applications. Azure Cosmos DB brings enhanced, enterprise-grade data security to all applications: encryption at rest with customer-managed keys is available now, and Azure Active Directory (Azure AD) integration and point-in-time backup and restore are both coming soon.

To improve productivity and collaboration, a new version of the Azure Cosmos DB Python SDK is available now, with enhanced Jupyter Notebook features including C# notebooks. A new version of the Azure Cosmos DB SDK for Java, change feed functionality preserving all operations history, and support for partial updates (HTTP PATCH)—the top UserVoice request—are all coming soon.

"We found it easy to get started with Azure Cosmos DB and its flexibility helped us throughout our application development." —Lutz Küderli, Head of Digital Services, Life and Health, Munich Re

Faster insights with no data movement

Data scientists, business analysts, and data engineers can now get insights in near real time with cloud-native Hybrid Transactional/Analytical Processing (HTAP) using Azure Synapse Link for Azure Cosmos DB, now in preview. With Azure Synapse Link, the data in Azure Cosmos DB is automatically and continuously made available for analytics, with no Extract-Transform-Load (ETL), through an analytical store built on top of the transactional store; continual data updates happen with no performance impact on transactions.
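As a rough sketch of what enabling the analytical store for a container can look like from the Python SDK: this assumes Synapse Link is already enabled on the account and that the SDK version in use exposes the analytical_storage_ttl parameter; the endpoint, key, and names are placeholders.

```python
# Rough sketch: creating a container whose data is also replicated into the
# Cosmos DB analytical store for Azure Synapse Link. Assumes Synapse Link is
# enabled on the account and that this azure-cosmos version supports the
# analytical_storage_ttl parameter; endpoint, key, and names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/", "<your-key>")
database = client.create_database_if_not_exists("retail-db")

# analytical_storage_ttl=-1 keeps records in the analytical store indefinitely,
# so Synapse can query them with no ETL and no impact on transactional traffic.
container = database.create_container_if_not_exists(
    id="sales",
    partition_key=PartitionKey(path="/storeId"),
    analytical_storage_ttl=-1,
)
```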

Azure Cosmos DB—the best-in-class NoSQL database for modern applications

Serverless pricing, automatic scaling to match application needs, enterprise-grade capabilities, and cloud-native HTAP make Azure Cosmos DB the best-in-class NoSQL database for building scalable modern applications fast, with minimal capacity management and maximum cost-effectiveness.

Discover why Azure Cosmos DB is ideal for modern app development at Azure Cosmos DB.
Find videos on key concepts, tips, tricks, and more at the Azure Cosmos DB video channel.

What will you build?
Quelle: Azure

Azure Arc enabled Kubernetes preview and new ecosystem partners

In November 2019, we announced the preview of Azure Arc, a set of technologies that unlocks new hybrid scenarios for customers by bringing Azure services and management to any infrastructure across datacenters, edge, and multi-cloud. Based on the feedback and excitement from customers in the private preview, we are now delivering Azure Arc enabled Kubernetes in preview. With this, anyone can use Azure Arc to connect and configure any Kubernetes cluster across customer datacenters, edge locations, and multi-cloud.

Over the last few months through private preview, organizations across a wide range of industries have experienced the power of Azure Arc for Kubernetes. Retail customers are deploying applications and configurations across their branch locations with guaranteed consistency. Financial institutions and healthcare providers are using Azure Arc to manage Kubernetes instances in geographic regions with custom data sovereignty requirements. Across several application scenarios and deployment environments, customers are embracing the diversity of the Kubernetes ecosystem. Azure Arc enabled Kubernetes is uniquely positioned through its openness and flexibility to help our customers meet their business challenges using the tools of their choice.

With today’s preview of Azure Arc enabled Kubernetes, support for most CNCF-certified Kubernetes distributions works out of the box. In addition, we are also announcing our first set of Azure Arc integration partners, including Red Hat OpenShift, Canonical Kubernetes, and Rancher Labs to ensure Azure Arc works great for all the key platforms our customers are using today.

Benefits of Azure Arc enabled Kubernetes

Application delivery is an inherently collaborative activity, especially as customers adopt DevOps practices. Developers, system operators, infrastructure engineers, and database administrators each play an active role in developing, deploying, and managing applications across multiple environments. To do this efficiently, customers have a need for shared application and infrastructure lifecycle management for teams that are siloed, based on locations and skills. In uncertain times, like we’re going through today, it’s even more important to ensure that your organization has visibility and oversight so you can go fast, safely.

Developers creating modern applications are adopting Kubernetes to spend more time focused on the application and less on the infrastructure. There is a rich Kubernetes ecosystem ranging from off-the-shelf Helm charts to developer tooling to use. Using your existing DevOps pipelines, Kubernetes manifests, and Helm charts, Azure Arc enables deployment to any connected cluster at scale. Azure Arc enabled Kubernetes adopts a GitOps methodology, so customers define their applications and cluster configuration in source control. This means changes to apps and configuration are versioned, enforced, and logged across any number of clusters.

Azure Arc provides a single pane of glass operating model to customers for all their Kubernetes clusters deployed across multiple locations. It brings Azure management to the clusters—unlocking Azure capabilities like Azure Policy, Azure Monitor, and Azure Resource Graph. By bringing every system into Azure Arc, it’s much easier to establish clear roles and responsibilities for team members based on a clear separation of concerns without sacrificing visibility and access.

Inventory and organization: Work more efficiently by getting control over sprawling resources at organizational, team, and personal levels.

Bring all your resources into a single system so you can organize and inventory through a variety of Azure scopes, such as Management groups, Subscriptions, and Resource Groups.
Create, apply, and enforce standardized and custom tags to keep track of resources.
Build powerful queries and search your global portfolio with Azure Resource Graph.

Governance and configuration: Streamline activities by creating, applying, and enforcing policies to Kubernetes apps, data, and infrastructure anywhere.

Set guardrails across all your resources with Azure Policy to ensure consistent configurations to a single cluster, or to many at scale by leveraging inheritance capabilities.
Standardize role-based access control (RBAC) across systems and different types of resources.
Automate and delegate remediation of incidents and problems to service teams without IT intervention.
Enforce run-time conformance and audit resources with Azure Policy.

Integrated DevOps and management capabilities: Mix and match additional Azure services or your choice of tools.

Integrated with GitHub, Azure Monitor, Security Center, Update, and more.
Common templating for automating configuration and infrastructure as code provides repeatable deployments.
End-to-end identity for users and resources with Azure Active Directory (Azure AD) and Azure Resource Manager.

Unified tools and experiences: Create a shared application and infrastructure lifecycle experience for teams that have traditionally been siloed based on locations, skills, and job descriptions.

Simplify your work with a unified and consistent view of your resources across datacenters, edge locations, and multi-cloud through the Azure portal and APIs.
Connect the Kubernetes clusters of your choice from the ecosystem and work with them alongside Windows and Linux virtual machines (VMs), physical servers, and Azure data services.
Establish clear roles and responsibilities for team members with clear separation of concerns without sacrificing visibility and access.

How our integration partners leverage Azure Arc

“Red Hat OpenShift delivers the industry’s most comprehensive enterprise Kubernetes platform, with a proven track record and large installed base and tailor-built for workloads that need to run across the hybrid cloud. Azure Arc helps to provide a common control plane for OpenShift from corporate datacenters to the public cloud, providing a single management point for organizations seeking to pair the flexibility and innovation of OpenShift with the scalability and power of Azure.” —Mike Evans, Vice President, Technical Business Development, Red Hat OpenShift

“Canonical's Charmed Kubernetes enables enterprises to accelerate the development of a new generation of applications while benefiting from fully automated architecture and operations. By integrating Azure Arc, Charmed Kubernetes clusters can now be managed across any infrastructure, alongside Azure Kubernetes Service (AKS) deployments, fitting coherently into an organization’s wider IT estate. This integration provides enterprises with a single, unified place to visualize, govern and manage their environments at scale, from edge to cloud.” —Christian Reis, VP Public Cloud, Canonical Kubernetes

“Rancher Kubernetes Engine (RKE) on Microsoft Azure is a proven platform that provides amazing capabilities for cloud computing. By extending Azure to RKE anywhere with Azure Arc, Rancher Labs and Microsoft are poised to accelerate the development of a new generation of applications for the enterprise by bringing expanding and evolving IT estates under control.” —Sheng Liang, CEO, Rancher Labs

With Azure Arc, customers can connect and configure Kubernetes clusters and deploy modern applications at scale. Azure Arc also allows customers to run Azure data services on these Kubernetes clusters. In addition, the reality is that many customers have applications running on Windows and Linux servers. Azure Arc allows the management of servers as well, all from the same unified single-pane-of-glass experience. Going forward, the next Azure Arc preview will bring Azure data services, such as Azure Arc enabled SQL Managed Instance and Azure Arc enabled PostgreSQL Hyperscale, to Kubernetes clusters—taking advantage of always current, cloud-native services in the location they need.

We’ll continue to announce new capabilities and new previews for Azure Arc. We’d love to get your feedback and have you help shape the upcoming capabilities as we go. To reach us, go to our UserVoice site.

Get started with Azure Arc enabled Kubernetes

To learn more about enabling Kubernetes clusters with Azure Arc, get started with the preview today. You can check out examples of Azure Arc enabled Kubernetes on GitHub.
Quelle: Azure

HoloLens 2 expands markets; Azure mixed reality services now broadly available

“When the intelligent cloud and intelligent edge are imbued with mixed reality and artificial intelligence, we have a framework for achieving amazing things and empowering even more people.” – Satya Nadella, Microsoft CEO speaking at the HoloLens 2 launch

At Microsoft Build 2020, we shared some exciting mixed reality news that is helping us create a new reality for empowering organizations and people to achieve more. This blog post highlights some of that news across the following areas:

Strong adoption of HoloLens 2 is driving us to expand availability into new markets.
Our Azure mixed reality cloud services are now broadly available.
HoloLens 2 and mixed reality are empowering firstline workers through the global pandemic.

HoloLens 2 expands to new markets this Fall

HoloLens 2 is currently available in Australia, Canada, China, France, Germany, Ireland, New Zealand, Japan, the United Kingdom, and the United States. In these markets, industry-leading companies in manufacturing, retail, healthcare, and education are using enterprise mixed reality applications to improve the productivity and quality of work for firstline workers. As you can see in the video below, with HoloLens 2 and Azure, these organizations are simultaneously driving faster training and up-skilling of employees, decreased task and process completion time, and reduced error rates and waste, which is driving global demand for these solutions.

Below are a few key news items that will expand our HoloLens 2 impact:

HoloLens 2 will begin shipping to Italy, Netherlands, Switzerland, Spain, Austria, Sweden, Finland, Norway, Denmark, Belgium, Portugal, Poland, Singapore, South Korea, Hong Kong, and Taiwan in Q4 2020.
Online sales of HoloLens 2 will start on the Microsoft Store in July 2020.

In response to feedback from our customers and partners, we have just shipped a software update for HoloLens 2, which includes these exciting new capabilities:

Reconfigure and seamlessly set up new devices for enterprise production with Windows AutoPilot.
Enroll HoloLens with your Mobile Device Management system using a provisioning package.
Hand Tracking improvements.
Support for additional system voice commands to control HoloLens, hands-free.
Expanded USB Ethernet enables support for 5G/LTE dongles.

Review the release notes to learn about all of the new capabilities.

Azure mixed reality services now broadly available

As we outlined above, mixed reality is changing the way we work in industries including manufacturing, healthcare, and retail. These changes are only possible by bringing the power of edge computing together with the cloud to create seamless, immersive experiences.

With Azure, we are making a suite of mixed reality services available to make it easier than ever for developers to build immersive, 3D applications, and experiences for mixed reality headsets (such as HoloLens) or AR-enabled phones (iOS and Android). 

Azure Spatial Anchors is helping enterprise and gaming developers to more easily build applications that can map, persist, and share 3D content at real-world scale. Applications using Azure Spatial Anchors leverage the scale and security of Azure to consistently and reliably render 3D content across HoloLens, iOS, and Android devices. Azure Spatial Anchors is now generally available.

Here are a few quotes from customers using Azure Spatial Anchors:

“Connecting the digital with the physical worlds is full of challenges. Azure Spatial Anchors is the only service that supports Hololens, iOS, and Android, making it the perfect fit for Spatial. The service works quickly, and is extremely reliable. Setting up Azure Spatial Anchors in our existing Azure Cloud account was simple, and allowed us to use a service we’re already familiar with.” – Mo Ben-Zacharia, CTO, Spatial.io 

See how Spatial is using Azure Spatial Anchors to enable the future of collaboration.

“Our service, Augmented Store At Home, is totally based on Azure Spatial Anchors. Being able to share and persist high-quality, 3D assets would not be possible otherwise. The capability of the sharing experience is simple and almost transparent for the user. We chose Azure Spatial Anchors because of the accuracy and for the cross-platform capabilities.” – Vincenzo Mazzone, CTO, Hevolus  

See how Hevolus is using Azure Spatial Anchors to drive the future of retail.

Azure Remote Rendering lets enterprise developers in industries including manufacturing, engineering, construction, and retail bring the highest quality 3D content and interactive experiences to mixed reality devices, such as HoloLens 2. This service uses the computing power of Azure to render even the most complex models in the cloud and streams them in real-time to your devices, so users can interact and collaborate with 3D content in amazing detail. Azure Remote Rendering is now in preview. 

To learn more about our Azure mixed reality offerings for developers, we encourage you to consider attending Mixed Reality Dev Days 2020, a virtual event coinciding with Build, designed to learn from experts and connect with the mixed reality developer community.

A new reality for healthcare

While the momentum across HoloLens and our mixed reality services is happening across many industries, this is especially evident in healthcare. We are harnessing the power of mixed reality to assist the firstline workers manufacturing PPE devices and ventilators, as well as those providing direct patient care. Here are a few of these examples:

Ventilator Challenge, United Kingdom: A consortium of major industrial, technology, and engineering businesses from across the aerospace, automotive, and medical sectors have come together to produce medical ventilators for the National Health Service. Microsoft HoloLens devices, running PTC’s Vuforia Expert Capture software, are being used to train thousands of firstline workers on how to assemble the equipment. The workers are guided by visual, 3D, immersive instructions every step of the way. In the event these workers need help with assembly, they are leveraging Dynamics 365 Remote Assist, which enables hands-free video calling on the HoloLens to let operators collaborate with experts on a PC or mobile device.

Dick Elsy, Chief Executive, High Value Manufacturing Catapult and Consortium lead, describes the impact of mixed reality on their efforts: “Ensuring the NHS has enough ventilators to treat patients with advanced COVID-19 symptoms has been critical to the UK’s continued battle against the disease. Microsoft and its partners have been instrumental in providing the tools that have enabled the VentilatorChallengeUK consortium members to collaborate effectively, manage complex supply chains and train staff in new manufacturing procedures. In doing so, we’ve been able to gear up to manufacture ten years’ worth of ventilators in just ten weeks.”

Learn more about the role of mixed reality and this consortium’s work.

Case Western Reserve University, United States: Remote learning at one of the nation’s premier research institutions and medical schools has just taken on a new dimension. For the first time ever, instead of working together on campus, all 185 first-year students from Case Western Reserve University’s School of Medicine are using Microsoft HoloLens 2 and the university’s signature HoloAnatomy mixed-reality software, despite the physical separation created by the COVID-19 pandemic. The remote-learning application of HoloAnatomy is believed to be the first of its kind in the world and the latest advance in the educational use of the holographic headset by Case Western Reserve.

Mark Griswold, a professor of radiology and one of the faculty leaders for HoloAnatomy, states: “This unfortunate crisis has become an opportunity to prove that we could extend the reach of HoloLens education. It’s about making the world smaller, making the campus smaller, and getting people together to experience the same thing from anywhere.”

View a video compilation of the first several classes.

Avicenne Hospital/APHP Group, France: HoloLens 2 and Dynamics 365 Remote Assist are being used on the front lines by doctors and nurses. Mixed reality is enabling multiple medical professionals to jointly participate in visits and consultations, while limiting the number of healthcare employees exposed to each patient. And by using mixed reality to access virtual 3D copies of patient records, they are eliminating the passing of physical charts and reducing the number of times a doctor or nurse needs to touch equipment, helping them to minimize exposure, while ensuring great patient care.

See Microsoft HoloLens being used in hospitals and learn more from Dr. Thomas Gregory.

Next steps with HoloLens 2

Learn more about how Microsoft HoloLens 2 and mixed reality can help your organization to achieve more.
Quelle: Azure

Creating the best Linux Development experience on Windows & WSL 2

We are really excited that Docker Desktop was featured with WSL 2 in a breakout session titled “The Journey to One .NET,” presented by Scott Hanselman at Microsoft Build. Earlier, in his keynote, we learned about the great new enhancements coming to GPU support in WSL 2, and we want to hear from our community about your interest in adding this functionality to Docker Desktop. If you are eager to see GPU support come to Docker Desktop, please let us know by voting up our roadmap item, and feel free to raise any new requests here as well.

With this announcement, the imminent launch of the Windows 10 2004 release, and Docker Desktop 2.3.0.2 bringing WSL 2 support to GA, we thought this would be a good time to reflect on how we got to where we are today with WSL 2.

April 2019

Casting our minds back to 2019 (a very different time!), we first discussed WSL 2 with Microsoft in April. We were excited to get started and wanted to find a way to get a build as soon as possible.

May 2019

It turned out the easiest way to do this was to collect a laptop at Kubecon EU (never underestimate the bandwidth of a 747). We brought this back and started work on what would be our first ‘hacky’ version of WSL 2 for Docker Desktop.

June 2019

With some internal demos done, we decided to announce what we were planning <3

This announcement was a bit like watching a swan on a lake: our blog post was calm and collected, but beneath the water we were kicking madly to move towards something we could share more widely.

July 2019

We finally got far enough along that we were ready to share something!

Get Ready for the Tech Preview of Docker Desktop for WSL 2

And not long after, we released our first preview of Docker Desktop using WSL 2:

5 Things to Try with Docker Desktop WSL 2 Tech Preview

August-September 2019

Once again, with a preview out and things seeming calm, we went back to work. We talked with Microsoft weekly about how we could improve what we had, fixing bugs and generally improving the experience. Simon and Ben did find enough time, though, to head over to the USA to talk to Microsoft about how we were getting on.

October 2019

We released a major rework to how Docker Desktop would integrate with WSL 2:

Introducing the Docker Desktop WSL 2 Backend

Along with this rework, we added Kubernetes (K8s) support and provided feature parity with our old Hyper-V based backend. We also made the preview more visible in Docker Desktop, and our user numbers started to increase.

November 2019 – Feb 2020

This time flew by. We spent a lot of it chasing down bugs, looking at how we could improve the local experience, and working out what the best ways of working would be:

Docker Desktop release 2.2 is here!

March 2020

We had built up a fair bit of confidence in what we had built and finally addressed one of the largest outstanding items in our backlog: we added Windows Home support.

Docker Desktop for Windows Home is here!

This involved removing the functionality associated with running our old Moby VM in Hyper-V, along with all of the options for running Windows containers, as these are not supported on Windows Home. With this done, we were able to focus on heading straight to GA…

April 2020

We doubled down on getting ready for GA, learning lessons about improving our development practices. We wanted to share how we were preparing and testing WSL 2, ready for the 2.7 million people out there running Docker Desktop.

How we test Docker Desktop with WSL 2

May 2020

We finally reached GA with Docker Desktop 2.3.0.2!

Now that we are out in the wild, we have shared some ideas and best practices to make sure you are getting the best experience out of Docker Desktop when working with WSL 2.

Docker Desktop: WSL 2 Best practices

(And of course, for Windows Pro users, this still comes with all the same features you know and love, including the ability to switch back over to using Windows Containers.)

What’s next?

Next, it’s over to you to start using Docker Desktop with WSL 2! To try it out today, make sure you are on Windows 10, version 2004 or higher, and download the latest Docker Desktop to get started.

If you are enjoying Docker Desktop but have ideas about what we could do to make it better, then please give us feedback. You can let us know which features you want to see next via our roadmap, including voting up GPU support for WSL 2.
The post Creating the best Linux Development experience on Windows & WSL 2 appeared first on Docker Blog.
Quelle: https://blog.docker.com/feed/