PaLM 2: Google's language model gets smaller and better nonetheless
The foundation behind Google's Bard supports more than 100 languages and can write code. Its creators, however, warn against using it directly. (Deep Learning, Google)
Source: Golem
The European Parliament wants no exemptions for the use of facial recognition in public. A "security nightmare" is looming, it warns. (Facial recognition, AI)
Source: Golem
Now that's setting an example: Fairphone is launching ANC headphones in which all essential components can be replaced. (Fairphone, Audio/Video)
Source: Golem
The Fairbuds XL are the first ANC headphones on the market whose components can be replaced. Some spare parts, however, can get expensive. (Fairphone, Audio/Video)
Source: Golem
Hire and fire in Germany too? In laying off hundreds of developers, Shopify apparently disregarded German dismissal protections multiple times, and may be blocking the formation of a works council. An investigation by Daniel Ziegener (Works council, Business)
Source: Golem
Let’s be honest, new and shiny features get most of the attention around here. It makes sense: New stuff is exciting! But WordPress.com has plenty of baked-in features that are worth talking about too.
Losing your work is one of the most frustrating things you can experience as a website owner. When you choose WordPress.com, you never have to worry about that again. Today, let’s chat about backups, which are powered by Jetpack (Automattic’s own suite of security, performance, and growth tools).
Real-time backups and one-click restores
With Jetpack VaultPress Backup, every single change to your site is captured in real-time. We also back up your site at a consistent time each day as a failsafe.
Our backups happen in real-time, making restoring your site to a previous state as easy as finding a cute dog on the internet.
Let’s look closer at how this feature can benefit you and your site(s).
No expertise required
Manually backing up a website is a time-consuming and resource-intensive task, not to mention a bit daunting on a technical level.
We’ve removed all that hassle by doing the work for you behind the scenes.
Even better, we house redundant copies of your backups on multiple servers around the world, so your data is always secure and accessible.
Version control, but for your website
With the Activity Log, you can quickly see every site change at a glance, letting you know exactly what action (and which user!) broke the site.
Our one-click restores allow you to quickly recover a site from any point in time: Simply find when the problem occurred, click “Restore,” verify that you want to revert your site to a previous state, and in as little as a few minutes, you’ll be back up and running.
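Conceptually, point-in-time restore works like a series of timestamped snapshots plus a "latest snapshot at or before time T" lookup. Here is a minimal Python sketch of that idea; it illustrates the concept only and is not how Jetpack is actually implemented (the `SnapshotStore` name and its methods are invented for this example):

```python
class SnapshotStore:
    """Toy model of point-in-time backups: every change is captured as a
    timestamped snapshot, and a restore picks the most recent snapshot
    taken at or before the requested moment."""

    def __init__(self):
        self._snapshots = []  # list of (timestamp, site_state) pairs

    def record(self, timestamp, site_state):
        # Capture the full site state whenever something changes.
        self._snapshots.append((timestamp, dict(site_state)))
        self._snapshots.sort(key=lambda pair: pair[0])

    def restore(self, point_in_time):
        # Walk back to the latest snapshot at or before the target time.
        candidates = [state for ts, state in self._snapshots if ts <= point_in_time]
        if not candidates:
            raise ValueError("no snapshot exists at or before that time")
        return candidates[-1]
```

The Activity Log plays the role of the timestamp index here: it tells you which `point_in_time` to pass.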
Never miss a sales order
If you’re running an online store, you know that orders can come in at any time. It goes without saying that you need a backup system to keep your order and customer data safe. There are times when daily or even hourly backups simply don’t cut it.
If you’re running WooCommerce on your site, you can reinstate your store to any previous iteration, while keeping all orders and products current.
Losing your work is a thing of the past
Our automated backups save everything for you: posts, files, databases, themes, plugins . . . all of it. Should your site crash for any reason — an incompatible plugin or theme, for instance — rest assured that it can be easily restored in just minutes.
Whether you’re running a business or spending hours perfecting your site as a hobbyist, our state-of-the-art technology provides the peace of mind that you’ll never miss a sale or lose content again.
Learn more about our real-time backups
Real-time backups and one-click restores are available on Business and Commerce sites.
Source: RedHat Stack
Over the past decade, artificial intelligence has evolved from experimental prototypes and early successes to mainstream enterprise use. And the recent advancements in generative AI have begun to change the way we create, connect, and collaborate. As Google CEO Sundar Pichai said in his keynote, every business and organization is thinking about how to drive transformation. That’s why we’re focused on making it easy and scalable for others to innovate with AI.
In March, we announced exciting new products that infuse generative AI into our Google Cloud offerings, empowering developers to responsibly build with enterprise-level safety, security, and privacy. They include Gen App Builder, which lets developers quickly and easily create generative chat and enterprise search applications, and Generative AI support in Vertex AI, which expands our machine learning development platform with access to foundation models from Google and others to quickly build, customize, and deploy models. We also introduced our vision for Google Workspace, and delivered generative AI features to trusted testers in Gmail and Google Docs that help people write.
Last month we introduced Security AI Workbench, an industry-first extensible platform powered by our new LLM security model Sec-PaLM, which incorporates Google’s unique visibility into the evolving threat landscape and is fine-tuned for cybersecurity operations.
Today at Google I/O, we are excited to share the next steps not only in our own AI journey, but in those of our customers and partners as well. We’ve already seen a number of organizations begin to develop with and deploy our generative AI offerings. These organizations have been able to move their ideas from experimentation to enterprise-ready applications with the training models, security, compute infrastructure, and cost controls needed to provide their customers with transformative experiences.
Our open ecosystem, which provides opportunities for every kind of partner, continues to grow as well. And we are also pleased to share new services and capabilities across Google Cloud and Workspace, including Duet AI, our AI-powered collaborator, to enable more users and developers to start seeing the impact AI can have on their organization.
Customers bringing ideas to life with generative AI
Leading companies in a variety of industries, like eDreams ODIGEO, GitLab, Oxbotica, and more, are using our generative AI technologies to create engaging content, synthesize and organize information, automate business processes, and build amazing customer experiences. A few examples we showcased today include:
Adore Me, a New York-based intimate apparel brand, is creating production-worthy copy with generative AI features in Docs and Gmail. This is accelerating projects and processes in ways that even surprised the company.
Canva, the visual communication platform, uses Google Cloud’s rich generative AI capabilities in language translation to better support its non-English speaking users. Users can now easily translate presentations, posters, social media posts, and more into over a hundred languages. The company is also testing ways that Google’s PaLM technology can turn short video clips into longer, more compelling stories. The result will be a more seamless design experience while growing the Canva brand.
Character.AI, a leading conversational AI platform, selected Google Cloud as its preferred cloud infrastructure provider because we offer the speed, security, and flexibility required to meet the needs of its rapidly growing community of creators. We are enabling Character.AI to train and infer LLMs faster and more efficiently, and enhancing the customer experience by inspiring imagination, discovery, and understanding.
Deutsche Bank is testing Google’s generative AI and large language models (LLMs) at scale to provide new insights to financial analysts, driving operational efficiencies and execution velocity. There is an opportunity to significantly reduce the time it takes to perform banking operations and financial analysts’ tasks, empowering employees by increasing their productivity while helping to safeguard customer data privacy, data integrity, and system security.
Instacart is always looking for opportunities to adopt the latest technological innovations, and by joining the Workspace Labs program, its teams have access to the new features and can discover how generative AI will make an impact for them.
Orange is exploring a next-generation contact center with Google Cloud. With customers in 26 countries, the global telecommunications firm is testing generative AI to transcribe calls, summarize the exchange between the customer and service representatives, and suggest possible follow-up actions to the agent based on the discussion. This experiment has the potential to dramatically improve both the efficiency and quality of customer interactions. Orange is working closely with Google to help ensure data protection and to make sure that systematic employee review of generative AI output and transparency can be implemented.
Replit is developing a collaborative software development platform powered by AI. Developers using Replit’s Ghostwriter coding AI already have 30% of their code written by generative AI today. With real-time debugging of the code output and context awareness of the program’s files, Ghostwriter frees up developers’ time for more challenging and creative aspects of programming.
Uber is creating generative AI for customer-service chatbots and agent-assist capabilities, which handle a range of common service issues with human-like interactions, with the aim of achieving greater customer satisfaction and cost efficiency.
Additionally, Uber is working on using our synthetic data systems (a technique for improving the quality of LLMs) in areas like product development, fraud detection, and employee productivity.
Wendy’s is working with Google Cloud on a groundbreaking AI solution, Wendy’s FreshAI, designed to revolutionize the quick service restaurant industry. The technology is transforming Wendy’s drive-thru food ordering experience with Google Cloud’s generative AI and LLMs, with the ability to discern the billions of possible order combinations on the Wendy’s menu. In June, Wendy’s plans to launch its first pilot of the technology in a Columbus, Ohio-area restaurant, before expanding to more drive-thru locations.
Leading companies build with generative AI on Google Cloud
Partnering creates a strong ecosystem of real-world options for customers
At Google Cloud, we are dedicated to being the most open hyperscale cloud provider, and that includes our AI ecosystem. Today, we are excited to expand upon the partnerships announced earlier this year for every layer of the AI stack: chipmakers, companies building foundation models and AI platforms, technology partners enabling companies to develop and deploy machine learning (ML) models, app-builders solving customer use cases with generative AI, and global services and consulting firms that help enterprise customers implement all of this technology at scale. We announced new or expanded partnerships with SaaS companies like Box, Dialpad, Jasper, Salesforce, and UKG; and consultancies including Accenture, BCG, Cognizant, Deloitte, and KPMG. Together with our previous announcements with companies like AI21 Labs, Aible, Anthropic, Anyscale, Bending Spoons, Cohere, Faraday, Glean, Gretel, Labelbox, Midjourney, Osmo, Replit, Snorkel AI, Tabnine, Weights & Biases, and many more, they provide a wide range of options for businesses and governments looking to bring generative AI into their organizations.
Introducing new generative AI capabilities for Google Cloud
To help cloud users of all skill levels solve their everyday work challenges, we’re excited to announce Duet AI for Google Cloud, a new generative AI-powered collaborator. Duet AI serves as your expert pair programmer and assists cloud users with contextual code completion, offering suggestions tuned to your code base, generating entire functions in real time, and assisting you with code reviews and inspections. It can fundamentally transform the way cloud users of all skill sets build new experiences, and it is embedded across Google Cloud interfaces: within the integrated development environment (IDE), the Google Cloud Console, and even chat.
For developers looking to create generative AI applications more simply and efficiently, we are also introducing new foundation models and capabilities across our Google Cloud AI products. And to continue to enable and inspire more customers and partners, we are opening up generative AI support in Vertex AI and expanding access to many of these new innovations to more organizations.
New foundation models are now available in Vertex AI. Codey, our code generation foundation model, helps accelerate software development with code generation, code completion, and code chat. Imagen, our text-to-image foundation model, lets customers generate and customize studio-grade images. And Chirp, our state-of-the-art speech model, allows customers to more deeply engage with their customers and constituents inclusively in their native languages with captioning and voice assistance. They can each be accessed via APIs, tuned through our intuitive Generative AI Studio, and feature enterprise-grade security and reliability, including encryption, access control, content moderation, and recitation capabilities that let organizations see the sources behind model outputs.
Text Embeddings API is a new API endpoint that lets developers build recommendation engines, classifiers, question-answering systems, similarity matching, and other sophisticated applications based on semantic understanding of text or images. Reinforcement Learning from Human Feedback (RLHF) allows organizations to incorporate human feedback to deeply customize and improve model performance.
Underpinning all of these innovations is our AI-optimized infrastructure. We provide the widest choice of compute options among leading cloud providers and are excited to continue to build them out with the introduction of new A3 Virtual Machines based on NVIDIA’s H100 GPU. These VMs, alongside the recently announced G2 VMs, offer a comprehensive range of GPU power for training and serving AI models.
Extending generative AI across Google Workspace
Earlier this year, we shared our vision for bringing generative AI to Workspace, and gave many users early access to features that helped them write in Gmail and Google Docs. Today, we are excited to announce Duet AI for Google Workspace, which brings together our powerful generative AI features and lets users collaborate with AI so they can get more done every day. We’re delivering the following features to trusted testers via Workspace Labs:
In Gmail, we’re adding the ability to draft responses that consider the context of your existing email thread, and making the experience available on mobile.
In Google Slides and Meet, we’re enabling you to easily generate images from text descriptions. Custom images in slides can help bring your story to life, and in Meet they can be used to create custom backgrounds.
In Google Sheets, we’re automating data classification and the creation of custom plans, helping you analyze and organize data faster than ever.
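As an aside on the similarity matching mentioned under the Text Embeddings API above: once each document and query has an embedding vector, ranking reduces to cosine similarity. A minimal sketch of that step in plain Python; no Google API is called here, and the toy vectors stand in for real embedding output:

```python
import math

def cosine_similarity(a, b):
    # Similarity of two embedding vectors, independent of their magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(query_vec, doc_vecs):
    # Return document ids ordered from most to least similar to the query.
    scores = {doc_id: cosine_similarity(query_vec, vec)
              for doc_id, vec in doc_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

In a real system the vectors would come from the embeddings endpoint, and at scale the linear scan would be replaced by an approximate nearest-neighbor index.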
Moving the industry forward, responsibly
Customers continue to amaze us with their ideas and creativity, and we look forward to continuing to help them discover their own paths forward with generative AI. While the potential for impact on business is great, we remain committed to taking a responsible approach, guided by our AI Principles. As we gather more feedback from our customers and users, we will continue to bring new innovations to market, with the goal of enabling organizations of every size and industry to increase efficiency, connect with customers in new ways, and unlock entirely new revenue streams.
Source: Google Cloud Platform
A once-in-a-century global health emergency accelerates worldwide healthcare innovation and novel medical breakthroughs, all supported by powerful high-performance computing (HPC) capabilities.
COVID-19 has forever changed how nations function in the globally interconnected economy. To this day, it continues to affect and shape how countries respond to health emergencies. COVID-19 has demonstrated just how interconnected our society is and how risks, threats, and contagions can have global implications for many aspects of our daily lives.
COVID-19 was the largest global health emergency in over a century, with nearly 762 million cases reported as of the end of March 2023, according to the World Health Organization. The National Center for Biotechnology Information points out the frequency and breadth of new variants that continue to emerge at regular intervals. In response to this intricate health crisis, the global healthcare community quickly mobilized to better understand the virus, learn its behavior, and work toward preventative treatment measures to minimize the damage to lives across the world. Globally, nations mobilized resources for frontline workers, offered social protection to those most severely affected, and provided vaccine access for the billions who needed it.
Recent technological innovations have provided the medical community with access to capabilities, such as HPC, that equip healthcare professionals to better study, understand, and respond to COVID-19. Globally, healthcare innovators could access unprecedented computing power to design, test, and develop new treatments faster, better, and more iteratively than ever before.
Today, Azure HPC enables researchers to unleash the next generation of healthcare breakthroughs. For example, the computational capabilities of the Azure HPC HB-series virtual machines, powered by AMD EPYC™ CPU cores, allowed researchers to accelerate insights and advances in genomics, precision medicine, and clinical trials, with near-infinite high-performance bioinformatics infrastructure capabilities.
Since the beginning of COVID-19, companies have been leveraging Azure HPC to develop new treatments and to run simulations and tests at scale, all in preparation for the next health emergency. Azure HPC is helping companies unleash new treatments and cures that are ushering in the next generation of healthcare capabilities across the entire industry.
High-performance computing making a difference
A leading immunotherapy company partnered with Microsoft to leverage Azure HPC to perform detailed computational analyses of the spike protein structure of SARS-CoV-2. Because of the critical role the spike protein plays in allowing the virus to invade human cells, targeting it for study, analysis, and insight is a crucial step in the development of treatments to combat the virus.
The company’s engineers and scientists collaborated with Microsoft and quickly deployed HPC clusters on Azure containing more than 1,250 graphics processing unit (GPU) cores. These GPUs are specifically designed for machine learning and similarly intense computational applications. The Azure HPC clusters augmented the company’s existing GPU clusters, which were already optimized for molecular modelling of proteins, antibodies, and antivirals, bringing a truly high-powered, scaled engagement to fruition.
By collaborating with Microsoft in this way and making use of the massive, networked computing capabilities and advanced algorithms enabled by Azure HPC, the company was able to generate working models in days rather than the months it would have taken by following traditional approaches.
This incredible amount of computing power will help bolster drug discovery and therapeutic development. Joining forces to combine the power of Azure HPC with cutting-edge immunotherapies contributed to models that allowed researchers to better understand the virus, find novel binding sites to fight it, and ultimately guide the development of future treatments and vaccines.
Powering pharmaceutical research and innovation
The healthcare industry is making remarkable strides in the development of cutting-edge treatments and innovations that are geared towards solving some of the world’s greatest healthcare challenges.
For example, researchers are leveraging HPC to transform their research and development efforts and to accelerate the development of new life-saving treatments.
Using a technique producing amorphous solid dispersions (ASD), drug researchers break up active pharmaceutical ingredients and blend them with organic polymers to improve the dissolution rate, bioavailability, and solubility of drug delivery systems. Although a wonder of modern medicine, it is a highly complicated, often lab-based process that can take months.
Swiss-based Molecular Modelling Laboratory (MML), a leader in ASD screening, wanted to pivot its drug research and development to small organic and biomolecular polymers. This approach determines ASD stability prior to formulation, reveals new ASD combinations, enhances drug safety, and helps reduce drug development costs as well as delivery times.
MML chose to leverage Azure HPC resources on more than 18,000 Azure HBv2 virtual machines to optimize high-throughput drug screening and active pharmaceutical ingredient solubility limit detection, with the aim of alleviating common development hurdles.
The adoption of Azure HPC has helped MML shift from a small start-up to an established business working with some of the top pharmaceutical companies in the world—all in a very short time.
For the global healthcare community, the computational power and scalability of Azure HPC present an unprecedented opportunity to accelerate pharmaceutical, medical, and health innovation. Azure HPC will continue playing a leading role in helping the healthcare industry respond optimally to any future global health emergency that may arise.
Next steps
To request a demo, contact HPCdemo@microsoft.com.
Learn more about Azure HPC.
High-performance computing documentation.
View our HPC cloud journey infographic.
The post Preparing for future health emergencies with Azure HPC appeared first on Azure-Blog und Updates.
Source: Azure
I had the opportunity to participate in this year’s Open Confidential Computing Conference (OC3), hosted by our software partner, Edgeless Systems. This year’s event was particularly noteworthy due to a panel discussion on the impact and future of confidential computing. The panel featured some of the industry’s most respected technology leaders including Greg Lavender, Chief Technology Officer at Intel, Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, and Mark Papermaster, Chief Technology Officer at AMD. Felix Schuster, Chief Executive Officer at Edgeless Systems, moderated the panel discussion, which explored topics such as the definition of confidential computing, customer adoption patterns, current challenges, and future developments. The insightful discussion left a lasting impression on me and my colleagues.
What is confidential computing?
When it comes to understanding what exactly confidential computing entails, it all begins with a trusted execution environment (TEE) that is rooted in hardware. This TEE protects any code and data placed inside it, while in use in memory, from threats outside the enclave. These threats include everything from vulnerabilities in the hypervisor and host operating system to other cloud tenants and even cloud operators. In addition to providing protection for the code and data in memory, the TEE also possesses two crucial properties. The first is the ability to measure the code contained within the enclave. The second property is attestation, which allows the enclave to provide a verified signature that confirms the trustworthiness of what is held within it. This feature allows software outside of the enclave to establish trust with the code inside, allowing for the safe exchange of data and keys while protecting the data from the hosting environment. This includes hosting operating systems, hypervisors, management software and services, and even the operators of the environment.
Regarding what is not confidential computing, it is not other privacy enhancing technologies (PETs) like homomorphic encryption or secure multiparty computation. It is hardware rooted, trusted execution environments with attestation.
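To make the measure-then-attest flow above concrete, here is a hedged Python sketch. Real TEEs sign with a hardware-protected asymmetric key and a far richer quote format; the HMAC below is only a stand-in to illustrate the "hash the code, sign the hash, verify both" sequence, and every function name here is invented for this example:

```python
import hashlib
import hmac

def measure(enclave_code: bytes) -> str:
    # Measurement: a cryptographic hash of the code loaded into the enclave.
    return hashlib.sha256(enclave_code).hexdigest()

def attest(measurement: str, tee_key: bytes) -> str:
    # Attestation: the TEE signs its measurement. Real hardware uses an
    # asymmetric, hardware-rooted key; HMAC is a stand-in for illustration.
    return hmac.new(tee_key, measurement.encode(), hashlib.sha256).hexdigest()

def verify(measurement: str, quote: str, tee_key: bytes, expected: str) -> bool:
    # A relying party checks the signature over the measurement, then
    # compares it against the hash of the code it expects to be running.
    signature_ok = hmac.compare_digest(attest(measurement, tee_key), quote)
    return signature_ok and measurement == expected
```

Only after `verify` succeeds would software outside the enclave release secrets or data to the code inside, which is exactly the trust-establishment step described above.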
In Azure, confidential computing is integrated into our overall defense in depth strategy, which includes trusted launch, customer managed keys, Managed HSM, Microsoft Azure Attestation, and confidential virtual machine guest attestation integration with Microsoft Defender for Cloud.
Customer adoption patterns
With regards to customer adoption scenarios for confidential computing, we see customers across regulated industries such as the public sector, healthcare, and financial services ranging from private to public cloud migrations and cloud native workloads. One scenario that I’m really excited about is multi-party computations and analytics where you have multiple parties bringing their data together, in what is now being called data clean rooms, to perform computation on that data and get back insights that are much richer than what they would have gotten off their own data set alone. Confidential computing addresses the regulatory and privacy concerns around sharing this sensitive data with third parties. One of my favorite examples of this is in the advertising industry, where the Royal Bank of Canada (RBC) has set up a clean room solution where they take merchant purchasing data and combine it with their information around the consumers credit card transactions to get a full picture of what the consumer is doing. Using these insights, RBC’s credit card merchants can then offer their consumer very precise offers that are tailored to them, all without RBC seeing or revealing any confidential information from the consumers or the merchants. I believe that this architecture is the future of advertising.
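The clean-room pattern can be pictured as a function both parties trust: each side contributes raw records, and only an agreed aggregate crosses the boundary. A toy Python sketch of the idea, purely illustrative and not RBC's or any vendor's actual system (the function and field names are invented):

```python
def clean_room_overlap(merchant_customer_ids, bank_customer_ids):
    """Runs inside the trusted environment: both raw lists are visible
    only here, and only an aggregate count is released to either party."""
    overlap = set(merchant_customer_ids) & set(bank_customer_ids)
    # Release the aggregate, never the underlying identifiers.
    return {"customers_in_both_datasets": len(overlap)}
```

Confidential computing hardens this picture: the "trusted environment" is a TEE, so even the operator hosting the clean room cannot inspect the raw inputs while they are in use.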
Another exciting multi-party use case is BeeKeeperAI’s application of confidential computing and machine learning to accelerate the development of effective drug therapies. Until recently, drug researchers have been hampered by inaccessibility of patient data due to strict regulations applied to the sharing of personal health information (PHI). Confidential computing removes this bottleneck by ensuring that PHI is protected not just at rest and when transmitted, but also while in use, thus eliminating the need for data providers to anonymize this data before sharing it with researchers. And it is not just the data that confidential computing is protecting, but also the AI models themselves. These models can be expensive to train and therefore are valuable pieces of intellectual property that need to be protected.
To allow these valuable AI models to remain confidential yet scale, Azure is collaborating with NVIDIA to deploy confidential graphics processing units (GPUs) on Azure based on NVIDIA H100 Tensor Core GPU.
Current challenges
Regarding the challenges facing confidential computing, they tend to fall into four broad categories:
Availability, regionally and across services. Newer technologies are in limited supply or still in development, yet Azure has remained a leader in bringing to market services based on Intel® Software Guard Extensions (Intel® SGX) and AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). We are the first major cloud provider to offer confidential virtual machines based on Intel® Trust Domain Extensions (Intel® TDX), and we look forward to being one of the first cloud providers to offer confidential NVIDIA H100 Tensor Core GPUs. We see availability rapidly improving over the next 12 to 24 months.
Ease of adoption for developers and end users. The first generation of confidential computing services, based on Intel SGX technology, required rewriting code and working with various open source tools to make applications confidential computing enabled. Microsoft and our partners have collaborated on these open source tools, and we have an active community of partners running their Intel SGX solutions on Azure. The newer generation of confidential virtual machines on Azure, using AMD SEV-SNP (a hardware security feature enabled by AMD Infinity Guard) and Intel TDX, lets users run off-the-shelf operating systems, lift and shift their sensitive workloads, and run them confidentially. We are also using this technology to offer confidential containers in Azure, which allow users to run their existing container images confidentially.
Performance and interoperability. We need to ensure that confidential computing does not mean slower computing. The issue becomes more important with accelerators like GPUs, where data must be protected as it moves between the central processing unit (CPU) and the accelerator. Advances in this area will come from continued collaboration with standards committees such as the PCI-SIG, which has issued the TEE Device Interface Security Protocol (TDISP) for secure PCIe bus communication, and the CXL Consortium, which has issued the Compute Express Link™ (CXL™) specification for the secure sharing of memory among processors. They will also come from open source projects like Caliptra, which has created the specification, silicon logic, read-only memory (ROM), and firmware for implementing a Root of Trust for Measurement (RTM) block inside a system on chip (SoC).
I had the opportunity to participate in this year’s Open Confidential Computing Conference (OC3), hosted by our software partner, Edgeless Systems. This year’s event was particularly noteworthy due to a panel discussion on the impact and future of confidential computing. The panel featured some of the industry’s most respected technology leaders including Greg Lavender, Chief Technology Officer at Intel, Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, and Mark Papermaster, Chief Technology Officer at AMD. Felix Schuster, Chief Executive Officer at Edgeless Systems, moderated the panel discussion, which explored topics such as the definition of confidential computing, customer adoption patterns, current challenges, and future developments. The insightful discussion left a lasting impression on me and my colleagues.
What is confidential computing?
When it comes to understanding what exactly confidential computing entails, it all begins with a trusted execution environment (TEE) that is rooted in hardware. This TEE protects any code and data placed inside it, while in use in memory, from threats outside the enclave. These threats include everything from vulnerabilities in the hypervisor and host operating system to other cloud tenants and even cloud operators. In addition to providing protection for the code and data in memory, the TEE also possesses two crucial properties. The first is the ability to measure the code contained within the enclave. The second property is attestation, which allows the enclave to provide a verified signature that confirms the trustworthiness of what is held within it. This feature allows software outside of the enclave to establish trust with the code inside, allowing for the safe exchange of data and keys while protecting the data from the hosting environment. This includes hosting operating systems, hypervisors, management software and services, and even the operators of the environment.
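The two properties described above can be sketched in a few lines of code. This is a conceptual illustration only, not a real TEE API: actual hardware uses fused, vendor-specific signing keys and quote formats, and here a shared HMAC key stands in for the hardware signing key as a deliberate simplification.

```python
# Illustrative sketch of TEE measurement and attestation (not a real TEE API).
# A shared HMAC key stands in for the hardware-fused signing key.
import hashlib
import hmac

HARDWARE_KEY = b"simulated-hardware-rooted-key"  # fused into silicon in real TEEs

def measure(enclave_code: bytes) -> bytes:
    """Property 1: measure the code loaded into the enclave."""
    return hashlib.sha256(enclave_code).digest()

def attest(enclave_code: bytes) -> tuple:
    """Property 2: produce a signed quote binding the measurement to hardware."""
    m = measure(enclave_code)
    signature = hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()
    return m, signature

def verify(measurement: bytes, signature: bytes, expected_code: bytes) -> bool:
    """Relying party: check the signature AND that the measurement matches the
    code it expects, before releasing data or keys to the enclave."""
    expected_sig = hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest()
    return (hmac.compare_digest(signature, expected_sig)
            and measurement == measure(expected_code))

code = b"trusted enclave binary"
m, sig = attest(code)
assert verify(m, sig, expected_code=code)              # trust established
assert not verify(m, sig, expected_code=b"tampered")   # mismatch rejected
```

The key design point is that the verifier trusts the hardware root of trust, not the hosting environment: a valid signature over the expected measurement is what lets software outside the enclave safely release data and keys to the code inside.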
Regarding what is not confidential computing, it is not other privacy enhancing technologies (PETs) like homomorphic encryption or secure multiparty computation. It is hardware rooted, trusted execution environments with attestation.
In Azure, confidential computing is integrated into our overall defense in depth strategy, which includes trusted launch, customer managed keys, Managed HSM, Microsoft Azure Attestation, and confidential virtual machine guest attestation integration with Microsoft Defender for Cloud.
Customer adoption patterns
With regards to customer adoption scenarios for confidential computing, we see customers across regulated industries such as the public sector, healthcare, and financial services, with use cases ranging from private-to-public cloud migrations to cloud native workloads. One scenario that I'm really excited about is multi-party computation and analytics, where multiple parties bring their data together, in what is now being called a data clean room, to perform computation on that data and get back insights that are much richer than what they would have gotten from their own data set alone. Confidential computing addresses the regulatory and privacy concerns around sharing this sensitive data with third parties. One of my favorite examples of this is in the advertising industry, where the Royal Bank of Canada (RBC) has set up a clean room solution in which they take merchant purchasing data and combine it with their information about consumers' credit card transactions to get a full picture of what each consumer is doing. Using these insights, RBC's credit card merchants can then make their consumers very precise offers that are tailored to them, all without RBC seeing or revealing any confidential information from the consumers or the merchants. I believe that this architecture is the future of advertising.
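The clean-room pattern above can be sketched as follows. All names and data are invented for illustration; the point is the shape of the computation: both parties' raw rows enter the enclave, but only the joint aggregate ever leaves it.

```python
# Hypothetical sketch of a data clean room: raw rows from both parties enter
# the enclave, but only aggregate insights leave it. Data is invented.
from collections import defaultdict

def clean_room_insight(merchant_purchases, bank_transactions):
    """Runs inside the TEE: joins both data sets on customer id and returns
    only per-category spending totals, never the underlying rows."""
    categories = {cid: cat for cid, cat in merchant_purchases}
    totals = defaultdict(float)
    for cid, amount in bank_transactions:
        if cid in categories:
            totals[categories[cid]] += amount
    return dict(totals)  # aggregate insight only

merchant = [("c1", "groceries"), ("c2", "travel")]   # merchant's data
bank = [("c1", 40.0), ("c1", 10.0), ("c2", 300.0), ("c3", 99.0)]  # bank's data
print(clean_room_insight(merchant, bank))  # {'groceries': 50.0, 'travel': 300.0}
```

Combined with attestation, each party can verify exactly which aggregation code will see its rows before uploading anything, which is what makes the regulatory and privacy story work.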
Another exciting multi-party use case is BeeKeeperAI’s application of confidential computing and machine learning to accelerate the development of effective drug therapies. Until recently, drug researchers have been hampered by the inaccessibility of patient data due to strict regulations applied to the sharing of personal health information (PHI). Confidential computing removes this bottleneck by ensuring that PHI is protected not just at rest and when transmitted, but also while in use, thus eliminating the need for data providers to anonymize this data before sharing it with researchers. And it is not just the data that confidential computing protects, but also the AI models themselves. These models can be expensive to train and are therefore valuable pieces of intellectual property that need to be protected.
To allow these valuable AI models to remain confidential yet scale, Azure is collaborating with NVIDIA to deploy confidential graphics processing units (GPUs) on Azure based on the NVIDIA H100 Tensor Core GPU.
Current challenges
The challenges facing confidential computing tended to fall into four broad categories:
Availability, both regional and across services. Newer technologies are in limited supply or still in development, yet Azure has remained a leader in bringing to market services based on Intel® Software Guard Extensions (Intel® SGX) and AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). We are the first major cloud provider to offer confidential virtual machines based on Intel® Trust Domain Extensions (Intel® TDX), and we look forward to being one of the first cloud providers to offer confidential NVIDIA H100 Tensor Core GPUs. We see availability rapidly improving over the next 12 to 24 months.
Ease of adoption for developers and end users. The first generation of confidential computing services, based on Intel SGX technology, required rewriting code and working with various open source tools to make applications confidential computing enabled. Microsoft and our partners have collaborated on these open source tools, and we have an active community of partners running their Intel SGX solutions on Azure. The newer generation of confidential virtual machines on Azure, using AMD SEV-SNP (a hardware security feature enabled by AMD Infinity Guard) and Intel TDX, lets users run off-the-shelf operating systems, lift and shift their sensitive workloads, and run them confidentially. We are also using this technology to offer confidential containers in Azure, which allows users to run their existing container images confidentially.
Performance and interoperability. We need to ensure that confidential computing does not mean slower computing. The issue becomes more important with accelerators like GPUs, where the data must be protected as it moves between the central processing unit (CPU) and the accelerator. Advances in this area will come from continued collaboration with standards committees such as the PCI-SIG, which has issued the TEE Device Interface Security Protocol (TDISP) for secure PCIe bus communication, and the CXL Consortium, which has issued the Compute Express Link™ (CXL™) specification for the secure sharing of memory among processors. Advances will also come from open source projects like Caliptra, which has created the specification, silicon logic, read-only memory (ROM), and firmware for implementing a Root of Trust for Measurement (RTM) block inside a system on chip (SoC).
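The measurement chain that an RTM block anchors can be sketched in a few lines. This is a simplified illustration of the general pattern, mirroring a TPM-style register extend, not Caliptra's actual design: each boot stage is hashed into a running register before it executes, so any modified stage changes the final measurement.

```python
# Minimal sketch of a Root of Trust for Measurement (RTM) chain: each boot
# stage is hashed into a running register before it runs, TPM PCR-style.
# Simplified illustration, not Caliptra's actual design.
import hashlib

def extend(register: bytes, stage_image: bytes) -> bytes:
    """new_register = SHA-256(old_register || SHA-256(stage_image))"""
    return hashlib.sha256(register + hashlib.sha256(stage_image).digest()).digest()

def measure_boot(stages) -> bytes:
    register = b"\x00" * 32  # register starts at a known value at reset
    for stage in stages:
        register = extend(register, stage)  # measure each stage before running it
    return register

good_chain = [b"rom", b"firmware", b"bootloader"]
assert measure_boot(good_chain) == measure_boot(good_chain)  # deterministic
assert measure_boot(good_chain) != measure_boot([b"rom", b"evil", b"bootloader"])
```

Because the extend operation is one-way and order-sensitive, a verifier comparing the final register value against a known-good value can detect any tampering anywhere in the boot chain.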
Industry awareness. While confidential computing adoption is growing, awareness among IT and security professionals is still low. There is a tremendous opportunity for all confidential computing vendors to collaborate and participate in events aimed at raising awareness of this technology to key decision-makers such as CISOs, CIOs, and policymakers. This is especially relevant in industries such as government and other regulated sectors where the handling of highly sensitive data is critical. By promoting the benefits of confidential computing and increasing adoption rates, we can establish it as a necessary requirement for handling sensitive data. Through these efforts, we can work together to foster greater trust in the cloud and build a more secure and reliable digital ecosystem for all.
The future of confidential computing
When the discussion turned to the future of confidential computing, I had the opportunity to reinforce Azure’s vision for the confidential cloud, where all services will run in trusted execution environments. As this vision becomes a reality, confidential computing will no longer be a specialty feature but rather the standard for all computing tasks. In this way, the concept of confidential computing will simply become synonymous with computing itself.
Finally, all panelists agreed that the biggest advances in confidential computing will be the result of industry collaboration.
Microsoft at OC3
In addition to the panel discussion, Microsoft participated in several other presentations at OC3 that you may find of interest:
Removing our Hyper-V host OS and hypervisor from the Trusted Computing Base (TCB).
Container code and configuration integrity with confidential containers on Azure.
Customer managed and controlled Trusted Computing Base (TCB) with CVMs on Azure.
Enabling faster AI model training in healthcare with Azure confidential computing.
Project Amber—Intel’s attestation service.
Finally, I would like to encourage our readers to learn about Greg Lavender’s thoughts on OC3 2023.
All product names, logos, and brands mentioned above are properties of their respective owners.
The post Insights from the 2023 Open Confidential Computing Conference appeared first on Azure-Blog und Updates.
Quelle: Azure