An Open Source Approach to Creating Company Events

Serendipitous connections, like discovering a shared interest with someone in line at the coffee shop, look slightly different in distributed organizations. This year, leveraging the values of open source, Automattic shared the planning and organization of a company-wide 24-hour online event among the entire company, in order to create space for serendipitous connections. 

What followed was energy, diversity, and abundant creativity!

The Purpose: Intentional Random Connections

As we welcomed 2023, this event was an invitation to connect across divisions, time zones, and geographies spanning 90+ countries:

Intentional: We scheduled a date one month in advance so that people could create space to join at times that were convenient to them and their team.

Random: We invited any and all wonderful ideas to the agenda. We believe that great ideas can come from anywhere and anyone.

Connections: Communication is the oxygen of a distributed company and our intention was to create space for connections to form.

The Invitation: Open-Sourcing the Agenda

Rather than deciding the agenda up front, we entrusted it to the whole company. We invited people to contribute what they’d like and then join any sessions that interested them. (This structure is built on the values of Open Space Technology by Harrison Owen.)

Hosting a session was simple: choose a topic and add it directly to a shared Google Calendar we created for the day. From there, hosts just showed up at their chosen time and ran their session with whoever decided to join.

Movement among sessions was welcome and everything was optional. 

The Outcomes: Abundant Creativity!

In total, 38 sessions were hosted over 24 hours (both synchronous and asynchronous). The creativity, energy, and diversity they generated far exceeded what any one person could have planned alone.

We had cat parties, dog walks, asynchronous virtual raves, collaborative poem writing, role-playing games, LEGO sessions, language lessons, art parties, parenting discussions, muffin making, company trivia, children’s book readings, open mics, musician parties, walks around the globe, and a lot of games! For the full list of sessions, see the bottom of this post.

While many companies report difficulty getting engagement in online company events, the survey feedback from this event showed that:

100% of participants would recommend future events like this

95% of participants met with someone they’d never met before

100% of people who hosted would host a session again

For a 2,000-person company, these are pretty staggering results. The value of distributed ownership and co-creation speaks for itself!

We hope these results convince you to try this out in your own organization!

A Peek Into Some of the Sessions

Full List of Sessions

Synchronous:

Hidden Identity Games x2

Automattic Trivia

TypeRacer 

Mandarin Conversation: 中文聊天網聚  

Tabletop Roleplaying Games 101

Cat’s Party

Dungeons & Dragons for Newbies 

Children’s Book Reading and Conversation 

Bavarder en Français: French Conversation 

LEGO AFOLs Unite

Play Pokemon 

Cyberpunk 2077 Roleplaying Session

Make Banana Muffins with Greg 

Defuse a Bomb x2 

Drawful Game

Virtual Dog Park 

Charla en Español 

Art Party

Parenting Teens Conversation

Brazilian Portuguese Conversation: Papo Farofa 

Open Mic 

Power Yoga 

Play Gartic Phone

Automusician Connection

Cat Party 

Let’s Speak Italiano! 

Tales from the Table 

Open Screenshare 

Animal Crossing Parade of Homes

Silly Games and Snacks 

Casual Hangout

Asynchronous:

Intentional Random Virtual Rave 

A Walk Around the Automattic Globe 

Poetomattic

Source: RedHat Stack

Extending reality: Immersive Stream for XR is now Generally Available

Last year at Google I/O, we announced the preview of Immersive Stream for XR, which leverages Google Cloud GPUs to host, render, and stream high-quality photorealistic experiences to millions of mobile devices around the world. Today, we are excited to announce that the service is now generally available for Google Cloud customers. With Immersive Stream for XR, users don’t need powerful hardware or a special application to be immersed in a 3D or AR world; instead, they can click a link or scan a QR code and immediately be transported to extended reality. Immersive Stream for XR is being used to power the “immersive view” feature in Google Maps, while automotive and retail brands are enhancing at-home shopping experiences for consumers, from virtually configuring a new vehicle to visualizing new appliances in the home.

What’s new with GA

With this latest product milestone, Immersive Stream for XR now supports content developed in Unreal Engine 5.0. We have also added the ability to render content in landscape mode to support tablet and desktop devices. With landscape mode and the ability to render to larger screens, there is more real estate for creating sophisticated UIs and interactions, enabling more full-featured immersive applications. Finally, you can now embed Immersive Stream for XR content on your own website using an HTML iframe, allowing users to access your immersive applications without leaving your domain.

How customers are using Immersive Stream for XR

A common type of experience our customers want to create is a “space” where users can walk around and interact with objects. For example, home improvement retailers can let their shoppers place appliance options or furniture in renderings of their actual living spaces; travel and hospitality companies can provide virtual tours of a hotel room or event space; and museums can offer virtual experiences where users can walk around and interact with virtual exhibits. To help customers create these experiences faster, we collaborated with Google Partner Innovation (PI) to create a spaces template, the first of a series of templates developed with close customer involvement within the PI Early Access Program. The spaces template standardizes the common interactions across these scenarios, such as user movement and object interaction.

Aosom, a home and garden ecommerce retailer, recently used this template to launch an experience that allows users to place furniture in either a virtual living room or in their own space using AR. Users can customize the item’s color and options, then add products to their shopping cart once satisfied. “Home & Garden shoppers are always looking for offerings that are unique and compatible with their own living space,” said Chunhua Wang, Chief Executive Officer, Aosom. “Google Cloud’s Immersive Stream for XR has enabled Aosom to deliver a visually vivid and immersive shopping experience to our customers.”

Immersive Stream for XR especially benefits automakers, who can now enable prospective buyers to browse and customize new vehicles in photorealistic detail and visualize them in their own driveway.

Most recently, Kia Germany leveraged the technology to promote the Kia Sportage, one of their top-selling vehicles. The virtual experience was accessible via a QR code on the Kia website. “At Kia Germany we are excited to use Google Immersive Stream for XR to reach new consumers and provide them the perfect experience to discover our Sportage,” said Jean-Philippe Pottier, Manager of Digital Platforms at Kia Germany. “Our users love that they can change colors, engines, and interact with the model in 3D and augmented reality.”

Last, with the addition of Unreal Engine 5.0 and support for bigger and more realistic worlds, users can explore faraway historical landmarks without leaving their homes. For example, Virtual Worlds uses photogrammetry techniques to capture historical sites, polish them with a team of designers, and then create interactive experiences on top. Because of the visual detail involved, these experiences have historically required expensive workstations with GPUs to perform the rendering, limiting their availability to physical exhibits. Using Unreal Engine 5.0’s new Nanite and Lumen capabilities, the team created an educational tour of the Great Sphinx of Giza and made it accessible to anyone using Immersive Stream for XR, available here. Elliot Mizroch, CEO of Virtual Worlds, explains, “We’ve captured incredible sites from Machu Picchu to the Pyramids of Giza and we want everyone to be able to explore these monuments and learn about our heritage. Immersive Stream for XR finally gives us this opportunity.”

Next steps

We’re excited to see all of the innovative use cases you build using Google Cloud’s Immersive Stream for XR. Learn more by reading our documentation, or get started by downloading the Immersive Stream for XR template project. To get started with Unreal Engine 5.0 and landscape mode, you can download our updated Immersive Stream for XR template project, load it into Unreal Engine 5.0.3, and start creating your content. If you’d like to embed your experience on your own website, you can contact us to allowlist your domain.
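To make the embedding step more concrete, here is a minimal sketch of what an iframe embed of a streamed experience might look like. It is an assumption-laden illustration, not Google’s reference markup: the experience URL, the iframe dimensions, and the output file name are placeholders, and the real URL only works once your domain has been allowlisted as described above.

```python
# Minimal sketch: generate a page that embeds an Immersive Stream for XR
# experience in an HTML iframe. All values below are placeholders.
from pathlib import Path

EXPERIENCE_URL = "https://xr.example.com/my-experience"  # hypothetical stream URL

page = f"""<!doctype html>
<html>
  <body>
    <h1>Explore the showroom</h1>
    <!-- The streamed 3D/AR content renders inside this iframe. -->
    <iframe src="{EXPERIENCE_URL}" width="1280" height="720" style="border:0"></iframe>
  </body>
</html>
"""

Path("xr_embed.html").write_text(page)
print("Wrote xr_embed.html")
```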
Source: Google Cloud Platform

Microsoft and NVIDIA experts talk AI infrastructure

This post has been co-authored by Sheila Mueller, Senior GBB HPC+AI Specialist, Microsoft; Gabrielle Davelaar, Senior GBB AI Specialist, Microsoft; Gabriel Sallah, Senior HPC Specialist, Microsoft; Annamalai Chockalingam, Product Marketing Manager, NVIDIA; J Kent Altena, Principal GBB HPC+AI Specialist, Microsoft; Dr. Lukasz Miroslaw, Senior HPC Specialist, Microsoft; Uttara Kumar, Senior Product Marketing Manager, NVIDIA; Sooyoung Moon, Senior HPC + AI Specialist, Microsoft.

As AI emerges as a crucial tool in so many sectors, it’s clear that the need for optimized AI infrastructure is growing. Going beyond just GPU-based clusters, cloud infrastructure that provides low-latency, high-bandwidth interconnects, and high-performance storage can help organizations handle AI workloads more efficiently and produce faster results.

HPCwire recently sat down with Microsoft Azure and NVIDIA’s AI and cloud infrastructure specialists and asked a series of questions to uncover AI infrastructure insights, trends, and advice based on their engagements with customers worldwide.

How are your most interesting AI use cases dependent on infrastructure?

Sheila Mueller, Senior GBB HPC+AI Specialist, Healthcare & Life Sciences, Microsoft: Some of the most interesting AI use cases are in patient health care, both clinical and research. Research in science, engineering, and health is creating significant improvements in patient care, enabled by high-performance computing and AI insights. Common use cases include molecular modeling, therapeutics, genomics, and health treatments. Predictive analytics and AI, coupled with cloud infrastructure purpose-built for AI, are the backbone for improvements and simulations in these use cases and can lead to a faster prognosis and the ability to research cures. See how Elekta brings hope to more patients around the world with the promise of AI-powered radiation therapy.

Gabrielle Davelaar, Senior GBB AI Specialist, Microsoft: Many manufacturing companies need to train inference models at scale while being compliant with strict local and European-level regulations. AI is placed on the edge with high-performance compute. Full traceability with strict security rules on privacy and security is critical. This can be a tricky process as every step must be recorded for reproduction, from simple things like dataset versions to more complex things such as knowing which environment was used with what machine learning (ML) libraries with its specific versions. Machine learning operations (MLOps) for data and model auditability now make this possible. See how BMW uses machine learning-supported robots to provide flexibility in quality control for automotive manufacturing.

Gabriel Sallah, Senior HPC Specialist, Automotive Lead, Microsoft: We’ve worked with car makers to develop advanced driver assistance systems (ADAS) and advanced driving systems (ADS) platforms in the cloud, using integrated services to build a highly scalable deep learning pipeline for creating AI/ML models. HPC techniques were applied to schedule, scale, and provision compute resources while ensuring effective monitoring, cost management, and data traceability. The result: faster simulation and training times than their existing solutions, thanks to the close integration of data inputs, compute simulation/training runs, and data outputs.

Annamalai Chockalingam, Product Marketing Manager, Large Language Models & Deep Learning Products, NVIDIA: Progress in AI has led to the explosion of generative AI, particularly with advancements in large language models (LLMs) and diffusion-based transformer architectures. These models now recognize, summarize, translate, predict, and generate language, images, videos, code, and even protein sequences, with little to no training or supervision, based on massive datasets. Early use cases include improved customer experiences through dynamic virtual assistants, AI-assisted content generation for blogs, advertising, and marketing, and AI-assisted code generation. Infrastructure purpose-built for AI that can handle the compute and scalability demands is key.

What AI challenges are customers facing, and how does the right infrastructure help?

John Lee, Azure AI Platforms & Infrastructure Principal Lead, Microsoft: When companies try to scale AI training models beyond a single node to tens and hundreds of nodes, they quickly realize that AI infrastructure matters. Not all accelerators are alike. Optimized scale-up node-level architecture matters. How the host CPUs connect to groups of accelerators matters. When scaling beyond a single node, the scale-out architecture of your cluster matters. Selecting a cloud partner that provides AI-optimized infrastructure can be the difference between an AI project’s success or failure. Read the blog: AI and the need for purpose-built cloud infrastructure.

Annamalai Chockalingam: AI models are becoming increasingly powerful due to a proliferation of data, continued advancements in GPU compute infrastructure, and improvements in techniques across both training and inference of AI workloads. Yet, combining the trifecta of data, compute infrastructure, and algorithms at scale remains challenging. Developers and AI researchers require systems and frameworks that can scale, orchestrate, crunch mountains of data, and manage MLOps to optimally create deep learning models. End-to-end tools for production-grade systems incorporating fault tolerance for building and deploying large-scale models for specific workflows are scarce.

Kent Altena, Principal GBB HPC+AI Specialist, Financial Services, Microsoft: A frequent challenge is deciding between the open flexibility of a true HPC environment and the robust MLOps pipelines and capabilities of a machine learning platform. Traditional HPC approaches, whether scheduled by a legacy scheduler like HPC Pack or SLURM or by a cloud-native scheduler like Azure Batch, are great when customers need to scale to hundreds of GPUs, but in many cases AI environments need a DevOps approach to AI model management and control over which models are authorized, or conversely need overall workflow management.

Dr. Lukasz Miroslaw, Senior HPC Specialist, Microsoft: AI infrastructure is not only GPU-based clusters but also low-latency, high-bandwidth interconnects between the nodes and high-performance storage. The storage requirement is often the limiting factor for large-scale distributed training, as the amount of data used for training in autonomous driving projects can grow to petabytes. The challenge is to design an AI platform that meets strict requirements in terms of storage throughput, capacity, support for multiple protocols, and scalability.
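To make the multi-node point above concrete, here is a minimal, hedged sketch of distributed data-parallel training with PyTorch. It is not a Microsoft or NVIDIA reference configuration: the NCCL backend, the torchrun launch environment, and the toy model are assumptions chosen only to show where the scale-out interconnect comes into play (the gradient all-reduce on every backward pass) and why storage throughput for the input data matters at scale.

```python
# Minimal multi-node data-parallel training sketch (illustrative only).
# Assumes one process per GPU, launched with torchrun, which provides the
# RANK/LOCAL_RANK/WORLD_SIZE environment variables used for rendezvous.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    dist.init_process_group(backend="nccl")      # cross-node rendezvous over the cluster interconnect
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in model; a real workload would load sharded data from high-throughput storage here.
    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):                          # toy training loop with random data
        x = torch.randn(64, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                          # gradients are all-reduced across all nodes here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with something like `torchrun --nnodes=16 --nproc_per_node=8 train.py`, the all-reduce traffic on every step is exactly where node-level and cluster-level interconnect design starts to dominate throughput.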

What are the most frequently asked questions about AI infrastructure?

John Lee: “Which platform should I use for my AI project/workload?” There is no single magic product or platform that is right for every AI project. Customers usually have a good understanding of what answers they are looking for but aren’t sure which AI products or platforms will get them that answer in the fastest, most economical, and most scalable way. A cloud partner with a wide portfolio of AI products, solutions, and expertise can help find the right solution for specific AI needs.

Uttara Kumar, Senior Product Marketing Manager, NVIDIA: “How do I select the right GPU for our AI workloads?” Customers want the flexibility to provision the right-sized GPU acceleration for different workloads to optimize cloud costs (fractional GPU, single GPU, multiple GPUs all the way up to multiple GPUs across multi-node clusters). Many also ask, “How do you make the most of the GPU instance/virtual machines and leverage it within applications/solutions?” Performance-optimized software is key to doing that.

Sheila Mueller: “How do I leverage the cloud for AI and HPC while ensuring data security and governance?” Customers want to automate the deployment of these solutions, often across multiple research labs with specific simulations. Customers want a secure, scalable platform that provides control over data access to provide insight. Cost management is also a focus in these discussions.

Kent Altena: “How best should we implement this GPU infrastructure to run our models?” We know what we need to run and have built the models, but we also need to understand the final mile. The answer is not always a straightforward, one-size-fits-all answer. It requires understanding their models, what they are attempting to solve, and what their inputs, outputs, and workflow look like.

What have you learned from customers about their AI infrastructure needs?

John Lee: The majority of customers want to leverage the power of AI but are struggling to put an actionable plan in place to do so. They worry about what their competition is doing and whether they are falling behind but, at the same time, are not sure what first steps to take on their journey to integrate AI into their business.

Annamalai Chockalingam: Customers are looking for AI solutions to improve operational efficiency and deliver innovative solutions to their end customers. Easy-to-use, performant, platform-agnostic, and cost-effective solutions across the compute stack are incredibly desirable to customers.

Gabriel Sallah: All customers are looking to reduce the cost of training an ML model. Thanks to the flexibility of the cloud resources, customers can select the right GPU, storage I/O, and memory configuration for the given training model.

Gabrielle Davelaar: Costs are critical. With the current economic uncertainty, companies need to do more with less and want their AI training to be more efficient and effective. Something many people still don’t realize is that training and inference costs can be optimized through the software layer.

What advice would you give to businesses looking to deploy AI or speed innovation?

Uttara Kumar: Invest in a platform that is performant, versatile, scalable, and can support the end-to-end workflow—start to finish—from importing and preparing data sets for training, to deploying a trained network as an AI-powered service using inference.

John Lee: Not every AI solution is the same. AI-optimized infrastructure matters, so be sure to understand the breadth of products and solutions available in the marketplace. And just as importantly, make sure you engage with a partner that has the expertise to help navigate the complex menu of possible solutions that best match what you need.

Sooyoung Moon, Senior HPC + AI Specialist, Microsoft: No amount of investment can guarantee success without thorough early-stage planning. Reliable and scalable infrastructure for continuous growth is critical.

Kent Altena: Understand your workflow first. What do you want to solve? Is it primarily a calculation-driven solution, or is it built upon a data graph-driven workload? Having that in mind will go a long way toward determining the optimal approach to start down.

Gabriel Sallah: What are the dependencies across various teams responsible for creating and using the platform? Create an enterprise-wide architecture with common toolsets and services to avoid duplication of data, compute monitoring, and management.

Sheila Mueller: Involve stakeholders from IT and Lines of Business to ensure all parties agree to the business benefits, technical benefits, and assumptions made as part of the business case.

Learn more about Azure and NVIDIA

Visit our HPCwire Solution Channel.
Learn more about Microsoft Azure purpose-built infrastructure for AI.

Source: Azure

Automate your attack response with Azure DDoS Protection solution for Microsoft Sentinel

DDoS attacks are best known for their ability to take down applications and websites by overwhelming servers and infrastructure with large amounts of traffic. However, cybercriminals also use DDoS attacks to exfiltrate data, extort victims, or act on political or ideological motives. One of the most devastating features of DDoS attacks is their unique ability to disrupt and create chaos in targeted organizations or systems. This plays well for bad actors who leverage DDoS as a smokescreen for more sophisticated attacks, such as data theft, and it demonstrates the increasingly sophisticated tactics cybercriminals use to intertwine multiple attack vectors to achieve their goals.

Azure offers several network security products that help organizations protect their applications: Azure DDoS Protection, Azure Firewall, and Azure Web Application Firewall (WAF). Customers deploy and configure each of these services separately to enhance the security posture of their protected environment and applications in Azure. Each product has a unique set of capabilities to address specific attack vectors, but the greatest benefit comes from combining them: together, these three products provide more comprehensive protection. Indeed, to combat modern attack campaigns, one should use a suite of products and correlate security signals between them to detect and block multi-vector attacks.

We are announcing a new Azure DDoS Protection Solution for Microsoft Sentinel. It allows customers to identify bad actors from Azure’s DDoS security signals and block possible new attack vectors in other security products, such as Azure Firewall.

Using Microsoft Sentinel as the glue for attack remediation

Each of Azure’s network security services is fully integrated with Microsoft Sentinel, a cloud-native security information and event management (SIEM) solution. However, the real power of Sentinel is in collecting security signals from these separate security services and analyzing them to create a centralized view of the attack landscape. Sentinel correlates events and creates incidents when anomalies are detected. It then automates the response to mitigate sophisticated attacks.

In our example case, when cybercriminals use DDoS attacks as a smokescreen for data theft, Sentinel detects the DDoS attack and uses the information it gathers on attack sources to prevent the next phases of the adversary lifecycle. By using remediation capabilities in Azure Firewall, and in other network security services in the future, the attacking DDoS sources are blocked. This cross-product detection and remediation strengthens the security posture of the organization, with Sentinel as the orchestrator.

Automated detection and remediation of sophisticated attacks

Our new Azure DDoS Protection Solution for Sentinel provides a single consumable solution package that allows customers to achieve this level of automated detection and remediation. The solution includes the following components:

Azure DDoS Protection data connector and workbook.
Alert rules that help retrieve the source DDoS attackers. These are new rules we created specifically for this solution, and customers can also use them for other objectives in their security strategy.
A Remediation IP Playbook that automatically creates remediation in Azure Firewall to block the source DDoS attackers. Although we document and demonstrate how to use Azure Firewall for remediation, any third-party firewall that has a Sentinel playbook can be used for remediation. This gives customers the flexibility to use the new DDoS solution with any firewall.

The solution is initially released for Azure Firewall (or any third-party firewall), and we plan to enhance it to support Azure WAF soon.

Let’s see a couple of use cases for this cross-product attack remediation.

Use case #1: remediation with Azure Firewall

Let’s consider an organization that uses Azure DDoS Protection and Azure Firewall, and the attack scenario in the following figure:

An adversary controls a compromised botnet. They start with a DDoS smokescreen attack targeting the resources in the organization’s virtual network, then plan to access those network resources through scanning and phishing attempts until they gain access to sensitive data.

Azure DDoS Protection detects the smokescreen attack and mitigates this volumetric network flood. In parallel, it starts sending log signals to Sentinel. Next, Sentinel retrieves the attacking IP addresses from the logs and deploys remediation rules in Azure Firewall. These rules prevent further, non-DDoS attacks from those sources from reaching the resources in the virtual network, even after the DDoS attack ends and DDoS mitigation ceases.
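As a rough sketch of the retrieval step, the query below pulls candidate attacker source IPs from DDoS mitigation flow logs in the Log Analytics workspace behind Sentinel. Treat it as an assumption-based illustration rather than the solution’s actual analytics rule: the table, category, and column names follow the common AzureDiagnostics layout for DDoS Protection logs but should be verified against your own workspace schema, and the workspace ID is a placeholder.

```python
# Sketch: list top source IPs seen in DDoS mitigation flow logs over the last hour.
# Table/category/column names are assumptions; check your workspace schema.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

KQL = """
AzureDiagnostics
| where Category == "DDoSMitigationFlowLogs"
| summarize flows = count() by SourceIP = tostring(sourcePublicIpAddress_s)
| top 20 by flows desc
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(hours=1))

for table in result.tables:
    for row in table.rows:
        # Each row holds a candidate attacker IP and its flow count; these are the
        # kinds of values a remediation playbook would feed into firewall rules.
        print(row[0], row[1])
```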

Use case #2: remediation with Azure WAF (coming soon)

Now, let’s consider another organization that runs a web application in Azure. It uses Azure DDoS Protection and Azure WAF to protect its web application. The adversary’s objective in this case is to attack the web application and exfiltrate sensitive data, starting with a DDoS smokescreen attack and then launching web attacks on the application.

 

When Azure DDoS Protection detects the volumetric smokescreen attack, it starts mitigating it and sends logs to Sentinel. Sentinel retrieves the attack sources and applies remediation in Azure WAF to block future web attacks on the application.

Get started with Azure DDoS Protection today

As attackers employ advanced multi-vector attack techniques during the adversary lifecycle, it’s important to harness security services as much as possible to automatically orchestrate attack detection and mitigation.

For this reason, we created the new Azure DDoS Protection solution for Microsoft Sentinel, which helps organizations better protect their resources and applications against these advanced attacks. We will continue to enhance this solution and add more security services and use cases.

Follow our step-by-step configuration guidance on how to deploy the new solution.
Source: Azure

Amazon Athena releases data source connector for Google Cloud Storage

Starting today, you can use Amazon Athena to query data in Google Cloud Storage. With Athena’s data source connectors, you can run SQL queries on data stored in relational, non-relational, object, and custom data sources without having to move data into S3 or learn a new variant of a query language. Google Cloud Storage is a managed service designed to store data in buckets, similar to Amazon S3.
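For orientation, here is a minimal sketch of what such a query could look like from Python with boto3, once the Google Cloud Storage connector has been deployed and registered as a data source. The catalog, database, table, and results bucket names are placeholders, not values from the announcement.

```python
# Sketch: query data that lives in Google Cloud Storage through the Athena
# GCS connector. Catalog/database/table/bucket names below are placeholders.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT * FROM orders LIMIT 10",      # hypothetical table
    QueryExecutionContext={
        "Catalog": "gcs_catalog",                     # hypothetical connector catalog
        "Database": "gcs_db",                         # hypothetical database
    },
    ResultConfiguration={
        "OutputLocation": "s3://my-athena-results/"   # placeholder results bucket
    },
)

print("Started query:", response["QueryExecutionId"])
```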
Source: aws.amazon.com

Amazon MemoryDB for Redis announces service level agreement with 99.99% availability

Amazon MemoryDB for Redis now offers a 99.99% availability service level agreement (SLA) when using a Multi-Availability Zone (Multi-AZ) configuration. Previously, MemoryDB offered a 99.9% SLA for Multi-AZ configurations. With this launch, MemoryDB has updated its Multi-AZ SLA to offer 10x higher availability.
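To put those SLA figures in perspective, a quick back-of-the-envelope calculation shows the maximum downtime each level allows over a 30-day month (the SLA itself defines the exact measurement period and terms):

```python
# Maximum downtime implied by each availability level over a 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

for availability in (0.999, 0.9999):
    allowed_downtime = MINUTES_PER_MONTH * (1 - availability)
    print(f"{availability:.2%} availability -> up to {allowed_downtime:.1f} minutes/month of downtime")

# 99.90% -> ~43.2 minutes; 99.99% -> ~4.3 minutes, i.e. a tenfold reduction in allowed downtime.
```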
Source: aws.amazon.com