Q&A: Steven Gonzalez on Indigenous futurist science fiction

Steven Gonzalez is a PhD candidate in the MIT Doctoral Program in History, Anthropology, Science, Technology, and Society (HASTS), where he researches the environmental impacts of cloud computing and data centers in the United States, Iceland, and Puerto Rico. He is also an author. Writing under the name E.G. Condé, he recently published his first book, “Sordidez.” It’s described as an “Indigenous futurist science fiction novella set in Puerto Rico and the Yucatán.” Set in the near future, it follows the survivors of civil war and climate disaster, led by protagonist Vero Diaz, as they reclaim their Indigenous heritage and heal their lands.

In this Q&A, Gonzalez describes the book’s themes, its inspirations, and its connection to research, people, and classes at MIT.

Q: Where did the inspiration for this story come from?

A: I began my time at MIT in September of 2017, when Hurricane María struck. It was a really difficult time for me at the Institute, starting a PhD program. And it’s MIT, so there’s a lot of pressure. I was still navigating the new institutional space and trying to understand my place in it. But I had a lot of people at the Institute who were extremely supportive during that time. I had family members in Puerto Rico who were stranded as a result of the hurricane, whom I didn’t hear from for a very long time — whom I feared dead. It was a very, very chaotic, confusing, and emotionally turbulent time for me, and it was incredibly difficult to try to be present in the first semester of a PhD program. Karen Gardner, our administrator, was incredibly supportive through that, as were the folks at the MIT Association of Puerto Ricans, who hosted fundraisers and linked students with counseling resources. But the trauma of the hurricane stayed with me, especially the images I saw of the aftermath in the town where my grandmother’s house was, where I spent summers as a child. To me, it was the greenest place I had ever known, and it looked like somebody had torched the entire landscape. It was traumatizing to see that image. But it seeded an idea: is there a way to burn without fire? There’s climate change, but there’s also climate terror. So one of the premises the book explores is geoengineering — and the flip side of geoengineering and terraforming is, of course, climate terror. In a way, we could frame what’s been happening with the fossil fuel industry as a form of climate terror, as well. So for me, these dual tracks of thought began right when I started at MIT.

Q: What do you see as the core themes of your novella?

A: One major theme is rebuilding. As I said, this story was very influenced by the trauma of Hurricane María and the incredibly inspiring accounts from family members, from people in Puerto Rico that I know, of regular people stepping up when the government — both federal and local — essentially abandoned them. There were so many failures of governance. But people stepped up and did what they could to help each other, to help neighbors. Neighbors banded together to clear trees from roads. They pooled resources to run generators so that everyone on the same street could have food that day. They shared medical supplies like insulin and other things that were scarce. This was incredibly inspiring for me. And a huge theme of the book is rebuilding in the aftermath of a fictive hurricane, which I call Teddy — named after President Theodore Roosevelt, with whom Puerto Rico’s journey as a U.S. commonwealth, or colony, began.

Healing is also a huge theme. In that sense, the story is also somewhat critical of Puerto Rican culture, refracted through my own experience as a queer person navigating Puerto Rico — a very religious and traditional place, and a very complex one at that. The main character, Vero, is a trans man. This is a person who has transitioned and has felt a lot of alienation; as a result of his gender transition, a lot of people don’t accept him, his identity, or who he is, even though he’s incredibly helpful in the rebuilding effort — to the point where he’s, in some ways, a leader, if not the leader. So the story becomes, in a way, about healing from the trauma of rejection, too. And not just Vero: other characters have gone through traumas that I think are very much shared across Latin America — experiences of assimilation, for instance. Latin America is a very complex place. We have Spanish as our lingua franca, but there are many Indigenous languages whose speakers have not been valued, and have even been actively punished for using them. There’s a deep trauma in losing a language, and in the case of Puerto Rico, the Indigenous language of the Taínos was destroyed by colonialism. The story is about rebuilding that language, healing, and “becoming.” In some ways, it’s about re-indigenization. And then the last part, as I said, is healing and reconstruction, but also transformation and metamorphosis — becoming Taíno. Again, what does that mean? What does it mean to be an Indigenous Caribbean person in the future? That’s one of the central themes of the story.

Q: How does the novella intersect with the work you’re doing as a PhD candidate in HASTS?

A: My research on cloud computing is very much about climate change. It’s pitched within the context of climate change and understanding how our digital ecosystem contributes not only to global warming, but to things like desertification. As a social scientist, that’s what I study. My studies of infrastructure are also directly referenced in the book in a lot of ways. For instance, the now-collapsed Arecibo Ionosphere Observatory, where some of my pandemic fieldwork occurred, is a setting in the book. And I am an anthropologist, and I am Puerto Rican. I draw both from my personal experience and from my anthropological lens to make a story that I think is very multicultural and multilingual. It’s set in Puerto Rico, but the other half is set in the Yucatán Peninsula, in what we’ll call the former Maya world. And there are a lot of intersections between the two settings, which go back to the deeper Indigenous history. Some people are calling this Indigenous futurism because it references the Taínos, who are the Indigenous people of Puerto Rico, but also the Mayas — the many different Maya groups throughout the Yucatán Peninsula, and also present-day Guatemala and Honduras. The story is about exchange between these two worlds. Even for someone trained as an anthropologist, that’s a really difficult thing to pull off, and I think my training really helped me achieve it.

Q: Are there examples of ways that being part of the MIT community while writing this book influenced the project and, in some ways, made it possible?

A: I relied on many of my colleagues for support. There’s some sign language in the book. In Puerto Rico, there’s a big tradition of sign language; there’s a version of American Sign Language called LSPR that’s found only in Puerto Rico. That’s something I’ve been aware of ever since I was a kid, but I’m not fluent in sign language or deeply familiar with deaf communities and their culture. I got a lot of help from Timothy Loh, who’s in the HASTS program and was extremely helpful in steering me toward sensitivity readers in the deaf community in his networks. My advisor, Stefan Helmreich, is very much a science fiction person in a lot of ways. His research is on ocean waves and the history and anthropology of biology; he’s done ethnography in deep-sea submersibles. He’s always thinking through a science-fictional lens, and he allowed me, for one of my qualifying exam lists, to mesh science fiction with social theory. That was another way I felt very supported by the Institute. In my coursework, I also took a few science fiction courses in other departments. I worked with Shariann Lewitt, who read the first version of the story. I workshopped it in her 21W.759 (Writing Science Fiction) class and got some really amazing feedback that led to what is now a publication and a dream fulfilled in so many ways. She took me under her wing and really believed in this book.
Source: Massachusetts Institute of Technology

Two Lincoln Laboratory software products honored with national Excellence in Technology Transfer Awards

The Federal Laboratory Consortium (FLC) has awarded 2023 Excellence in Technology Transfer Awards at the national level to two MIT Lincoln Laboratory software products developed to improve security: Keylime and the Forensic Video Exploitation and Analysis (FOVEA) tool suite. Keylime increases the security and privacy of data and services in the cloud, while FOVEA expedites the process of reviewing and extracting useful information from existing surveillance videos. These technologies both previously won FLC Northeast regional awards for Excellence in Technology Transfer, as well as R&D 100 Awards.

“Lincoln Laboratory is honored to receive these two national FLC awards, which demonstrate the capacity of government-nonprofit-industry partnerships to enhance our national security while simultaneously driving new economic growth,” says Louis Bellaire, acting chief technology ventures officer at the laboratory. “These awards are particularly meaningful because they show Lincoln Laboratory teams at their best, developing transformative R&D [research and development] and transferring these results to achieve the strongest benefits for the nation.”

A nationwide network of more than 300 government laboratories, agencies, and research centers, FLC helps facilitate the transfer of technologies out of research labs and into the marketplace. Ultimately, the goal of FLC — organized in 1974 and formally chartered by the Federal Technology Transfer Act of 1986 — is to “increase the impact of federal laboratories’ technology transfer for the benefit of the U.S. economy, society, and national security.” Each year, FLC confers awards to commend outstanding technology transfer efforts of employees of FLC member labs and their partners from industry, academia, nonprofit, or state and local government. The Excellence in Technology Transfer Award recognizes exemplary work in transferring federally developed technology.

Keylime: Enabling trust in the cloud 

Cloud computing services are an increasingly convenient way for organizations to store, process, and disseminate data and information. These services allow organizations to rent computing resources from a cloud provider, who handles the management and security of those rented machines. Although cloud providers claim that the machines are secure, customers have no way to verify this security. As a result, organizations with sensitive data, such as U.S. government agencies and financial institutions, are reluctant to reap the benefits of flexibility and low cost that commercial cloud providers offer.

Keylime is open-source software that enables customers with sensitive data to continuously verify the security of cloud machines, as well as edge and internet-of-things (IoT) devices. To perform its constant security checks, Keylime leverages a piece of hardware called a trusted platform module (TPM). The TPM generates a hash (a string of characters representing data) that changes significantly if the data are tampered with. Keylime was designed to make TPMs compatible with cloud technology, and it reacts to a TPM hash change within seconds to shut down a compromised machine. Keylime also enables users to securely bootstrap secrets (in other words, upload cryptographic keys, passwords, and certificates into the rented machines) without divulging these secrets to the cloud provider.
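
The underlying pattern — record a trusted measurement once, then repeatedly compare fresh measurements against it — can be sketched in a few lines. The following Python sketch is a simplified illustration of hash-based integrity checking in general, not Keylime’s actual code or API; in Keylime, measurements come from TPM hardware registers and are cryptographically signed, and the monitored path below is hypothetical.

import hashlib

# Minimal sketch of hash-based integrity checking: record a trusted
# ("golden") measurement once, then re-measure and compare. A hardware
# TPM would produce and sign these measurements; this software-only
# version is purely illustrative.

def measure(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

MONITORED = "/usr/bin/service"        # hypothetical monitored binary
golden = measure(MONITORED)           # recorded at provisioning time

# Later, a verifier polls; any tampering changes the digest completely.
if measure(MONITORED) != golden:
    print("Integrity failure detected; isolating the compromised machine.")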

Lincoln Laboratory transitioned Keylime to the public via an open-source license and distribution strategy that involved a series of partnerships. In 2015, after completing a prototype of Keylime, laboratory researchers Charles Munson and Nabil Schear collaborated with Boston University and Northeastern University to implement it as a core security component in the Mass Open Cloud (MOC) alliance, a public cloud service supporting thousands of researchers in the state. That experience led the team to work with Red Hat (under a pilot program funded by the U.S. Department of Homeland Security) to mature the technology in the open-source community.

Through the efforts of the Red Hat partnership, Keylime was accepted into the Linux Foundation’s highly selective Cloud Native Computing Foundation as a Sandbox project technology in 2019, a significant step in establishing the technology’s prestige. More than 50 open-source developers are now contributing to Keylime from around the world, and large organizations, including IBM, are deploying the technology to their cloud machines. Most recently, Red Hat released Keylime into its Enterprise Linux 9.1 operating system.

“We are proud that the Keylime team, our partners, and open-source developers have been recognized for their hard work and dedication with this national FLC award. We look forward to maintaining and building impactful collaborations, and helping the Keylime open-source community continue to grow,” says Munson.

The team members recognized with the FLC award are Munson and Schear (creators of Keylime at Lincoln Laboratory); Orran Krieger (MOC and Boston University); Luke Hinds and Michael Peters (Red Hat); Gheorghe Almasi (IBM); and Dan Dardani (formerly of the MIT Technology Licensing Office).

FOVEA: Accelerating video surveillance review 

While significant investments have improved camera coverage and video quality, the burden on video operators to analyze and obtain meaningful insights from surveillance footage — still a largely manual process — has greatly increased. The large-scale closed-circuit television systems patrolling public and commercial spaces can comprise hundreds or thousands of cameras, making daily investigation tasks burdensome. Examples of these tasks include searching for events of interest, investigating abandoned objects, and piecing together people’s activity from multiple cameras. As with any investigation, time is of the essence in apprehending persons of interest before they can inflict widespread harm.

FOVEA dramatically reduces the time required for such forensic video analysis. With FOVEA, security personnel can review hours of video in minutes and perform complex investigations in hours rather than days, translating to faster reaction times to in-progress events and a stronger overall security posture. No pre-analysis video curation or proprietary server equipment is required; the add-on suite of video analytic capabilities can be applied to any video stream on demand and supports both routine investigations and unforeseen or catastrophic circumstances such as terrorist threats. The suite includes capabilities for jump back, which automatically rewinds video to critical times and detects general scene changes; video summarization, which condenses all motion activity from long raw video into a short visual summary; multicamera navigation and path reconstruction, which tracks activity across place and time, camera to camera, in chronological order; and on-demand person search, which scans neighboring cameras for persons of similar appearance.
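
As a rough illustration of how motion-based video summarization can work in general, here is a minimal Python sketch using OpenCV frame differencing. The file name and thresholds are illustrative assumptions, and this is not FOVEA’s actual algorithm.

import cv2

# Minimal motion-summarization sketch: keep only the frames that contain
# significant motion, so an hour of raw video condenses into a short
# "activity only" sequence.
cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input file
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

kept = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Pixelwise difference against the previous frame highlights motion.
    delta = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(delta, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 0.01 * gray.size:  # >1% of pixels changed
        kept.append(frame)
    prev_gray = gray
cap.release()
print(f"kept {len(kept)} frames as the motion summary")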

Lincoln Laboratory began developing FOVEA under sponsorship from the U.S. Department of Homeland Security to address the critical needs of security operators in mass transit security centers. Through an entrepreneurial training program based on the National Science Foundation’s Innovation Corps, Lincoln Laboratory conducted a broad set of customer interviews, which ultimately led to Doradus Labs licensing FOVEA. The Colorado-based software development and technical support small business offered FOVEA to two of its casino customers and is now introducing the technology to its customers in the education and transportation industries.

The laboratory team members recognized with the FLC award are Marianne DeAngelus and Jason Thornton (technology invention and primary contact with Doradus); Natalya Luciw, Diane Staheli, Sanjeev Mohindra, and (formerly) Tyler Shube (customer discovery); Ronald Duarte, Zach Elko, and Brett Levasseur (software design and technology demonstrations); Jesslyn Alekseyev, Heather Griffin, and Kimberlee Chang, and (formerly) Christine Russ, Aaron Yahr, and Marc Valliant (algorithm and software development); Dan Dardani (formerly of the MIT Technology Licensing Office) and Louis Bellaire (licensing); and Drinalda Kume, Jayme Selinger, and Zach Sweet (contracting services).

“It is wonderful to see the software team’s efforts recognized with this award,” says DeAngelus. “I am grateful for the many friendly people across Lincoln Laboratory and MIT who made this transition happen — especially the licensing, contracts, and communications offices.”

The FLC 2023 award winners will be recognized on March 29 at an awards reception and ceremony during the FLC National Meeting. 
Source: Massachusetts Institute of Technology

MIT to launch new Office of Research Computing and Data

As the computing and data needs of MIT’s research community continue to grow — both in their quantity and complexity — the Institute is launching a new effort to ensure that researchers have access to the advanced computing resources and data management services they need to do their best work. 

At the core of this effort is the creation of the new Office of Research Computing and Data (ORCD), to be led by Professor Peter Fisher, who will step down as head of the Department of Physics to serve as the office’s inaugural director. The office, which formally opens in September, will build on and replace the MIT Research Computing Project, an initiative supported by the Office of the Vice President for Research, which contributed in recent years to improving the computing resources available to MIT researchers.

“Almost every scientific field makes use of research computing to carry out our mission at MIT — and computing needs vary between different research groups. In my world, high-energy physics experiments need large amounts of storage and many identical general-purpose CPUs, while astrophysical theorists simulating the formation of galaxy clusters need relatively little storage, but many CPUs with high-speed connections between them,” says Fisher, the Thomas A. Frank (1977) Professor of Physics, who will take up the mantle of ORCD director on Sept. 1.

“I envision ORCD to be, at a minimum, a centralized system with a spectrum of different capabilities to allow our MIT researchers to start their projects and understand the computational resources needed to execute them,” Fisher adds.

The Office of Research Computing and Data will provide services spanning hardware, software, and cloud solutions, including data storage and retrieval, and offer advice, training, documentation, and data curation for MIT’s research community. It will also work to develop innovative solutions that address emerging or highly specialized needs, and it will advance strategic collaborations with industry.

The exceptional performance of MIT’s endowment last year has provided a unique opportunity for MIT to distribute endowment funds to accelerate progress on an array of Institute priorities in fiscal year 2023, beginning July 1, 2022. On the basis of community input and visiting committee feedback, MIT’s leadership identified research computing as one such priority, enabling the expanded effort that the Institute commenced today. Future operation of ORCD will incorporate a cost-recovery model.

In his new role, Fisher will report to Maria Zuber, MIT’s vice president for research, and coordinate closely with MIT Information Systems and Technology (IS&T), MIT Libraries, and the deans of the five schools and the MIT Schwarzman College of Computing, among others. He will also work closely with Provost Cynthia Barnhart.

“I am thrilled that Peter has agreed to take on this important role,” says Zuber. “Under his leadership, I am confident that we’ll be able to build on the important progress of recent years to deliver to MIT researchers best-in-class infrastructure, services, and expertise so they can maximize the performance of their research.”

MIT’s research computing capabilities have grown significantly in recent years. Ten years ago, the Institute joined with a number of other Massachusetts universities to establish the Massachusetts Green High-Performance Computing Center (MGHPCC) in Holyoke to provide the high-performance, low-carbon computing power necessary to carry out cutting-edge research while reducing its environmental impact. MIT’s capacity at the MGHPCC is now almost fully utilized, however, and an expansion is underway.

The need for more advanced computing capacity is not the only issue to be addressed. Over the last decade, there have been considerable advances in cloud computing, which is increasingly used in research computing, requiring the Institute to take a new look at how it works with cloud service providers and how it allocates cloud resources to departments, labs, and centers. And MIT’s longstanding model for research computing — which has been mostly decentralized — can lead to inefficiencies and inequities among departments, even as it offers flexibility.

The Institute has been carefully assessing how to address these issues for several years, including in connection with the establishment of the MIT Schwarzman College of Computing. In August 2019, a college task force on computing infrastructure found a “campus-wide preference for an overarching organizational model of computing infrastructure that transcends a college or school and most logically falls under senior leadership.” The task force’s report also addressed the need for a better balance between centralized and decentralized research computing resources.

“The needs for computing infrastructure and support vary considerably across disciplines,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “With the new Office of Research Computing and Data, the Institute is seizing the opportunity to transform its approach to supporting research computing and data, including not only hardware and cloud computing but also expertise. This move is a critical step forward in supporting MIT’s research and scholarship.”

Over time, ORCD (pronounced “orchid”) aims to recruit a staff of professionals — including data scientists, engineers, and system and hardware administrators — who will enhance, support, and maintain MIT’s research computing infrastructure and ensure that all researchers on campus have access to a minimum level of advanced computing and data management.

The new research computing and data effort is part of a broader push to modernize MIT’s information technology infrastructure and systems. “We are at an inflection point, where we have a significant opportunity to invest in core needs, replace or upgrade aging systems, and respond fully to the changing needs of our faculty, students, and staff,” says Mark Silis, MIT’s vice president for information systems and technology. “We are thrilled to have a new partner in the Office of Research Computing and Data as we embark on this important work.”
Source: Massachusetts Institute of Technology

Lincoln Laboratory honored for transfer of security-enhancing technologies

The Federal Laboratory Consortium for Technology Transfer (FLC) awarded its 2021 Excellence in Technology Transfer Award for the Northeast region to two Lincoln Laboratory technologies developed to improve security.

The first technology, Forensic Video Exploitation and Analysis (FOVEA), is a suite of analytic tools that makes it significantly easier for investigators to review surveillance video footage. The second technology, Keylime, is a software architecture designed to increase the security and privacy of data and services in the cloud. Both technologies have transitioned to commercial use via license or open-source access.

“These Federal Laboratory Consortium awards are an acknowledgement that the advanced capabilities developed at MIT Lincoln Laboratory are valued, not only for their contribution to enhancing national security, but also for their value to related private-sector needs,” says Bernadette Johnson, the chief technology ventures officer at Lincoln Laboratory. “Technology transfer is considered an integral element of the Department of Defense’s mission and is explicitly called out in the laboratory’s Prime Contract and Sponsoring Agreement. The transfer of these two technologies is emblematic of the unique ‘R&D-to-rapid-prototyping’ transition pipeline we have been developing at Lincoln.”

Speeding up video review 

The FOVEA program first began under sponsorship from the Department of Homeland Security (DHS) to address the challenge of efficiently reviewing video surveillance footage. The process of searching for a specific event, investigating abandoned objects, or piecing together activity from multiple cameras can take investigators hours or even days. It is especially challenging in large-scale closed-circuit TV systems, like those that surveil subway stations.

The FOVEA suite overcomes these challenges with three advanced tools. The first tool, video summarization, condenses all motion activity into a visual summary, transforming, for example, an hour of raw video into a three-minute product that highlights only motion. The second tool, called jump back, automatically seeks to the portion of the video where an idle object, such as a backpack, first appeared. The third tool, multi-camera navigation and path reconstruction, allows an operator to track a person or vehicle of interest across multiple camera views.
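
The jump-back idea can be sketched abstractly: if a test can tell whether the idle object is present in a given frame, and the object stays put once it appears, its first appearance can be found with a binary search instead of a linear scan through hours of footage. The Python sketch below is a hypothetical illustration of that search, not FOVEA’s implementation.

def first_appearance(present, lo, hi):
    """Binary-search the earliest frame index in [lo, hi] where the idle
    object is present, assuming presence is monotone once it appears.
    `present(i)` is a caller-supplied test, e.g. comparing the object's
    bounding box against a background model for frame i."""
    while lo < hi:
        mid = (lo + hi) // 2
        if present(mid):
            hi = mid          # object already there: look earlier
        else:
            lo = mid + 1      # not there yet: look later
    return lo

# Toy usage: a backpack appears at frame 54321 of 100000.
appeared_at = 54321
print(first_appearance(lambda i: i >= appeared_at, 0, 99999))  # -> 54321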

Notably, FOVEA’s analytic tools can be integrated directly into existing video surveillance systems and can be processed on any desktop or laptop computer. In contrast, most commercial offerings first require customers to export their video data for analysis and to purchase proprietary server equipment or cloud services.

“The project team worked very hard on not just the development of the FOVEA prototype, but also packaging the software in a way that accommodates hand-off to third-party deployment sites and transition partners,” says Marianne DeAngelus, who led the development of FOVEA with a team in the Homeland Sensors and Analytics Group.

Under government sponsorship, the developers first deployed FOVEA to two mass transit facilities. Through participation in an MIT-led Innovation-Corps program, the team then adapted the technology into a commercial application. Doradus Lab, Inc. has since licensed FOVEA for security surveillance in casinos.

“Though FOVEA was originally developed for a specific use case of mass transit security, our tech transfer to industry will make it available for a broader set of security applications that would benefit from accelerated forensic analysis of surveillance video. We and our DHS sponsor are happy that this may lead to a wider impact of the technology,” adds Jason Thornton, who leads the technical group.

Putting trust in the cloud

Keylime is making it possible for government and industry users with sensitive data to increase the security of their cloud and internet-of-things (IoT) devices. This free, open-source software architecture enables cloud customers to securely upload cryptographic keys, passwords, and certificates into the cloud without divulging these secrets to their cloud provider, and to secure their cloud resources without relying on their provider to do it for them.
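
The spirit of that secure bootstrapping can be illustrated with a short, hypothetical Python sketch using the `cryptography` library: the secret is encrypted before it ever leaves the customer, and the decryption key is released to a node only after an integrity check passes. This is an analogy for the idea, not Keylime’s actual protocol.

from cryptography.fernet import Fernet

# Toy "secure bootstrapping" sketch: the customer encrypts a secret
# locally and releases the decryption key only to a node that passes an
# integrity check. The cloud provider relays ciphertext but never sees
# the key. Names and the attestation stub are invented for illustration.
key = Fernet.generate_key()            # stays with the customer/verifier
ciphertext = Fernet(key).encrypt(b"db-password: hunter2")

def node_passes_attestation() -> bool:
    return True  # stand-in for a real TPM/hash-based integrity check

# The ciphertext is useless anywhere without the key.
if node_passes_attestation():
    secret = Fernet(key).decrypt(ciphertext)  # key released to trusted node
    print(secret.decode())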

Keylime started as an internal project funded through Lincoln Laboratory’s Technology Office in 2015. Eventually, the Keylime team began discussions with RedHat, one of the world’s largest open-source software companies, to expand the technology’s reach. With RedHat’s help, Keylime was transitioned in 2019 into the Cloud Native Computing Foundation as a sandbox technology, with more than 30 open-source developers contributing to it from around the world. Most recently, IBM announced its plans to adopt Keylime into its cloud fleet, enabling IBM to attest to the security of its thousands of cloud servers.

“Keylime’s transfer and adoption into the open-source community and cloud environments helps to empower edge/IoT and cloud customers to validate provider claims of trustworthiness, rather than needing to rely solely on trust of the underlying environment for compliance and correctness,” says Charles Munson, who developed Keylime with former laboratory staff member Nabil Schear and adapted it as an open-source platform with Luke Hinds at RedHat. 

Keylime achieves its cloud security by leveraging a piece of hardware called a TPM, an industry-standard hardware security chip. A TPM generates a hash, a short string of numbers representing a much larger amount of data, that changes significantly if data are even slightly tampered with. Keylime can detect and react to this tampering in under a second.

Before Keylime, TPMs were incompatible with cloud technology, slowing down systems and forcing engineers to change software to accommodate the module. Keylime gets around these problems by serving as a piece of intermediary software that allows users to leverage the security benefits of the TPM without having to make their software compatible with it.

Transferring to industry

The transition of Lincoln Laboratory’s technology to industry and government is central to its role as a federally funded research and development center (FFRDC).

The mission of the FLC is to facilitate technology transfer and to educate FFRDCs and industry on the process. More than 300 federal laboratories, facilities, research centers, and their parent agencies make up the FLC community.

The transfer of these FLC-awarded technologies was supported by Bernadette Johnson and Lou Bellaire in the Technology Ventures Office; David Pronchick, Drinalda Kume, Zachary Sweet, and Jayme Selinger of the Contracting Services Department; and Daniel Dardani in MIT’s Technology Licensing Office, along with the technology development teams. Both FOVEA and Keylime were also awarded R&D 100 Awards in 2020, acknowledging them among the year’s 100 most innovative technologies available for sale or license.

The FLC will recognize the award recipients at a regional meeting in October.
Source: Massachusetts Institute of Technology

Keylime security software is deployed to IBM cloud

Keylime, a cloud security software architecture, is being adopted into IBM’s cloud fleet. Originally developed at MIT Lincoln Laboratory to allow system administrators to ensure the security of their cloud environment, Keylime is now a Cloud Native Computing Foundation sandbox technology with more than 30 open-source developers contributing to it from around the world. The software will enable IBM to remotely attest to the security of its thousands of cloud servers.

“It is exciting to see the hard work of the growing Keylime community coming to fruition,” says Charles Munson, a researcher in the Secure Resilient Systems and Technology Group at Lincoln Laboratory who created Keylime with Nabil Schear, now at Netflix. “Adding integrated support for Keylime into IBM’s cloud fleet is an important step towards enabling cloud customers to have a zero-trust capability of ‘never trust, always verify.'”

In a blog post announcing IBM’s integration of Keylime, George Almasi of IBM Research said, “IBM has planned a rapid rollout of Keylime-based attestation to the entirety of its cloud fleet in order to meet requirements for a strong security posture from its financial services and other enterprise customers. This will leverage work done on expanding the scalability and resilience of Keylime to manage large numbers of nodes, allowing Keylime-based attestation to be operationalized at cloud data center scale.”

Keylime is a key bootstrapping and integrity management software architecture. It was first developed to enable organizations to check for themselves that the servers storing and processing their data are as secure as cloud service providers claim they are. Today, many organizations use a form of cloud computing called infrastructure-as-a-service, whereby they rent computing resources from a cloud provider who is responsible for the security of the underlying systems.

To enable remote cloud-security checks, Keylime leverages a piece of hardware called a trusted platform module, or TPM, an industry-standard and widely used hardware security chip. A TPM generates a hash, a short string of numbers representing a much larger amount of data. If data are tampered with even slightly, the hash will change significantly, a security alarm that Keylime can detect and react to in under a second.

Before Keylime, TPMs were incompatible with cloud technology, slowing down systems and forcing engineers to change software to accommodate the module. Keylime gets around these problems by serving as a piece of intermediary software that allows users to leverage the security benefits of the TPM without having to make all of their software compatible with it.

In 2019, Keylime was transitioned into the CNCF as a sandbox technology with the help of RedHat, one of the world’s leading open-source software companies. This transition better incorporated Keylime into the Linux open-source ecosystem, making it simpler for users to adopt. In 2020, the Lincoln Laboratory team that developed Keylime was awarded an R&D 100 Award, recognizing the software among the year’s 100 most innovative new technologies available for sale or license.
Source: Massachusetts Institute of Technology

Lincoln Laboratory earns a 2020 Stratus Award for Cloud Computing

MIT Lincoln Laboratory is among the winners of the 2020 Stratus Awards for Cloud Computing. The Business Intelligence Group presented 38 companies, services, and executives with these awards that recognize leaders in cloud-based technology. The laboratory won for developing TRACER (Timely Randomization Applied to Commodity Executables at Runtime), software that prevents cyber attackers from remotely attacking Windows applications.

Since 2012, the Business Intelligence Group has acknowledged industry leaders with several awards for innovation in technology and services. With the move of so many business and institutional functions to the cloud, the Stratus Awards were initiated to recognize companies and individuals that have enabled effective, secure cloud-based computing.

Maria Jimenez, chief nominations officer of the Business Intelligence Group, says, “We now rely on the cloud for everything from entertainment to productivity, so we are proud to recognize all of our winners. Each and every one is helping in their own way to make our lives richer every day. We are honored and proud to reward these leaders in business.”

TRACER addresses a problem inherent in immensely popular commodity Windows applications: all installations of these applications look alike, so cyber intruders can compromise millions of computers simply by “cracking” into one. In addition, because more than 90 percent of desktop computers run Microsoft Windows with closed-source applications, many cyber protections that rely on having source code available are not applicable to these desktop systems.

The patented TRACER technology re-randomizes sensitive internal data and layout at every output from the application. This continuous re-randomization thwarts attempts to use data leaks to hijack the computer’s internals; any information leaked by the application will be stale when attackers attempt to exploit it.
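
The moving-target idea behind this kind of defense can be modeled with a toy example: sensitive items live at random locations, and the whole layout is reshuffled at every output event, so a leaked address is stale by the time an attacker tries to use it. This Python sketch is a conceptual analogy only — TRACER operates on real memory layouts inside Windows binaries, not on a dictionary.

import random

class RerandomizingTable:
    """Toy model of a moving-target defense: items live at random
    'addresses', and the layout is reshuffled at every output event, so
    any address an attacker leaks is stale by the time it is used."""
    def __init__(self, items):
        self.items = list(items)
        self._rerandomize()

    def _rerandomize(self):
        self.addresses = random.sample(range(10**9), len(self.items))
        self.layout = dict(zip(self.addresses, self.items))

    def emit_output(self, value):
        print(value)            # any output to the outside world...
        self._rerandomize()     # ...triggers re-randomization

table = RerandomizingTable(["secret_fn_a", "secret_fn_b"])
leaked = table.addresses[0]     # attacker learns an address via a leak
table.emit_output("response")   # program produces output; layout moves
print(leaked in table.layout)   # almost certainly False: address is stale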

TRACER’s research and development was led by Hamed Okhravi of Lincoln Laboratory’s Secure Resilient Systems and Technology Group and included contributions by Jason Martin, David Bigelow, David Perry, Kristin Dahl, Robert Rudd, Thomas Hobson, and William Streilein.

“One of our primary goals for TRACER was to make it as easy to use as possible. The current version requires minimal steps to set up and requires no user interaction during its operation, which we hope facilitates its widespread adoption,” Okhravi said.

The software has been made available via a commercial company. For its innovation and potential to revolutionize the cybersecurity field, TRACER was named a 2020 R&D 100 Award winner by R&D World. TRACER was also honored with MIT Lincoln Laboratory’s 2019 Best Invention Award.
Source: Massachusetts Institute of Technology

Detecting program-tampering in the cloud

For small and midsize organizations, the outsourcing of demanding computational tasks to the cloud — huge banks of computers accessible over the Internet — can be much more cost-effective than buying their own hardware. But it also poses a security risk: A malicious hacker could rent space on a cloud server and use it to launch programs that hijack legitimate applications, interfering with their execution.

In August, at the International Cryptology Conference, researchers from MIT and Israel’s Technion and Tel Aviv University presented a new system that can quickly verify that a program running on the cloud is executing properly. That amounts to a guarantee that no malicious code is interfering with the program’s execution.

The same system also protects the data used by applications running in the cloud, cryptographically ensuring that the user won’t learn anything other than the immediate results of the requested computation. If, for instance, hospitals were pooling medical data in a huge database hosted on the cloud, researchers could look for patterns in the data without compromising patient privacy.

Although the paper reports new theoretical results (view PDF), the researchers have also built working code that implements their system. At present, it works only with programs written in the C programming language, but adapting it to other languages should be straightforward.

The new work, like much current research on secure computation, requires that computer programs be represented as circuits. So the researchers’ system includes a “circuit generator” that automatically converts C code to circuit diagrams. The circuits it produces, however, are much smaller than those produced by its predecessors, so by itself, the circuit generator may find other applications in cryptography.

Zero knowledge

Alessandro Chiesa, a graduate student in electrical engineering and computer science at MIT and one of the paper’s authors, says that because the new system protects both the integrity of programs running in the cloud and the data they use, it’s a good complement to the cryptographic technique known as homomorphic encryption, which protects the data transmitted by the users of cloud applications.

On the paper, Chiesa joins Madars Virza, also a graduate student in electrical engineering and computer science; the Technion’s Daniel Genkin and Eli Ben-Sasson, who was a visiting scientist at MIT for the past year; and Tel Aviv University’s Eran Tromer. Ben-Sasson and Tromer were co-PIs on the project.

The researchers’ system implements a so-called zero-knowledge proof, a type of mathematical game invented by MIT professors Shafi Goldwasser and Silvio Micali and their colleague Charles Rackoff of the University of Toronto. In its cryptographic application, a zero-knowledge proof enables one of the game’s players to prove to the other that he or she knows a secret key without actually divulging it.

But as its name implies, a zero-knowledge proof is a more general method for proving mathematical theorems — and the correct execution of a computer program can be redescribed as a theorem. So zero-knowledge proofs are by definition able to establish whether or not a computer program is executing correctly.

The problem is that existing implementations of zero-knowledge proofs — except in cases where they’ve been tailored to particular algorithms — take as long to execute as the programs they’re trying to verify. That’s fine for password verification, but not for a computation substantial enough that it might be farmed out to the cloud.

The researchers’ innovation is a practical, succinct zero-knowledge proof for arbitrary programs. Indeed, it’s so succinct that it can typically fit in a single data packet.

Linear thinking

As Chiesa explains, his and his colleagues’ approach depends on a variation of what’s known as a “probabilistically checkable proof,” or PCP. “With a standard mathematical proof, if you want to verify it, you have to go line by line from the start to the end,” Chiesa says. “If you were to skip one line, potentially, that could fool you. Traditional proofs are very fragile in this respect.”

“The PCP theorem says that there is a way to rewrite proofs so that instead of reading them line by line,” Chiesa adds, “what you can do is flip a few coins and probabilistically sample three or four lines and have a probabilistic guarantee that it’s correct.”

The problem, Virza says, is that “the current known constructions of the PCP theorem, though great in theory, have quite bad practical realizations.” That’s because the theory assumes that an adversary who’s trying to produce a fraudulent proof has unbounded computational capacity. What Chiesa, Virza and their colleagues do instead is assume that the adversary is capable only of performing simple linear operations.

“This assumption is, of course, false in practice,” Virza says. “So we use a cryptographic encoding to force the adversary to only linear evaluations. There is a way to encode numbers into such a form that you can add those numbers, but you can’t do anything else. This is how we sidestep the inefficiencies of the PCP theorem.”

“I think it’s a breakthrough,” says Ran Canetti, a professor of computer science at Boston University who was not involved with the research. When the PCP theorem was first proved, Canetti says, “nobody ever thought that this would be something that would be remotely practical. They’ve become a little bit better over the years, but not that much better.”

“Four or five years ago,” Canetti adds, “these guys wrote on the flag the crazy goal of trying to make [proofs for arbitrary programs] practical, and I must say, I thought, ‘They’re nuts.’ But they did it. They actually have something that works.”
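
The flavor of probabilistic checking is easier to see in a much simpler, classic example that predates this work: Freivalds’ algorithm, which verifies a claimed matrix product far faster than recomputing it, with an error probability that shrinks exponentially in the number of random trials. The Python sketch below illustrates only the spot-checking idea; it is not the researchers’ system.

import random

def freivalds(A, B, C, trials=20):
    """Probabilistically verify that A x B == C (all n x n, integers).
    Each trial multiplies by a random 0/1 vector: O(n^2) work instead of
    the O(n^3) of recomputing the product. A wrong C escapes one trial
    with probability at most 1/2, so `trials` independent rounds drive
    the error below 2**-trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # caught a discrepancy: claimed product is wrong
    return True            # consistent with A x B == C, with high probability

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]                  # the true product
print(freivalds(A, B, C))                 # True
print(freivalds(A, B, [[0, 0], [0, 0]]))  # almost certainly False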
Source: Massachusetts Institute of Technology

Protecting data in the cloud

Cloud computing — outsourcing computational tasks over the Internet — could give home-computer users unprecedented processing power and let small companies launch sophisticated Web services without building massive server farms.

But it also raises privacy concerns. A bank of cloud servers could be running applications for 1,000 customers at once; unbeknownst to the hosting service, one of those applications might have no purpose other than spying on the other 999.

Encryption could make cloud servers more secure. Only when the data is actually being processed would it be decrypted; the results of any computations would be re-encrypted before they’re sent off-chip.

In the last 10 years or so, however, it’s become clear that even when a computer is handling encrypted data, its memory-access patterns — the frequency with which it stores and accesses data at different memory addresses — can betray a shocking amount of private information. At the International Symposium on Computer Architecture in June, MIT researchers described a new type of secure hardware component, dubbed Ascend, that would disguise a server’s memory-access patterns, making it impossible for an attacker to infer anything about the data being stored. Ascend also thwarts another type of attack, known as a timing attack, which attempts to infer information from the amount of time that computations take.

Computational trade-off

Similar designs have been proposed in the past, but they’ve generally traded too much computational overhead for security. “This is the first time that any hardware design has been proposed — it hasn’t been built yet — that would give you this level of security while only having about a factor of three or four overhead in performance,” says Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, whose group developed the new system. “People would have thought it would be a factor of 100.”

The “trivial way” of obscuring memory-access patterns, Devadas explains, would be to request data from every address in the memory — whether a memory chip or a hard drive — and throw out everything except the data stored at the one address of interest. But that would be much too time-consuming to be practical.

What Devadas and his collaborators — graduate students Ling Ren, Xiangyao Yu and Christopher Fletcher, and research scientist Marten van Dijk — do instead is to arrange memory addresses in a data structure known as a “tree.” A family tree is a familiar example of a tree, in which each “node” (in this example, a person’s name) is attached to only one node above it (the node representing the person’s parents) but may connect to several nodes below it (the person’s children).

With Ascend, addresses are assigned to nodes randomly. Every node lies along some “path,” or route through the tree, that starts at the top and passes from node to node, without backtracking, until arriving at a node with no further connections. When the processor requires data from a particular address, it sends requests to all the addresses in a path that includes the one it’s really after.

To prevent an attacker from inferring anything from sequences of memory access, every time Ascend accesses a particular memory address, it randomly swaps that address with one stored somewhere else in the tree. As a consequence, accessing a single address multiple times will very rarely require traversing the same path.

Less computation to disguise an address

By confining its dummy requests to a single path, rather than sending them to every address in memory, Ascend exponentially reduces the amount of computation required to disguise an address. In a separate paper, which is as-yet unpublished but has been posted online, the researchers prove that querying paths provides just as much security as querying every address in memory would.

Ascend also protects against timing attacks. Suppose that the computation being outsourced to the cloud is the mammoth task of comparing a surveillance photo of a criminal suspect to random photos on the Web. The surveillance photo itself would be encrypted, and thus secure from prying eyes. But spyware in the cloud could still deduce what public photos it was being compared to. And the time the comparisons take could indicate something about the source photos: Photos of obviously different people could be easy to rule out, but photos of very similar people might take longer to distinguish.

So Ascend’s memory-access scheme has one final wrinkle: It sends requests to memory at regular intervals — even when the processor is busy and requires no new data. That way, attackers can’t tell how long any given computation is taking.
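
The path-based access pattern described above can be simulated in a few lines. The following Python toy models only which tree paths the memory observes — assigning each block to a random leaf, reading the whole root-to-leaf path on every access, and immediately remapping the block — and omits the stash and eviction machinery a real hardware design like Ascend would need.

import random

class ToyPathORAM:
    """Toy simulation of path-based oblivious access: each block maps to
    a random leaf of a binary tree; every access reads the whole
    root-to-leaf path, then remaps the block to a fresh random leaf, so
    repeated accesses to one block rarely repeat a path."""
    def __init__(self, depth, blocks):
        self.leaves = 2 ** depth
        self.pos = {b: random.randrange(self.leaves) for b in blocks}

    def path(self, leaf):
        # Nodes visited from root to leaf, in heap numbering (root = 1).
        node = leaf + self.leaves
        nodes = []
        while node >= 1:
            nodes.append(node)
            node //= 2
        return list(reversed(nodes))

    def access(self, block):
        visited = self.path(self.pos[block])   # memory sees this whole path
        self.pos[block] = random.randrange(self.leaves)  # remap immediately
        return visited

oram = ToyPathORAM(depth=3, blocks=["x", "y"])
print(oram.access("x"))
print(oram.access("x"))  # usually a different path for the same block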
Source: Massachusetts Institute of Technology

Diagnosing “broken” buildings to make them greener

The co-founders of MIT spinout KGS Buildings have a saying: “All buildings are broken.” Energy wasted through faulty or inefficient equipment, they say, can lead to hundreds of thousands of dollars in avoidable annual costs.

That’s why KGS aims to “make buildings better” with cloud-based software, called Clockworks, that collects existing data on a building’s equipment — specifically HVAC (heating, ventilation, and air conditioning) equipment — to detect leaks, breaks, and general inefficiencies, as well as energy-saving opportunities. The software then translates the data into graphs, metrics, and text that explain monetary losses, where it’s available for building managers, equipment manufacturers, and others through the cloud.

Building operators can use that information to fix equipment, prioritize repairs, and take efficiency measures — such as using chilly outdoor air, instead of air conditioning, to cool rooms.

“The idea is to make buildings better, by helping people save time, energy, and money, while providing more comfort, enjoyment, and productivity,” says Nicholas Gayeski SM ’07, PhD ’10, who co-founded KGS with Sian Kleindienst SM ’06, PhD ’10 and Stephen Samouhos ’04, SM ’07, PhD ’10.

The software is now operating in more than 300 buildings across nine countries, collecting more than 2 billion data points monthly. The company estimates these buildings will save an average of 7 to 9 percent in avoidable costs per year; the exact figure depends entirely on the building. “If it’s a relatively well-performing building already, it may see lower savings; if it’s a poor-performing building, it could be much higher, maybe 15 to 20 percent,” says Gayeski, who graduated from MIT’s Building Technology Program, along with his two co-founders.

Last month, MIT commissioned the software for more than 60 of its own buildings, monitoring more than 7,000 pieces of equipment over 10 million square feet. Previously, in a year-long trial for one MIT building, the software saved MIT $286,000.

Benefits, however, extend beyond financial savings, Gayeski says. “There are people in those buildings: What’s their quality of life? There are people who work on those buildings. We can provide them with better information to do their jobs,” he says.

The software can also help buildings earn additional incentives by participating in utility programs. “We have major opportunities in some utility territories, where energy-efficiency has been incentivized. We can help buildings meet energy-efficiency goals that are significant in many states, including Massachusetts,” says Alex Grace, director of business development for KGS.

Other customers include universities, health-care and life-science facilities, schools, and retail buildings.

Equipment-level detection

Fault-detection and diagnostics research spans about 50 years — with contributions by early KGS advisors and MIT professors of architecture Les Norford and Leon Glicksman — and about a dozen companies now operate in the field. But KGS, Gayeski says, is one of a few ventures gathering “equipment-level data” through various sensors, actuators, and meters attached to equipment that measure functionality.

Clockworks sifts through that massive store of data, measuring temperatures, pressures, flows, set points, and control commands, among other things. It’s able to gather a few thousand data points every five minutes — a finer level of granularity than meter-level analytics software, which may extract, say, a data point every 15 minutes from a utility meter.

“That gives a lot more detail, a lot more granular information about how things are operating and could be operating better,” Gayeski says. For example, Clockworks may detect specific leaky valves or stuck dampers on air handlers in HVAC units that cause excessive heating or cooling.

To make its analyses accurate, KGS employs what Gayeski calls “mass customization of code.” The company has code libraries for each type of equipment it works with — such as air handlers, chillers, and boilers — that can be tailored to specific equipment that varies greatly from building to building. This makes Clockworks easily scalable, Gayeski says. But it also helps the software produce rapid, intelligent analytics — such as accurate graphs, metrics, and text that spell out problems clearly. Moreover, it helps the software rapidly equate data with monetary losses.

“When we identify that there’s a fault with the right data, we can tell people right away this is worth, say, $50 a day or this is worth $1,000 a day — and we’ve seen $1,000-a-day faults — so that allows facilities managers to prioritize which problems get their attention,” he says.

KGS Buildings’ foundation

The KGS co-founders met as participants in the MIT entry for the 2007 Solar Decathlon — an annual competition where college teams build small-scale, solar-powered homes to display at the National Mall in Washington. Kleindienst worked on lighting systems, while Samouhos and Gayeski worked on mechanical design and energy-modeling.

After the competition, the co-founders started a company with a broad goal of making buildings better through energy savings. While pursuing their PhDs, they toyed with various ideas, such as developing low-cost sensing technology with wireless communication that could be retrofitted onto older equipment.

Seeing building data as an emerging tool for fault-detection and diagnostics, however, they turned to Samouhos’ PhD dissertation, which focused on building condition monitoring. It came complete with the initial diagnostics codes and a framework for an early KGS module.

“We all came together anticipating that the building industry was about to change a lot in the way it uses data, where you take the data, you figure out what’s not working well, and do something about it,” Gayeski says. “At that point, we knew it was ripe to move forward.”

Throughout 2010, they began trialing the software at several locations, including MIT. They found guidance among the seasoned entrepreneurs at MIT’s Venture Mentoring Service — learning to fail fast, and often. “That means keep at it, keep adapting and adjusting, and if you get it wrong, you just fix it and try again,” Gayeski says.

Today, the company — headquartered in Somerville, Mass., with 16 employees — is focusing on expanding its customer base and advancing its software into other applications. About 180 new buildings were added to Clockworks in the past year; by the end of 2014, KGS projects it could deploy its software to 800 buildings. “Larger companies are starting to catch on,” Gayeski says. “Major health-care institutions, global pharmaceuticals, universities, and [others] are starting to see the value and deciding to take action — and we’re starting to take off.”

Liberating data

By bringing all this data about building equipment to the cloud, the technology has plugged into the “Internet of things” — a concept where objects would be connected, via embedded chips and other methods, to the Internet for inventory and other purposes.

Data on HVAC systems have been connected through building automation for some time. KGS, however, can connect that data to cloud-based analytics and extract “really rich information” about equipment, Gayeski says. For instance, he says, the startup has quick-response codes — like a barcode — for each piece of equipment it measures, so people can read all data associated with it.

“As more and more devices are readily connected to the Internet, we may be tapping straight into those, too,” Gayeski says. “And that data can be liberated from its local environment to the cloud,” Grace adds. Down the road, as technology to monitor houses — such as automated thermostats and other sensors — begins to “unlock the data in the residential scale,” Gayeski says, “KGS could adapt over time into that space, as well.”
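
As a toy illustration of how equipment-level readings can be turned into a prioritized dollar figure, the Python sketch below flags simultaneous heating and cooling — a classic air-handler fault — in 5-minute samples and prices the waste. All sensor names, thresholds, and prices are invented for the example; Clockworks’ actual rule libraries are proprietary.

# Toy rule-based fault detection: scan 5-minute HVAC samples for
# simultaneous heating and cooling and translate the wasted energy into
# a daily dollar figure. Values below are illustrative assumptions.
samples = [
    # (heating_valve_pct, cooling_valve_pct, wasted_power_kw)
    (80, 0, 0.0), (75, 40, 12.5), (70, 45, 13.0), (0, 30, 0.0),
]

ENERGY_PRICE = 0.15          # $/kWh, illustrative
INTERVAL_HOURS = 5 / 60      # each sample covers five minutes

faulty = [s for s in samples if s[0] > 10 and s[1] > 10]  # both valves open
wasted_kwh = sum(s[2] * INTERVAL_HOURS for s in faulty)
print(f"{len(faulty)} faulty intervals, ~${wasted_kwh * ENERGY_PRICE:.2f} wasted")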
Source: Massachusetts Institute of Technology