Two Lincoln Laboratory software products honored with national Excellence in Technology Transfer Awards

The Federal Laboratory Consortium (FLC) has awarded 2023 Excellence in Technology Transfer Awards at the national level to two MIT Lincoln Laboratory software products developed to improve security: Keylime and the Forensic Video Exploitation and Analysis (FOVEA) tool suite. Keylime increases the security and privacy of data and services in the cloud, while FOVEA expedites the process of reviewing and extracting useful information from existing surveillance videos. These technologies both previously won FLC Northeast regional awards for Excellence in Technology Transfer, as well as R&D 100 Awards.

“Lincoln Laboratory is honored to receive these two national FLC awards, which demonstrate the capacity of government-nonprofit-industry partnerships to enhance our national security while simultaneously driving new economic growth,” says Louis Bellaire, acting chief technology ventures officer at the laboratory. “These awards are particularly meaningful because they show Lincoln Laboratory teams at their best, developing transformative R&D [research and development] and transferring these results to achieve the strongest benefits for the nation.”

A nationwide network of more than 300 government laboratories, agencies, and research centers, FLC helps facilitate the transfer of technologies out of research labs and into the marketplace. Ultimately, the goal of FLC — organized in 1974 and formally chartered by the Federal Technology Transfer Act of 1986 — is to “increase the impact of federal laboratories’ technology transfer for the benefit of the U.S. economy, society, and national security.” Each year, FLC confers awards to commend outstanding technology transfer efforts of employees of FLC member labs and their partners from industry, academia, nonprofit, or state and local government. The Excellence in Technology Transfer Award recognizes exemplary work in transferring federally developed technology.

Keylime: Enabling trust in the cloud 

Cloud computing services are an increasingly convenient way for organizations to store, process, and disseminate data and information. These services allow organizations to rent computing resources from a cloud provider, who handles the management and security of those rented machines. Although cloud providers claim that the machines are secure, customers have no way to verify this security. As a result, organizations with sensitive data, such as U.S. government agencies and financial institutions, are reluctant to reap the benefits of flexibility and low cost that commercial cloud providers offer.

Keylime is open-source software that enables customers with sensitive data to continuously verify the security of cloud machines and edge and internet-of-things (IoT) devices. To perform these constant security checks, Keylime leverages a piece of hardware called a trusted platform module (TPM). The TPM generates a hash (a string of characters representing data) that changes significantly if the data are tampered with. Keylime was designed to make TPMs compatible with cloud technology; it reacts to a TPM hash change within seconds to shut down a compromised machine. Keylime also enables users to securely bootstrap secrets (in other words, upload cryptographic keys, passwords, and certificates into the rented machines) without divulging these secrets to the cloud provider.
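The attestation pattern described above can be sketched in a few lines of Python. This is a hypothetical toy model, not Keylime's actual protocol or API: the functions `measure` and `attest` and the "golden hash" enrollment step are illustrative stand-ins, with `hashlib.sha256` playing the role of the TPM.

```python
import hashlib

# Hypothetical sketch of TPM-backed attestation (not Keylime's actual protocol):
# the verifier records a known-good "golden" hash of the machine's software
# state, then repeatedly compares fresh measurements against it.

def measure(state: bytes) -> str:
    """Stand-in for a TPM measurement: hash the machine's software state."""
    return hashlib.sha256(state).hexdigest()

def attest(reported_state: bytes, golden_hash: str) -> bool:
    """Pass only if the reported state still matches the known-good hash."""
    return measure(reported_state) == golden_hash

# Enrollment: record the golden hash of the trusted state.
trusted_state = b"kernel-5.15 agent-v2 config-v1"
golden = measure(trusted_state)

# Periodic checks: an unmodified machine passes...
assert attest(trusted_state, golden)

# ...while any tampering changes the hash and fails attestation, at which
# point a verifier like Keylime would shut down the compromised machine.
tampered_state = trusted_state + b" + rootkit"
assert not attest(tampered_state, golden)
```

In the real system the measurement comes from TPM hardware and covers the boot chain and runtime state, but the verify-against-known-good loop is the same basic idea.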

Lincoln Laboratory transitioned Keylime to the public via an open-source license and distribution strategy that involved a series of partnerships. In 2015, after completing a prototype of Keylime, laboratory researchers Charles Munson and Nabil Schear collaborated with Boston University and Northeastern University to implement it as a core security component in the Mass Open Cloud (MOC) alliance, a public cloud service supporting thousands of researchers in the state. That experience led the team to work with Red Hat (under a pilot program funded by the U.S. Department of Homeland Security) to mature the technology in the open-source community.

Through the efforts of the Red Hat partnership, Keylime was accepted into the Linux Foundation’s highly selective Cloud Native Computing Foundation as a Sandbox project technology in 2019, a significant step in establishing the technology’s prestige. More than 50 open-source developers are now contributing to Keylime from around the world, and large organizations, including IBM, are deploying the technology to their cloud machines. Most recently, Red Hat released Keylime into its Enterprise Linux 9.1 operating system.

“We are proud that the Keylime team, our partners, and open-source developers have been recognized for their hard work and dedication with this national FLC award. We look forward to maintaining and building impactful collaborations, and helping the Keylime open-source community continue to grow,” says Munson.

The team members recognized with the FLC award are Munson and Schear (creators of Keylime at Lincoln Laboratory); Orran Krieger (MOC and Boston University); Luke Hinds and Michael Peters (Red Hat); Gheorghe Almasi (IBM); and Dan Dardani (formerly of the MIT Technology Licensing Office).

FOVEA: Accelerating video surveillance review 

While significant investments have improved camera coverage and video quality, the burden on video operators to analyze and obtain meaningful insights from surveillance footage — still a largely manual process — has greatly increased. The large-scale closed-circuit television systems patrolling public and commercial spaces can comprise hundreds or thousands of cameras, making daily investigation tasks burdensome. Examples of these tasks include searching for events of interest, investigating abandoned objects, and piecing together people’s activity from multiple cameras. As with any investigation, time is of the essence in apprehending persons of interest before they can inflict widespread harm.

FOVEA dramatically reduces the time required for such forensic video analysis. With FOVEA, security personnel can review hours of video in minutes and perform complex investigations in hours rather than days, translating to faster reaction times to in-progress events and a stronger overall security posture. No pre-analysis video curation or proprietary server equipment is required; the add-on suite of video analytic capabilities can be applied to any video stream on demand and supports both routine investigations and unforeseen or catastrophic circumstances such as terrorist threats. The suite includes capabilities for jump back, which automatically rewinds video to critical times and detects general scene changes; video summarization, which condenses all motion activity from long raw video into a short visual summary; multicamera navigation and path reconstruction, which tracks activity across place and time, from camera to camera, in chronological order; and on-demand person search, which scans neighboring cameras for persons of similar appearance.
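To give a flavor of the video-summarization idea (condensing long stretches of footage down to the frames that contain motion), here is a deliberately simplified Python sketch. The frame-differencing approach, the threshold, and the function names are illustrative assumptions, not FOVEA's actual algorithms.

```python
# Hypothetical sketch of motion-based summarization: keep only frames whose
# pixel difference from the previous frame exceeds a threshold, condensing
# long stretches of static footage into a short highlight of motion.

def frame_diff(a, b):
    """Sum of absolute per-pixel differences between two grayscale frames."""
    return sum(abs(pa - pb) for pa, pb in zip(a, b))

def summarize(frames, threshold=10):
    """Return indices of frames that show motion relative to their predecessor."""
    keep = []
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            keep.append(i)
    return keep

# Toy 4-pixel "video": static scene, then motion passes through frames 2-3.
video = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],    # no change, dropped
    [0, 90, 90, 0],  # motion appears, kept
    [0, 0, 90, 90],  # motion continues, kept
    [0, 0, 90, 90],  # scene settles, dropped
]
print(summarize(video))  # -> [2, 3]
```

A real summarizer works on full-resolution video and stitches the kept activity into a condensed clip, but the core filtering step is of this form.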

Lincoln Laboratory began developing FOVEA under sponsorship from the U.S. Department of Homeland Security to address the critical needs of security operators in mass transit security centers. Through an entrepreneurial training program based on the National Science Foundation’s Innovation Corps, Lincoln Laboratory conducted a broad set of customer interviews, which ultimately led to Doradus Labs licensing FOVEA. The Colorado-based software development and technical support small business offered FOVEA to two of its casino customers and is now introducing the technology to its customers in the educational and transportation industries.

The laboratory team members recognized with the FLC award are Marianne DeAngelus and Jason Thornton (technology invention and primary contact with Doradus); Natalya Luciw, Diane Staheli, Sanjeev Mohindra, and (formerly) Tyler Shube (customer discovery); Ronald Duarte, Zach Elko, and Brett Levasseur (software design and technology demonstrations); Jesslyn Alekseyev, Heather Griffin, and Kimberlee Chang, and (formerly) Christine Russ, Aaron Yahr, and Marc Valliant (algorithm and software development); Dan Dardani (formerly of the MIT Technology Licensing Office) and Louis Bellaire (licensing); and Drinalda Kume, Jayme Selinger, and Zach Sweet (contracting services).

“It is wonderful to see the software team’s efforts recognized with this award,” says DeAngelus. “I am grateful for the many friendly people across Lincoln Laboratory and MIT who made this transition happen — especially the licensing, contracts, and communications offices.”

The FLC 2023 award winners will be recognized on March 29 at an awards reception and ceremony during the FLC National Meeting. 
Source: Massachusetts Institute of Technology

MIT to launch new Office of Research Computing and Data

As the computing and data needs of MIT’s research community continue to grow — both in their quantity and complexity — the Institute is launching a new effort to ensure that researchers have access to the advanced computing resources and data management services they need to do their best work. 

At the core of this effort is the creation of the new Office of Research Computing and Data (ORCD), to be led by Professor Peter Fisher, who will step down as head of the Department of Physics to serve as the office’s inaugural director. The office, which formally opens in September, will build on and replace the MIT Research Computing Project, an initiative supported by the Office of the Vice President for Research, which contributed in recent years to improving the computing resources available to MIT researchers.

“Almost every scientific field makes use of research computing to carry out our mission at MIT — and computing needs vary between different research groups. In my world, high-energy physics experiments need large amounts of storage and many identical general-purpose CPUs, while astrophysical theorists simulating the formation of galaxy clusters need relatively little storage, but many CPUs with high-speed connections between them,” says Fisher, the Thomas A. Frank (1977) Professor of Physics, who will take up the mantle of ORCD director on Sept. 1.

“I envision ORCD to be, at a minimum, a centralized system with a spectrum of different capabilities to allow our MIT researchers to start their projects and understand the computational resources needed to execute them,” Fisher adds.

The Office of Research Computing and Data will provide services spanning hardware, software, and cloud solutions, including data storage and retrieval, and offer advice, training, documentation, and data curation for MIT’s research community. It will also work to develop innovative solutions that address emerging or highly specialized needs, and it will advance strategic collaborations with industry.

The exceptional performance of MIT’s endowment last year has provided a unique opportunity for MIT to distribute endowment funds to accelerate progress on an array of Institute priorities in fiscal year 2023, beginning July 1, 2022. On the basis of community input and visiting committee feedback, MIT’s leadership identified research computing as one such priority, enabling the expanded effort that the Institute commenced today. Future operation of ORCD will incorporate a cost-recovery model.

In his new role, Fisher will report to Maria Zuber, MIT’s vice president for research, and coordinate closely with MIT Information Systems and Technology (IS&T), MIT Libraries, and the deans of the five schools and the MIT Schwarzman College of Computing, among others. He will also work closely with Provost Cynthia Barnhart.

“I am thrilled that Peter has agreed to take on this important role,” says Zuber. “Under his leadership, I am confident that we’ll be able to build on the important progress of recent years to deliver to MIT researchers best-in-class infrastructure, services, and expertise so they can maximize the performance of their research.”

MIT’s research computing capabilities have grown significantly in recent years. Ten years ago, the Institute joined with a number of other Massachusetts universities to establish the Massachusetts Green High-Performance Computing Center (MGHPCC) in Holyoke to provide the high-performance, low-carbon computing power necessary to carry out cutting-edge research while reducing its environmental impact. MIT’s capacity at the MGHPCC is now almost fully utilized, however, and an expansion is underway.

The need for more advanced computing capacity is not the only issue to be addressed. Over the last decade, there have been considerable advances in cloud computing, which is increasingly used in research computing, requiring the Institute to take a new look at how it works with cloud services providers and then allocates cloud resources to departments, labs, and centers. And MIT’s longstanding model for research computing — which has been mostly decentralized — can lead to inefficiencies and inequities among departments, even as it offers flexibility.

The Institute has been carefully assessing how to address these issues for several years, including in connection with the establishment of the MIT Schwarzman College of Computing. In August 2019, a college task force on computing infrastructure found a “campus-wide preference for an overarching organizational model of computing infrastructure that transcends a college or school and most logically falls under senior leadership.” The task force’s report also addressed the need for a better balance between centralized and decentralized research computing resources.

“The needs for computing infrastructure and support vary considerably across disciplines,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “With the new Office of Research Computing and Data, the Institute is seizing the opportunity to transform its approach to supporting research computing and data, including not only hardware and cloud computing but also expertise. This move is a critical step forward in supporting MIT’s research and scholarship.”

Over time, ORCD (pronounced “orchid”) aims to recruit a staff of professionals, including data scientists and engineers and system and hardware administrators, who will enhance, support, and maintain MIT’s research computing infrastructure, and ensure that all researchers on campus have access to a minimum level of advanced computing and data management.

The new research computing and data effort is part of a broader push to modernize MIT’s information technology infrastructure and systems. “We are at an inflection point, where we have a significant opportunity to invest in core needs, replace or upgrade aging systems, and respond fully to the changing needs of our faculty, students, and staff,” says Mark Silis, MIT’s vice president for information systems and technology. “We are thrilled to have a new partner in the Office of Research Computing and Data as we embark on this important work.”
Source: Massachusetts Institute of Technology

Lincoln Laboratory honored for transfer of security-enhancing technologies

The Federal Laboratory Consortium for Technology Transfer (FLC) awarded their 2021 Excellence in Technology Transfer Award for the Northeast region to two Lincoln Laboratory technologies developed to improve security.

The first technology, Forensic Video Exploitation and Analysis (FOVEA), is a suite of analytic tools that makes it significantly easier for investigators to review surveillance video footage. The second technology, Keylime, is a software architecture designed to increase the security and privacy of data and services in the cloud. Both technologies have transitioned to commercial use via license or open-source access.

“These Federal Laboratory Consortium awards are an acknowledgement that the advanced capabilities developed at MIT Lincoln Laboratory are valued, not only for their contribution to enhancing national security, but also for their value to related private-sector needs,” says Bernadette Johnson, the chief technology ventures officer at Lincoln Laboratory. “Technology transfer is considered an integral element of the Department of Defense’s mission and is explicitly called out in the laboratory’s Prime Contract and Sponsoring Agreement. The transfer of these two technologies is emblematic of the unique ‘R&D-to-rapid-prototyping’ transition pipeline we have been developing at Lincoln.”

Speeding up video review 

The FOVEA program first began under sponsorship from the Department of Homeland Security (DHS) to address the challenge of efficiently reviewing video surveillance footage. The process of searching for a specific event, investigating abandoned objects, or piecing together activity from multiple cameras can take investigators hours or even days. It is especially challenging in large-scale closed-circuit TV systems, like those that surveil subway stations.

The FOVEA suite overcomes these challenges with three advanced tools. The first tool, video summarization, condenses all motion activity into a visual summary, transforming, for example, an hour of raw video into a three-minute product that highlights only motion. The second tool, called jump back, automatically seeks to the point in the video when an idle object, such as a backpack, first appeared. The third tool, multi-camera navigation and path reconstruction, allows an operator to track a person or vehicle of interest across multiple camera views.

Notably, FOVEA’s analytic tools can be integrated directly into existing video surveillance systems and can run on any desktop or laptop computer. In contrast, most commercial offerings first require customers to export their video data for analysis and to purchase proprietary server equipment or cloud services.

“The project team worked very hard on not just the development of the FOVEA prototype, but also packaging the software in a way that accommodates hand-off to third-party deployment sites and transition partners,” says Marianne DeAngelus, who led the development of FOVEA with a team in the Homeland Sensors and Analytics Group.

Under government sponsorship, the developers first deployed FOVEA to two mass transit facilities. Through participation in an MIT-led Innovation Corps program, the team then adapted the technology into a commercial application. Doradus Labs, Inc. has since licensed FOVEA for security surveillance in casinos.

“Though FOVEA was originally developed for a specific use case of mass transit security, our tech transfer to industry will make it available for a broader set of security applications that would benefit from accelerated forensic analysis of surveillance video. We and our DHS sponsor are happy that this may lead to a wider impact of the technology,” adds Jason Thornton, who leads the technical group.

Putting trust in the cloud

Keylime is making it possible for government and industry users with sensitive data to increase the security of their cloud and internet-of-things (IoT) devices. This free, open-source software architecture enables cloud customers to securely upload cryptographic keys, passwords, and certificates into the cloud without divulging these secrets to their cloud provider, and to secure their cloud resources without relying on their provider to do it for them.

Keylime started as an internal project funded through Lincoln Laboratory’s Technology Office in 2015. Eventually, the Keylime team began discussions with Red Hat, one of the world’s largest open-source software companies, to expand the technology’s reach. With Red Hat’s help, Keylime was transitioned in 2019 into the Cloud Native Computing Foundation as a sandbox technology, with more than 30 open-source developers contributing to it from around the world. Most recently, IBM announced its plans to adopt Keylime into its cloud fleet, enabling IBM to attest to the security of its thousands of cloud servers.

“Keylime’s transfer and adoption into the open-source community and cloud environments helps to empower edge/IoT and cloud customers to validate provider claims of trustworthiness, rather than needing to rely solely on trust of the underlying environment for compliance and correctness,” says Charles Munson, who developed Keylime with former laboratory staff member Nabil Schear and adapted it as an open-source platform with Luke Hinds at Red Hat.

Keylime achieves its cloud security by leveraging a piece of hardware called a TPM, an industry-standard hardware security chip. A TPM generates a hash, a short string of numbers representing a much larger amount of data, that changes significantly if data are even slightly tampered with. Keylime can detect and react to this tampering in under a second.

Before Keylime, TPMs were incompatible with cloud technology, slowing down systems and forcing engineers to change software to accommodate the module. Keylime gets around these problems by serving as a piece of intermediary software that allows users to leverage the security benefits of the TPM without having to make their software compatible with it.

Transferring to industry

The transition of Lincoln Laboratory’s technology to industry and government is central to its role as a federally funded research and development center (FFRDC).

The mission of the FLC is to facilitate and educate FFRDCs and industry on the process of technology transfer. More than 300 federal laboratories, facilities, research centers, and their parent agencies make up the FLC community.

The transfer of these FLC-awarded technologies was supported by Bernadette Johnson and Lou Bellaire in the Technology Ventures Office; David Pronchick, Drinalda Kume, Zachary Sweet, and Jayme Selinger of the Contracting Services Department; and Daniel Dardani in MIT’s Technology Licensing Office, along with the technology development teams. Both FOVEA and Keylime were also awarded R&D 100 Awards in 2020, acknowledging them among the year’s 100 most innovative technologies available for sale or license.

The FLC will recognize the award recipients at a regional meeting in October.
Source: Massachusetts Institute of Technology

Keylime security software is deployed to IBM cloud

Keylime, a cloud security software architecture, is being adopted into IBM’s cloud fleet. Originally developed at MIT Lincoln Laboratory to allow system administrators to ensure the security of their cloud environment, Keylime is now a Cloud Native Computing Foundation sandbox technology with more than 30 open-source developers contributing to it from around the world. The software will enable IBM to remotely attest to the security of its thousands of cloud servers.

“It is exciting to see the hard work of the growing Keylime community coming to fruition,” says Charles Munson, a researcher in the Secure Resilient Systems and Technology Group at Lincoln Laboratory who created Keylime with Nabil Schear, now at Netflix. “Adding integrated support for Keylime into IBM’s cloud fleet is an important step towards enabling cloud customers to have a zero-trust capability of ‘never trust, always verify.'”

In a blog post announcing IBM’s integration of Keylime, George Almasi of IBM Research said, “IBM has planned a rapid rollout of Keylime-based attestation to the entirety of its cloud fleet in order to meet requirements for a strong security posture from its financial services and other enterprise customers. This will leverage work done on expanding the scalability and resilience of Keylime to manage large numbers of nodes, allowing Keylime-based attestation to be operationalized at cloud data center scale.”

Keylime is a key bootstrapping and integrity management software architecture. It was first developed to enable organizations to check for themselves that the servers storing and processing their data are as secure as cloud service providers claim they are. Today, many organizations use a form of cloud computing called infrastructure-as-a-service, whereby they rent computing resources from a cloud provider who is responsible for the security of the underlying systems.

To enable remote cloud-security checks, Keylime leverages a piece of hardware called a trusted platform module, or TPM, an industry-standard and widely used hardware security chip. A TPM generates a hash, a short string of numbers representing a much larger amount of data. If data are tampered with even slightly, the hash will change significantly, a security alarm that Keylime can detect and react to in under a second.
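The property described here, that a tiny change to the data yields a completely different hash, is easy to demonstrate with an ordinary cryptographic hash standing in for the TPM. The snippet below is illustrative only; `digest_bits` and the example strings are not part of Keylime.

```python
import hashlib

# Illustration of the hashing property described above, with SHA-256 standing
# in for the TPM: changing a single character of the input flips roughly half
# of the 256 output bits, so even slight tampering is unmistakable.

def digest_bits(data: bytes) -> str:
    """SHA-256 digest of `data`, rendered as a 256-character bit string."""
    return "".join(f"{byte:08b}" for byte in hashlib.sha256(data).digest())

original = b"server software state v1"
tampered = b"server software state v2"  # a single character changed

bits_a, bits_b = digest_bits(original), digest_bits(tampered)
flipped = sum(x != y for x, y in zip(bits_a, bits_b))
print(f"{flipped}/256 bits changed")  # roughly half the bits differ
```

Because the verifier only needs to compare a fresh hash against a stored one, this check is cheap enough to run continuously, which is what lets Keylime react in under a second.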

Before Keylime, TPMs were incompatible with cloud technology, slowing down systems and forcing engineers to change software to accommodate the module. Keylime gets around these problems by serving as a piece of intermediary software that allows users to leverage the security benefits of the TPM without having to make all of their software compatible with it.

In 2019, Keylime was transitioned into the CNCF as a sandbox technology with the help of Red Hat, one of the world’s leading open-source software companies. This transition better incorporated Keylime into the Linux open-source ecosystem, making it simpler for users to adopt. In 2020, the Lincoln Laboratory team that developed Keylime was awarded an R&D 100 Award, recognizing the software among the year’s 100 most innovative new technologies available for sale or license.
Source: Massachusetts Institute of Technology

Lincoln Laboratory earns a 2020 Stratus Award for Cloud Computing

MIT Lincoln Laboratory is among the winners of the 2020 Stratus Awards for Cloud Computing. The Business Intelligence Group presented 38 companies, services, and executives with these awards that recognize leaders in cloud-based technology. The laboratory won for developing TRACER (Timely Randomization Applied to Commodity Executables at Runtime), software that prevents cyber attackers from remotely attacking Windows applications.

Since 2012, the Business Intelligence Group has acknowledged industry leaders with several awards for innovation in technology and services. With the move of so many business and institutional functions to the cloud, the Stratus Awards were initiated to recognize companies and individuals that have enabled effective, secure cloud-based computing.

Maria Jimenez, chief nominations officer of the Business Intelligence Group, says, “We now rely on the cloud for everything from entertainment to productivity, so we are proud to recognize all of our winners. Each and every one is helping in their own way to make our lives richer every day. We are honored and proud to reward these leaders in business.”

TRACER addresses a problem inherent in immensely popular commodity Windows applications: all installations of these applications look alike, so cyber intruders can compromise millions of computers simply by “cracking” into one. In addition, because more than 90 percent of desktop computers run Microsoft Windows with closed-source applications, many cyber protections that rely on having source code available are not applicable to these desktop systems.

The patented TRACER technology re-randomizes sensitive internal data and layout at every output from the application. This continuous re-randomization thwarts attempts to use data leaks to hijack the computer’s internals; any information leaked by the application will be stale when attackers attempt to exploit it.
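As a rough mental model of this moving-target defense, consider an application that relocates its sensitive data to a fresh random "address" on every output. This is a hypothetical Python toy, not TRACER's actual implementation, which operates on compiled Windows binaries; the class and attribute names are invented for illustration.

```python
import secrets

# Hypothetical toy model of re-randomization: the app keeps its sensitive data
# at a random "address" and moves it after every output, so any address an
# attacker manages to leak is stale before it can be used.

class RerandomizingApp:
    def __init__(self):
        self.secret = "session-key"
        self.secret_addr = None
        self._rerandomize()

    def _rerandomize(self):
        """Move the secret to a fresh random address (never reusing the old one)."""
        new_addr = secrets.randbelow(2**16)
        while new_addr == self.secret_addr:
            new_addr = secrets.randbelow(2**16)
        self.secret_addr = new_addr
        self.layout = {new_addr: self.secret}

    def output(self):
        """Each output may leak the current address; the layout then moves."""
        leaked_addr = self.secret_addr
        self._rerandomize()
        return leaked_addr

    def read(self, addr):
        """What an attacker sees when dereferencing an address; stale reads miss."""
        return self.layout.get(addr)

app = RerandomizingApp()
leaked = app.output()            # attacker observes an address leak...
assert app.read(leaked) is None  # ...but the leaked address is already stale
```

The real system re-randomizes internal data and code layout rather than a dictionary key, but the design choice is the same: tie the lifetime of leaked information to a single output so it expires before it can be exploited.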

TRACER’s research and development was led by Hamed Okhravi of Lincoln Laboratory’s Secure Resilient Systems and Technology Group and included contributions by Jason Martin, David Bigelow, David Perry, Kristin Dahl, Robert Rudd, Thomas Hobson, and William Streilein.

“One of our primary goals for TRACER was to make it as easy to use as possible. The current version requires minimal steps to set up and requires no user interaction during its operation, which we hope facilitates its widespread adoption,” Okhravi said.

The software has been made available via a commercial company. For its innovation and potential to revolutionize the cybersecurity field, TRACER was named a 2020 R&D 100 Award winner by R&D World. TRACER was also honored with MIT Lincoln Laboratory’s 2019 Best Invention Award.
Source: Massachusetts Institute of Technology

Detecting program-tampering in the cloud

For small and midsize organizations, the outsourcing of demanding computational tasks to the cloud — huge banks of computers accessible over the Internet — can be much more cost-effective than buying their own hardware. But it also poses a security risk: A malicious hacker could rent space on a cloud server and use it to launch programs that hijack legitimate applications, interfering with their execution.In August, at the International Cryptology Conference, researchers from MIT and Israel’s Technion and Tel Aviv University presented a new system that can quickly verify that a program running on the cloud is executing properly. That amounts to a guarantee that no malicious code is interfering with the program’s execution.The same system also protects the data used by applications running in the cloud, cryptographically ensuring that the user won’t learn anything other than the immediate results of the requested computation. If, for instance, hospitals were pooling medical data in a huge database hosted on the cloud, researchers could look for patterns in the data without compromising patient privacy.Although the paper reports new theoretical results (view PDF), the researchers have also built working code that implements their system. At present, it works only with programs written in the C programming language, but adapting it to other languages should be straightforward.The new work, like much current research on secure computation, requires that computer programs be represented as circuits. So the researchers’ system includes a “circuit generator” that automatically converts C code to circuit diagrams. 
The circuits it produces, however, are much smaller than those produced by its predecessors, so by itself, the circuit generator may find other applications in cryptography.

Zero knowledge

Alessandro Chiesa, a graduate student in electrical engineering and computer science at MIT and one of the paper’s authors, says that because the new system protects both the integrity of programs running in the cloud and the data they use, it’s a good complement to the cryptographic technique known as homomorphic encryption, which protects the data transmitted by the users of cloud applications.

On the paper, Chiesa joins Madars Virza, also a graduate student in electrical engineering and computer science; the Technion’s Daniel Genkin and Eli Ben-Sasson, who was a visiting scientist at MIT for the past year; and Tel Aviv University’s Eran Tromer. Ben-Sasson and Tromer were co-PIs on the project.

The researchers’ system implements a so-called zero-knowledge proof, a type of mathematical game invented by MIT professors Shafi Goldwasser and Silvio Micali and their colleague Charles Rackoff of the University of Toronto. In its cryptographic application, a zero-knowledge proof enables one of the game’s players to prove to the other that he or she knows a secret key without actually divulging it.

But as its name implies, a zero-knowledge proof is a more general method for proving mathematical theorems — and the correct execution of a computer program can be redescribed as a theorem. So zero-knowledge proofs are by definition able to establish whether or not a computer program is executing correctly.

The problem is that existing implementations of zero-knowledge proofs — except in cases where they’ve been tailored to particular algorithms — take as long to execute as the programs they’re trying to verify. That’s fine for password verification, but not for a computation substantial enough that it might be farmed out to the cloud.

The researchers’ innovation is a practical, succinct zero-knowledge proof for arbitrary programs. Indeed, it’s so succinct that it can typically fit in a single data packet.

Linear thinking

As Chiesa explains, his and his colleagues’ approach depends on a variation of what’s known as a “probabilistically checkable proof,” or PCP. “With a standard mathematical proof, if you want to verify it, you have to go line by line from the start to the end,” Chiesa says. “If you were to skip one line, potentially, that could fool you. Traditional proofs are very fragile in this respect.”

“The PCP theorem says that there is a way to rewrite proofs so that instead of reading them line by line,” Chiesa adds, “what you can do is flip a few coins and probabilistically sample three or four lines and have a probabilistic guarantee that it’s correct.”

The problem, Virza says, is that “the current known constructions of the PCP theorem, though great in theory, have quite bad practical realizations.” That’s because the theory assumes that an adversary who’s trying to produce a fraudulent proof has unbounded computational capacity. What Chiesa, Virza, and their colleagues do instead is assume that the adversary is capable only of performing simple linear operations.

“This assumption is, of course, false in practice,” Virza says. “So we use a cryptographic encoding to force the adversary to [perform] only linear evaluations. There is a way to encode numbers into such a form that you can add those numbers, but you can’t do anything else. This is how we sidestep the inefficiencies of the PCP theorem.”

“I think it’s a breakthrough,” says Ran Canetti, a professor of computer science at Boston University who was not involved with the research. When the PCP theorem was first proved, Canetti says, “nobody ever thought that this would be something that would be remotely practical. They’ve become a little bit better over the years, but not that much better.”

“Four or five years ago,” Canetti adds, “these guys wrote on the flag the crazy goal of trying to make [proofs for arbitrary programs] practical, and I must say, I thought, ‘They’re nuts.’ But they did it. They actually have something that works.”
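The researchers’ construction is far more sophisticated, but the basic shape of a zero-knowledge proof can be seen in the classic Schnorr identification protocol: a prover convinces a verifier it knows a secret exponent without revealing it. The sketch below is a toy illustration with deliberately tiny, insecure parameters; it is not the MIT system.

```python
import random

# Toy Schnorr identification protocol: prove knowledge of x with
# y = g^x mod p without revealing x. Parameters are tiny and insecure,
# chosen only to make the arithmetic visible.
P, Q, G = 23, 11, 4          # G generates a subgroup of prime order Q in Z_23*

def commit():
    r = random.randrange(Q)  # prover's one-time nonce
    return r, pow(G, r, P)   # commitment t = g^r is sent to the verifier

def respond(r, c, x):
    return (r + c * x) % Q   # s = r + c*x; r masks x, so s alone leaks nothing

def verify(t, c, s, y):
    # Accept iff g^s == t * y^c, which holds exactly when s was built from x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P

x = 7                        # prover's secret
y = pow(G, x, P)             # public key
r, t = commit()
c = random.randrange(1, Q)   # verifier's random challenge
s = respond(r, c, x)
assert verify(t, c, s, y)    # an honest prover always passes
```

A prover who doesn’t know `x` can only pass by guessing the challenge in advance, which is the sense in which the proof is sound; repeating the exchange drives the cheating probability down geometrically.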
Source: Massachusetts Institute of Technology

Protecting data in the cloud

Cloud computing — outsourcing computational tasks over the Internet — could give home-computer users unprecedented processing power and let small companies launch sophisticated Web services without building massive server farms.

But it also raises privacy concerns. A bank of cloud servers could be running applications for 1,000 customers at once; unbeknownst to the hosting service, one of those applications might have no purpose other than spying on the other 999.

Encryption could make cloud servers more secure. Only when the data is actually being processed would it be decrypted; the results of any computations would be re-encrypted before they’re sent off-chip.

In the last 10 years or so, however, it’s become clear that even when a computer is handling encrypted data, its memory-access patterns — the frequency with which it stores and accesses data at different memory addresses — can betray a shocking amount of private information.

At the International Symposium on Computer Architecture in June, MIT researchers described a new type of secure hardware component, dubbed Ascend, that would disguise a server’s memory-access patterns, making it impossible for an attacker to infer anything about the data being stored. Ascend also thwarts another type of attack, known as a timing attack, which attempts to infer information from the amount of time that computations take.

Computational trade-off

Similar designs have been proposed in the past, but they’ve generally traded too much computational overhead for security. “This is the first time that any hardware design has been proposed — it hasn’t been built yet — that would give you this level of security while only having about a factor of three or four overhead in performance,” says Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, whose group developed the new system. “People would have thought it would be a factor of 100.”

The “trivial way” of obscuring memory-access patterns, Devadas explains, would be to request data from every address in the memory — whether a memory chip or a hard drive — and throw out everything except the data stored at the one address of interest. But that would be much too time-consuming to be practical.

What Devadas and his collaborators — graduate students Ling Ren, Xiangyao Yu and Christopher Fletcher, and research scientist Marten van Dijk — do instead is to arrange memory addresses in a data structure known as a “tree.” A family tree is a familiar example of a tree, in which each “node” (in this example, a person’s name) is attached to only one node above it (the node representing the person’s parents) but may connect to several nodes below it (the person’s children).

With Ascend, addresses are assigned to nodes randomly. Every node lies along some “path,” or route through the tree, that starts at the top and passes from node to node, without backtracking, until arriving at a node with no further connections. When the processor requires data from a particular address, it sends requests to all the addresses in a path that includes the one it’s really after.

To prevent an attacker from inferring anything from sequences of memory access, every time Ascend accesses a particular memory address, it randomly swaps that address with one stored somewhere else in the tree. As a consequence, accessing a single address multiple times will very rarely require traversing the same path.

Less computation to disguise an address

By confining its dummy requests to a single path, rather than sending them to every address in memory, Ascend exponentially reduces the amount of computation required to disguise an address. In a separate paper, which is as yet unpublished but has been posted online, the researchers prove that querying paths provides just as much security as querying every address in memory would.

Ascend also protects against timing attacks. Suppose that the computation being outsourced to the cloud is the mammoth task of comparing a surveillance photo of a criminal suspect to random photos on the Web. The surveillance photo itself would be encrypted, and thus secure from prying eyes. But spyware in the cloud could still deduce what public photos it was being compared to. And the time the comparisons take could indicate something about the source photos: Photos of obviously different people could be easy to rule out, but photos of very similar people might take longer to distinguish.

So Ascend’s memory-access scheme has one final wrinkle: It sends requests to memory at regular intervals — even when the processor is busy and requires no new data. That way, attackers can’t tell how long any given computation is taking.
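The read-a-whole-path, remap-to-a-random-leaf scheme described above is closely related to what the literature calls Path ORAM. The following is a heavily simplified toy sketch of that idea (unbounded buckets, no stash limits, no hardware considerations); it is not the actual Ascend design.

```python
import random

# Toy sketch of path-based oblivious RAM: a server storing the tree sees
# only full-path reads and writes to uniformly random leaves, never which
# single address the client actually wanted.
DEPTH = 3                                    # binary tree with 2^DEPTH leaves

def path(leaf):
    """Node ids from the leaf up to the root (id 1)."""
    node = leaf + 2 ** DEPTH                 # leaves occupy ids 2^D .. 2^(D+1)-1
    nodes = []
    while node >= 1:
        nodes.append(node)
        node //= 2
    return nodes

class PathORAM:
    def __init__(self):
        self.position = {}                   # address -> its current random leaf
        self.tree = {}                       # node id -> {address: value}
        self.stash = {}                      # blocks awaiting write-back

    def access(self, addr, value=None):
        leaf = self.position.get(addr, random.randrange(2 ** DEPTH))
        # 1. Read every bucket on the path into the stash; this is all the
        #    server observes, regardless of which block we are after.
        for node in path(leaf):
            self.stash.update(self.tree.pop(node, {}))
        result = self.stash.get(addr)
        if value is not None:
            self.stash[addr] = value
        # 2. Remap the address to a fresh random leaf, so repeated accesses
        #    to the same address traverse unrelated paths.
        self.position[addr] = random.randrange(2 ** DEPTH)
        # 3. Write stash blocks back along the old path, as deep as each
        #    block's own leaf assignment allows (the root always qualifies).
        for node in path(leaf):
            bucket = {a: v for a, v in self.stash.items()
                      if node in path(self.position[a])}
            for a in bucket:
                del self.stash[a]
            self.tree[node] = bucket
        return result
```

The invariant is that every block always sits somewhere on the path to its currently assigned leaf, so the next access to it is guaranteed to find it while still looking, from the outside, like a read of a random path.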
Source: Massachusetts Institute of Technology

Diagnosing “broken” buildings to make them greener

The co-founders of MIT spinout KGS Buildings have a saying: “All buildings are broken.” Energy wasted through faulty or inefficient equipment, they say, can lead to hundreds of thousands of dollars in avoidable annual costs.

That’s why KGS aims to “make buildings better” with cloud-based software, called Clockworks, that collects existing data on a building’s equipment — specifically in HVAC (heating, ventilation, and air conditioning) equipment — to detect leaks, breaks, and general inefficiencies, as well as energy-saving opportunities. The software then translates the data into graphs, metrics, and text that explain monetary losses, where it’s available for building managers, equipment manufacturers, and others through the cloud.

Building operators can use that information to fix equipment, prioritize repairs, and take efficiency measures — such as using chilly outdoor air, instead of air conditioning, to cool rooms.

“The idea is to make buildings better, by helping people save time, energy, and money, while providing more comfort, enjoyment, and productivity,” says Nicholas Gayeski SM ’07, PhD ’10, who co-founded KGS with Sian Kleindienst SM ’06, PhD ’10 and Stephen Samouhos ’04, SM ’07, PhD ’10.

The software is now operating in more than 300 buildings across nine countries, collecting more than 2 billion data points monthly. The company estimates these buildings will save an average of 7 to 9 percent in avoidable costs per year; the exact figure depends entirely on the building. “If it’s a relatively well-performing building already, it may see lower savings; if it’s a poor-performing building, it could be much higher, maybe 15 to 20 percent,” says Gayeski, who graduated from MIT’s Building Technology Program, along with his two co-founders.

Last month, MIT commissioned the software for more than 60 of its own buildings, monitoring more than 7,000 pieces of equipment over 10 million square feet. Previously, in a year-long trial for one MIT building, the software saved MIT $286,000.

Benefits, however, extend beyond financial savings, Gayeski says. “There are people in those buildings: What’s their quality of life? There are people who work on those buildings. We can provide them with better information to do their jobs,” he says.

The software can also help buildings earn additional incentives by participating in utility programs. “We have major opportunities in some utility territories, where energy-efficiency has been incentivized. We can help buildings meet energy-efficiency goals that are significant in many states, including Massachusetts,” says Alex Grace, director of business development for KGS.

Other customers include universities, health-care and life-science facilities, schools, and retail buildings.

Equipment-level detection

Fault-detection and diagnostics research spans about 50 years — with contributions by early KGS advisors and MIT professors of architecture Les Norford and Leon Glicksman — and about a dozen companies now operate in the field. But KGS, Gayeski says, is one of a few ventures gathering “equipment-level data,” collected through various sensors, actuators, and meters attached to equipment that measure functionality.

Clockworks sifts through that massive store of data, measuring temperatures, pressures, flows, set points, and control commands, among other things. It’s able to gather a few thousand data points every five minutes — a finer level of granularity than meter-level analytics software that may extract, say, a data point every 15 minutes from a utility meter. “That gives a lot more detail, a lot more granular information about how things are operating and could be operating better,” Gayeski says. For example, Clockworks may detect specific leaky valves or stuck dampers on air handlers in HVAC units that cause excessive heating or cooling.

To make its analyses accurate, KGS employs what Gayeski calls “mass customization of code.” The company has code libraries for each type of equipment it works with — such as air handlers, chillers, and boilers — that can be tailored to specific equipment that varies greatly from building to building. This makes Clockworks easily scalable, Gayeski says. But it also helps the software produce rapid, intelligent analytics — such as accurate graphs, metrics, and text that spell out problems clearly.

Moreover, it helps the software to rapidly equate data with monetary losses. “When we identify that there’s a fault with the right data, we can tell people right away this is worth, say, $50 a day or this is worth $1,000 a day — and we’ve seen $1,000-a-day faults — so that allows facilities managers to prioritize which problems get their attention,” he says.

KGS Buildings’ foundation

The KGS co-founders met as participants in the MIT entry for the 2007 Solar Decathlon — an annual competition where college teams build small-scale, solar-powered homes to display at the National Mall in Washington. Kleindienst worked on lighting systems, while Samouhos and Gayeski worked on mechanical design and energy-modeling.

After the competition, the co-founders started a company with a broad goal of making buildings better through energy savings. While pursuing their PhDs, they toyed with various ideas, such as developing low-cost sensing technology with wireless communication that could be retrofitted onto older equipment. Seeing building data as an emerging tool for fault-detection and diagnostics, however, they turned to Samouhos’ PhD dissertation, which focused on building condition monitoring. It came complete with the initial diagnostics codes and a framework for an early KGS module.

“We all came together anticipating that the building industry was about to change a lot in the way it uses data, where you take the data, you figure out what’s not working well, and do something about it,” Gayeski says. “At that point, we knew it was ripe to move forward.”

Throughout 2010, they began trialing the software at several locations, including MIT. They found guidance among the seasoned entrepreneurs at MIT’s Venture Mentoring Service — learning to fail fast, and often. “That means keep at it, keep adapting and adjusting, and if you get it wrong, you just fix it and try again,” Gayeski says.

Today, the company — headquartered in Somerville, Mass., with 16 employees — is focusing on expanding its customer base and advancing its software into other applications. About 180 new buildings were added to Clockworks in the past year; by the end of 2014, KGS projects it could deploy its software to 800 buildings. “Larger companies are starting to catch on,” Gayeski says. “Major health-care institutions, global pharmaceuticals, universities, and [others] are starting to see the value and deciding to take action — and we’re starting to take off.”

Liberating data

By bringing all this data about building equipment to the cloud, the technology has plugged into the “Internet of things” — a concept where objects would be connected, via embedded chips and other methods, to the Internet for inventory and other purposes. Data on HVAC systems have been connected through building automation for some time. KGS, however, can connect that data to cloud-based analytics and extract “really rich information” about equipment, Gayeski says. For instance, he says, the startup has quick-response codes — like a barcode — for each piece of equipment it measures, so people can read all data associated with it.

“As more and more devices are readily connected to the Internet, we may be tapping straight into those, too,” Gayeski says. “And that data can be liberated from its local environment to the cloud,” Grace adds.

Down the road, as technology to monitor houses — such as automated thermostats and other sensors — begins to “unlock the data in the residential scale,” Gayeski says, “KGS could adapt over time into that space, as well.”
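A fault rule of the kind the article describes, such as flagging a valve that lets heating and cooling fight each other, can be illustrated in a few lines. Everything here is hypothetical: the field names, thresholds, and cost model are invented for illustration and are not KGS code.

```python
# Hypothetical equipment-level fault rule: flag 5-minute intervals in which
# an air handler's heating and cooling valves are open at the same time, and
# translate the wasted coil energy into an avoidable dollar cost.
def leaky_valve_fault(samples, energy_cost_per_kwh=0.15):
    wasted_kwh = 0.0
    fault_intervals = 0
    for s in samples:
        if s["heating_valve_pct"] > 10 and s["cooling_valve_pct"] > 10:
            fault_intervals += 1
            # Energy spent fighting itself during one 5-minute interval.
            wasted_kwh += s["coil_power_kw"] * (5 / 60)
    return fault_intervals, wasted_kwh * energy_cost_per_kwh

# Three illustrative 5-minute readings from one (invented) air handler:
samples = [
    {"heating_valve_pct": 80, "cooling_valve_pct": 40, "coil_power_kw": 12.0},
    {"heating_valve_pct": 0,  "cooling_valve_pct": 55, "coil_power_kw": 8.0},
    {"heating_valve_pct": 65, "cooling_valve_pct": 30, "coil_power_kw": 10.0},
]
n, cost = leaky_valve_fault(samples)
# Two of the three intervals show simultaneous heating and cooling.
```

Scaled from a handful of readings to thousands of data points every five minutes, the same pattern (a per-equipment rule plus a cost estimate) is what lets an operator rank a $50-a-day fault below a $1,000-a-day one.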
Source: Massachusetts Institute of Technology

Computing at full capacity

According to a 2014 study from NRDC and Anthesis, in 2013 U.S. data centers burned 91 billion kilowatt-hours of electricity, enough to power every household in New York City twice over. That figure is expected to rise to 140 billion kilowatt-hours by 2020. While improved energy efficiency practices could go a long way toward lowering this figure, the problem is greatly exacerbated by the underutilization of servers, including an estimated 30 percent of servers that are still plugged in but are no longer performing any services, the study says.

In another 2014 study, tech research firm Gartner, Inc., found that data center systems collectively represent a $143 billion market. With enterprise software adding $320 billion to that and IT services another $963 billion, the overall IT industry represents a whopping $3.8 trillion market.

Companies are increasingly seeking new ways to cut costs and extract the largest possible value from their IT infrastructure. Strategies include placing data centers in cooler climates, switching to more affordable open source software, and virtualizing resources to increase utilization. These solutions just scratch the surface, however.

An MIT-connected startup called Jisto offers businesses a new tool for cutting data center and cloud costs while improving resource utilization. Jisto manages existing enterprise applications by automatically wrapping them in Jisto-managed Docker containers, and intelligently deploying them across all available resources using automated real-time deployment, monitoring, and analytics algorithms. As the resource utilization profile changes for each server or different parts of the network and storage, Jisto elastically scales its utilization in real-time to compensate.

“We’re helping organizations get higher utilization of their data center and cloud resources without worrying about resource contention,” says Jisto CEO and co-founder Aleksandr (Sasha) Biberman. So far, the response has been promising. Jisto was a Silver Winner in the 2014 MassChallenge, and early customers include data-intensive companies such as banks, pharmaceutical companies, biotech firms, and research institutions.

“There’s pressure on IT departments from two sides: How can they more efficiently reduce data center expenditures, and how can they improve productivity by giving people better access to resources,” Biberman says. “In some cases, Jisto can double the productivity with the same resources just by making better use of idle capacity.”

Biberman praises the MIT Industrial Liaison Program and Venture Mentoring Service for hosting networking events and providing connections. “The ILP gave us connections to companies that we would have never otherwise have connected to all around the world,” he says. “It turned us into a global company.”

Putting idle servers back to work

The idea for Jisto came to Biberman while he was a postdoc in electrical engineering at MIT Research Lab of Electronics (RLE), studying silicon photonic communications. While researching how optical technology could improve data center performance and efficiency, he discovered an even larger problem: underutilization of server resources.

“Even with virtualization, companies use only 20 to 50 percent of in-house server capacity,” Biberman says. “Collectively, companies are wasting more than $100 billion annually on unused cycles. The public cloud is even worse, where utilization runs at 10 to 40 percent.”

In addition to the problem of sheer waste, Biberman also discovered that workload resources are often poorly managed. Even when more than a half of a company’s resources are sitting idle, workers often complain they can’t get enough access to servers when they need them.

Around the time of Biberman’s realization, he and his long-time friend Andrey Turovsky, a Cornell University-educated tech entrepreneur, and now Jisto CTO and co-founder, had been brainstorming some startup ideas. They had just developed a lightweight platform to automatically deploy and manage applications using virtual containers, and they decided to apply it to the utilization and workload management problem.

Underutilization of resources is less a technical issue than a “corporate risk aversion strategy,” Biberman says. Companies tend to err on the side of caution when deploying resources and typically acquire many more servers than they need.

“We started seeing some crazy numbers in data center and cloud provisioning,” Biberman explains. “Typically, companies provision for twice as much as they need. One company looks at last year’s peak loads, and overprovisions above that by a factor of four for the next year. Companies always plan for a worst-case scenario spike. Nobody wants to be the person who hasn’t provisioned enough resources, so critical applications can’t run. Nobody gets fired for overprovisioning.”

Despite overprovisioning, users in most of the same organizations complain about lack of access to computing resources, says Biberman: “When you ask companies if they have enough resources to run applications, they typically say they want more even though their resources are sitting there going to waste.”

This paradox emerges from the common practice of splitting access into different resource groups, which have different levels of access to various cluster nodes. “It’s tough to fit your work into your slice of the pie,” Biberman says. “Say my resource group has access to five servers, and it’s agreed that I use them on Monday, and someone else takes Tuesday, and so on. But if I can’t get to my project on Monday, those servers are sitting completely idle, and I may have to wait a week. Maybe the person using it on Tuesday only needs one of the five servers, so four will sit idle, and maybe the guy using it the next day realizes he really needs 10 or 20 servers, not just the five he’s limited to.”

Jisto breaks down the artificial static walls created with ownership profiles and replaces them with a more dynamic environment. “You can still have priority during your server time, but if you don’t use it, someone else can,” Biberman explains. “That means people can sometimes get access to more servers than were allotted. If there’s a mission-critical application that generates a spike we can’t predict, we have an elastic method to quickly back off and give it priority.”

Financial services companies are using Jisto to free up compute cycles for Monte Carlo simulations that could benefit from many more servers and nodes. Pharma and life science companies, meanwhile, use a similar strategy to do faster DNA sequencing. “The more nodes you have, the more accurately you can run a simulation,” Biberman says. “That’s a huge advantage.”

Docker containers for the enterprise

Jisto is not the only cloud-computing platform that claims to improve resource utilization and reduce costs. The problem with most, however, is that “if you have a really quick spike in workload, there’s not enough time to make intelligent decisions about what to do,” Biberman says. “With Jisto, an automatic real-time decision-making process kicks in, enabling true elasticity across the entire data center with granularity as fine as a single core of a CPU.”

Jisto not only monitors CPU usage but other parameters such as memory, network bandwidth, and storage. “If there’s an important memory transfer happening that requires a lot of bandwidth, Jisto backs off, even if there’s plenty of CPU power available,” Biberman says. “Jisto can make intelligent decisions about where to send jobs based on all these dynamic factors. As soon as something changes, Jisto decides whether to stop the workload, pause it, or reduce resources. Do you transfer it to another server? Do you add redundancy to reduce the latency tail? People don’t have to make and implement those decisions.”
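The kind of real-time decision-making described here can be caricatured as a simple policy function over host telemetry. The rules, names, and thresholds below are invented for illustration; they are not Jisto's actual logic.

```python
# Illustrative (hypothetical) policy for an opportunistic workload sharing a
# host with higher-priority jobs: decide, from current telemetry, whether to
# run, throttle, migrate, or pause.
def schedule_decision(cpu_idle_pct, mem_bw_free_pct, priority_spike):
    if priority_spike:
        return "pause"       # back off immediately for mission-critical work
    if mem_bw_free_pct < 20:
        return "migrate"     # abundant CPU is not enough if memory bandwidth
                             # is saturated, as the article notes
    if cpu_idle_pct < 10:
        return "throttle"    # keep running, but on fewer cores
    return "run"

# Example: plenty of idle CPU but a saturated memory bus means this host is
# the wrong place for the job right now.
decision = schedule_decision(cpu_idle_pct=80, mem_bw_free_pct=10,
                             priority_spike=False)
```

The real system would evaluate such a policy continuously and per core, but the structure, multiple resource dimensions feeding one elastic back-off decision, is the point of the passage above.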

The platform also integrates rigorous security provisions, says Biberman. IT directors are understandably cautious about bringing third-party software into their complex data center ecosystems, which are often protected by firewall and regulation settings. Jisto, however, can quickly prove with a beta test how the software works its magic without interfering with mission-critical resources, he adds.

Jisto’s unobtrusiveness is largely due to its use of Docker containers. “Docker has nice APIs and makes the process much easier, both for us as developers and for Jisto customers,” Biberman explains. “Docker is very portable — if you can run it on Linux, you can run it on Docker — and it doesn’t care if you’re running it on a local data center, a private cloud, or on Amazon. With containers, we don’t need to do something complicated like run a VM inside another VM. Docker gives us a lightweight way to let people use the environment that’s already set up.”

Based in Cambridge, Massachusetts, Jisto was the first Docker-based startup in this region and remains one of the few.

Moving up to the cloud

Companies are increasingly saving on data center costs by using public cloud resources in a hybrid strategy during peak demand. Jisto can help bridge the gap with better efficiency and flexibility, says Biberman. “If you’re a bank, you might have too many regulations on your data to use the public cloud, but most companies can gain efficiencies with public clouds while still keeping their private cloud for confidential, regulated, or mission-critical tasks.”

Jisto operates essentially the same whether it’s running on-premises, or in a private, public, or hybrid cloud. Companies that exceed the peak level of their private data center can now “burst out” onto the public cloud and take advantage of the elastic nature of services such as Amazon, says Biberman. “Some companies provision hundreds of thousands of nodes on Amazon,” he adds. The problem is that Amazon charges by the hour. “If a company only needs five minutes of processing, as many as 100,000 nodes would sit idle for 55 minutes.”
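The hourly-billing waste in that example is easy to make concrete. The per-node price below is hypothetical; the point is the ratio of billed time to useful time.

```python
# Cost of hourly billing when a burst-out job finishes early: nodes are
# billed for a full hour but do useful work for only a few minutes.
def idle_cost(nodes, minutes_used, price_per_node_hour):
    billed = nodes * price_per_node_hour        # a full hour is charged
    useful = billed * (minutes_used / 60)       # fraction actually worked
    return billed - useful

# 100,000 nodes at a hypothetical $0.10/node-hour, busy for 5 minutes:
waste = idle_cost(100_000, 5, 0.10)
# Roughly $9,167 of the $10,000 hour pays for the 55 idle minutes.
```

Reprovisioning those idle minutes for other workloads, rather than letting them expire, is exactly the opportunity the article describes.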

Jisto has recently begun to talk to companies that provide cloud infrastructure as a service, explaining how Jisto can reprovision wasted resources and let someone else use them. According to Biberman, it’s only a matter of time before competitive pressures lead a cloud provider to use something like Jisto.

MIT Startup Exchange (STEX) is an initiative of MIT’s Industrial Liaison Program (ILP) that seeks to connect ILP member companies with MIT-connected startups. Visit the STEX website and log in to learn more about Jisto and other startups on STEX.
Source: Massachusetts Institute of Technology

Communities in the cloud

The cloud’s very name reflects how many people think of this data storage system: intangible, distant, and disentangled from day-to-day life. But MIT PhD student Steven Gonzalez is reframing the image and narrative of an immaterial cloud. In his research, he’s showing that the cloud is neither distant nor ephemeral: It’s a massive system, ubiquitous in daily life, that contains huge amounts of energy, has the potential for environmental disaster, and is operated by an insular community of expert technicians.

Who’s tending the cloud?

“People so often rely on cloud services,” Gonzalez notes, “but they rarely think about where their data is stored and who is storing it, who is doing the job of maintaining servers that run 24/7/365, or the billions of gallons of water used daily to cool the servers, or the gigawatts of electricity that often come from carbon-based grids.”

The first time Gonzalez walked into a server farm, he was enthralled and puzzled by this giant factory filled with roaring computers and by the handful of IT professionals keeping it all running. At the time, he was working with specialized sensors that measured air in critical spaces, including places like the server farm. But the surreal facility led him back to his undergraduate anthropological training: How do these server spaces work? How has the cloud shaped these small, professional communities?

Gonzalez has been fascinated with visible, yet rarely recognized, communities since his first undergraduate ethnography on bus drivers in the small New Hampshire city of Keene. “In anthropology, everyone is a potential teacher,” he says. “Everyone you encounter in the field has something to teach you about the subject that you’re looking at, about themselves, about their world.”

Server farms are high-stakes environments

Listening, along with a lot of patience, helped Gonzalez cultivate the technical expertise to understand his subject matter. Cloud communities are built around, and depend upon, the technology they maintain, and that technology in turn shapes their behavior. So far, Gonzalez has completed his undergraduate and master’s research and degrees, and is currently wrapping up PhD coursework en route to his dissertation. He’s visited server farms across North America and in Scandinavia, where farm operators are seeking to go carbon-free in order to cut the cloud’s carbon emissions, which comprise up to 3 percent of greenhouse gases, according to Greenpeace.

The server-farm technicians function in an extremely high-stakes world: Not only is a massive amount of energy expended on the cloud, but even a few moments of downtime can be devastating. If the systems go down, companies can lose up to $50,000 per minute, depending on which sector (financial, retail, public sector, etc.) and which server racks are affected. “There’s a kind of existential dread that permeates a lot of what they say and what they do,” Gonzalez says. “It’s a very high-stress, unforgiving type of work environment.”

New technology, old gender inequity

In response to these fears, Gonzalez has noted some “macho” performances in language and behavior by cloud communities. The mostly male cloud workforce “tend to use very sexual language,” Gonzalez observes. For instance, when all the servers are functioning properly it’s “uptime.” “They’ll use sexualized language to refer to how ‘potent’ they are or how long they can maintain uptime.”

The cloud communities aren’t exclusively male, but Gonzalez says visibility for women is a big issue. Women tend to be framed as collaborators, rather than executors. Tied up in this sexist behavior is the decades-old patriarchal stereotype that technology is a male domain in which machines are gendered in a way that makes them subordinate.

Although anthropological research is the focus of his academic work, Gonzalez’s interests at MIT have been expansive. With the encouragement of his advisor, Professor Stefan Helmreich, he’s kept his lifelong interest in music and science fiction alive by singing in the MIT Jazz Choir and Concert Choir and taking coursework in science fiction writing. He also enjoyed exploring coursework in history, documentary making, and technology courses. Anthropology is the first among several passions he first discovered during explorations as an undergraduate at Keene State College.

“For me, what makes anthropology so capacious is just the diversity of human experience and the beauty of that,” says Gonzalez. “The beauty of so many different possibilities, different configurations of being, that exist simultaneously.”

The open doors of MIT

Gonzalez was born in Orlando, Florida, to Puerto Rican parents who made sure he always had a connection with the island, where he would spend summers with his grandmother. A first-generation college student, Gonzalez says it was never a given that he would even go to college, let alone earn a doctorate: “I never would have imagined that I would have ended up here. It’s a sad reality that, as a Latino person in this country, I was more likely to end up in prison than in a place like MIT. So I had — and I still do — immense respect and awe for the Institute. MIT has a mystique, and when I first arrived I had to deal with that mystique, getting over the sense that I don’t belong.”

He had big expectations about entering a hugely competitive institution but was surprised to find that, in addition to its competitive edge, the Institute was incredibly supportive. “The thing that surprised me the most was how open everyone’s door was.”

Gonzalez has become more and more deeply involved with the campus goings-on: he’s now a Diversity Conduit for the Graduate Student Council Diversity and Inclusion Initiative and is also part of an MIT student initiative that is exploring Institute ties and possible investments in the prison-industrial complex.

Story prepared by MIT SHASS Communications
Editorial and Design Director: Emily Hiestand
Writer: Alison Lanier
Source: Massachusetts Institute of Technology