Lincoln Laboratory earns a 2020 Stratus Award for Cloud Computing

MIT Lincoln Laboratory is among the winners of the 2020 Stratus Awards for Cloud Computing. The Business Intelligence Group presented 38 companies, services, and executives with these awards, which recognize leaders in cloud-based technology. The laboratory won for developing TRACER (Timely Randomization Applied to Commodity Executables at Runtime), software that prevents cyber attackers from remotely compromising Windows applications.

Since 2012, the Business Intelligence Group has acknowledged industry leaders with several awards for innovation in technology and services. With the move of so many business and institutional functions to the cloud, the Stratus Awards were initiated to recognize companies and individuals that have enabled effective, secure cloud-based computing.

Maria Jimenez, chief nominations officer of the Business Intelligence Group, says, “We now rely on the cloud for everything from entertainment to productivity, so we are proud to recognize all of our winners. Each and every one is helping in their own way to make our lives richer every day. We are honored and proud to reward these leaders in business.”

TRACER addresses a problem inherent in widely used commodity Windows applications: all installations of these applications look alike, so cyber intruders can compromise millions of computers simply by “cracking” into one. In addition, because more than 90 percent of desktop computers run Microsoft Windows with closed-source applications, many cyber protections that rely on having the source code available are not applicable to these desktop systems.

The patented TRACER technology re-randomizes sensitive internal data and layout at every output from the application. This continuous re-randomization thwarts attempts to use data leaks to hijack the computer’s internals; any information leaked by the application will be stale when attackers attempt to exploit it.
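
As a loose conceptual illustration of why continuous re-randomization makes leaked information useless, consider a toy service that reshuffles where its sensitive data lives at every output. This sketch is not TRACER's mechanism, which re-randomizes a running Windows process's internal data and layout; it only shows why a stale leak loses its value:

```python
import secrets

class RerandomizingService:
    """Toy illustration: the location of sensitive data is reshuffled at
    every output, so a location leaked in one response is stale by the
    time an attacker tries to use it. (Conceptual sketch only; TRACER
    operates on real process internals, not a Python dict.)"""

    def __init__(self, sensitive):
        self.sensitive = dict(sensitive)   # name -> value
        self.slots = {}                    # name -> randomized location
        self._rerandomize()

    def _rerandomize(self):
        # give every sensitive item a fresh random "address"
        self.slots = {name: secrets.randbelow(2**32) for name in self.sensitive}

    def respond(self, name):
        out = {"item": name, "slot": self.slots[name]}
        # re-randomize at every output, so whatever this response reveals
        # no longer describes the layout an attacker would encounter next
        self._rerandomize()
        return out

service = RerandomizingService({"session_key": "s3cr3t"})
leaked_slot = service.respond("session_key")["slot"]
# Any later attempt to use the leaked slot hits a re-randomized layout.
assert service.slots["session_key"] != leaked_slot  # true with overwhelming probability
```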

Research and development of TRACER was led by Hamed Okhravi of Lincoln Laboratory’s Secure Resilient Systems and Technology Group and included contributions from Jason Martin, David Bigelow, David Perry, Kristin Dahl, Robert Rudd, Thomas Hobson, and William Streilein.

“One of our primary goals for TRACER was to make it as easy to use as possible. The current version requires minimal steps to set up and requires no user interaction during its operation, which we hope facilitates its widespread adoption,” Okhravi said.

The software has been made available via a commercial company. For its innovation and potential to revolutionize the cybersecurity field, TRACER was named a 2020 R&D 100 Award winner by R&D World. TRACER was also honored with MIT Lincoln Laboratory’s 2019 Best Invention Award.
Source: Massachusetts Institute of Technology

Detecting program-tampering in the cloud

For small and midsize organizations, the outsourcing of demanding computational tasks to the cloud — huge banks of computers accessible over the Internet — can be much more cost-effective than buying their own hardware. But it also poses a security risk: A malicious hacker could rent space on a cloud server and use it to launch programs that hijack legitimate applications, interfering with their execution.

In August, at the International Cryptology Conference, researchers from MIT and Israel’s Technion and Tel Aviv University presented a new system that can quickly verify that a program running on the cloud is executing properly. That amounts to a guarantee that no malicious code is interfering with the program’s execution.

The same system also protects the data used by applications running in the cloud, cryptographically ensuring that the user won’t learn anything other than the immediate results of the requested computation. If, for instance, hospitals were pooling medical data in a huge database hosted on the cloud, researchers could look for patterns in the data without compromising patient privacy.

Although the paper reports new theoretical results, the researchers have also built working code that implements their system. At present, it works only with programs written in the C programming language, but adapting it to other languages should be straightforward.

The new work, like much current research on secure computation, requires that computer programs be represented as circuits. So the researchers’ system includes a “circuit generator” that automatically converts C code to circuit diagrams. The circuits it produces, however, are much smaller than those produced by its predecessors, so by itself, the circuit generator may find other applications in cryptography.

Zero knowledge

Alessandro Chiesa, a graduate student in electrical engineering and computer science at MIT and one of the paper’s authors, says that because the new system protects both the integrity of programs running in the cloud and the data they use, it’s a good complement to the cryptographic technique known as homomorphic encryption, which protects the data transmitted by the users of cloud applications.

On the paper, Chiesa joins Madars Virza, also a graduate student in electrical engineering and computer science; the Technion’s Daniel Genkin and Eli Ben-Sasson, who was a visiting scientist at MIT for the past year; and Tel Aviv University’s Eran Tromer. Ben-Sasson and Tromer were co-PIs on the project.

The researchers’ system implements a so-called zero-knowledge proof, a type of mathematical game invented by MIT professors Shafi Goldwasser and Silvio Micali and their colleague Charles Rackoff of the University of Toronto. In its cryptographic application, a zero-knowledge proof enables one of the game’s players to prove to the other that he or she knows a secret key without actually divulging it.

But as its name implies, a zero-knowledge proof is a more general method for proving mathematical theorems — and the correct execution of a computer program can be redescribed as a theorem. So zero-knowledge proofs are by definition able to establish whether or not a computer program is executing correctly.

The problem is that existing implementations of zero-knowledge proofs — except in cases where they’ve been tailored to particular algorithms — take as long to execute as the programs they’re trying to verify. That’s fine for password verification, but not for a computation substantial enough that it might be farmed out to the cloud.

The researchers’ innovation is a practical, succinct zero-knowledge proof for arbitrary programs. Indeed, it’s so succinct that it can typically fit in a single data packet.

Linear thinking

As Chiesa explains, his and his colleagues’ approach depends on a variation of what’s known as a “probabilistically checkable proof,” or PCP. “With a standard mathematical proof, if you want to verify it, you have to go line by line from the start to the end,” Chiesa says. “If you were to skip one line, potentially, that could fool you. Traditional proofs are very fragile in this respect.”

“The PCP theorem says that there is a way to rewrite proofs so that instead of reading them line by line,” Chiesa adds, “what you can do is flip a few coins and probabilistically sample three or four lines and have a probabilistic guarantee that it’s correct.”

The problem, Virza says, is that “the current known constructions of the PCP theorem, though great in theory, have quite bad practical realizations.” That’s because the theory assumes that an adversary who’s trying to produce a fraudulent proof has unbounded computational capacity. What Chiesa, Virza and their colleagues do instead is assume that the adversary is capable only of performing simple linear operations.

“This assumption is, of course, false in practice,” Virza says. “So we use a cryptographic encoding to force the adversary to only linear evaluations. There is a way to encode numbers into such a form that you can add those numbers, but you can’t do anything else. This is how we sidestep the inefficiencies of the PCP theorem.”

“I think it’s a breakthrough,” says Ran Canetti, a professor of computer science at Boston University who was not involved with the research. When the PCP theorem was first proved, Canetti says, “nobody ever thought that this would be something that would be remotely practical. They’ve become a little bit better over the years, but not that much better.”

“Four or five years ago,” Canetti adds, “these guys wrote on the flag the crazy goal of trying to make [proofs for arbitrary programs] practical, and I must say, I thought, ‘They’re nuts.’ But they did it. They actually have something that works.”
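
The researchers' system is far more sophisticated and adds zero-knowledge guarantees, but the underlying intuition, that randomized spot checks can verify a computation far faster than redoing it, shows up in a classic, unrelated example: Freivalds' check for matrix multiplication. A minimal sketch, not the MIT/Technion/Tel Aviv system:

```python
import numpy as np

def freivalds_verify(A, B, C, rounds=20):
    """Check A @ B == C probabilistically in O(n^2) work per round,
    instead of the O(n^3) needed to recompute the product.
    A wrong C survives one round with probability at most 1/2,
    so `rounds` trials give error probability at most 2**-rounds."""
    n = C.shape[0]
    for _ in range(rounds):
        x = np.random.randint(0, 2, size=(n, 1))   # random 0/1 vector
        if not np.array_equal(A @ (B @ x), C @ x):
            return False                            # certainly wrong
    return True                                     # correct with high probability

# A cloud provider claims C is the product of A and B.
rng = np.random.default_rng(0)
A = rng.integers(0, 10, (500, 500))
B = rng.integers(0, 10, (500, 500))
C = A @ B
print(freivalds_verify(A, B, C))   # True
C[0, 0] += 1                       # tamper with a single entry
print(freivalds_verify(A, B, C))   # almost certainly False
```
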
Source: Massachusetts Institute of Technology

Protecting data in the cloud

Cloud computing — outsourcing computational tasks over the Internet — could give home-computer users unprecedented processing power and let small companies launch sophisticated Web services without building massive server farms.

But it also raises privacy concerns. A bank of cloud servers could be running applications for 1,000 customers at once; unbeknownst to the hosting service, one of those applications might have no purpose other than spying on the other 999.

Encryption could make cloud servers more secure. Only when the data is actually being processed would it be decrypted; the results of any computations would be re-encrypted before they’re sent off-chip.

In the last 10 years or so, however, it’s become clear that even when a computer is handling encrypted data, its memory-access patterns — the frequency with which it stores and accesses data at different memory addresses — can betray a shocking amount of private information.

At the International Symposium on Computer Architecture in June, MIT researchers described a new type of secure hardware component, dubbed Ascend, that would disguise a server’s memory-access patterns, making it impossible for an attacker to infer anything about the data being stored. Ascend also thwarts another type of attack, known as a timing attack, which attempts to infer information from the amount of time that computations take.

Computational trade-off

Similar designs have been proposed in the past, but they’ve generally traded too much computational overhead for security. “This is the first time that any hardware design has been proposed — it hasn’t been built yet — that would give you this level of security while only having about a factor of three or four overhead in performance,” says Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and Computer Science, whose group developed the new system. “People would have thought it would be a factor of 100.”

The “trivial way” of obscuring memory-access patterns, Devadas explains, would be to request data from every address in the memory — whether a memory chip or a hard drive — and throw out everything except the data stored at the one address of interest. But that would be much too time-consuming to be practical.

What Devadas and his collaborators — graduate students Ling Ren, Xiangyao Yu and Christopher Fletcher, and research scientist Marten van Dijk — do instead is to arrange memory addresses in a data structure known as a “tree.” A family tree is a familiar example of a tree, in which each “node” (in this example, a person’s name) is attached to only one node above it (the node representing the person’s parents) but may connect to several nodes below it (the person’s children).

With Ascend, addresses are assigned to nodes randomly. Every node lies along some “path,” or route through the tree, that starts at the top and passes from node to node, without backtracking, until arriving at a node with no further connections. When the processor requires data from a particular address, it sends requests to all the addresses in a path that includes the one it’s really after.

To prevent an attacker from inferring anything from sequences of memory access, every time Ascend accesses a particular memory address, it randomly swaps that address with one stored somewhere else in the tree. As a consequence, accessing a single address multiple times will very rarely require traversing the same path.

Less computation to disguise an address

By confining its dummy requests to a single path, rather than sending them to every address in memory, Ascend exponentially reduces the amount of computation required to disguise an address. In a separate paper, which is as-yet unpublished but has been posted online, the researchers prove that querying paths provides just as much security as querying every address in memory would.

Ascend also protects against timing attacks. Suppose that the computation being outsourced to the cloud is the mammoth task of comparing a surveillance photo of a criminal suspect to random photos on the Web. The surveillance photo itself would be encrypted, and thus secure from prying eyes. But spyware in the cloud could still deduce what public photos it was being compared to. And the time the comparisons take could indicate something about the source photos: Photos of obviously different people could be easy to rule out, but photos of very similar people might take longer to distinguish.

So Ascend’s memory-access scheme has one final wrinkle: It sends requests to memory at regular intervals — even when the processor is busy and requires no new data. That way, attackers can’t tell how long any given computation is taking.
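
A minimal Python sketch of the access pattern described above (read a whole root-to-leaf path, then remap the block to a fresh random leaf) may make the idea concrete. It is a toy model, not Ascend's hardware design; real schemes such as Path ORAM also keep blocks encrypted in tree buckets and maintain a client-side stash, which this sketch omits:

```python
import random

class ToyPathStore:
    """Toy sketch of path-based access: every block is mapped to a random
    leaf of a binary tree, each access touches the whole root-to-leaf path
    for that leaf, and the block is then remapped to a fresh random leaf so
    repeated accesses to the same block rarely traverse the same path."""

    def __init__(self, depth=4):
        self.num_leaves = 2 ** depth
        self.position = {}   # block id -> current leaf
        self.data = {}       # block id -> value (stands in for encrypted storage)

    def _path(self, leaf):
        # heap-style numbering: root is 1, leaves are num_leaves .. 2*num_leaves - 1
        node = self.num_leaves + leaf
        path = []
        while node >= 1:
            path.append(node)
            node //= 2
        return path[::-1]

    def access(self, block_id, value=None):
        leaf = self.position.setdefault(block_id, random.randrange(self.num_leaves))
        touched_nodes = self._path(leaf)   # an observer sees only this path
        if value is not None:
            self.data[block_id] = value
        # remap so the next access to this block follows an unrelated path
        self.position[block_id] = random.randrange(self.num_leaves)
        return self.data.get(block_id), touched_nodes

store = ToyPathStore(depth=3)
store.access("x", value=42)
print(store.access("x"))   # same block, but very likely a different path of nodes
print(store.access("x"))
```
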
Source: Massachusetts Institute of Technology

Diagnosing “broken” buildings to make them greener

The co-founders of MIT spinout KGS Buildings have a saying: “All buildings are broken.” Energy wasted through faulty or inefficient equipment, they say, can lead to hundreds of thousands of dollars in avoidable annual costs.

That’s why KGS aims to “make buildings better” with cloud-based software, called Clockworks, that collects existing data on a building’s equipment — specifically HVAC (heating, ventilation, and air conditioning) equipment — to detect leaks, breaks, and general inefficiencies, as well as energy-saving opportunities.

The software then translates the data into graphs, metrics, and text that explain monetary losses, and makes it available to building managers, equipment manufacturers, and others through the cloud.

Building operators can use that information to fix equipment, prioritize repairs, and take efficiency measures — such as using chilly outdoor air, instead of air conditioning, to cool rooms.

“The idea is to make buildings better, by helping people save time, energy, and money, while providing more comfort, enjoyment, and productivity,” says Nicholas Gayeski SM ’07, PhD ’10, who co-founded KGS with Sian Kleindienst SM ’06, PhD ’10 and Stephen Samouhos ’04, SM ’07, PhD ’10.

The software is now operating in more than 300 buildings across nine countries, collecting more than 2 billion data points monthly. The company estimates these buildings will save an average of 7 to 9 percent in avoidable costs per year; the exact figure depends entirely on the building. “If it’s a relatively well-performing building already, it may see lower savings; if it’s a poor-performing building, it could be much higher, maybe 15 to 20 percent,” says Gayeski, who graduated from MIT’s Building Technology Program, along with his two co-founders.

Last month, MIT commissioned the software for more than 60 of its own buildings, monitoring more than 7,000 pieces of equipment over 10 million square feet. Previously, in a year-long trial for one MIT building, the software saved MIT $286,000.

Benefits, however, extend beyond financial savings, Gayeski says. “There are people in those buildings: What’s their quality of life? There are people who work on those buildings. We can provide them with better information to do their jobs,” he says.

The software can also help buildings earn additional incentives by participating in utility programs. “We have major opportunities in some utility territories, where energy efficiency has been incentivized. We can help buildings meet energy-efficiency goals that are significant in many states, including Massachusetts,” says Alex Grace, director of business development for KGS.

Other customers include universities, health-care and life-science facilities, schools, and retail buildings.

Equipment-level detection

Fault-detection and diagnostics research spans about 50 years — with contributions by early KGS advisors and MIT professors of architecture Les Norford and Leon Glicksman — and about a dozen companies now operate in the field.

But KGS, Gayeski says, is one of only a few ventures working with “equipment-level data,” gathered through the various sensors, actuators, and meters attached to equipment that measure its functionality.

Clockworks sifts through that massive store of data, measuring temperatures, pressures, flows, set points, and control commands, among other things. It’s able to gather a few thousand data points every five minutes — a finer level of granularity than meter-level analytics software that may extract, say, a data point every 15 minutes from a utility meter.

“That gives a lot more detail, a lot more granular information about how things are operating and could be operating better,” Gayeski says. For example, Clockworks may detect specific leaky valves or stuck dampers on air handlers in HVAC units that cause excessive heating or cooling.

To make its analyses accurate, KGS employs what Gayeski calls “mass customization of code.” The company has code libraries for each type of equipment it works with — such as air handlers, chillers, and boilers — that can be tailored to specific equipment that varies greatly from building to building.

This makes Clockworks easily scalable, Gayeski says. But it also helps the software produce rapid, intelligent analytics — such as accurate graphs, metrics, and text that spell out problems clearly.

Moreover, it helps the software rapidly translate data into monetary losses. “When we identify that there’s a fault with the right data, we can tell people right away this is worth, say, $50 a day or this is worth $1,000 a day — and we’ve seen $1,000-a-day faults — so that allows facilities managers to prioritize which problems get their attention,” he says.

KGS Buildings’ foundation

The KGS co-founders met as participants in the MIT entry for the 2007 Solar Decathlon — an annual competition in which college teams build small-scale, solar-powered homes to display on the National Mall in Washington. Kleindienst worked on lighting systems, while Samouhos and Gayeski worked on mechanical design and energy modeling.

After the competition, the co-founders started a company with a broad goal of making buildings better through energy savings. While pursuing their PhDs, they toyed with various ideas, such as developing low-cost sensing technology with wireless communication that could be retrofitted onto older equipment.

Seeing building data as an emerging tool for fault detection and diagnostics, however, they turned to Samouhos’ PhD dissertation, which focused on building condition monitoring. It came complete with the initial diagnostics codes and a framework for an early KGS module.

“We all came together anticipating that the building industry was about to change a lot in the way it uses data, where you take the data, you figure out what’s not working well, and do something about it,” Gayeski says. “At that point, we knew it was ripe to move forward.”

Throughout 2010, they began trialing the software at several locations, including MIT. They found guidance among the seasoned entrepreneurs at MIT’s Venture Mentoring Service — learning to fail fast, and often. “That means keep at it, keep adapting and adjusting, and if you get it wrong, you just fix it and try again,” Gayeski says.

Today, the company — headquartered in Somerville, Mass., with 16 employees — is focusing on expanding its customer base and advancing its software into other applications. About 180 new buildings were added to Clockworks in the past year; by the end of 2014, KGS projects it could deploy its software to 800 buildings. “Larger companies are starting to catch on,” Gayeski says. “Major health-care institutions, global pharmaceuticals, universities, and [others] are starting to see the value and deciding to take action — and we’re starting to take off.”

Liberating data

By bringing all this data about building equipment to the cloud, the technology has plugged into the “Internet of things” — a concept in which objects are connected, via embedded chips and other methods, to the Internet for inventory and other purposes.

Data on HVAC systems have been connected through building automation for some time. KGS, however, can connect that data to cloud-based analytics and extract “really rich information” about equipment, Gayeski says. For instance, he says, the startup has quick-response codes — like a barcode — for each piece of equipment it measures, so people can read all the data associated with it.

“As more and more devices are readily connected to the Internet, we may be tapping straight into those, too,” Gayeski says. “And that data can be liberated from its local environment to the cloud,” Grace adds.

Down the road, as technology to monitor houses — such as automated thermostats and other sensors — begins to “unlock the data in the residential scale,” Gayeski says, “KGS could adapt over time into that space, as well.”
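
As a rough illustration of the kind of equipment-level, rule-based fault detection described above, the sketch below flags intervals in which an air handler's heating and cooling valves are open at the same time and attaches a dollar estimate. The column names, threshold, and cost rate are hypothetical placeholders, not Clockworks' actual point names, rules, or pricing:

```python
import pandas as pd

def simultaneous_heating_cooling(trend, valve_open_pct=5.0, cost_per_hour=2.0):
    """Flag 5-minute intervals where heating and cooling valves are both
    commanded open (a classic signature of a leaky valve or control fault)
    and attach a rough avoidable-cost estimate. All names and rates here
    are illustrative placeholders."""
    fault = (trend["heating_valve_pct"] > valve_open_pct) & \
            (trend["cooling_valve_pct"] > valve_open_pct)
    fault_hours = fault.sum() * 5 / 60.0          # 5-minute samples -> hours
    return trend[fault], fault_hours * cost_per_hour

# Example with fabricated trend data for one air handler
trend = pd.DataFrame({
    "heating_valve_pct": [0, 0, 30, 45, 0],
    "cooling_valve_pct": [80, 60, 25, 20, 0],
})
faulty_rows, est_cost = simultaneous_heating_cooling(trend)
print(len(faulty_rows), "faulty intervals, estimated cost ${:.2f}".format(est_cost))
```
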
Source: Massachusetts Institute of Technology

Computing at full capacity

According to a 2014 study from NRDC and Anthesis, in 2013 U.S. data centers burned 91 billion kilowatt-hours of electricity, enough to power every household in New York City twice over. That figure is expected to rise to 140 billion kilowatt-hours by 2020. While improved energy-efficiency practices could go a long way toward lowering this figure, the problem is greatly exacerbated by the underutilization of servers, including an estimated 30 percent of servers that remain plugged in but no longer perform any services, the study says.

In another 2014 study, tech research firm Gartner, Inc., found that data center systems collectively represent a $143 billion market. With enterprise software adding $320 billion to that and IT services another $963 billion, the overall IT industry represents a whopping $3.8 trillion market.

Companies are increasingly seeking new ways to cut costs and extract the largest possible value from their IT infrastructure. Strategies include placing data centers in cooler climates, switching to more affordable open source software, and virtualizing resources to increase utilization. These solutions just scratch the surface, however.

An MIT-connected startup called Jisto offers businesses a new tool for cutting data center and cloud costs while improving resource utilization. Jisto manages existing enterprise applications by automatically wrapping them in Jisto-managed Docker containers and intelligently deploying them across all available resources using automated real-time deployment, monitoring, and analytics algorithms. As the resource utilization profile changes for each server, or for different parts of the network and storage, Jisto elastically scales its utilization in real time to compensate.

“We’re helping organizations get higher utilization of their data center and cloud resources without worrying about resource contention,” says Jisto CEO and co-founder Aleksandr (Sasha) Biberman. So far, the response has been promising. Jisto was a Silver Winner in the 2014 MassChallenge, and early customers include data-intensive companies such as banks, pharmaceutical companies, biotech firms, and research institutions.

“There’s pressure on IT departments from two sides: How can they more efficiently reduce data center expenditures, and how can they improve productivity by giving people better access to resources,” Biberman says. “In some cases, Jisto can double the productivity with the same resources just by making better use of idle capacity.”

Biberman praises the MIT Industrial Liaison Program and Venture Mentoring Service for hosting networking events and providing connections. “The ILP gave us connections to companies that we would have never otherwise have connected to all around the world,” he says. “It turned us into a global company.”

Putting idle servers back to work

The idea for Jisto came to Biberman while he was a postdoc in electrical engineering at MIT’s Research Laboratory of Electronics (RLE), studying silicon photonic communications. While researching how optical technology could improve data center performance and efficiency, he discovered an even larger problem: underutilization of server resources.

“Even with virtualization, companies use only 20 to 50 percent of in-house server capacity,” Biberman says. “Collectively, companies are wasting more than $100 billion annually on unused cycles. The public cloud is even worse, where utilization runs at 10 to 40 percent.”

In addition to the problem of sheer waste, Biberman also discovered that workload resources are often poorly managed. Even when more than half of a company’s resources are sitting idle, workers often complain that they can’t get enough access to servers when they need them.

Around the time of Biberman’s realization, he and his long-time friend Andrey Turovsky, a Cornell University-educated tech entrepreneur who is now Jisto’s CTO and co-founder, had been brainstorming startup ideas. They had just developed a lightweight platform to automatically deploy and manage applications using virtual containers, and they decided to apply it to the utilization and workload management problem.

Underutilization of resources is less a technical issue than a “corporate risk aversion strategy,” Biberman says. Companies tend to err on the side of caution when deploying resources and typically acquire many more servers than they need.

“We started seeing some crazy numbers in data center and cloud provisioning,” Biberman explains. “Typically, companies provision for twice as much as they need. One company looks at last year’s peak loads, and overprovisions above that by a factor of four for the next year. Companies always plan for a worst-case scenario spike. Nobody wants to be the person who hasn’t provisioned enough resources, so critical applications can’t run. Nobody gets fired for overprovisioning.”

Despite overprovisioning, users in most of the same organizations complain about lack of access to computing resources, says Biberman: “When you ask companies if they have enough resources to run applications, they typically say they want more even though their resources are sitting there going to waste.”

This paradox emerges from the common practice of splitting access into different resource groups, which have different levels of access to various cluster nodes. “It’s tough to fit your work into your slice of the pie,” Biberman says. “Say my resource group has access to five servers, and it’s agreed that I use them on Monday, and someone else takes Tuesday, and so on. But if I can’t get to my project on Monday, those servers are sitting completely idle, and I may have to wait a week. Maybe the person using it on Tuesday only needs one of the five servers, so four will sit idle, and maybe the guy using it the next day realizes he really needs 10 or 20 servers, not just the five he’s limited to.”

Jisto breaks down the artificial static walls created by ownership profiles and replaces them with a more dynamic environment. “You can still have priority during your server time, but if you don’t use it, someone else can,” Biberman explains. “That means people can sometimes get access to more servers than were allotted. If there’s a mission-critical application that generates a spike we can’t predict, we have an elastic method to quickly back off and give it priority.”

Financial services companies are using Jisto to free up compute cycles for Monte Carlo simulations that could benefit from many more servers and nodes. Pharma and life science companies, meanwhile, use a similar strategy to do faster DNA sequencing. “The more nodes you have, the more accurately you can run a simulation,” Biberman says. “That’s a huge advantage.”

Docker containers for the enterprise

Jisto is not the only cloud-computing platform that claims to improve resource utilization and reduce costs. The problem with most, however, is that “if you have a really quick spike in workload, there’s not enough time to make intelligent decisions about what to do,” Biberman says. “With Jisto, an automatic real-time decision-making process kicks in, enabling true elasticity across the entire data center with granularity as fine as a single core of a CPU.”

Jisto monitors not only CPU usage but also other parameters such as memory, network bandwidth, and storage. “If there’s an important memory transfer happening that requires a lot of bandwidth, Jisto backs off, even if there’s plenty of CPU power available,” Biberman says. “Jisto can make intelligent decisions about where to send jobs based on all these dynamic factors. As soon as something changes, Jisto decides whether to stop the workload, pause it, or reduce resources. Do you transfer it to another server? Do you add redundancy to reduce the latency tail? People don’t have to make and implement those decisions.”
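
As a rough illustration of this kind of elastic back-off (an assumption-laden sketch, not Jisto's implementation), the loop below uses the Docker SDK for Python and psutil to run an opportunistic batch container and pause it whenever host CPU load crosses an assumed threshold. The image, command, and thresholds are placeholders:

```python
import docker   # Docker SDK for Python: pip install docker
import psutil   # pip install psutil

CPU_BUSY = 75.0   # assumed thresholds, in percent
CPU_IDLE = 40.0

client = docker.from_env()

# Launch an opportunistic batch job capped at half a CPU core.
job = client.containers.run(
    "python:3.11-slim",                      # placeholder image
    "python -c 'while True: pass'",          # stand-in for real batch work
    detach=True,
    cpu_period=100_000, cpu_quota=50_000,
)

try:
    while True:
        load = psutil.cpu_percent(interval=5)   # host-wide CPU utilization
        job.reload()
        if load > CPU_BUSY and job.status == "running":
            job.pause()       # back off so primary workloads keep their headroom
        elif load < CPU_IDLE and job.status == "paused":
            job.unpause()     # resume scavenging idle cycles
finally:
    job.remove(force=True)
```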

The platform also integrates rigorous security provisions, says Biberman. IT directors are understandably cautious about bringing third-party software into their complex data center ecosystems, which are often protected by firewall and regulation settings. Jisto, however, can quickly prove with a beta test how the software can work its magic without interfering with mission-critical resources, he adds.

Jisto’s unobtrusiveness is largely due to its use of Docker containers. “Docker has nice APIs and makes the process much easier, both for us as developers and for Jisto customers,” Biberman explains. “Docker is very portable — if you can run it on Linux, you can run it on Docker — and it doesn’t care if you’re running it on a local data center, a private cloud, or on Amazon. With containers, we don’t need to do something complicated like run a VM inside another VM. Docker gives us a lightweight way to let people use the environment that’s already set up.”

Based in Cambridge, Massachusetts, Jisto was the first, and remains one of few, Docker-based startups in this region.

Moving up to the cloud

Companies are increasingly saving on data center costs by using public cloud resources in a hybrid strategy during peak demand. Jisto can help bridge the gap with better efficiency and flexibility, says Biberman. “If you’re a bank, you might have too many regulations on your data to use the public cloud, but most companies can gain efficiencies with public clouds while still keeping their private cloud for confidential, regulated, or mission-critical tasks.”

Jisto operates essentially the same whether it’s running on-premises, or in a private, public, or hybrid cloud. Companies that exceed the peak level of their private data center can now “burst out” onto the public cloud and take advantage of the elastic nature of services such as Amazon, says Biberman. “Some companies provision hundreds of thousands of nodes on Amazon,” he adds. The problem is that Amazon charges by the hour. “If a company only needs five minutes of processing, as many as 100,000 nodes would sit idle for 55 minutes.”

Jisto has recently begun to talk to companies that do cloud infrastructure as a service, explaining how Jisto can reprovision wasted resources and let someone else use them. According to Biberman, it’s only a matter of time before competitive pressures lead a cloud provider to use something like Jisto.

MIT Startup Exchange (STEX) is an initiative of MIT’s Industrial Liaison Program (ILP) that seeks to connect ILP member companies with MIT-connected startups. Visit the STEX website and log in to learn more about Jisto and other startups on STEX.
Source: Massachusetts Institute of Technology

Communities in the cloud

The cloud’s very name reflects how many people think of this data storage system: intangible, distant, and disentangled from day-to-day life. But MIT PhD student Steven Gonzalez is reframing the image and narrative of an immaterial cloud. In his research, he’s showing that the cloud is neither distant nor ephemeral: It’s a massive system, ubiquitous in daily life, that consumes huge amounts of energy, has the potential for environmental disaster, and is operated by an insular community of expert technicians.

Who’s tending the cloud?

“People so often rely on cloud services,” Gonzalez notes, “but they rarely think about where their data is stored and who is storing it, who is doing the job of maintaining servers that run 24/7/365, or the billions of gallons of water used daily to cool the servers, or the gigawatts of electricity that often come from carbon-based grids.”

The first time Gonzalez walked into a server farm, he was enthralled and puzzled by this giant factory filled with roaring computers and by the handful of IT professionals keeping it all running. At the time, he was working with specialized sensors that measured air in critical spaces, including places like the server farm. But the surreal facility led him back to his undergraduate anthropological training: How do these server spaces work? How has the cloud shaped these small, professional communities?

Gonzalez has been fascinated with visible, yet rarely recognized, communities since his first undergraduate ethnography on bus drivers in the small New Hampshire city of Keene. “In anthropology, everyone is a potential teacher,” he says. “Everyone you encounter in the field has something to teach you about the subject that you’re looking at, about themselves, about their world.”

Server farms are high-stakes environments

Listening — and a lot of patience — helped Gonzalez cultivate the technical expertise to understand his subject matter. Cloud communities are built around, and depend upon, the technology they maintain, and that technology in turn shapes their behavior. So far, Gonzalez has completed his undergraduate and master’s research and degrees, and is currently wrapping up PhD coursework en route to his dissertation. He’s visited server farms across North America and in Scandinavia, where farm operators are seeking to go carbon-free in order to cut the cloud’s carbon emissions, which account for up to 3 percent of greenhouse gas emissions, according to Greenpeace.

The server-farm technicians function in an extremely high-stakes world: Not only is a massive amount of energy expended on the cloud, but even a few moments of downtime can be devastating. If the systems go down, companies can lose up to $50,000 per minute, depending on what sector (financial, retail, public sector, etc.) and which server racks are affected. “There’s a kind of existential dread that permeates a lot of what they say and what they do,” Gonzalez says. “It’s a very high-stress, unforgiving type of work environment.”

New technology, old gender inequity

In response to these fears, Gonzalez has noted some “macho” performances in language and behavior by cloud communities. The mostly male cloud workforce “tend to use very sexual language,” Gonzalez observes. For instance, when all the servers are functioning properly it’s “uptime”; “They’ll use sexualized language to refer to how ‘potent’ they are or how long they can maintain uptime.”

The cloud communities aren’t exclusively male, but Gonzalez says visibility for women is a big issue. Women tend to be framed as collaborators, rather than executors. Tied up in this sexist behavior is the decades-old patriarchal stereotype that technology is a male domain in which machines are gendered in a way that makes them subordinate.

Although anthropological research is the focus of his academic work, Gonzalez’s interests at MIT have been expansive. With the encouragement of his advisor, Professor Stefan Helmreich, he’s kept his lifelong interest in music and science fiction alive by singing in the MIT Jazz Choir and Concert Choir and taking coursework in science fiction writing. He has also enjoyed exploring courses in history, documentary making, and technology. Anthropology is the first among several passions he discovered during his explorations as an undergraduate at Keene State College.

“For me, what makes anthropology so capacious is just the diversity of human experience and the beauty of that,” says Gonzalez. “The beauty of so many different possibilities, different configurations of being, that exist simultaneously.”

The open doors of MIT

Gonzalez was born in Orlando, Florida, to Puerto Rican parents who made sure he always had a connection with the island, where he would spend summers with his grandmother. A first-generation college student, Gonzalez says it was never a given that he would even go to college, let alone earn a doctorate: “I never would have imagined that I would have ended up here. It’s a sad reality that, as a Latino person in this country, I was more likely to end up in prison than in a place like MIT. So I had — and I still do — immense respect and awe for the Institute. MIT has a mystique, and when I first arrived I had to deal with that mystique, getting over the sense that I don’t belong.”

He had big expectations about entering a hugely competitive institution but was surprised to find that, in addition to its competitive edge, the Institute was incredibly supportive. “The thing that surprised me the most was how open everyone’s door was.”

Gonzalez has become increasingly involved in campus goings-on: he’s now a Diversity Conduit for the Graduate Student Council Diversity and Inclusion Initiative and is also part of an MIT student initiative that is exploring Institute ties and possible investments in the prison-industrial complex.
 

Story prepared by MIT SHASS Communications
Editorial and Design Director: Emily Hiestand
Writer: Alison Lanier
Source: Massachusetts Institute of Technology