Trust on the wild web

Mark Zuckerberg is the world’s youngest billionaire. He got there by founding facebook.com, one of the biggest beasts in the Internet jungle. In the early days, so the story goes, he boasted to a friend on instant messenger that he had the personal details of over 4,000 students at Harvard, and that if his friend ever wanted to know anything, he should get in touch. Understandably, his incredulous friend wanted to know how Zuckerberg had access to this information. His reply? ‘People just submitted it. I don’t know why. They trust me, dumbf***s.’
The online environment is no longer merely an aid to living well offline; for many, it has become a forum where much of life is now conducted. But one issue that raises its head again and again is this question of trust on the Internet.
Examining whether and how we can design the Internet for online trust is the focus of my research in the Faculty of Philosophy, supervised by Dr Alex Oliver. The project is sponsored by Microsoft Research, whose Socio-Digital Systems group in Cambridge looks at how technology interacts with human values.
The research is a chance to do some practical philosophy, reflecting on and engaging with applied issues. And as the Internet becomes an ever more pervasive part of our lives, issues of trust online are only going to grow in importance. So there is a unique and timely opportunity – and challenge – to break new ground.

Trusting me, trusting you
It is easily overlooked, but when you stop to think, it is striking how much we trust to other people. It is a fundamental precondition for the smooth functioning of society. Like the air we breathe, or the cement in brickwork, trust is both essential and usually taken for granted.
One consequence is that we tend to notice our reliance on trust only when things go wrong. And although it is easy to eulogise trust, it is not always appropriate. Trusting the untrustworthy is often a dramatically bad idea. But distrusting the trustworthy may have equally serious consequences.
Certainly, most people want to live in a world where it makes sense to trust people, and for people to trust them. But they also don’t want to be taken for a ride. So we have to work out when trust is appropriate.
The trouble is, it is much harder to work out online when trust is appropriate and when it is not. It is much more difficult to determine whether a particular person is trustworthy – much of the personal and social context of offline interaction is stripped away in cyberspace, and online identities can be less stable.
But perhaps more seriously, it is still relatively unclear what the norms and mores are that govern appropriate behaviour online. This applies both to the informal norms that spontaneously arise in interpersonal interaction, and also to the apparatus of formal law.
The web, in this sense, is a bit like the Wild West. It is not that life is impossible there – far from it. Indeed, it’s often pretty flamboyant and colourful, and a stimulating place to be. But people can also act unpredictably, and there is little recourse for those who get stung.

Building trust
One moral of the story about the Facebook founder’s comment is that you’ve got to be careful who you trust online. That’s obvious enough, and no different to what we tell our children.
But there are some more challenging issues. For the online world has an important feature: it is malleable. How something is built often serves particular ends, whether intended or not, and these ends in turn serve to realise particular visions of how people ought to live. Were I a metalsmith, for instance, I would rather make ploughs than thumbscrews – I don’t want to contribute to making a world where thumbscrews are plentiful.
This applies to contemporary technologies too. At the last count, 500 million people now have their social relationships partially structured by Zuckerberg’s vision of connecting people, according to whether they have confirmed or ignored the one-size-fits-all ‘friend request’ on facebook.com. The basic TCP/IP structures of the Internet were built according to a broadly libertarian vision widely shared among the early computer science pioneers, which rejects central control or ownership in order to facilitate free expression.
So the more pertinent question is: can we build the Internet in a way that facilitates well-placed trust, and encourages trustworthiness? In short, can we design for online trust? To answer this, we need to look at why people are trustworthy and untrustworthy; what counts as good evidence for a person’s trustworthiness online; the effects of online anonymity and pseudonymity; and the role of institutions in grounding trustworthiness. For instance, one mechanism through which we can secure others’ trustworthiness is to develop better online reputation systems and track past conduct.
These questions cannot be answered once and for all. Technology is dynamic: cloud computing, for instance, is considered by many to be a step change in the way we compute, and it too raises specific questions around trust (see panel). As technology changes, so too will the philosophical challenges. The hope is that collaborative work between computer engineers, lawyers and philosophers can help to make the Internet a safer place.

For further information, please contact Tom Simpson (tws21@cam.ac.uk), whose PhD research in the Faculty of Philosophy (www.phil.cam.ac.uk/) is being sponsored by Microsoft Research Cambridge. His article on ‘e-Trust and Reputation’ is published in Ethics and Information Technology.

Philosopher Tom Simpson asks: can we build a trustworthy and safe Internet?
Engineering is always about solving problems for people and the society in which they live. Philosophy can help understand what those problems are and how they are to be solved.

Professor Richard Harper, Microsoft Research Cambridge

Digital trust

Cloud computing
Cloud computing is widely heralded as one of the most radical changes to the way we compute, and its full impact is thought to be just around the corner. First and foremost, the cloud is a change in the geography of computing – instead of having your PC store your data and run everything, your computing will be done on banks of servers and accessed remotely. Along with the change in geography, the move to the cloud is also a change in the scale of computing, with access to far more powerful computing facilities than ever before.
But the cloud raises a host of philosophical issues, particularly questions of responsibility. Who should own what data? When are ‘crowd-sourcing’ techniques appropriate, and when not? What are the effects of more powerful techniques of profiling individuals? What happens to privacy when we compute in the cloud?
To discuss these and related issues, the Faculty of Philosophy and Microsoft Research are co-hosting an international conference in Cambridge, gathering together leading philosophers and practitioners. Two open lectures will be held on the evenings of 5 and 6 April 2011.
For further details, please visit trustandcloudcomputing.org.uk

This work is licensed under a Creative Commons Licence. If you use this content on your site please link back to this page.
Source: University of Cambridge
