July 30, 2014

Five Minutes with Judith Donath

Posted by: Dave Ryman

Our latest "Five Minutes with the Author" is with Judith Donath, a Faculty Fellow at Harvard University’s Berkman Center for Internet and Society, a Visiting Scholar at MIT’s Program in Science, Technology, and Society, and the author of The Social Machine.

You claim your book “is a guide to understanding how existing systems influence behavior and a manifesto for designing radically new environments for social interaction”. Why are these new environments necessary?

The internet has already brought about many extraordinary changes to how people interact.  Some of these changes have been quite beneficial; we have unprecedented opportunities to connect with others who share our interests or concerns.  Others are less positive. Many online discussions devolve into vicious arguments or are infiltrated with spam.  

New and well-designed interfaces can help ameliorate many of these problems.  For example, much of the richness and subtlety of the physical world is missing online, while other social patterns exist but are hard to perceive.   A big theme in the book is that we can design interfaces and visualizations that make social patterns visible—which is key to encouraging sociability and cooperation. Another problem is that it is difficult to distinguish between public and private space online.  Are your words read by ten people or ten thousand?  Are they ephemeral or permanent?  Without knowing these conditions, we cannot correctly gauge how to act, how candidly to speak.  Good design can make the distinction between public and private legible.

How we design the medium shapes the culture that evolves. Face to face, our impressions of people are shaped by their age, gender, skin color, height, weight, etc.  Online, we can reinvent identity, creating communities where different information is salient about a person—whether it is their taste in books, the essays they write, their shopping history, the regard in which they are held by others, etc.  Can this eliminate harmful stereotypes?   What other goals do we have for the online societies we are creating?

We have really just begun to explore what is possible online.  

Some critics say that we rely too much on technology; that we don’t need new ways of interacting socially online—we should connect in person. What do you say to these critics? Is it inevitable that the vast majority of us will do most of our social interaction via technology? If the answer is yes, what are the implications of this?

Sociability is good—both online and off!  It is a false dichotomy to point to online socializing as the bane of sociability.  Rather, there are many cultural and economic forces that are changing, and sometimes diminishing, sociability in general. 

Let’s look at our changing personal social networks—the set of people we know and keep in touch with.  Unaided, without the help of social technologies, there is a limit to how many people we can maintain ties with.  And in particular, strong ties—the close relationships we have with the family and friends we rely on for support, who are there for us in a crisis, and whom we help out when needed—take a lot of effort to maintain. In the past, we were deeply dependent on these relationships: you’d need such ties to help with the harvest, to build a house, with childcare, etc. Now these tasks are often outsourced to the market: we buy our homes and food ready-made; we hire sitters for our kids.  We still have strong ties, but few of us rely on them nearly as heavily as was once the norm.

It is not only with major tasks that we are replacing social exchange with impersonal actions.  If I’m planning a trip, I can find all the schedules and recommendations I need with a quick search—no need any more to ask friends for advice. Stopping a stranger to ask directions is becoming as anachronistic as looking for a pay phone.  The Web now provides much of the advice and knowledge we once sought from other people.

It seems paradoxical that at the same time that we rely less on other people, we have a growing collection of technologies that support personal relationships.   A few years ago, if you met someone at a party or a conference and discovered you had some interests in common, you were likely to exchange phone numbers or business cards, say you’d keep in touch—and never do so.  Today, you can connect with them on Facebook, LinkedIn, Instagram, etc. (depending on your profession, age, and the shared topic of interest)—and not only stay in touch easily, but often learn of other common tastes and concerns that might otherwise never have come up in one-to-one conversation.  Until recently, people who went off to college or the army or to seek work in a new town often lost track of their old friends; today, social network sites help us rediscover lost connections and keep up with huge numbers of people.  While these sites are certainly used to maintain strong ties, their most transformative effect is in strengthening weak ties: they provide a semi-public forum in which actively interacting with a few people makes many aware of what you are up to and provides a glimpse of your multifaceted concerns and interests.

We are evolving the networks (and tools needed to support them) that are well suited for a highly mobile world, where access to information is key.  Face to face and online communication are complementary, not competitive—we learn different things about each other in these different environments, and help each other out in different ways. 

With the revelation that the NSA was collecting and storing our metadata, privacy has become the biggest concern about our online presence. In Chapter 11, “Privacy and Public Space”, you argue that “privacy and publicity are complementary and need to be in balance”. What are some ways that we can use your design recommendations to balance privacy issues?

Privacy fails when something that was intended for one context gets shown in another.  A vivid example is revenge porn: Susie sends Charlie a naked picture of herself, when they are happy and in love; they break up, and he posts it online.  Of course, many privacy violations are not so egregious — a home video of you singing “Happy Birthday” to your niece in an off-key voice is cute within the private space of the family, but acutely embarrassing if shown to your colleagues.

One major cause of privacy breaches is that activities we are accustomed to think of as ephemeral, such as casual conversations, are, online, permanently archived.  Furthermore, search engines make it likely that this information will resurface outside the context for which it was intended and in which it makes sense.

Once information has become public it is difficult, if not impossible, to put it back into a private box.  While the EU has recently ruled that users have the “right to be forgotten”—to have negative online information about themselves removed—doing so effectively is very challenging.  A more promising strategy is to provide more data, more context. If there is a lot of information about me that is part of my intended self-presentation, less weight will attach to any particular thing (though the success of this approach depends on how scandalous the troubling material is: a video of off-key singing may simply add a bit of color to the impression one makes, but something deeply shocking or offensive may overwhelm everything else).

Helping people manage their online self-presentation is central to maintaining privacy. Offline, we can keep different facets of our lives separate, but this is quite difficult to do online if everything is united via your name. An important tool here is the use of pseudonyms.  A pseudonym will not hide your activities from determined and savvy surveillance (unless you are extremely intrepid and adept at using onion routers and complex anonymizers, etc.), but it will provide the sort of everyday privacy that gives you control over how you appear to others.   But for pseudonyms to work well, we need to design interfaces that support persistent pseudonyms and anchor identity to one’s history within a community, rather than to one’s real name.

Another cause of privacy failure is the invisible audience.  Unawareness of audience helps make many online discussions feel so intimate and forthcoming—people think of themselves as conversing with the small group that is actively participating, but the readership is much larger.  This is not always bad— both in restaurants and online, we often like these liminal spaces, which feel personal, but also have the energy of a public space.  But, sometimes this ambiguity leads to mistakes—to acting in ways or revealing things that you would not have done were you more aware of the extent of the audience.  Designs that make the audience visible help people gauge how formal or revealing they wish to be.

It is important to keep in mind that privacy is not solely or even primarily a technological issue.  It is a legal and social issue.  In a very diverse and accepting society, there is much less need for privacy.  Designing interfaces that support privacy is essential, but so is working to promote tolerance in the public sphere.

In the chapter “Embodied Interaction”, you talk about the difficulties of socializing through technology. Among the examples you highlight: when communicating through text, it’s impossible to pick up nonverbal signals from the other person, and video conferences can feel awkward. Your solution to this is immersive virtual reality. In March, Facebook purchased Oculus VR, a virtual reality company. In what ways will this acquisition influence the future of interacting socially through VR?

Just to be clear—I don’t propose immersive virtual reality as the solution to these problems!

For social communication, the real problem is limited input, more than display.   Face to face, we communicate not only with words, but also with gestures, facial expression, gaze, etc.  When the input is limited to typed text, as is common online, we lose all those non-verbal signals.   Now, this isn’t necessarily bad—the pared-down medium of text has many advantages, ranging from ease of use to the ability to reconfigure identity.  And, there are subtle social signals hidden in text interactions which we can make more discernible with visualizations.  But there are nuances we just don’t get with text, and it’s especially hard to get the vivid (though sometimes inaccurate!) sense of personality that we read from face-to-face non-verbal cues.

Oculus Rift is a VR helmet that lets you explore a 3D world by moving your head.  It’s designed for games, which have detailed environments to explore, need speedy graphics, and have a limited set of actions that the users perform.  It does track head position, both for navigation and as gestural input, to drive the user character.  But in a game, these simple gestures are mapped onto an avatar with a complex face and body — so most of the movements of the avatar are generated by the game software, not by the user.  It looks good, but it’s not primarily conveying the user’s reaction—or personality.  It’s a semi-autonomous puppet.

A device designed specifically for social interaction should focus more on increasing the types of input, e.g. sensing hand gestures and facial expression.  And, the key point I make in that chapter is that the output should be scaled to the input.  If you have only simple measurements, use simple representations.  These can still be very expressive, and are more communicative too, for a simple output design matched to the level of detail in the input stream conveys the user’s intentions, not the computer’s.

Finally, what are some of the current trends in social technology that you think have potential to transform how we interact? What are some trends that could be potentially troublesome for us?

The vast and detailed data shadows we are amassing have the potential to transform how we perceive other people.  This will have a tremendous effect on how we organize society, and simply go about our daily lives.

“Data shadows” include information people create, such as blog posts, vacation photographs, and product reviews, as well as references to and depictions of them made by others, whether in company newsletters, race day results, friends’ party pictures, or police reports. We see glimpses of these shadows whenever we do a search on someone’s name.  Today, much data exists about some people, while others are still blank ciphers, but as more of our activities occur online, our data shadows grow bigger and more detailed.

Today, the most extensive and vivid data shadows are not viewable even by the subject.  They are gathered by, and are the property of, advertisers and the government. Highly detailed, they also include private data, such as the search terms you use and the pages you read.  These dossiers are made for persuasion and surveillance; these are not social uses—and it is controversial whether they benefit or harm the subject.

Here, though, I’d like to focus on the social uses for personal data.  The fundamental problem of social life is making sense of other people—figuring out who to trust and how another relates to you.  By trust I mean both the big scale of trusting with your life (or with your kids) and also the everyday trust of empathy and shared tastes. Do we see things the same way? And if we don’t, do I trust your judgment enough to let you influence and persuade me?  These problems in social perception are especially acute online, where many of the cues we rely on face-to-face are missing.

How can design help?  One way is to make portions of these data shadows visible—creating data portraits by visualizing people’s history and data about them.   The design of such portraits raises many issues about privacy, control, point of view, etc.  Portraits are subjective: their design highlights some data and can laud particular things and denigrate others.  Their design influences how we perceive status, what we strive for and what we admire. Even a Google search is a form of portrait, the result of a sophisticated algorithm that determines the importance and relevance of each link, presenting them to you in an algorithmically determined order.

Early internet theorists were optimistic that an online world in which people were perceived through their words, rather than their appearance, would bring about a greatly improved society—one based on merit, and lacking in prejudice.  Today, the mood is different.  One big concern is that our online personas will be manipulated to advance commercial interests.  We can see glimpses of this now, on Facebook, where people suspect that their postings are promoted if they mention products or are the sort of content that makes a good setting for ads—and where one’s name and likeness are used to market things to others.

This is troublesome itself, and compounded when we consider that these data shadows will soon be moving from the screen to the physical world.  Tremendous cultural transformation will occur when face recognition technology allows us to see a person’s data as a shadow trailing their physical self.   This is not entirely negative.  I can imagine a future in which greater information about the people around you could break down many of the barriers that separate us in day to day life.  But there is also the specter of vastly diminished privacy, a world in which your history and identity are always exposed.  

In which flavor of future will we find ourselves?  The key is how much control we have over our online portrayal. At the moment, with the rise of commercial social platforms, advertiser support of content, and widespread corporate and government surveillance, the trend is toward diminishing control.

But that trend is not inevitable. As people learn more about how interface design shapes society—how it affects how they are seen by and perceive others—they will, I hope, demand greater control over their data.  And not just to hide it, but also to embrace the ways that more knowledge of each other can enrich and strengthen society.
