On Adam Greenfield’s Ethical Guidelines
Adam Greenfield writes eloquently on the site Boxes and Arrows about ethical design and ubiquitous computing. His article, "All watched over by machines of loving grace: Some ethical guidelines for user experience in ubiquitous-computing settings", sets out five principles that designers can follow in order to design ethically. These are:
1. Default to harmlessness (user safety comes first)
2. Be self-disclosing (always disclose all features)
3. Be conservative of face (do not embarrass, humiliate or shame users)
4. Be conservative of time (don’t waste someone’s time)
5. Be deniable (allow users to opt out at any point)
This is a good list. I doubt anyone would deny its utility. The principles remind me a little of Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
These were an attempt to create some basic principles for artificial intelligence programming that would ensure ‘robots’ would do no wrong — that is, behave morally. And it is notable that we see this convergence of user interface design and artificial intelligence design principles: the smarter our machines get, the more they will have to act like moral agents.
But I don’t think Greenfield goes far enough. Like most contemporary discussions on ethics in HCI, the preoccupation seems to be with ease of use and giving the user a good user experience. To quote from his article:
“Imagine the feeling of being stuck in voice-mail limbo, or fighting unwanted auto-formatting in a word processing program, or trying to quickly silence an unexpectedly ringing phone by touch, amid the hissing of fellow moviegoers—except all the time, and everywhere, and in the most intimate circumstances of our lives. Levels of discomfort we accept as routine (even, despite everything we know, inevitable!) in the reasonably delimited scenarios presented by our other artifacts will have redoubled impact in a ubicomp world.”
Certainly this matters. Frustration, design-induced user error, accidental information disclosure, the reduction in privacy: all of these are important. But they are based on too narrow a view. They don’t take into account the longer term. The model being used is direct user-to-machine interaction and the immediate (or imminent) emotions that a user experiences — shame, frustration, and so on. What is not accounted for is the longer-term effect of technology on cultural norms, on collective behaviour, on society. Otherwise known as morality.
Our tools affect us not only in the moment; over time our tools reshape our culture, and in turn our culture reshapes us. As designers of tools, we have a moral duty not only to build tools that have good usability (as per Greenfield’s principles), but also to consider how the use of those tools will in turn change their users.
We accept the statement “we are what we eat” without much resistance. I would add to this: we become what we use. Or, in the words of Marshall McLuhan, “We become what we behold. We shape our tools and then our tools shape us”.
Consider television. Taking the conventional HCI ethical design approach we might devise some principles like:
1. Ensure the remote control is easy to use, including for the old and disabled
2. Ensure parental controls can prevent children seeing inappropriate content
3. Ensure that any information being gathered via the TV (e.g. viewing habits) is made clear to the viewers, and has their consent
4. Don’t force viewers to watch what they don’t want to watch (the great flaw in TV imho)
These are fine, but not enough. They fail to address the effect that TV has on those who watch it. How, for instance, family life has changed as the TV has become the focal point in the living room and has taken on the role of the conversationalist, the teacher, the role model and the window onto the world. A society with televisions in it is radically different from one without, irrespective of what is being broadcast, or what the user interface of the televisions is like. As Marshall McLuhan put it in 1964 in Understanding Media:
“The young people who have experienced a decade of TV have naturally imbibed an urge toward involvement in depth that makes all the remote visualized goals of usual culture seem not only unreal but irrelevant, and not only irrelevant but anemic. It is the total involvement in all-inclusive nowness that occurs in young lives via TV’s mosaic image. This change of attitude has nothing to do with programming in any way, and would be the same if the programs consisted entirely of the highest cultural content. The change in attitude by means of relating themselves to the mosaic TV image would occur in any event. It is, of course, our job not only to understand this change but to exploit it for its pedagogical richness. The TV child expects involvement and doesn’t want a specialist job in the future. He does want a role and a deep commitment to his society. Unbridled and misunderstood, this richly human need can manifest itself in the distorted forms displayed in West Side Story. The TV child cannot see ahead because he wants involvement, and he cannot accept a fragmentary and merely visualized goal or destiny in learning or in life”.
He paints an overly positive picture of the involvement provoked by TV, I feel, but he does colourfully describe the more profound effects of a medium on its users. In particular, through his concept of “the medium is the message”, he shows us that the true effect (including the moral effect) of a new medium should not be understood in terms of the content it delivers, or its usability, but rather in terms of how it alters our behaviour, self-image, and subsequent desires and actions.
Trying to understand how our design decisions may affect the lives of our users and alter their culture is incredibly difficult. In particular, we need to understand why people behave the way they do: the interface between our fixed, universal behaviour (our nature) and our culture (our nurture). What can we legitimately seek to change, and what must we take as fixed? We need to understand how group dynamics work, so that we can understand how our tools may change them. Why are we good to each other, and what encourages or discourages us to be so? And we need to understand how our own individual moral sense is formed, shaped and expressed within a continuously changing culture. Why do I make the choices that I do, and how do the tools and media that I surround myself with influence those choices? And finally, how does the feedback from this in turn influence and change me?