This chapter of the class notes discusses some social and ethical issues, beginning with an overview of the Actor-Network Theory approach to the sociology of science and technology. There are also some notes on the Afterword of Shneiderman's text.
Actor-network theory (abbreviated ANT), which was initiated by Bruno Latour and Michel Callon in France, is an important recent approach to the sociology of science and technology. The sociological angle is expressed by one of Latour's favorite slogans, "follow the actors", which means that the sociologist should not only look at what the actors do, but should also be interested in what interests them, and (more doubtfully) even believe what they believe. Actor-network theory focuses attention on the socio-technical networks that engineers and scientists create to get their projects done, emphasizing that no one acts alone (or if they do, then no one notices, so it doesn't matter). In contrast to most other work in sociology, actor-network theory does not distinguish (very much) between "human" and "non-human" actors. In my opinion, this is more of a rhetorical, or even dramatic, device than a theoretical axiom, but it certainly serves to bring forward the important roles played by resources of all kinds, including equipment, data, money, publicity, and power, and it is a useful counterbalance to approaches that concentrate on just one of the two. The neologism actant is sometimes used as a neutral way to refer to both human and non-human actors, avoiding the strong human bias in the word "actor."
Latour's view that people and machines should be treated as equals is called the Principle of Symmetry, and it is sometimes applied in ways that may be surprising. For example, he says we need to negotiate with machines just as with people, we need to recruit them as allies, to authorize and notify them, and to mobilize and delegate them; he claims that this kind of language should be taken literally, not metaphorically. Of course, this is the opposite of what most philosophers (and ordinary people) think. Perhaps these terms seem strange because they are so anthropomorphic (i.e., they attribute human characteristics to non-human things). Personally, I consider them mainly as suggestive metaphors. What do you think?
Latour's book Aramis is the sad story of a project to build a highly innovative public transport system in the suburbs of Paris; the story is sad because the project fails, and the Aramis system is left without any friends. In this book, Latour claims that only in successful projects can you figure out what actually happened; this is perhaps a bit shocking. Does objectivity really only exist for successful projects? This strange viewpoint comes from his requirement that you (as the researcher) should take the viewpoint of the actors, plus the observation that the actors will not agree among themselves about what happened when the project failed, due to the dissolution of the alliances recruited to create the project in the first place.
Another piece of Latour's unusual terminology is continuous chains of translation, which refers to the ongoing efforts to keep actors involved with the project, by "translating" it into their own languages and values. This is part of his effort to overcome technological determinism, which is the (false!) theory that technology is an autonomous force that directly changes society. For example, the very common phrase "social impact" embodies this false viewpoint. (To say that technological determinism is false is not to deny that technology has social effects - instead, it is to deny that one can ignore the social context of technology.) Latour sometimes refers to (instances of) technological determinism as "heroic narratives of technological innovation," since particular examples (e.g., newspaper and magazine articles on the history of technology) are often framed in such terms. In case you are doubtful that technological determinism can be a problem, here are some examples. Probably we've all heard the aphorism "If you build a better mousetrap, then the world will beat a path to your door." A while ago in the local paper, I saw the sentences "Cloning is inevitable once it is possible" and "Fusion power just doesn't have the impetus to succeed." These articles were written as if the projects involved had nothing to do with their context of people and other things, but had a momentum of their own. Thus they were highly misleading, by failing to address the real problems.
The word mediate is sometimes used for the role of intermediate actants in these chains of translation; this terminology provides a nice way to avoid the deterministic bias of the more usual ways of speaking of the role of (for example) a machine, a paper record, or a technology, in some project or part of a project.
In Aramis, Latour says (pp.99, 101):
The only way to increase a project's reality is to compromise, to accept sociotechnological compromises.

The pertinent question is not whether it's a matter of technology or society, but only what is the best sociotechnological compromise.

These quotations not only deny the separability of the social and the technical (even munging them into a single word), but they also make the same point as mentioned above, about the necessity for translations. Once all these translations, or recruitments, succeed, the technology "disappears", i.e., it becomes "transparent" and can be taken for granted. But if the translations fail to "interest" the actors enough, then the actors will go their own ways again, each with a different view of what the project is (or was) about.
That's why ... it [i.e., the project] can never be fixed once and for all, for it varies according to the state of the alliances. (p.106)

Note that this way of thinking has the effect of overcoming technological determinism.

... each element ... can become either an autonomous element, or everything, or nothing, either the component or the recognizable part of a whole. (p.107)
I would like to step outside this exposition of classical ANT for a moment to emphasize a feature that is usually quickly passed over. Notice that translation, mediation, or recruitment involves values in a crucial way, since the point is to "interest" the actors by appeal to their own values, using their own languages. This way of thinking about socio-technical systems includes a clear understanding of the fact that many different value systems and languages may be involved, and that communication is likely to be happening in all of them. For non-human actants, these languages and values may be technical, e.g., gears have some number of teeth per meter, and need oil (note that politics is sometimes described using similar metaphors of gears and oil!). My belief is that these values are the key to understanding how any given system actually works.
An important methodological point is that, since values show up along the links between actants, we can use this as a guide in seeking to understand a socio-technical system: we should look for the values of actants by asking what translations are being done to maintain each link in the network.
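Since these are notes for a computer science course, it may help to render this methodological point as a data structure. The following is a minimal sketch, in Python, of an actor-network: nodes are actants (human and non-human alike, per the Principle of Symmetry), and each link records the values appealed to by the translation that maintains it. ANT itself prescribes no formal model, so all of the names here (Actant, Translation, Network) and the Aramis-flavored example data are my own illustrative inventions.

```python
# A minimal, illustrative model of an actor-network; ANT prescribes no such
# formalization, so everything here is a hypothetical sketch.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Actant:
    """A node in the network; humans and non-humans are treated symmetrically."""
    name: str

@dataclass
class Translation:
    """A link between actants, maintained by appealing to the target's values."""
    source: Actant
    target: Actant
    values_appealed_to: list[str]  # the target's values, in its own "language"

@dataclass
class Network:
    translations: list[Translation] = field(default_factory=list)

    def recruit(self, source: Actant, target: Actant, values: list[str]) -> None:
        """Enroll an actant by translating the project into its values."""
        self.translations.append(Translation(source, target, values))

    def values_of(self, actant: Actant) -> set[str]:
        """The methodological point above: read off an actant's values from
        the translations that keep it enlisted in the network."""
        return {v for t in self.translations if t.target == actant
                  for v in t.values_appealed_to}

# Example: a (wholly invented) fragment of the Aramis network.
engineers = Actant("engineers")
budget_office = Actant("Budget Office")
railcar = Actant("prototype railcar")

net = Network()
net.recruit(engineers, budget_office, ["cost per passenger-km", "political visibility"])
net.recruit(engineers, railcar, ["motor torque", "coupling tolerance"])

print(net.values_of(budget_office))  # {'cost per passenger-km', 'political visibility'} (order may vary)
```

Note that the symmetry of the model is doing real work here: the Budget Office and the railcar are queried in exactly the same way, which is just the point of attaching values to links rather than to kinds of actants.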
On page 108 of Aramis, Latour argues that the "division of labor" into subprojects (and other aspects of projects) can only be made after a project has succeeded (I called this the retrospective hypothesis in Requirements Engineering as the Reconciliation of Technical and Social Issues). This may sound like a radical view, but it is what you see in real projects, and quotes from Latour's interviews with Aramis project participants, as well as my own experience with other projects, back this up empirically. Pages 118 to 120 contrast VAL (a different French public transportation project that actually succeeded) with Aramis, arguing that VAL can be described "heroically" only because it succeeded. More significantly, Latour also argues that VAL succeeded because it continued to compromise, whereas Aramis failed because it did not continue to compromise.
The more a technological project progresses, the more the role of technology decreases, in relative terms. (p.126)

In particular, Latour denies that sociology can ever attain a viewpoint that is "objective," above and beyond the viewpoints of the participants, and he even denies that there can be any "metalanguage" in which to express such a viewpoint. This is a very different viewpoint from that of classical sociology, but it is in full agreement with ethnomethodology.

To study Aramis after 1981, we have to add to the filaments of its network a small number of people representing other interests and other goals: elected officials, Budget Office authorities, economists, evaluators, ... (p.134)
A single context can bring about contrary effects. Hence the idiocy of the notion of "preestablished context." The people are missing; the work of contextualization is missing. The context is not the spirit of the times, which would penetrate all things equally. (p.137)
In fact, the trajectory of a project depends not on the context but on the people who do the work of contextualizing. (p.150)
Does there really exist a causal mechanism known only to the sociologist that would give the history of a technological project the necessity that seems so cruelly lacking? No, the actors offer each other a version of their own necessities, and from this they deduce the strategies they ascribe to each other. (p.163)

The actors create both their society and their sociology, their language and their metalanguage. (p.167)
There are as many theories of action as there are actors. (p.167)
To the multiplicity of actors a new multiplicity is now added: that of the efforts made to unify, to simplify, to make coherent the multiplicity of viewpoints, goals, and desires, so as to impose a single theory of action. (pp.167-8)
To study technological projects you have to move from a classical sociology - which has fixed frames of reference - to a relativistic sociology - which has fluctuating referents. (p.169)
With a technological project, interpretations of the project cannot be separated from the project itself, unless the project has become an object. (p.172)

This is the only case where "classical" sociology might apply, and even then only in a relative way.
By multiplying the valorimeters that allow them to measure the tests in store and to prove certain states of power relations, the actors manage to achieve some notion of what they want. By doing their own economics, their own sociology, their own statistics, they do the observer's work ... They make incommensurable frames of reference once again commensurable and translatable. (p.181)

(The neologism "valorimeter" just refers to some way of measuring how well an actor's requirements are being met; examples are passenger flow, cost, and publicity.)
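For concreteness, here is a small hedged sketch of valorimeters as measurement functions: each actor reads the same project state through its own metric, thereby "doing its own economics." The state variables, actors, and numbers are all invented for illustration; Latour of course proposes no such formalization.

```python
# Hypothetical valorimeters: each actor measures the project in its own terms.

from typing import Callable

State = dict[str, float]  # a crude snapshot of the project

valorimeters: dict[str, Callable[[State], float]] = {
    "operator":   lambda s: s["passengers_per_hour"],  # passenger flow
    "treasury":   lambda s: -s["cost_millions"],       # lower cost reads higher
    "politician": lambda s: s["press_mentions"],       # publicity
}

state: State = {"passengers_per_hour": 8000.0,
                "cost_millions": 430.0,
                "press_mentions": 12.0}

# Incommensurable frames of reference become commensurable only in the sense
# that each actor can take its own reading of the shared state.
for actor, measure in valorimeters.items():
    print(f"{actor}: {measure(state):+.1f}")
```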
The interpretations offered by the relativist actors are performatives. They prove themselves by transforming the world in conformity with their perspective on the world. By stabilizing their interpretation, the actors end up creating a world-for-others that strongly resembles an absolute world with fixed reference points. (p.194)

(Performatives are speech acts that actually "perform" what they say, i.e., they cause it to be the case; standard examples are christening and marrying; the term comes from speech act theory.) Latour claims that technologists, in doing their jobs, are actually doing better sociology than classical sociologists.
It is interesting to contrast the view of ANT with the "dead mechanical universe" of classical mechanics; the ANT universe is very much alive, full of actors and their actions, full of all kinds of interactions, that are constantly reconfiguring the network. Hence this is a very non-classical point of view.
Actor-network theory can also be seen as a systematic way to bring out the infrastructure that is usually left out of the "heroic" (or "hagiographic") accounts of scientific and technological achievements that are unfortunately so common. Newton did not act alone in creating the theory of gravitation: he needed observational data from the Astronomer Royal, John Flamsteed; publication support from the Royal Society and its members (most especially Edmund Halley); the geometry of Euclid, the astronomy of Kepler, and the mechanics of Galileo; the rooms, lab, food, etc. at Trinity College; an assistant to work in the lab; the mystical idea of action at a distance; and more, much more (see the book on Newton by Michael White listed on the CSE 275 homepage). The same can be said of any scientific or technological project: a network is needed to support it. Other famous episodes in technology and science for which there exist good actor-network studies, taking a non-heroic view that emphasizes infrastructure, include Edison's invention of the electric light bulb and Pasteur's work on bacteria (the latter studied by Bruno Latour himself).
For what it's worth, here is my own brief outline summary of some of the main ideas of ANT:
1. Follow the actors: attend to what they do, what interests them, and what they believe.
2. The Principle of Symmetry: human and non-human actants are treated alike, as nodes in a socio-technical network.
3. Projects are built and maintained by continuous chains of translation, which recruit actants by appealing to their own values, in their own languages.
4. Technological (and social) determinism is rejected: a project's trajectory depends on the work of contextualizing, not on any autonomous "context" or momentum.
5. The structure of a project (e.g., its division of labor, and even what the project "really was") can only be settled retrospectively, once the project has stabilized.
An important achievement of actor-network theory is that technological and social determinism are impossible if you use its method and language correctly. Of course, ANT has been much criticized, but (in my opinion) much of the criticism has been from people who either didn't understand it, or who rejected it for failure to conform with their own preferred paradigm. The most valid criticisms should come from within this new paradigm. One criticism of ANT from outside is that it dehumanizes humans by treating them equally with non-humans; it is said that a brave new world is coming our way that involves more and more interaction with machines, to the point of our becoming cyborgs, but (they say) we should resist it rather than celebrate it. Another criticism is that ANT fails to provide explanations for the dynamic restructuring of networks. It is also said that ANT fails to take account of the effects that technology can have on those who are not part of the network that produces it, and that it therefore fails to support value judgements on the desirability or undesirability of such effects. ANT is also criticized for its disinclination (or inability) to make contributions to debates about policy for technology and science. Some criticisms of ANT from within can be found in Traduction/Trahison - Notes on ANT by John Law, and in How things (actor-net)work: Classification, magic and the ubiquity of standards by Geoffrey Bowker and Susan Leigh Star, which are discussed in Sections 7.2 and 7.3, respectively, of the CSE 275 class notes (these papers can be fetched online via the readings page of CSE 275).
Another criticism, which seems mostly to come from curmudgeonly physical scientists, and which in fact applies to most work in the sociology of technology and science (abbreviated STS), is that it destroys the credibility of science, by leaving no place for the objective truth that science (allegedly) uncovers. This is discussed in detail in Section 6.2 of the CSE 275 class notes, along with the fact that, by the nature of their work, sociologists of science should deliberately avoid making commitments of this kind.
ANT is part of an area of STS that is often called constructivist, because it focuses on how social systems get constructed by their participants. The developers of ANT (Latour and Callon) have recently declared that ANT is over, but of course it's too late now for them to stop others from using, criticizing, and modifying their ideas.
The general method of looking for what supports a technical or scientific project, instead of telling a heroic tale, is called infrastructural inversion (this term is due to Geoff Bowker). Its converse, which is burying the infrastructure, I call infrastructural submersion. The work of lab technicians, secretaries, nurses, janitors, computer system administrators, etc. is very often subjected to infrastructural submersion, with the effect of creating a very misleading picture of the network involved; this is of course related to the heroic narratives of classical sociology.
Leigh Star has defined boundary objects to be data objects or collections that are used in more than one way by different social groups, and that therefore provide an interface for those groups, translating across their differences. One reason this idea is important is that it provides a model of cooperation that does not require consensus. The notion of translation used here comes from ANT. Boundary objects would seem to be especially relevant for studying many social issues in computer science, and should have interesting applications to many design problems.
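As a concrete (and entirely hypothetical) illustration of cooperation without consensus, consider a single dataset read through three group-specific views; the record fields and the three groups below are my own invented example, not Star's.

```python
# One boundary object, three translations: the groups cooperate through the
# shared records without agreeing on what the records "really mean."

patient_records = [
    {"id": 1, "diagnosis": "fracture", "cost": 1200.0, "zip": "92093"},
    {"id": 2, "diagnosis": "fracture", "cost": 900.0,  "zip": "92101"},
]

def clinical_view(records):
    """Clinicians: diagnoses per patient; costs are irrelevant."""
    return {r["id"]: r["diagnosis"] for r in records}

def billing_view(records):
    """Accountants: total cost; diagnoses are irrelevant."""
    return sum(r["cost"] for r in records)

def epidemiology_view(records):
    """Epidemiologists: case counts by region; identities are irrelevant."""
    counts: dict[str, int] = {}
    for r in records:
        counts[r["zip"]] = counts.get(r["zip"], 0) + 1
    return counts

print(clinical_view(patient_records))      # {1: 'fracture', 2: 'fracture'}
print(billing_view(patient_records))       # 2100.0
print(epidemiology_view(patient_records))  # {'92093': 1, '92101': 1}
```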
It seems that one can see certain errors repeated again and again in information technology businesses. One of these is making an overly ambitious and overly precise business plan, and then trying to stick to it to the bitter end. This is particularly common in startups, which by their nature are often committed to going all out after an ambitious goal. But what we have learned from actor network theory suggests that business plans should avoid being overly committed and precise, and instead should include contingency planning: they should sketch and cost out the scenarios that at that time seem the most plausible, and explicitly budget for replanning at a certain point, where the most plausible scenarios will again be sought. We all know that IPOs are a gamble, and that this gamble usually fails; this empirical fact can be seen as very strong support for the impossibility of making precise predictions about interactions between society and technology.
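As a back-of-the-envelope sketch of the kind of plan suggested here, one might cost out the currently plausible scenarios, weight them by rough probabilities, and reserve an explicit budget for the replanning point; all the figures below are invented.

```python
# Contingency-style plan: plausible scenarios plus an explicit replanning reserve.

scenarios = [  # (label, rough probability, estimated cost in $M)
    ("ship v1 on schedule",        0.3, 5.0),
    ("pivot to enterprise market", 0.4, 7.5),
    ("wind down gracefully",       0.3, 2.0),
]
replanning_budget = 0.5  # $M reserved for re-examining the scenarios later

expected_cost = sum(p * cost for _, p, cost in scenarios) + replanning_budget
print(f"expected cost: ${expected_cost:.2f}M")  # expected cost: $5.60M
```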
Anyone who has worked in the computer industry, and especially in software development, will have seen many instances of the phenomena described by actor network theory, and will also have seen many instances of the kinds of myth and foolishness that it is capable of exposing, including naive optimism, hagiography, and technological determinism. In my opinion, a careful contemplation of actor network theory, including a number of good case studies, would be excellent preparation for high technology managers, and should be required for all engineering students.
Returning to the focus of this class, actor network theory is an excellent way to approach studying the ecological system of users and their machines, as part of determining what kind of interface should be designed and built for this system.
Much more information about the sociology of technology and science, and especially information technology, can be found in the class notes and readings for my course CSE 275.
Shneiderman's text Designing the User Interface (4th edition, with Catherine Plaisant, Addison-Wesley 2005) has an "Afterword" entitled "Society and Individual Impact of User Interfaces," which discusses a number of significant social and ethical issues associated with user interface design. Two major themes are (1) universal usability, making computers more available to disadvantaged users (such as blind users, people living in poverty, and elderly users), and (2) arguing against "animism," i.e., trying to design machines to be like humans. The latter is really an extension of the "agent squabble" that we discussed earlier.
It is very unusual to find something like this in a computer science textbook. Can you imagine it in a book on operating systems, or compilers? Most of this Afterword is interesting, and some of it is inspiring. However, I would like to add two caveats. Shneiderman does not sufficiently recognize the ways in which social and technical issues are intertwined, as immediately suggested by the word "impact" in its title. This is a common error, which can be corrected by the material on actor-network theory earlier in this section of the class notes.
As far as hopes and visions go, why not hope for world peace, universal human rights, adequate food, shelter and clothing for all on the planet, for happiness and a balanced state of mind for all? Of course, such hopes serve to emphasize the fact that these are primarily social issues, in which user interface design in the narrow sense can have little impact. But if we take user interface design in the broader sense suggested by semiotics, in which almost anything can be seen as an interface, this objection disappears, although we may be left feeling daunted by the magnitude of the tasks that are implied, both theoretical (how can we develop semiotics further in ways that will make it more useful for such goals?) and practical (how can we make some real progress towards such goals?). At present we can only make some small steps in these directions, but I agree with Shneiderman that we should keep in mind large scale goals and visions as we stumble forward. One thing we can do is try to help local civic organizations with web design through class projects; this benefits education as well as the organizations and the users that they serve.
Certainly user interface designers can expect to come face to face with many important moral issues in their work; indeed, I would go so far as to claim that designing "good" interfaces is already a moral issue. One need only think of UID for medical systems, nuclear reactors, and defense systems to see that there are important applications with significant moral dimensions. But even more prosaic applications raise similar issues; e.g., consider the design of web search engines. A semiotic study of some ethical aspects of search engines can be found in The Ethics of Databases, and the course CSE 175: Social and Ethical Aspects of Information Technology goes into such issues in some detail.