CSE 171: User Interface Design: Social and Technical Issues
Notes for the Ninth Meeting


Notes on Actor-Network Theory

Actor-network theory (abbreviated ANT), which was initiated by Bruno Latour and Michel Callon in France, is an important recent approach to the sociology of science and technology. The sociological angle is expressed by one of Latour's favorite slogans, "follow the actors", which means that the sociologist should not only look at what the actors do, but should also be interested in what interests them, and (more doubtfully) even believe what they believe. Actor-network theory focuses attention on the socio-technical networks that engineers and scientists create to get their projects done, emphasizing that no one acts alone (or if they do, then no one notices, so it doesn't matter). In contrast to most other work in sociology, actor-network theory does not distinguish (very much) between "human" and "non-human" actors. In my opinion, this is more of a rhetorical, or even dramatic, device than a theoretical axiom, but it certainly serves to bring forward the important roles played by resources of all kinds, including equipment, data, money, publicity, and power, and it is a useful counterbalance to approaches that concentrate on just one of the two. The neologism actant is sometimes used as a neutral way to refer to both human and non-human actors, avoiding the strong human bias in the word "actor."

Latour's view that people and machines should be treated as equals is called symmetry, and is applied in ways that may be surprising. For example, he says we need to negotiate with machines just as with people, we need to recruit them as allies, to authorize and notify them, and to mobilize and delegate them; he claims that this kind of language should be taken literally, not metaphorically. Of course, this is the opposite of what most philosophers (and ordinary people) think. Perhaps these terms seem strange because they are so anthropomorphic. Personally, I consider these terms mainly as suggestive metaphors. What do you think?

Latour's book Aramis is the sad story of a project to build a highly innovative public transport system in the suburbs of Paris; the story is sad because the project fails, and the Aramis system is left without any friends. In this book, Latour claims that only in successful projects can you figure out what actually happened; this is perhaps a bit shocking. Does objectivity really only exist for successful projects? This strange viewpoint comes from his requirement that you (in the role of sociologist) should take the viewpoint of the actors, plus the observation that the actors will not agree among themselves about what happened when the project failed, due to the dissolution of the alliances recruited to create the project in the first place.

Another piece of Latour's eccentric terminology is continuous chains of translation, which refers to the ongoing efforts to keep actors involved with the project, by "translating" the project into their own languages and values. This is part of his effort to overcome technological determinism, which is the (false!) theory that technology is an autonomous force that directly changes society. For example, the very common phrase "social impact" embodies this false viewpoint. (To say that technological determinism is false is not to deny that technology has social effects - instead, it is to deny that one can ignore the social context of technology.) Latour sometimes refers to (instances of) technological determinism as "heroic narratives of technological innovation," since particular examples (e.g., newspaper and magazine articles on the history of technology) are often framed in such terms. In case you are doubtful that technological determinism can be a problem, here are some examples. Probably we've all heard the aphorism "If you build a better mousetrap, then the world will beat a path to your door." A while ago in the local paper, I saw the sentences "Cloning is inevitable once it is possible" and "Fusion power just doesn't have the impetus to succeed." These articles were written as if the projects involved had nothing to do with their context of people and other things, but had an unstoppable momentum of their own.

The word mediate is sometimes used for the role of intermediate actants in these chains of translation; this terminology provides a nice way to avoid the deterministic bias of the more usual ways of speaking of the role of (for example) a machine, a paper record, or a technology, in some project or part of a project.

In Aramis, Latour says (pp.99, 101):

The only way to increase a project's reality is to compromise, to accept sociotechnological compromises.

The pertinent question is not whether it's a matter of technology or society, but only what is the best sociotechnological compromise.

These quotations not only deny the separability of the social and the technical (even munging them into a single word), but they also make the same point as mentioned above, about the necessity for translations. Once all these translations, or recruitments, succeed, the technology "disappears", i.e., it becomes "transparent" and can be taken for granted. But if the translations fail to "interest" the actors enough, then the actors will go their own ways again, each with a different view of what the project is (or was) about.
That's why ... it [i.e., the project] can never be fixed once and for all, for it varies according to the state of the alliances. (p.106)

... each element ... can become either an autonomous element, or everything, or nothing, either the component or the recognizable part of a whole. (p.107)

On page 108 of Aramis, Latour argues that the "division of labor" into subprojects (and other aspects of projects) can only be made after a project has succeeded (I called this the retrospective hypothesis in Requirements Engineering as the Reconciliation of Technical and Social Issues). This may sound like a radical view, but it is what you see in real projects, and quotes from Latour's interviews with Aramis project participants, as well as my own experience with other projects, back this up empirically. Pages 118 to 120 contrast VAL (a different French public transportation project that actually succeeded) with Aramis, arguing that VAL can be described "heroically" only because it succeeded. More significantly, Latour also argues that VAL succeeded because it continued to compromise, whereas Aramis failed because it did not continue to compromise.
The more a technological project progresses, the more the role of technology decreases, in relative terms. (p.126)

To study Aramis after 1981, we have to add to the filaments of its network a small number of people representing other interests and other goals: elected officials, Budget Office authorities, economists, evaluators, ... (p.134)

A single context can bring about contrary effects. Hence the idiocy of the notion of "preestablished context." The people are missing; the work of contextualization is missing. The context is not the spirit of the times, which would penetrate all things equally. (p.137)

In fact, the trajectory of a project depends not on the context but on the people who do the work of contextualizing. (p.150)

In particular, Latour denies that sociology can ever attain a viewpoint that is "objective," above and beyond the viewpoints of the participants, and he even denies that there can be any "metalanguage" in which to express such a viewpoint. This is a very different viewpoint from that of classical sociology, but it is in full agreement with ethnomethodology.
Does there really exist a causal mechanism known only to the sociologist that would give the history of a technological project the necessity that seems so cruelly lacking? No, the actors offer each other a version of their own necessities, and from this they deduce the strategies they ascribe to each other. (p.163)

The actors create both their society and their sociology, their language and their metalanguage. (p.167)

There are as many theories of action as there are actors. (p.167)

To the multiplicity of actors a new multiplicity is now added: that of the efforts made to unify, to simplify, to make coherent the multiplicity of viewpoints, goals, and desires, so as to impose a single theory of action. (p.167-8)

To study technological projects you have to move from a classical sociology - which has fixed frames of reference - to a relativistic sociology - which has fluctuating referents. (p.169)

With a technological project, interpretations of the project cannot be separated from the project itself, unless the project has become an object. (p.172)
This is the only case where "classical" sociology might apply, and even then only in a relative way.
By multiplying the valorimeters that allow them to measure the tests in store and to prove certain states of power relations, the actors manage to achieve some notion of what they want. By doing their own economics, their own sociology, their own statistics, they do the observer's work ... They make incommensurable frames of reference once again commensurable and translatable. (p.181)
(The neologism "valorimeter" just refers to some way of measuring how well an actor's requirements are being met; examples are passenger flow, cost, publicity, etc.)
The interpretations offered by the relativist actors are performatives. They prove themselves by transforming the world in conformity with their perspective on the world. By stabilizing their interpretation, the actors end up creating a world-for-others that strongly resembles an absolute world with fixed reference points. (p.194)
("Performatives" are speech acts that actually "perform" what they say, i.e., they cause it to be the case; standard examples are christening and marrying; this term comes from speech act theory.) Latour claims that technologists, in doing their jobs, are actually doing better sociology than classical sociologists.

It is interesting to contrast the view of ANT with the "dead mechanical universe" of classical mechanics; the ANT universe is very much alive, full of actors and their actions, full of all kinds of interactions, that are constantly reconfiguring the network. Hence this is a very non-classical point of view.

Actor-network theory can also be seen as a systematic way to bring out the infrastructure that is usually left out of the "heroic" (or "hagiographic") accounts of scientific and technological achievements. Newton did not really act alone in creating the theory of gravitation: he needed observational data from the Astronomer Royal, John Flamsteed, he needed publication support from the Royal Society and its members (most especially Edmund Halley), he needed the geometry of Euclid, the astronomy of Kepler, the mechanics of Galileo, the rooms, lab, food, etc. at Trinity College, an assistant to work in the lab, the mystical idea of action at a distance, and more, much more (see the book on Newton by Michael White listed on the CSE 275 homepage). The same can be said of any scientific or technological project: a network is needed to support it. Other famous heroic narratives in technology and science for which good actor-network studies exist, taking a non-heroic view that emphasizes infrastructure, include Edison's invention of the electric light bulb and Pasteur's work on bacteria.

For what it's worth, here is my own brief outline summary of some of the main ideas of ANT:

  1. There is an emphasis on networks and links, as opposed to heroic individuals.
  2. The nodes in these networks, called actants, include not just humans, but also non-humans, such as physical objects; they all do some kind of work to maintain the integrity of the network.
  3. Groups (and individuals) in general have different value systems, and translation, or compromise, among these systems is necessary for a network to succeed; this is the work that is done along the links in the network. Socio-technical compromise is the work done to bring the various technical and social nodes into alignment, so that a project can go forward.
  4. The structure of a project can only be seen clearly when these translations (and hence the project) have been successful; hence the values, and even the parts and the structure, of a failed project are not in general well defined.
  5. The human actors in a project are in a sense sociologists, since they must do acts of interpretation, which in effect are theories of the project; this work should be taken very seriously by sociologists of technology.
It should not be thought that applying ANT is just drawing a graph with actants at the nodes: one should also explore the work that the actants do, the nature of the relations (which are often interesting translations), the possibilities for multiplicity and dynamic adaptation, and so on.
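As a toy illustration of the outline above (my own sketch, not anything from the ANT literature: the class names, the interests, and the "holds together" test are all illustrative inventions), one can model a network as actants linked by translations, where a translation succeeds only if it really speaks to the target actant's interests:

```python
# Toy model of an actor-network: actants (human and non-human) as nodes,
# translations (work that aligns interests) as labeled links.

class Actant:
    def __init__(self, name, interests):
        self.name = name
        self.interests = set(interests)  # what this actant values

class Translation:
    """A link: the project re-expressed in an actant's own terms."""
    def __init__(self, source, target, aligned_interest):
        self.source, self.target = source, target
        self.aligned_interest = aligned_interest

class Network:
    def __init__(self):
        self.actants, self.links = [], []

    def enroll(self, actant):
        self.actants.append(actant)

    def translate(self, source, target, interest):
        # A translation only "holds" if it addresses the target's interests.
        if interest in target.interests:
            self.links.append(Translation(source, target, interest))
            return True
        return False  # failed translation: the actant may drift away

    def holds_together(self):
        # Crude stability check: every actant is kept enrolled by some link.
        enrolled = {l.source for l in self.links} | {l.target for l in self.links}
        return all(a in enrolled for a in self.actants)

# A fragment of an Aramis-like project network (details invented).
engineers = Actant("engineers", {"technical novelty"})
budget_office = Actant("Budget Office", {"cost control"})
vehicle = Actant("Aramis vehicle", {"non-material coupling"})

net = Network()
for a in (engineers, budget_office, vehicle):
    net.enroll(a)

net.translate(engineers, vehicle, "non-material coupling")
net.translate(engineers, budget_office, "cost control")
print(net.holds_together())  # True: every actant is held by a translation
```

The point of the sketch is the failure mode: if a `translate` call does not match the target's interests, no link forms, and `holds_together` reports the dissolution that Latour describes when alliances come apart.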

An important achievement of actor-network theory is that technological and social determinism become impossible to express if you use its method and language correctly. Of course, ANT has been much criticized, but (in my opinion) much of the criticism has been from people who either didn't understand it, or who rejected it for failure to conform with their own preferred paradigm. The most valid criticisms should come from within this new paradigm. One criticism is that ANT dehumanizes humans by treating them equally with non-humans; it is said that a brave new world is coming our way that involves more and more interaction with machines, to the point of our becoming cyborgs, but (they say) we should resist it rather than celebrate it. Another criticism is that ANT fails to provide explanations for the dynamic restructuring of networks. It is also said that ANT fails to take account of the effects that technology can have on those who are not part of the network that produces it, and that it therefore fails to support value judgements on the desirability or undesirability of such effects. ANT is also criticized for its disinclination (or inability) to make contributions to debates about policy for technology and science. Some other criticisms can be found in Traduction/Trahison - Notes on ANT by John Law, and in How things (actor-net)work: Classification, magic and the ubiquity of standards by Geoffrey Bowker and Susan Leigh Star, which are discussed in Sections 7.2 and 7.3, respectively, of the CSE 275 class notes (these papers can also be fetched online from the readings page).

Another criticism, which seems mostly to come from curmudgeonly physical scientists, and which in fact applies to most work in the sociology of technology and science (abbreviated STS), is that it destroys the credibility of science, by leaving no place for the objective truth that science (allegedly) uncovers. This is discussed in detail in Section 6.2 of the CSE 275 class notes, along with the fact that by the nature of their work, sociologists of science should deliberately avoid making commitments of this kind.

ANT is part of an area of STS that is often called constructivist, because it focuses on how social systems get constructed by their participants. The developers of ANT (Latour and Callon) have recently declared that ANT is over, but of course it's too late now for them to stop others from using, criticizing, and modifying their ideas.

The general method of looking for what supports a technical or scientific project, instead of telling a heroic tale, is called infrastructural inversion (this term is due to Geoff Bowker). Its converse, which is burying the infrastructure, I call infrastructural submersion. The work of lab technicians, secretaries, nurses, janitors, computer system administrators, etc. is very often subjected to infrastructural submersion, with the effect of creating a very misleading picture of the network involved; this is of course related to the heroic narratives of classical sociology.

Leigh Star has defined boundary objects to be data objects or collections that are used in more than one way by different social groups, and that therefore provide an interface for those groups, translating across their differences. One reason this idea is important is that it provides a model of cooperation that does not require consensus. The notion of translation used here comes from ANT. Boundary objects would seem to be especially relevant for studying many social issues in computer science, and should have interesting applications to many design problems.
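As a small illustration of the boundary-object idea (again my own toy sketch: the record, the field names, and the two "views" are invented for the example, not taken from Star's work), a single shared data object can serve two groups through different interpretations, so that cooperation proceeds without consensus:

```python
# One shared record acting as a boundary object between two social groups.
# Each group reads the same object through its own "view"; neither group
# needs to adopt the other's interpretation for their work to mesh.

patient_record = {
    "id": 42,
    "diagnosis_code": "J45",  # what clinicians care about
    "cost": 310.0,            # what administrators care about
}

def clinician_view(record):
    # The clinicians' translation of the shared object.
    return record["diagnosis_code"]

def admin_view(record):
    # The administrators' translation of the same object.
    return record["cost"]

print(clinician_view(patient_record), admin_view(patient_record))
```

The design point is that the shared object, not a shared understanding, is what coordinates the two groups; the "translation" happens at each group's view of it.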


So What?

It seems that one can see certain errors repeated again and again in information technology businesses. One of these is making an overly ambitious and overly precise business plan, and then trying to stick to it to the bitter end. This is particularly common in startups, which by their nature are often committed to going all out after an ambitious goal. But what we have learned from actor network theory suggests that business plans should avoid being overly committed and precise, and instead should include contingency planning: they should sketch and cost out the scenarios that at that time seem the most plausible, and explicitly budget for replanning at a certain point, where the most plausible scenarios will again be sought. We all know that IPOs are a gamble, and that this gamble usually fails; this empirical fact can be seen as a good argument for the impossibility of making precise predictions about interactions between society and technology.

Anyone who has worked in the computer industry, and especially in software development, will have seen many instances of the phenomena described by actor network theory, and will also have seen many instances of the kinds of myth and foolishness that it is capable of exposing, including naive optimism, hagiography, and technological determinism. In my opinion, a careful contemplation of actor network theory, including a number of good case studies, would be excellent preparation for high technology managers, and should be required for all engineering students.

Much more information about the sociology of technology and science, especially information technology, can be found in the class notes and readings for my course CSE 275.


Media and the Ubiquity of Interfaces

User interface issues appear everywhere. A coffee cup is an interface between the coffee and the user; questions like how thick it should be, what its volume should be, and whether it should have a handle, are all user interface issues. A book can be considered a user interface to its content; note that a book is interactive, because users turn the pages, and can go to any indicated page they want; they also use indices, glossaries, etc., in an interactive manner. Buildings can be seen as providing interfaces to users who want to get to a certain room, e.g. by a directory in the lobby, buttons outside and inside elevators, "EXIT" signs, doorknobs, stairways, and even corridors (you make choices with your body - not your mouse). Returning to the obvious, medical instruments have user interfaces (for doctors, nurses, and even patients) that can have extreme consequences if badly designed. By perhaps stretching a bit, almost anything can be seen as a user interface, having its own issues of design and representation. Certainly this is the way Andersen views his museum.

Of course, all this is quite parallel to what semiotics says about signs, and indeed such issues can be considered a part of semiotics, although the notion of semiotic morphism is often needed in making the translation. The basic idea is to consider an object, such as a cup or a building, as a composite sign. Here are some further concepts that are useful:

Andersen's museum is a multimedia interactive system (and so is any other museum, though in a much more prosaic sense).


Notes on the "Afterword" of Shneiderman

It is very unusual to find something like this in a computer science textbook. Can you imagine it in a book on operating systems, or compilers? Unlike the rest of the book, most of this section is interesting, and some of it is actually inspiring. Moreover, I find myself agreeing with nearly all of it. However, I would like to add two caveats: some material is a bit outdated, for example, plagues 2, 3, and 7; and Shneiderman does not sufficiently recognize the ways in which social and technical issues are intertwined. The latter point should be corrected by the material on actor-network theory given previously in these notes. The fact that the Afterword seems rather disconnected from the rest of the book is explained by the fact that Shneiderman pays little attention to the social dimensions of user interface design in the body of the book.

As far as hopes and visions go, why not hope for world peace, universal human rights, adequate food, shelter and clothing for all on the planet, and happiness and a balanced state of mind for all? Of course, such very high hopes serve to emphasize the fact that these are primarily social issues, in which user interface design in the narrow sense can have little impact. But if we take user interface design in the broader sense suggested by semiotics, in which almost anything can be seen as an interface, this objection disappears, although we may be left feeling daunted by the magnitude of the tasks that are implied, both theoretical (how can we develop semiotics further in ways that will make it more useful for such goals?) and practical (how can we make some real progress towards such goals?). At present we can only make some small steps in these directions, but I agree with Shneiderman that we should keep in mind large scale goals and visions as we stumble forward.

Shneiderman's relate-create-donate cycle seems an excellent model for system development. The fact that this model applies well beyond user interface development is illustrated by the success of Linux and many other open source products, and of course it can also be very valuable in education, as Shneiderman notes.

Certainly user interface designers can expect to come face to face with many important moral issues in their work; indeed, I would go so far as to claim that designing "good" interfaces is already a moral issue. One need only think of UID for medical systems, nuclear reactors, and defense systems to see that there are important applications with moral dimensions. But even more prosaic applications raise similar issues; e.g., consider the design of web search engines. A semiotic study of ethical aspects of search engines can be found in The Ethics of Databases.


Notes on Andersen's Dynamic Logic

I think this is a terrifically interesting piece, but after reading his latest version (as opposed to the one I read two years ago) and thinking about the background that this class has, I think parts of it are a little too difficult for this class. Hence I suggest you read it over lightly, and pause whenever you find something that interests you. We will not follow up on this material.


Last modified: Fri Mar 3 12:29:50 PST 2000