CSE 271: User Interface Design: Social and Technical Issues
2. Approaches to Interface Design

A very fundamental question that any course on user interface design ought to address is

"What is design?"
One way to approach this question is to examine the linguistic structure of the word "design." There are two morphemes, "de" and "sign", so that "design" literally refers to a thing that is derived from signs; it is interesting to notice that "design" shares its first two morphemes with the word "designate" ("de" + "sign" + "ate"). Through the sort of semantic evolution to which societies commonly subject their words, "design" as a verb has come to mean creating some configuration of signs, for some particular purpose. So user interface design is about creating configurations of signs, today usually on a computer screen, and also usually for some practical purpose.

Even a quick look at the HTML language can give a sense of what are today considered (for user interface design) the most basic kinds of sign, and the most important ways to derive complex signs from other signs: letters and images are the most basic kinds of sign; and paragraphs, lists, tables, etc. are the most important devices for derivation. The result of designing a webpage in HTML is a complex sign, derived in a carefully described way from a number of specified parts. The same can be seen in other languages that support interface construction, though perhaps less clearly.
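To make this point concrete, here is a minimal HTML fragment (the text and the image filename are invented for illustration) in which basic signs, namely letters and an image, are combined into derived signs by the paragraph, list, and table devices just mentioned:

```html
<!-- Basic signs: character data and an image.
     Derived signs: a paragraph, a list, and a table built from them. -->
<p>A paragraph is a complex sign derived from letters.</p>
<ul>
  <li>A list derives one sign from an ordered collection of item signs.</li>
  <li><img src="logo.png" alt="an image is itself a basic sign"></li>
</ul>
<table>
  <tr><th>Device</th><th>Derives a sign from</th></tr>
  <tr><td>paragraph</td><td>letters and words</td></tr>
  <tr><td>table</td><td>rows of cells, themselves derived signs</td></tr>
</table>
```

The rendered page is then a single complex sign, whose derivation from its parts is exactly what the markup describes.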

Thus, our subject is in a very fundamental way involved with signs, and in particular, with how to construct complex signs out of basic signs, in order to effectively accomplish some practical aim. There is much in common with design in other media, such as architecture, magazine layout, and watercolor painting, including artistic dimensions such as good taste, creativity, flexibility, insight, etc., but there are also some significant differences; in particular, the structuring devices and the goals are usually much more explicit, and much more limited. This is an ideal situation for applying semiotics, and because semiotics is at an early stage of development, it is also an ideal situation for supporting its further development.

Shneiderman's text presents an orthodox view of user interface design as strongly influenced by experimental psychology. This is historically correct, but while experimental psychology may have been a good place to start, its limitations have become more and more apparent. The following somewhat naive description appears on page 28:

The reductionist scientific method has this basic outline: Materials and methods must be tested by pilot experiments, and results must be validated by replication in variant situations.
Section 9 of the class notes includes some discussion of how science gets done in practice.

Although Chapter 1 of Shneiderman might seem a bit light, it has some important content; in particular, you should be aware of all the resources listed at the end of the chapter, including the website associated with the book, and the several lists of citations relevant to various purposes. The next-to-last paragraph of section 1.5.1 (page 20) gives a good example of how the social issues interact with even ergonomic issues. The remarks about the problems with measuring intelligence on page 21, and about video games for women on page 22, are provocative, although it would have been nice to have more detail. The emphasis on diversity in Section 1.5 is very nice; more on this appears in Section 2.4. One delightful feature of this book is its human, humane, and humanistic tone, of which diversity is but one example. Shneiderman really believes that good user interface design can and should help improve the lot of humanity, e.g., by providing users with various disabilities better access to a wide variety of resources.

Chapter 2 has some interesting material. The critical remarks about reductionist theories at the beginning of section 2.2.5 are good; in my experience low-level measurements and theories are rarely of much real use. A good example is "Fitts's law," which is discussed in section 9.3.5 (page 325); all of the most important things fall outside its scope, such as the fact that switching between keyboard and mouse can slow users down a great deal, and the fact that being confused can consume a great deal of user "think time." Please read section 2.2.5 (page 60) several times. Although compact, the first paragraph contains a valuable critique of the rest of section 2.2; however, the assertions about reusing estimates of widget "cognitive complexity" seem overly optimistic, because such reuse would require ignoring the context of widget use, which is a major determinant of even such crude measures as response time.
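For readers who have not seen it, Fitts's law in its common Shannon formulation predicts movement time as MT = a + b log2(D/W + 1), where D is the distance to a target and W its width; the constants a and b are device- and user-dependent and must be fitted empirically (the values below are invented for illustration). A minimal sketch:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under Fitts's law,
    Shannon formulation: MT = a + b * log2(D/W + 1).
    The constants a and b here are illustrative placeholders;
    real values must be fitted per device and per user."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A far, small target takes longer to acquire than a near, large one:
near_large = fitts_mt(distance=100, width=50)
far_small = fitts_mt(distance=800, width=10)
```

Notice what the formula cannot express: it says nothing about switching between keyboard and mouse, and nothing about time lost to confusion, which is exactly the critique above.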

Most engineers do not know much about what psychologists really do, so here's a little background to help with understanding the first part of Shneiderman's chapter 2. In sections 2.2 and 2.3, he is talking (to a great extent) about experimental cognitive psychology. Cognitive psychologists make theories of cognition, i.e., thinking, and in general, experimental psychologists are concerned with discovering what kinds of theory work well for describing the results of some class of experiments; thus experimental psychologists devise experiments to test theories. To run a test, you need something very specific, not just general features of a high level theory; therefore it is not possible to actually prove a theory, but only to fail to refute it (see also the quote from Ayer on page 51 of Shneiderman). A good list of variables that might be measured in psychology experiments is given on page 15. Only after a theory has been validated for a particular class of real phenomena can it be used to make predictions that are relevant to real design issues. As a rule, industrial or applied psychologists are concerned with actual applications, while academic psychologists are more concerned with the theories themselves. The split between these two communities is reflected in the structure of Shneiderman's book, for example, the split between the last two sections of each chapter, one for practitioners (who are usually not psychologists) and the other for researchers (who increasingly are also not psychologists).

Each of the four levels of modeling described on page 54 is "reductionist" and "cognitivist," in that this schema (implicitly) assumes that users have definite "mental models" that can be explicitly described, e.g., by rule-based systems (also called production systems), and also that the semantics of commands and displays can be explicitly described, e.g., by some sort of logical formulae. However, much of "what users know" cannot be explicitly represented, because it is implicit or "tacit knowledge" that is embedded in particular contexts of use. For example, there are some phone numbers that I only know by their pattern of button pushes; hence this information can only be available in the physical presence of a standard telephone keypad (or an equivalent). The same holds for knowledge about how to play various sports, which is notoriously difficult to convey in words. Indeed, it holds for almost everything of significant human value, including satisfaction, artistic enjoyment, emotions of all kinds, and even intentions. (The term "tacit knowledge" was introduced by the philosopher Michael Polanyi as part of his criticism of logical positivism, which is the most extreme form of reductionism to have ever achieved much popularity; logical positivism is also a basis for modern cognitivism, especially in the extreme form embraced by early AI research, which I have dubbed "logical cognitivism.")

In my opinion, to assume that users can be described in the same style as machines is a strong form of reductionism that is demeaning to humans; moreover (and perhaps more convincingly), it does not work very well in practice, except sometimes at the lowest level of mechanical operations, such as typing in material already written on paper; but as already noted, models at such low "keystroke" levels have relatively little practical value.

Grammatical descriptions like those in section 2.2.4 can be useful for exposing certain kinds of inconsistency, but they are more applicable to command line interfaces than to GUIs, since they cannot capture the graphical metaphors that make such interfaces appealing (e.g., the ever popular "desktop" metaphor), nor in fact can they capture any significant kind of context dependency. Similar objections apply (though with less force) to Shneiderman's own Object-Action Interface (OAI) model, described in section 2.3. Do you think that interfaces can really be expressed as simple trees? It seems to me that, on the contrary, organizing metaphors (like the desktop) cut across such hierarchies, linking objects at quite different levels in complex ways with concepts that are not explicitly represented at all (such as "opening" something). Moreover, "hyperlinks" as in HTML allow arbitrary cyclic graph structures.
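The point about hyperlinks can be made precise: in a tree there is exactly one path from the root to any node, whereas even a two-page site whose pages link to each other already contains a cycle, so no tree can describe it. A minimal sketch (the page names are invented):

```python
# Adjacency list for a tiny hypothetical site: each page lists the
# pages it links to. "home" links to "about", and "about" links back
# to "home"; this cycle cannot occur in any tree structure.
links = {
    "home": ["about", "contact"],
    "about": ["home"],
    "contact": [],
}

def has_cycle(graph):
    """Detect a cycle in a directed graph by depth-first search:
    a cycle exists exactly when DFS meets a node that is still
    on the current search path (a back edge)."""
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True          # back edge found: cycle
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(n) for n in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(n) for n in graph)
```

Running `has_cycle(links)` reports the cycle, which is why general graph models, rather than trees, are needed for hypertext.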

Representing user intentions as simple trees seems even more hopeless, and the idea of representing plans by trees (or by even more complex formal structures) has been demolished by Lucy Suchman in her book Plans and Situated Actions, as well as by many other authors in many other places; even for the relatively clear situation of plans for computation, which we normally call "programs," no one today would want to try to get by with a pure branching structure (without loops). (See figure 2.2 on page 62.) However, even such an incomplete model as OAI is better than no model at all, because it does highlight a number of issues that must be addressed, and it also provides some support for solving some of them.

Slogans like "Recognize diversity" and "Know thy user" are worth repeating, but are much harder to apply in practice than you might think at first. The list of needs for the 3 classes of user on pages 68-69 is useful, and the list of advantages and disadvantages of styles of interaction (pages 71-74) is really useful, especially the summary on page 72. However, I would add to the disadvantages of direct manipulation that it is hard to use for navigating and manipulating large homogeneous structures, such as proof trees. It is also worth remarking that, contrary to page 70, not every designer would agree that "the set of tasks must be determined before design can begin" - the modern alternative called iterative design involves (among other things) building a series of prototypes, of which the early ones can be very rough, and then successively refining them into a releasable system, by constantly taking account of user feedback. In my experience, attempting an exhaustive task analysis before beginning design is a very bad idea; it will consume a great deal of time, the result is likely to be very inaccurate, and quite possibly will be out of date before it can be deployed.

The "Eight Golden Rules" (pages 74-75) are good reminders, but again, such lists are not as useful as you might hope, because they are far easier to apply with hindsight than at design time. The discussion of errors beginning on page 76 is useful, and the observation that errors are very common is certainly very important: designers should definitely take account of the fact that users will surely make errors, and they should try to find out what errors are most common, and provide some way to deal with them. The 5 points on page 80 are again good reminders, but as Shneiderman himself points out, they are too vague; on the other hand the bullets on pages 80-81 are very project-specific (and the last bullet has more radical implications than Shneiderman perhaps realizes). The idea here is that you should make up one such specific list for each specific project. This is a standard way to achieve the consistency that is called for in the first "Golden Rule."

Section 2.9 of Shneiderman is a salutary flame about agents, emphasizing the problem of responsibility. I think Shneiderman's position is argued much more effectively here than in the Scientific American interview; we will also read a piece by Lanier on this same topic, which has given rise to ongoing debates in the HCI community.

Box 2.2 on page 84, comparing humans and machines, is fun. I would like to emphasize the following sentence from page 85, which occurs in the context of air traffic control, but which applies equally well to many other domains, such as nuclear reactor control, and the so-called Star Wars missile defense system:

In short, real-world situations are so complex that it is impossible to anticipate and program for every contingency; human judgement and values are necessary in the decision-making process.
Much more along these lines can be found in the important book Normal Accidents, by Charles Perrow. Another sentence on page 85 says: "The entire system must be designed and tested, not only for normal situations, but also for as wide a range of anomalous situations as can be anticipated." This also is related to the theory of "normal accidents" popularized by Perrow in his book of the same title. A related sentence from page 89 says: "Extensive testing and iterative refinement are necessary parts of every development project". It's amazing how often engineers think they can get things right the first time - and then end up wasting a lot of time as a result.

"User participation" can be difficult, frustrating and even misleading, but it is usually essential. In particular, various prototyping techniques can be used to get user feedback very early in the design or even requirements phase of a project. This is one place where social issues can enter in an obvious and significant way. An important point is that asking users why they do something often does not work: much of user knowledge is tacit rather than explicit; moreover, users often have not previously thought about issues that designers want to raise, and even if they have, they may have a biased, incomplete, or even mistaken point of view.

Three of the most important general lessons from chapters 1 and 2 may be: that good design is much too difficult to be subsumed by any simple guidelines or theories; that design is at least as much an art as a science; and that design is much more important than is often acknowledged. Shneiderman's summary for practitioners (section 2.10, page 89) is very good, but his agenda for researchers (section 2.11, page 90) calls for projects that seem to have an overly reductionist flavor; moreover, his phrasing appears to indicate that he somewhat doubts the value of the cognitivist research that is reported in his section 2.2.


Lanier's Agents of Alienation is an unusual piece. In my opinion, Lanier's rhetoric is excessive (though perhaps a refreshing contrast to Shneiderman's rather flat academic style), and he glosses over some important points; but let's seek out what is interesting in it. For now, I would highlight two main points: (1) Lanier is against agents, and does not accept that they "are inevitable" (a position he attributes to Negroponte); (2) Lanier goes beyond purely technical issues, and raises basic ethical issues - in this respect, his argument against agents is quite different from Shneiderman's. However, he does connect with issues in user interface design, and it is interesting to notice that his list of agents being promoted in 1995 is now a list of notorious failures. See also Direct Manipulation vs. Interface Agents, by Ben Shneiderman and Pattie Maes, Interactions, 4, no. 6, pp 42-61, 1997, a digest of a debate held at CHI 97, which attracted a lot of attention in the HCI community.


The piece Art and the Zen of Web Sites by Tony Karp takes what now seems like a naive view of user interface design, as well as an overly enthusiastic tone, and its views of art and Zen are so naive as to seem derogatory. It is interesting to notice how much the social context of Karp's piece has changed since 1997; originally, it seemed rather flaky but somewhat cute, whereas now it seems a bit offensive, and very out of date. This is a sign of how very fast our field is changing! Karp was a professional web designer, and his advice was most appropriate for commercial sites. Commercial sites usually cater to lowest-common-denominator browsers, but academic sites do not have this restriction, so some of his guidelines are actually opposite to what should (usually) be recommended for academic sites; also what now counts as a lowest-common-denominator browser is quite different than it was then. If you do read this document, do so with a critical mind; do not simply accept it without thought - the same of course applies to all documents! (By the way, this document has nothing to do with Zen ("Chan" in Chinese) Buddhism; the title was inspired by the book Zen and the Art of Motorcycle Maintenance by Pirsig, which does have something to do with Zen, and its title was in turn inspired by Zen and the Art of Archery by Herrigel, which really is about Zen.)

A much better treatment of similar material can be found in the Yale Style Manual; indeed, its detail and precision make it perhaps the best general style manual available on the web.


Maintained by Joseph Goguen
© 2000, 2001 Joseph Goguen, all rights reserved.
Last modified: Mon Jun 11 16:22:47 PDT 2001