CSE 171: User Interface Design: Social and Technical Issues
Notes for the Second Lecture
Notes for Class Discussion

"What is design?" is a fundamental question that any course on user interface design ought to address. Looking at the linguistic structure of the word "design," we get "de"+"sign", which literally refers to something derived from signs; it shares the first two morphemes of its linguistic structure with the word "designate" ("de"+"sign"+"ate"). Through the sort of semantic evolution to which societies commonly subject their words, "design" as a verb has come to mean creating some configuration of signs, for some particular purpose. So user interface design is about creating configurations of signs, today usually on a computer screen, and again, usually for some practical purpose. Even a quick look at the HTML language can give a sense of what are today considered (for user interface design) the most basic kinds of sign; and the most important ways to configure them: letters and images are the most basic kinds of sign, and paragraphs, lists, tables, etc. are the most important structuring devices (see the crucial HTML tags page). The result of designing a webpage in HTML is a complex sign, configured in a carefully described way out of certain specifically indicated parts.

Thus, our subject is in a very fundamental way involved with signs, and in particular, with how to construct complex signs from basic signs in order to accomplish some practical aim effectively. There is much in common with design in other media, such as architecture, magazine layout, and watercolor painting, including artistic dimensions such as good taste, creativity, flexibility, insight, etc., but there are also significant differences: in user interface design, the structuring devices and goals are much more explicit, and much more limited. This makes it an ideal situation for applying semiotics.

Most engineers do not know much about what psychologists really do, so here's a little background to help with understanding the first part of Shneiderman's chapter 2. In sections 2.2 and 2.3, he is talking (in part) about experimental cognitive psychology. Cognitive psychologists make theories of thinking, and in particular they want to know which kinds of theory work well. Only after a theory has been validated for a particular class of applications can it be used to make predictions that are relevant to real design issues. So experimental psychologists devise experiments to test theories. To run a test, you need something very specific, not just the general features of a high-level theory; however, the point of the experiment is usually to determine whether a particular kind of theory works for a certain class of experiments. As a rule, industrial psychologists are more concerned with actual applications, and academic psychologists are more concerned with the theories themselves. The split between these two communities is reflected in the structure of Shneiderman's book, for example, in the split between the last two sections of each chapter.


Notes on Readings

Shneiderman presents an orthodox view of user interface design as strongly influenced by experimental psychology. This is historically correct, but while experimental psychology may have been a good place to start, many limitations have since been discovered. The first chapter, although perhaps a bit boring, has some important content; in particular, you should be aware of all the resources listed at the end of the chapter, including the associated website and the many citations. The next to last paragraph of section 1.5.1 (page 20) gives a good example of how the social can impact even ergonomic issues. The remarks about intelligence on page 21 and video games for women on page 22 are provocative, but (I think) without enough detail. The emphasis on diversity is nice.

Chapter 2 contains some interesting material. The critical remarks about reductionist theories at the beginning of section 2.2.5 are good; in my experience, low-level measurements and theories are usually not very useful. A good example is "Fitts' law," discussed in section 9.3.5 (page 325); all of the most important things fall outside its scope, such as the fact that switching between keyboard and mouse can slow users down a great deal, and the fact that being confused can consume a great deal of user "think time." Please read section 2.2.5 (page 60) several times. Although compact, it contains a valuable critique of the rest of section 2.2; however, its assertion about reusing estimates of widget "cognitive complexity" seems overly optimistic, because such reuse would require ignoring the context of widget use, which is a major determinant of even such crude measures as response time.
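For reference, one common ("Shannon") formulation of Fitts' law predicts the time T to point at a target of width W at distance D as

    T = a + b log2(D/W + 1)

where a and b are constants fitted empirically for a given device and user population. Notice how little this formula covers: it says nothing about switching between devices, and nothing about what the user is thinking.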

The four levels of modeling described on page 54 are "cognitivist," in that this schema (implicitly) assumes that users have definite "mental models" that can be explicitly described, e.g., by rule-based systems (also called production systems), and also that the semantics of commands and displays can be explicitly described, e.g., by some sort of logical formulae. However, much of "what users know" cannot be explicitly represented, because it is implicit or "tacit" knowledge, embedded in particular contexts of use. For example, there are some phone numbers that I know only by their pattern of button pushes; this information is available to me only in the physical presence of a standard telephone keypad (or an equivalent). The same holds for knowledge of how to play various sports, which is notoriously difficult to convey in words. Indeed, it holds for almost everything of significant human value, including satisfaction, artistic enjoyment, emotions of all kinds, and even intentions. (The term "tacit knowledge" was introduced by the philosopher Michael Polanyi as part of his criticism of logical positivism, the most extreme form of reductionism ever to achieve much popularity, and the basis for modern cognitivism, especially in the extreme form embraced by early AI research, which I have dubbed "logical cognitivism.")

In my opinion, to assume that users can be described in the same style as machines is a strong form of reductionism that is demeaning to humans; moreover (and perhaps most convincingly), it does not work in practice, except sometimes at the lowest level of mechanical operations, such as typing in material already written on paper; but as already noted, models at such low "keystroke" levels have little practical value.

Grammatical descriptions like those in section 2.2.4 can be useful for exposing certain kinds of inconsistency, but they are more applicable to command-line interfaces than to GUIs, since they cannot capture the graphical metaphors that make such interfaces appealing (e.g., the ever popular "desktop" metaphor), nor in fact can they capture any significant kind of context dependency. Similar objections apply (though with less force) to Shneiderman's own Object-Action Interface (OAI) model, described in section 2.3. Do you think that interfaces can really be expressed as simple trees? It seems to me that, on the contrary, organizing metaphors (like the desktop) cut across such hierarchies, linking objects at quite different levels in complex ways with concepts that are not explicitly represented at all (such as "opening" something). Representing user intentions as simple trees is even more hopeless, and the idea of representing plans by trees (or by even more complex formal structures) has been demolished by Lucy Suchman in her book Plans and Situated Actions, as well as by many other authors in many other places; even for the relatively clear-cut situation of plans for computation, which we normally call "programs," no one today would want to try to get by with a pure branching structure (without loops). (See figure 2.2 on page 62.) However, even such an incomplete model as OAI is better than no model at all, because it does after all highlight a number of issues that must be addressed, and it also provides partial support for solving some of them.
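To make the objection concrete, here is a hypothetical OAI-style hierarchy for a toy file manager, sketched as an HTML nested list; notice that the metaphorical action of "opening" applies to folders, documents, and the trash alike, and so appears nowhere in the tree itself:

    <!-- a made-up object hierarchy; "opening" cuts across every level -->
    <ul>
      <li>Desktop
        <ul>
          <li>Folder: Projects
            <ul>
              <li>Document: notes.html</li>
            </ul>
          </li>
          <li>Trash</li>
        </ul>
      </li>
    </ul>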

Clichés like "Recognize diversity" and "Know thy user" are worth repeating, but they are much harder to apply in practice than you might at first think. The list of needs for the 3 classes of user on pages 68-69 is useful, and the list of advantages and disadvantages of styles of interaction (pages 71-74) is really useful. However, I would add to the disadvantages of direct manipulation that it is hard to use for navigating and manipulating large homogeneous structures, such as proof trees. It is also worth remarking that, contrary to page 70, not every designer would agree that "the set of tasks must be determined before design can begin" - the modern alternative called iterative design (among other things) involves building a series of prototypes, of which the early ones can be very rough, and then successively refining them into a releasable system, constantly taking account of user feedback. In my experience, attempting an exhaustive task analysis before beginning design is a very bad idea: it will consume a great deal of time, the result is likely to be very inaccurate, and it may well be out of date by the time it is finished.

The "Eight Golden Rules" (pages 74-75) are good reminders, but again, such lists are not as useful as you might hope, because they are far easier to apply with hindsight than at design time. The discussion of errors beginning on page 76 is useful, and the observation that errors are very common is certainly very important: designers should definitely take account of the fact that users will surely make errors, and should try to find out what errors are most common, and find some way to deal with them. The 5 points on page 80 are also good reminders, but as Shneiderman points out, they are too vague; on the other hand the bullets on pages 80-81 are too project-specific (and the last bullet has more radical implications than Shneiderman perhaps realizes). The idea here is that you should make up such one such specific list for each specific project.

Box 2.2 on page 84, comparing humans and machines, is sort of fun. I would agree with, and somewhat emphasize, the following sentence on page 85, which occurs in the context of air traffic control, but applies equally well to many other domains, including nuclear reactor control and the so-called Star Wars missile defense system:

In short, real-world situations are so complex that it is impossible to anticipate and program for every contingency; human judgement and values are necessary in the decision-making process.
This is followed by a salutary flame about agents, emphasizing the problem of responsibility. Shneiderman's position is much better argued here than in the Scientific American interview; we will also read a piece by Lanier on this same topic.

Another sentence from page 85 says: "The entire system must be designed and tested, not only for normal situations, but also for as wide a range of anomalous situations as can be anticipated." This is related to the theory of "normal accidents" popularized by Charles Perrow in his book of the same title. A related sentence from page 89 says: "Extensive testing and iterative refinement are necessary parts of every development project." It's amazing how often engineers think they can get things right the first time - and then end up wasting a lot of time as a result.

"User participation" can be difficult, frustrating and even misleading, but is usually essential. In particular, various prototyping techniques can be used to get user feedback very early in the design or even requirements phase of a project. This is one place where social issues can enter in an obvious and significant way. An important point is that asking users why they do something often does not work: much of user knowledge is tacit rather than explicit; moreover, users often have not previously thought about issues that designers want to raise, and even if they have, they may have a biased, incomplete, or even mistaken point of view.

The most important general lessons from these two chapters may be: that good design is much too difficult to be subsumed by any simple guidelines or theories; that it is at least as much an art as a science; and that it is much more important than is often acknowledged. Shneiderman's summary for practitioners (section 2.10, page 89) is good, but his agenda for researchers (section 2.11, page 90) seems to have an overly reductionist flavor.

The piece Art and the Zen of Web Sites by Tony Karp takes what now seems like a naive view of user interface design, as well as an overly enthusiastic tone, and views of art and Zen that are so naive as to seem derogatory; I'm sorry I assigned it. A much better treatment of similar material can be found in the Yale Style Manual; indeed, its detail and precision make it perhaps the best general style manual available. It is interesting to notice how much the social context of Karp's piece has changed in two years: originally it seemed a little flaky but maybe cute, whereas now it seems a bit offensive. This is another sign of how very fast our field is changing! Karp was a professional web designer, and his advice is most appropriate for commercial sites. Some of his guidelines are actually the opposite of what should (usually) be said for academic sites, and some of his information is dated. Commercial sites usually cater to lowest-common-denominator browsers, but academic sites do not have this restriction. Read this document with a critical mind; do not simply accept it without thought. (By the way, it has nothing to do with Zen ("Chan" in Chinese) Buddhism; the title is inspired by the book Zen and the Art of Motorcycle Maintenance by Pirsig, which does have something to do with Zen, and whose title in turn was inspired by Zen in the Art of Archery by Herrigel, which really is about Zen.)


Version of 24 January 2000.