If software designers ever finally give up fashioning cutesy humanoid icons and voices to advise users on how to navigate ever more unwieldy programs, the person to thank will be Ben Shneiderman, head of the Human-Computer Interaction Laboratory at the University of Maryland. Shneiderman, who since 1981 has argued that effective programs allow people to manipulate on-screen objects directly, is on a personal campaign to purge his field of anthropomorphism, which he regards as an affront to human dignity. The mere mention of the fashionable software "agents" that operate independently and anticipate users' needs makes Shneiderman sigh and roll his eyes theatrically. But it takes him only an instant to summon some quotable zingers to express his disdain. It's hard to avoid the impression that Shneiderman relishes his role as an iconoclast.
According to Shneiderman, agents and their cyber-kin, which have been promoted most notably by Massachusetts Institute of Technology professor Pattie Maes, are a new version of the "mimicry game," the long and undistinguished tradition of making devices that look or work like humans. He sees them as descendants of 17th-century dolls that amused courtiers by playing musical instruments and about as likely to improve suffering humanity's lot. Yet Shneiderman's criticisms also have a serious side. He thinks enhancing computers' autonomy raises troubling questions about who will be responsible if machines controlling air traffic or medical equipment, for example, make disastrous errors. (Maes is now taking this fear seriously, he allows.) And he completely rejects the related notion of giving computers "emotions" so that they might attempt to calm a distressed user. "Machines don't have emotions," he declares roundly.
Shneiderman is almost as dismissive of efforts at M.I.T. to create a humanoid robot called Cog based on biological design principles [see "Here's Looking at You," News and Analysis, January]. The plan is a "dangerous" distraction, he announces, adding a little too casually that it might lead to "better animatronic dolls for Disney World or better crash-test dummies."
James A. Landay, a computer scientist at the University of California at Berkeley, says that Shneiderman's opinions on agents and autonomous software in general have forced researchers to pay attention to hard questions about accountability for machine actions. And Terry Winograd, a prominent researcher at Stanford University, agrees that his "energy and enthusiasm" have been "a useful corrective" to exaggerated claims made for agents. But Oren Etzioni of the University of Washington, chair of the Agents '99 conference, counters that Shneiderman fails to consider the rewards agents can offer. "Yes, you lose some control. That's the cost. But the benefit is enormous," Etzioni maintains.
Maes, for her part, says Shneiderman is attacking a straw man. The goal of agents research, she remarks, is not to mimic human intelligence but to help the user suffering from information overload by providing "simple, understandable, predictable programs" that can act on his or her behalf. She believes it is clear that a user who instructs an agent should assume responsibility for its actions.
Shneiderman asserts that his own goal is to "amplify human creativity 1,000-fold." He punctuates his views with grins, chuckles and shrugs that conjure an aura of gentle reasonableness. "Creative explorations" in artificial intelligence are justifiable, he concedes. But he holds that most people do not want to deal with an on-screen "deception"--a program portrayed as a person. Too many artificial-intelligence projects waste tax dollars in pursuit of unclear goals and fail to evaluate their products adequately, he complains. He cites the instance of a Unix natural-language interface whose author recounted that he had not had time to test it with naive users. Shneiderman emphasizes that software should be checked for ease and speed of use as well as for the number of errors it provokes.
Shneiderman's perspective is consistent with his own broader philosophical views. He declares that he is a humanist. His bottomless respect for human potential appears to leave him close to vitalism, although he grants that there is no reason humans should not ultimately understand how the mind works. But he insists that most research carried out under the banner of artificial intelligence has actually slowed progress toward developing more accessible technologies. Researchers in artificial intelligence have "such a shallow model of human performance and human emotions--that's the tragedy," he observes. He maintains that delays in devising machines that can respond to natural language have forced workers to push back that goal to more than a decade hence. Deep Blue, the IBM computer that beat chess champion Garry Kasparov, is "merely a tool with no more intelligence than a wooden pencil," Shneiderman wrote in a 1997 article in Educom Review.
People, on the other hand, are "richly creative and generative in ways that amaze me and that defy simple modeling," he states. So the last thing they want is "an electronic buddy or chatty bank machine." Names and products that try to indicate humanlike intelligence do not endure, he elaborates. Tillie the Teller and Harvey Wallbanker, early automated teller machines, have joined the U.S. Postal Service's Postal Buddy and Microsoft's Bob computer characters on the trash heap of computer history. Bob's electronic progeny Einstein and Clip-It, now found in Microsoft's Office suite, will go the same way, Shneiderman predicts.
The offspring of two journalists, Shneiderman grew up in a European intellectual circle in New York City that he says taught him to appreciate the arts as well as technology. (His uncle, David Seymour, traveled the world photographing wars as well as actresses for Look and Life, among other magazines.) As a physics student at City College of New York during the 1960s, he was swept up in post-Sputnik enthusiasm for all things scientific. Resisting pressure to specialize and inspired by Marshall McLuhan's portrait of a global electronic village, he sought ways of "getting out of linear culture" through electronics. He tried to bridge psychology and computing while remaining alert to the arts. He held academic appointments at the State University of New York at Stony Brook and at Indiana University before moving to Maryland.
The purpose of computing is insight, not numbers, Shneiderman likes to reiterate, and likewise the purpose of visualization is insight, not pictures. What people want in their interactions with computers, he argues, is a feeling of mastery. That comes from interfaces that are controllable, consistent and predictable. Direct manipulation of on-screen objects--moving a file to the trash can, say--is the ideal solution. Natural-language dialogue is a loser (except as an aid for the visually impaired) because it slows down users' thinking. "We want to fly through a library, not mimic the dialogue with a reference librarian," he comments.
Shneiderman believes that unlike adaptive systems, which change their behavior in nonobvious ways, successful programs offer rapid, incremental and reversible actions. Such insights led him to invent in the early 1980s what is now known as the hyperlink, in a videodisc exhibit he helped to design for the U.S. Holocaust Memorial Museum. A green screen offered numbered items on a menu: Shneiderman decided to drop the numbers and highlight words to denote choices (his research is behind the pale-blue color of most links). The idea was commercialized by Cognetics as "Hyperties" and used in a high-profile electronic book for computer professionals and a Smithsonian Institution exhibit. Tim Berners-Lee, the originator of the World Wide Web, referenced the idea in 1989 in an early description of his concept. Shneiderman has since worked on the small, high-precision screens used in pocket-size computers and organizers.
Today he supervises a variety of projects. His mantra, printed 12 times consecutively in his textbook Designing the User Interface, is "overview first, zoom and filter, then details on demand." He favors shallow search trees, slide controllers and information-rich screens with tightly coordinated panel views of data. "A pixel is a terrible thing to waste" is one of his many maxims.
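The mantra describes a three-stage interaction pipeline. As a minimal sketch of how those stages might be expressed in code (the data set and function names here are invented purely for illustration):

```python
# Hypothetical illustration of "overview first, zoom and filter,
# then details on demand." The records below are made up.

records = [
    {"name": "alpha", "year": 1995, "size": 120},
    {"name": "beta",  "year": 1997, "size": 340},
    {"name": "gamma", "year": 1998, "size": 80},
    {"name": "delta", "year": 1999, "size": 510},
]

def overview(data):
    """Overview first: a one-line summary of the entire collection."""
    years = [r["year"] for r in data]
    return f"{len(data)} items, {min(years)}-{max(years)}"

def zoom_and_filter(data, year_min):
    """Zoom and filter: narrow the view to what the user cares about."""
    return [r for r in data if r["year"] >= year_min]

def details_on_demand(data, name):
    """Details on demand: the full record only when explicitly requested."""
    return next(r for r in data if r["name"] == name)

print(overview(records))                    # 4 items, 1995-1999
subset = zoom_and_filter(records, 1998)
print([r["name"] for r in subset])          # ['gamma', 'delta']
print(details_on_demand(records, "delta"))  # full record for "delta"
```

The point of the ordering is that the user never confronts raw detail first: the summary orients, the filter narrows, and individual records appear only on request.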
He and his colleagues Chris North and Catherine Plaisant have applied these principles to develop an interface that makes it easier for researchers to select images from the National Library of Medicine's Visible Human digital library, which contains thousands of high-resolution sections of a cadaver. Another product, commercialized as Spotfire, displays data as color- and size-coded blobs on graphs whose axes can be selected and scaled at will.
Spotfire and similar programs are "a new form of telescope" that allow users to discern patterns in data they might otherwise never discover, Shneiderman maintains. And important customers are convinced he has something to offer. The National Aeronautics and Space Administration has adopted one of his group's ideas as an interface for its master directory of research on global change. IBM has come knocking for advice on electronic commerce and on presenting medical records.
Shneiderman's latest book, Readings in Information Visualization: Using Vision to Think, written with Stuart K. Card and Jock Mackinlay, was published in January. His longer-term project goes by the name of genex, a somewhat vague scheme to improve software for creativity. Shneiderman thinks today's programs have a long way to go toward maximizing human potential. Hierarchical browsing, self-describing formats and synchronized scrolling are among the notions that are featured in his writings on genex. "We have to do more than teach our kids to surf the Web. We have to teach them to make waves," he pronounces. Shneiderman himself seems already to have created quite a storm.
-- Tim Beardsley in Washington, D.C.
-- from Scientific American, March 1999.