One instructive and important class of applications for semiotic morphisms is the construction of overviews, summaries, or visualizations for (possibly large) collections of data. Here the source sign system consists of data structured in some particular way; examples include books, source code for programs, digital libraries, websites, scientific data, and databases of all kinds.
Such interfaces often have a direct manipulation flavor. The kind of visualization done in scientific and engineering applications, such as aerodynamic flow over a wing, is a hot topic today; indeed, scientific visualization tools are part of a revolution in how science is being done, to such an extent that the very notion of scientific model is changing (e.g., see the recent book by Stephen Wolfram). Communication among and federation of databases is also becoming important, e.g., with the semantic web. It may sound a bit far out now, but virtual reality interfaces to large databases could well become important in the future.
On p.523 of his popular text, Shneiderman gives the following "mantra":
Overview first, zoom and filter, then details on demand.

This might seem obvious, but as Shneiderman emphasizes, it is easy to forget; in fact, he repeats it 12 times in his book, once for each time he forgot it when he should have used it in some project. Although there are many situations where such a design can be used, there are also many situations where it does not apply. Please note that an overview is the image of a semiotic morphism from a source space of data, and that zooming, filtering, and selecting details are each manipulations of the semiotic morphism, modifying it to better approximate what the user wants; i.e., the slogan calls for designing not just a semiotic morphism, but a tool for defining semiotic morphisms; what this tool does is sometimes called filtering. Note that collaborative filtering can be considered the use of social processes to improve semiotic morphisms.
For the designer of a tool to support this kind of interactive construction of a visualization, the source space should be a theory of the semiotic morphisms that the tool supports, and morphisms from that source space will produce the sliders, menus, etc. with which users construct the visualization that that particular tool allows; thus there are two kinds of display, one for controlling the morphism, and one for displaying the result of that morphism on a particular dataset.
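The idea above — that overview, zoom, filter, and details on demand are each manipulations of a morphism from the data space to a display — can be sketched in code. This is a minimal illustration only, not a tool described in the text; all class and method names here are hypothetical, chosen to mirror Shneiderman's mantra, and the "morphism" is simplified to a view over a list of records.

```python
# A minimal sketch of the overview / zoom / filter / details-on-demand
# pattern, read as successive refinements of a mapping (morphism) from
# a source data space to a view. All names are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class View:
    """A view: the image of the current morphism applied to the data."""
    records: list[dict]

    def overview(self) -> dict:
        # Overview first: a coarse summary of the whole collection.
        return {"count": len(self.records),
                "fields": sorted({k for r in self.records for k in r})}

    def zoom(self, key: str, lo, hi) -> "View":
        # Zoom: restrict attention to a sub-range of one attribute.
        return View([r for r in self.records
                     if lo <= r.get(key, lo) <= hi])

    def filter(self, pred: Callable[[dict], bool]) -> "View":
        # Filter: keep only records satisfying a user-chosen predicate.
        return View([r for r in self.records if pred(r)])

    def details(self, index: int) -> dict:
        # Details on demand: the full record behind one displayed item.
        return self.records[index]

# Usage: each interaction step modifies the morphism, not the data.
data = View([{"title": "A", "year": 1998, "pages": 320},
             {"title": "B", "year": 2003, "pages": 150},
             {"title": "C", "year": 2001, "pages": 410}])
summary = data.overview()                               # overview first
recent = data.zoom("year", 2000, 2005)                  # then zoom
long_books = recent.filter(lambda r: r["pages"] > 200)  # and filter
detail = long_books.details(0)                          # details on demand
```

Note that each step returns a new `View` rather than mutating the data, which matches the point made above: the user is constructing and refining the morphism itself, while the source dataset stays fixed.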
Some further related discussion is given in Information Visualization and Semiotic Morphisms, by Joseph Goguen and D. Fox Harrell, an informal introduction to semiotic morphisms applied to both analysis and design of information visualization, and see also The Ethics of Databases, a naturalistic study of the values embedded in web search engines.
In contrast to the 19th century trend of making heroes of a small number of individuals (e.g., Einstein, Newton, or Mozart), recent research often looks for the work that was done to make something happen, and in particular, at the kind of "infrastructural" work that is usually left out of traditional accounts, e.g., the work of people who actually build the instruments that are used in physics experiments, or the people who somehow obtain the money to build a skyscraper. The research strategy that consciously looks for these omissions is called infrastructural inversion (a phrase due to Leigh Star), and I have suggested that the omissions themselves should be called infrastructural submersions. One major example is that the hygiene infrastructure (especially sewers) of cities like London and Paris made possible the medical advances of the mid to late nineteenth century. And of course the experimental work of Newton would have been impossible without the advances in technology that made his experimental apparatus constructable. The same is true of high energy physics today, where (for example) low temperature magnets are an important and highly specialized infrastructure for particle accelerators.
A sociological understanding of technology (and science, which cannot really be separated from technology) must concern itself with what engineers and scientists actually do, which is often different from what they say they do. This is a special case of a very general problem in anthropology and ethnography, called the say-do problem. Among the factors that produce this discrepancy are tacit knowledge, false memory syndrome, and the myths that all professions have about their work; very often the discrepancy is not a deliberate deception. Tacit knowledge is knowledge of how to do something, without the ability to say how we do it. Instances are very common in everyday life and in professional life; for example, few people can describe how they tie their shoes, brush their teeth, or organize their schedule. As an illustration, numerous case studies have shown that a large part of "routine" office work actually consists of handling exceptions, i.e., of doing things that by definition are not routine; but if you ask (for example) a file clerk what he does, you will get only a description of the most routine activities.
The ubiquity of the say-do problem has very serious methodological implications for sociologists: in many cases they cannot just ask informants to tell them the answers to the questions that they really want to have answered; however, sometimes sociologists can ask other questions, and then use their answers to get at what they really want to know. Thus designing good questionnaires is a delicate art that must take account of how people tend to respond to various kinds of question.
In fact, much of today's sociology has a statistical flavor, being based on questionnaires, structured interviews, and various kinds of demographics. While this seems to work rather well for selling soap and politicians, it does not help very much with understanding how technologies relate to society. In general, better answers can be obtained in an interview if very specific questions are asked, and if the researcher is immersed in the concrete details of a particular project; general questions tend to produce general answers, which are often misleading or wrong - though of course the same can happen with specific questions. Concrete details are often much more useful than statistical summaries.