4. Inseparability of the Technical, the Social and the Ethical
There are four main points for this chapter:
One rather dramatic example of the intertwining of technical, social, and ethical factors can be seen in the controversies surrounding the investigation of the deadly crash of EgyptAir Flight 990 on 31 October 1999. I have put two news articles on this topic on the web, one on the American investigation, and the other on some Egyptian reactions. The Egyptians are extremely sensitive about what they take to be criticisms of their country, and accuse the Americans of a bias intended to protect Boeing, the American manufacturer of the aircraft; these are of course issues of value, of what each country considers to be important. Questions about the cultural significance of "religious remarks" made by the relief pilot are intermixed with questions about the technical significance of a "rare inflight split in direction of the plane's elevator panels" and questions about the value system and religious beliefs of the pilot. The entire situation ascended to the level of a diplomatic dispute, and simultaneously descended to the level of a public relations exercise.
Now let's consider some common object, such as a mug; in fact, let's consider three very specific mugs (which are actually brought into class), in line with the dictum to be as specific as possible in any social investigation. Of course mugs are very low tech compared to current information technology, but this only makes it easier for us to see some of the issues that are involved. Each mug was designed and produced somehow, and ended up being used somewhere (in this case, by me, in my office). One good way to understand what is going on is to look at economic and social factors: who paid for the design, manufacture, and distribution of these mugs, and why did they do so? Presumably, it was thought worth the cost and effort involved. Who is now responsible for their maintenance, and why do they care to do so? Considering such questions carefully can bring out the values that are involved in situations.
Economics can also be a great help in understanding many things about technology, because money is often the most important value involved (at least, for some of the participants). For example, why does the US have such a poor public transportation system, especially in comparison with Europe and Japan? The answer (in part) is that automotive transport is heavily subsidized, in that public roads are almost totally subsidized, whereas other public transportation systems, such as trains and trolleys, are little subsidized, if at all. Note also that highways are very expensive, costing on the order of tens or even hundreds of millions of dollars for a single interchange. So one reason public transport is so much worse in the US than in Europe and Japan is that alternatives to the highway system are much less subsidized in the US; this is not explained by any facts about the technologies involved; and of course, subsidization is a social phenomenon. So this is a counter-example to technological determinism. There are also important ethical issues here, because the choice to support highways instead of buses and trains makes life more difficult for both the rural and the urban poor.
Another counter-example to technological determinism is the Betamax versus VHS battle of some years ago. Betamax was actually the superior technology, but it lost to VHS because the movies that people wanted to see were available on VHS but not on Beta. It seems that Sony accepted technological determinism to the extent of believing that a better technology would win; as a result, they lost billions, and then later invested heavily in media (Sony records, Sony movies, etc., which they call "software" because it "runs" on their hardware), to avoid similar problems in the future. This is an interesting example of a dynamic interaction between technology, society, and values, as well as of an organization learning from experience and changing its behavior. In fact, Sony is an unusually dynamic and adaptive organization. (However, it is not clear that Sony's media investments will actually pay off.)
4.2 Politics, Ethics, and Conflicts
Because technologies are developed and used for particular purposes by particular persons or groups, we should expect that there will be conflicts with the goals of other persons and groups, and that such conflicts will be a key to understanding many otherwise strange things. According to Webster's Dictionary, "politics" is the art or science of government, or of influencing government, or of winning or holding control over a government. By extending the meaning of "government", politics refers to issues of control, influence and conflict in any organization or part thereof. Therefore the political context is another important aid to understanding any technology. If we can understand how groups are trying to influence other groups, we may be able to understand why they are doing this, i.e., what their values are. Often money and power are the values involved, but other values can also arise. As a general rule, wherever we find politics, we also find conflicts of values, and these help us understand what the values of the groups involved actually are. And of course there are ethical issues about the methods used to obtain influence and power.
Political conflicts are inevitable in any organization; for example, in a workplace, people may compete for status, salary, perks, space, and all kinds of resources. Workers often form subgroups, which compete with other subgroups and individuals. Of course, companies compete in the marketplace. In information technology, conflicts about standards and standards organizations are very common and very important; I hope we will discuss this later on.
4.3 Technology and its Effects
Let us now notice that when we speak of "a technology", we are actually speaking of an abstraction, not something "real" that we can actually find in the world. This needs a little explanation, because of course we do find objects like TV sets and telephones in the world. To make this discussion more concrete, consider what we are really talking about when we speak of "the telephone".
We may begin to see how "the telephone" refers to a huge and very diverse collection that includes automated answering systems, human operators, cost cutting, marketing plans, fiber optic cables, ICC regulations, different uses in different situations, actual instruments on particular desks, new systems announced or planned, companies that market long distance service, companies that market local service, companies that sell actual telephones, satellites, failed systems like Iridium, and much much more. All of this is involved in using telephones: a complex mixture of physical objects, corporate structures, legal constraints, social patterns, etc. Such a "thing" can only exist in our minds, and can appear quite different for different people and different situations. It is also amusing that in the last few years, "the telephone" has begun to take on a somewhat archaic connotation, since it is being overtaken by wireless devices of various kinds.
The best way to understand a technology like the telephone is to be as specific as possible, to look at actual uses of actual instruments, at some particular time and place. For example:
At 10:32 on the morning of 8 October 1998, I called a division of the UCSD Medical School to reserve a place for a lecture. What I got was a "telephone tree" having 6 top level nodes, which took more than one minute to get through, and where only the last node allowed for the possibility of reaching a real person, as was needed to reserve a place. However, I did not get to a real person; instead I had to leave a message, which was never returned. As a result, I did not go to that lecture.

Through such concrete events, we can begin to see how a technology is actually used, and what its constituents, effects and context actually are. In fact, concrete experience usually brings deeper insights than speculations about generalities and abstractions. (For example, in this case, the technology actually impeded and ultimately defeated communication.) And no doubt if an audio (or better, video) recording of this interaction were available, we could notice many other things that would be very hard to imagine in the abstract. An ethical issue here is the evasion of responsibility that results from the use of such technology by an organization.
A famous example of how values and technology mix is the parkways that Robert Moses built between Manhattan and the beaches of Long Island, with overpass heights deliberately chosen to prevent buses from using the roads. It is well documented (e.g., see the famous article Do Artifacts Have Politics? by Langdon Winner, in Daedalus 109) that the purpose of Moses, who was in charge of constructing such things, was to keep the poor people (e.g., in Harlem) who don't have cars from using the beaches of Long Island (which is a rich commuter community). Here the values are, metaphorically speaking, right in the steel. There are many less dramatic examples of a similar kind.
An article More Technology, More Expensive, by an (anonymous!) lawyer in the Summer 1998 issue of In Formation Magazine discusses some effects of technology in law, including: (1) the elimination of many support personnel, particularly typists and legal researchers; and (2) making it more difficult for poorer people (meaning most of us, to say nothing of homeless people, prisoners, etc.) to match the quality of legal work done for wealthy clients, such as large corporations. This example illustrates another common phenomenon, which is that people at the bottom of the social hierarchy often suffer the most from the negative effects of technology (e.g., consider the chemical plant at Bhopal, India, and consider that even in the US, poor people much more often live near hazardous waste dumps, chemical plants, etc.). This of course is a very serious ethical issue. (And the anonymity of the author is a smaller ethical issue.)
The paper Effects of Technology on Family and Community, by J.A. English-Lueck, reports on a study of the effects of technology on family life in Silicon Valley, claiming that it has actually decreased the quality of life of many people. It seems that many people in this community may value their high tech image more than they do their family life.
Despite all the examples above, and many others which are very widely known (such as the Titanic, Chernobyl and Three Mile Island), our culture inclines us to give the benefit of the doubt to new technologies. Hence the myth of progress has a strong synergy with the myth of technological determinism in making the kind of high tech advertisements that we have been studying work effectively.
Of course, not all effects of technology are negative, and personally I am more of a technological optimist than a technological pessimist; I believe that if technology is designed and used (or in some cases, not used) in a responsible way, there can definitely be benefits to society. My purpose here is to make it clear that there indeed can be negative aspects to technology. It is strange but true that many engineers wish to deny this obvious fact. I believe that this is connected with the widely held myth that science and technology are value neutral, as well as with the myths of progress that pervade our society.
"Requirements engineering" is the name for an early phase of system development (including but not limited to software systems) which determines what properties the system must have in order to succeed. This is where the "rubber meets the road" for the real-life construction of large systems, and it is one place where the material of this course becomes very relevant to practicing computer scientists. In particular, it is important to know the values of the major stakeholders involved in a project, and to identify the value conflicts among them. It is also important to look for possible negative social aspects when designing a complex computer system that is supposed to be used by real human beings in real social environments, and to have some idea of what kinds of negative aspects to look for, based on familiarity with a variety of examples.
Although requirements engineering is more of a practical subject than an academic subject, it has mainly been practiced in a very technical way, largely ignoring the social and ethical aspects of systems; this has to do with the myth that building computer-based systems is a purely technical exercise. However, it has become more and more clear that ignoring social factors has been a major cause of many large failures, and current practice, under some pressure from the academic world, is starting to take the social and ethical aspects of system design much more seriously.
One way to describe the problem with the traditional practice of requirements engineering is to say that it has tried to proceed under the assumption that requirements can be developed and stated in a way that is largely independent of context, in analogy to the semantics of programs and of programming languages: we expect a FORTRAN program to compute the same function no matter where in the world it is run, or who runs it, or who gets the result. However, this is not the right way to think about building complex systems for ordinary real world users (and it isn't what happens with real FORTRAN programs either!). Different users may have different needs and different expectations; e.g., some prefer Unix, and some prefer Windows, often for good reasons; for example, blind users cannot handle graphical interfaces, and might prefer Unix because it is easier to turn its commands into speech. (Incidentally, this brings out a value that is implicit in GUIs, and raises an ethical issue about that value.)
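To make this contrast concrete, here is a small Python sketch (again hypothetical, and not part of the notes): a pure numerical function has the same meaning wherever and by whomever it is run, while even a trivial user-facing program depends on who is running it and in what environment.

    import os

    # A context-free computation: the same inputs give the same result
    # no matter where, when, or by whom the program is run.
    def mean(xs):
        return sum(xs) / len(xs)

    # A context-dependent interaction: the "same" program reads its context
    # (here, just the LANG environment variable) and behaves differently
    # for different users on different machines.
    def report_average(xs):
        lang = os.environ.get("LANG", "")
        value = mean(xs)
        if lang.startswith("fr"):
            # French users expect a decimal comma and a French sentence.
            return f"La moyenne est {value:.2f}".replace(".", ",")
        return f"The average is {value:.2f}"

    print(mean([1, 2, 3, 4]))            # always 2.5
    print(report_average([1, 2, 3, 4]))  # depends on who runs it, and where

Even this toy example only checks one narrow piece of context; the needs and expectations of real users in real workplaces are far less easily captured, which is exactly why requirements cannot be stated independently of context.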
For a general example of how context interacts with meaning, consider someone shouting "Fire!" in the following environments: a crowded movie theater; a military firing squad; an employee evaluation meeting; a pottery kiln; and a police academy firing range. This example clearly demonstrates how the meaning of words is very strongly dependent on context. But this only hints at the complex webs of interconnected meanings and contexts that come into play in real workplaces where computers (or any other physical objects) are used. A key part of context is often the values involved; e.g., we need to know why someone is shouting "Fire!" and why others might respond in certain ways.
To sum up a bit, let's consider chairs (noting that to speak this way is to reify a large and very diverse collection): they make little sense purely as physical objects; to understand chairs, you must understand sitting, and not just as a physical act, but within the social situations where sitting occurs, since these determine the kind of physical object that is appropriate. For example, a throne is very different from a dentist's chair, and both are very different from a living room reclining arm chair; they express completely different values. This point of view has been sloganized in sentences such as "A chair is to sit," which emphasize that sitting is not to be understood in the abstract, but rather in actual concrete social situations that embody concrete values. This is another good example of the way that the social, the ethical, and the technical are inseparable (and we should note in passing that all three of these are reified concepts). But to speak in a fully accurate way that does justice to this inseparability and avoids reification would be very complex and tedious; therefore ordinary people (i.e., everyone except a few sociologists and philosophers, who end up being very difficult to understand) never speak in such a way; but as a result, ordinary language can be very misleading on such topics.