5. Values and Ethics
This chapter begins our serious discussion of ethics, which is (roughly speaking) the study of "the good," or of "right and wrong." It turns out that there are many different ways to define ethics, many different theories of ethics, and in fact, many different kinds of theories of ethics, as well as many different application areas. We can explore only a little of this vast territory, and therefore must make some tough choices about what to leave out; we will place particular emphasis on some very recent approaches that are based on science, in particular on socio- and evolutionary biology and on cognitive science.
In chapter 4, we saw that narratives, and even physical objects like bridges, chairs and mugs, embody values in definite, natural ways. Values in this sense are assessments of what is valuable or important for some individual or social group at some time. Such values can be relatively explicit, but they can also be implicit, and even rather hidden. In a story, evaluative material plays the crucial role of connecting the events reported to the values of the social group within which the story is being told, and the same relation holds (but perhaps in less obvious ways) for physical objects, as well as for non-physical objects like standards, programs, and even theorems.
We have also previously argued that the technical and the social are inseparable, and also that the social and the ethical are inseparable, so that both social and ethical issues are inseparable from technical issues. A deeper level of analysis notices that it is mainly through values that the technical connects to the social. This chapter will clarify the relation between ethics and values, and the nature of ethical theories, and will consider some examples in a bit of detail.
5.1 Normative and Descriptive Theories
Since values determine what is considered valuable, they also determine what is considered "good" by an individual or group. Thus values are closely connected with ethics. However, the approach taken here is different from that of much of what is usually called ethics. One relevant distinction is as follows: a descriptive approach tries to accurately describe what some phenomenon is actually like, whereas a normative approach tries to say what it should be like. Descriptive theories are essentially models of some kind, which can be compared to actual phenomena for validation; if the model does not match the phenomenon well enough, then the model is wrong. However, normative theories cannot be validated in this way: if there is a discrepancy, then one says that the phenomenon is wrong, rather than the model! Thus descriptive theories try to be scientific, while normative theories do not. For example, a descriptive grammar tells how people actually speak (or write) some language, whereas a normative grammar tells how (someone thinks) people ought to speak (or write) it.
Similarly, descriptive ethics describes what people actually value, whereas normative ethics tells what they ought to value. This raises a second relevant distinction, between approaches that are based on behavior and those that are based on intention; most Western approaches are based on behavior, but I am more inclined to consider the underlying values, and hence intentions; some reasons for this are discussed in Section 5.3 below. Notice that our analyses of narrative are descriptive, and are intended to bring out implicit values; these analyses are not normative, since we do not say what values should be there. A more conventional version of this distinction would say that descriptive ethics says how people actually behave, whereas normative ethics says how people should behave; however, I question this on the grounds that "how they actually behave" is too broad, and fails to capture the kind of behavior that is of interest for ethics.
It should not be thought that normative approaches are arbitrary and useless, though some of them have been, to a greater or lesser extent. For example, if you want to be accepted as an educated person, you need to have certain habits of speech, and you need to avoid others; this can be described by a normative grammar, and many such grammars can be found in ESL (English as a Second Language) schools around the world. On the other hand, some normative grammar that is still taught today can have a negative effect, e.g. use of the subjunctive mood in hypothetical clauses, as in
If it were to rain, one might become wet.
To talk this way is to risk seeming an elitist snob, though of course this was not always the case, and there are still some places where such talk is the norm, e.g., among Oxford dons.
Most ethical systems (and most grammars) have been normative, and there have been many different attempts to justify such systems; we will explore why some of these justifications are significantly better than others. But first, we look at some ethical issues that are of special interest to computer science students.
5.2 Student Ethics
The classrooms of universities, including UCSD, are places where important ethical issues constantly appear. The one that has perhaps captured the most attention is cheating by students, and we will consider this in some detail, although we will also mention some others. It is interesting to notice that the use of information technology raises new ethical issues about cheating, or at least raises old issues in new ways.
UCSD, and CSE, have a variety of resources regarding student cheating, many of which are online. I will assume that it is already relatively clear what should and should not be done, and instead will focus on arguments given in favor of and against various behaviors, not just their content, but also their form, and their presuppositions. This will help us see the values that are involved, and will also help us to prepare for material on theories of ethics to come later, where we will be concerned with the quality or "strength" of various arguments.
Let us consider five documents. In a scientific approach, documents are regarded as data, as neutral texts to be examined critically and objectively, rather than as having some special status arising from the authority of their authors; we will seek to discover their underlying values, just as we did with narratives. The first document is a typical handout on student ethics for a computer science class, prepared by Prof. Scott Baden. It has been copied, edited and then used by several other CSE professors, and it includes excerpts from our second document, which is the "UCSD Policy on Integrity of Scholarship" (from the "Academic Regulations" part of the UCSD General Catalog - you will need to scroll down to find the relevant section); in addition, the Baden handout has provisions that are specific to courses where computers are extensively used. The third document is the official (at least, as of 1972) UCSD policy on plagiarism, entitled "Sources: Their Use and Acknowledgement"; it is based on an earlier document from Dartmouth College, and has a very intellectual tone, rather slanted towards the humanities. The last two documents are an email sent by Gary Gillespie to CSE faculty, describing what he says to students in his classes about cheating, and a related response from Scott Baden.
The Baden handout gives a pretty clear statement of what actions are prohibited, but says almost nothing about why. The second document has a little more: it says that academic ethical standards are needed to maintain "the integrity of scholarship" and to "protect the validity of university grading;" these phrases occur in its first and second sentences, respectively, but are vague. The third document has much more explanation, which interestingly has a very different focus, mostly on the development of scholarly skills, and on traditions of mutual help and fairness within the academic community. It also admits quite openly (in its third sentence) that the proper treatment of outside sources can raise "perplexing problems." The Gillespie email is very interesting because it gives many reasons why students should not cheat, as discussed in some detail in class. The Baden email introduces a different issue, which is how faculty feel about cheating.
The first three documents appear to refer in some indirect way to the possibility of sanctions. The first uses the word "self-destructive" in the second clause of its first sentence, but this is not further explained, and can be interpreted in many ways. The second document does not mention sanctions, but within the "Academic Regulations" part of the UCSD General Catalog, it is followed by sections that give very detailed descriptions of responsibilities and procedures for dealing with academic dishonesty; however, there is nothing there about the reasons for having these very complex procedures. The third document repeatedly refers to various sorts of "trouble" that one might get into, but does not say more about their nature. The Gillespie email contains some explicit references to sanctions for students who cheat.
None of this tells us why UCSD would care about cheating. A good hint appears in the Wall Street Journal article on problems in prep schools handed out in class 25 January. This article says that prep schools (which are expensive secondary schools intended to prepare students for admission to Ivy League colleges like Harvard) are in trouble because fewer of their students are actually being admitted to these universities, and parents are complaining. They are therefore being driven to morally dubious practices, such as inflating grades and (in at least one case) even falsifying student records. But if there is an article about such practices in the Wall Street Journal, then they are in much more trouble, because their reputation with these universities is likely to be diminished. To use a business metaphor, the bottom line is that these schools need to protect the reputation of their product (which is students) or they will lose customers.
The same reasoning applies to UCSD: if student cheating causes the quality of students to appear higher than it actually is, then employers who hire students, and universities that accept them for graduate school, will become suspicious and the university's reputation will be damaged, which in turn will hurt its ability to attract good faculty, good students, and funding from government and industry; these three are the lifeblood of a modern university. Schools of all kinds are very careful about their reputations, for just such reasons.
Notice that we have not used the same analysis techniques on these documents as we did on narratives, since we do not have narrative structure. Instead, we looked at and compared multiple related documents, interviewed participants, stakeholders, etc. Actually, such techniques are also valid for the analysis of narratives, and if you have a serious need for a good understanding of some situation, it is advisable to use as many techniques as you can; for example, an accurate understanding of values is often a crucial part of a good requirements analysis.
5.3 Theories of Ethics
It is often said that any scientific approach to ethics must be descriptive rather than normative. However, we will see that it is possible, given some reasonable assumptions, to draw normative conclusions from descriptive theories. But before that, we review some of the older work on normative ethical systems.
Writings on ethical theory go back thousands of years, some high points of traditional thought being the teachings of Plato, Aristotle, Aquinas, Buddha, and Christ. Ethical theory has long been a central concern of Western culture, and there is a vast literature, which we cannot possibly cover in any detail, but can only hope to hit a few key points. A relatively readable survey of some areas of ethics, oriented towards the concerns of this course, can be found in the book by Deborah Johnson that is in our List of Recommended Books. It is usual to divide ethical theories into four main categories, called absolutist, relativist, utilitarian, and deontological. A somewhat unusual concern will be to try to uncover some of the main presuppositions behind the various kinds of ethical theory. Note that there is no uniformity of opinion in ethics, and in particular, that many different versions can be found within each category of ethical theory.
Absolutist theories say that there are absolute moral laws, which can be applied in any situation to determine what actions are right and wrong. Theories of this form are also called moral law theories. The example most likely to be familiar to many readers is fundamentalist interpretations of the Christian Bible, and in particular, of the Ten Commandments.
Relativist theories say that what is right depends on the situation, but there are different views on how broadly to define "the situation." Weak relativist theories say that the situation includes the local social and physical context of an act. Strong relativist theories broaden this to include the entire social context, and say that right and wrong are relative to the standards of the society in which an individual lives. Radical relativist theories say that it is impossible to decide what is right, and that there are no standards. All relativist approaches agree that there is no single absolute standard for right and wrong. For example, it may be right to kill in one situation, but not in another.
Utilitarian theories claim that what is best can be determined by some approach that is similar to what we now call a cost-benefit analysis, i.e., maximizing some measure of utility. The name most closely associated with this approach is Jeremy Bentham (1748-1832). Utilitarian approaches are of course very widely used in business, and they are supported by information technology such as spreadsheets, and various kinds of economic models. Utilitarianism is a kind of weak relativism. One problem with utilitarianism is determining how to measure utility. For example, Bentham wanted to maximize overall "happiness" - but how can we measure happiness, and how can we be sure that it is even the right thing to measure? Businesses want to maximize profits, but they too have difficulties in measuring the costs and benefits of their actions.
There is an interesting blend of utilitarianism with moral law theory: one can try to justify absolute moral laws by determining their utility. For example, one can argue for the moral law which says that lying is wrong, by examining the consequences if it were followed by all individuals. Approaches of this kind are called rule utilitarianism, in contrast to act utilitarianism, which denies that general rules can be given, and instead calls for examining the consequences of particular actions. Some variants of rule utilitarianism call for examining the consequences for a whole society instead of a single individual; indeed, one of Bentham's original intentions was to justify parts of the English legal system in this way.
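The act-utilitarian calculus described above can be sketched as a toy computation. This is only an illustration, and it grants the (questionable) presupposition, discussed later in this section, that numerical utilities and probabilities can be assigned to outcomes at all; all of the numbers and action names below are invented.

```python
# Toy act-utilitarian calculus: choose the action whose expected
# utility is greatest.  The hard ethical problem is precisely where
# numbers like these would come from, and whether utility is even
# the right thing to measure.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping an action name to its list of outcomes."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "tell the truth":   [(0.7, 10), (0.3, -5)],   # usually good, sometimes hurts
    "tell a white lie": [(0.9, 3),  (0.1, -20)],  # small gain, rare disaster
}

print(best_action(actions))  # → tell the truth
```

A rule utilitarian would instead evaluate general rules ("never lie") by summing their consequences over a whole society, rather than scoring each particular act.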
Consequentialist theories say that an act should be judged right or wrong depending on its consequences. This is at least a weak relativist position, and possibly a strong relativist position, depending on how the consequences are evaluated. Utilitarian approaches are also necessarily consequentialist.
Deontological theories (sorry about the awkward Greek-based terminology; it is unfortunately typical in philosophy) focus on the principle of an act, rather than on the act itself or its immediate consequences. The most important name here is Immanuel Kant (1724-1804), and his approach has probably been the most influential on today's "common sense" understanding of morality, and on most philosophy of ethics, except perhaps the most recent. Perhaps the two most famous ideas due to Kant were both called the categorical imperative by him (Kant actually had three versions). The first is that people should never be treated merely as means to some end, but rather should always be treated as ends in themselves. The second is that rules or actions should be judged by whether or not they can be universalized, i.e., by whether or not it would be good if everyone behaved that way. These principles can be used to justify or criticize many ethical principles; for example, the first has been used to argue that killing another person is always wrong. The second principle is similar to the so-called golden rule of Christianity, which is often stated in the form "Do unto others as you would have them do unto you." This is not an accident, because Kant's goal was to give a philosophical justification for the Judeo-Christian ethical tradition without any appeal to theology, and in particular, not involving God in any way.
It is interesting to notice that it is not easy to use the categorical imperative either to justify or to refute the Old Testament principle of "an eye for an eye and a tooth for a tooth." It is also difficult to justify or refute its converse, which is the New Testament principle to "turn the other cheek." (It is a good exercise to try to do these arguments and see what difficulties arise.)
Kant's categorical imperative is an absolute moral law, but it has the unusual character of being a meta-law or second order law, that is, a law about laws; moreover, any moral law that it justifies will be an absolute moral law. The chief problems with this approach are that it is difficult to apply such meta-laws to concrete situations, and it is impossible to do so in a rigorous manner, as Kant himself recognized. Kant did suggest a way to deal with this problem, which is to regard a proposed moral law as a natural law (analogously to those of physics) and ask whether one would wish to live in a universe governed by that law; this is a more concrete version of Kant's principle of universalization. It is interesting to notice (following Mark Johnson in his book on our List of Recommended Books) that this is a suggestion to use a certain metaphor.
We will now try to uncover some basic presuppositions of the above approaches to ethical theory. All of these approaches focus on actions, and they also (except for some versions of rule utilitarianism) focus on individual agents. Moreover, they all presuppose the autonomy of individual agents, in the sense that each agent has his or her own goals and plans, the freedom to carry them out, and can be held responsible for the resulting actions. All of them also presuppose the rationality of individual agents, i.e., the capacity to think and to act rationally; in fact, they assume unlimited rationality, in the sense of placing no upper bound on the complexity of the reasoning that may be required. Real agents of course have limited rationality, i.e., a finite capacity for reasoning, and (for example) are unlikely to be able to apply the categorical imperative in real time-limited situations; actually, most people have some trouble understanding how to apply the categorical imperative in any kind of situation at all. Utilitarianism presupposes that a rational, objective, and preferably numerical value can be assigned to each relevant factor in such a way that they can be integrated and maximized. A more general and less obvious presupposition is a split between mind and body, so that rational agents are presumed to be free from the constraints of embodiment, i.e., of having a body; these constraints would include emotional attachment, bias, prejudice, selfishness, and so on.
All of these presuppositions are to some degree untrue of real agents in the real world. For example, our minds and bodies are not separate; research in many areas has shown a variety of important ways in which they are integrated, such as the fact that emotional associations play a key role in creating and retrieving memories. And real agents do not behave (purely) rationally, let alone possess unbounded rationality.
An alternative to the focus on the actions of agents in all the above approaches is to consider the cognitive states of agents. This often occurs in real world moral reasoning, as when someone decides to tell a lie in order to avoid hurting the feelings of someone else; for example, you may not want to tell your spouse that you are dying of cancer until some time after you learn of it yourself. Let us call such approaches cognitivist, as opposed to the behavioralist approaches that focus on actions. Our concern with the values of agents instead of their behavior is cognitivist in this sense, since it considers the intentions of agents; let us call such approaches intentionalist. One argument for taking an intentionalist approach is that bad intentions can have a negative effect on the agent; for example, merely having the intention to murder decreases one's sensitivity and capacity for being human. Intentionalist theories assume (limited) autonomy but do not assume rationality. Buddhist ethics, which goes back over 2,500 years, tends to take such an approach. Since this reflects my own view, I want to emphasize that it is by no means required that you agree with me; indeed, one main point of this course is to get you to think for yourself about such issues.
Notice that the intention of an agent plays an important role in some areas of the law. For example, the three different kinds of murder, usually called first degree murder, second degree murder, and manslaughter, are distinguished by the intention of the murderer: first degree murder is premeditated; second degree murder is unpremeditated (i.e. spontaneous); and manslaughter is accidental; i.e., the agent has the intention to kill in the first two cases, but not in the third, and has the intention well in advance in the first, but not in the second. Thus murder law is partially intentionalist, rather than purely behavioralist, which would require focusing on only the act of murder itself.
We can find examples of the use of various approaches to ethics in the documents on student cheating that we considered earlier. Threats (such as references to sanctions) are of course consequentialist. Discussions of loss and gain, and of "win-win" (as in the Gillespie email) or "lose-lose" situations are utilitarian. The references to faculty morale in the Baden email are consequentialist and cognitive. Real world examples of moral reasoning often involve combining several different approaches to ethics. Much further analysis is given in the classroom discussion of these documents.
5.3.1 Two More Recent Approaches
We have reviewed the standard philosophical theories of ethics above, plus intentionalism, but not the recent work that draws on socio- and evolutionary biology, and on cognitive science. Socio-biology is concerned with discovering the biological origins of, or at least influences on, human (and sometimes primate) social behavior; it is closely related to evolutionary biology because its arguments are in general based on neo-Darwinian evolution. Since evolution is so basic to modern biology, we will speak of just socio-biology instead of something like socio-evolutionary-biology. Cognitive science concerns how humans (and perhaps primates, etc.) think, but the branch that has been most closely concerned with ethics is cognitive linguistics, which is concerned with the cognitive abilities and structures that lie behind language; researchers in this area have been especially concerned with metaphor, as illustrated by the observation of Mark Johnson about Kant's use of metaphor to implement the categorical imperative that we discussed earlier.
The foundation of the socio-biological approach to ethics is to argue that some fact about the human species is as it is because it has been selected through evolution, due to its ability to increase the chances of survival; therefore it is good. Notice that this depends on the "bridging assumption" that survival of the human species is a good thing, which cannot be proved scientifically. One very clear example is the prohibition of incest, which has survival value because inbreeding increases the incidence of harmful genetic traits; therefore avoiding incest is good. Almost all societies have had an incest taboo, and those that didn't, such as the Egyptian pharaohs and European royalty, had trouble surviving. Arguments from socio-biology can also take the form of asserting that some behavior has survival value because it helps groups work together to accomplish valuable tasks that no individual could do alone; this argument form can be applied to larger and larger groups, up to the level of whole societies. For example, lying can have a negative effect on the survival of a group engaged in hunting; therefore lying is bad (i.e., prohibiting lying is good) for such groups. Similarly, effective leadership in a group has survival value, because the group can perform better; therefore it is good. Notice that such arguments involve the assumption that culture can be considered to evolve (or rather, to co-evolve) with its biological basis; this assumption is still somewhat controversial, but it is increasingly accepted. Notice also that socio-biological arguments start with descriptive scientific theories such as evolution, and then use them to justify normative ethical theories. Much more could be said here, but we don't have the time for it; for more information, see the books authored by E.O. Wilson and edited by Leonard Katz in our List of Recommended Books.
The best known name in cognitive linguistics is George Lakoff, who is responsible for a considerable deepening of our understanding of metaphor. The thrust of work on ethics within cognitive linguistics is to ask what kind of moral reasoning is natural to humans, given what we now know about their cognitive capabilities, which includes work on the structure of concepts, as well as work on metaphor. Mark Johnson (in his book in our List of Recommended Books) argues that moral reasoning in real life is often based on metaphor, whose source domain is some very basic human situation. This work is of course descriptive and not normative; but (like the socio-biological approach) it does give very strong grounds for rejecting radical moral relativism, in that actions can be argued to be right or wrong based on similarity to prototypical situations. Similarly, it gives strong grounds for rejecting moral law theory, in that humans do not seem to have the cognitive capabilities needed to use such theories. Cognitive linguistics seems better suited to critiquing general ethical theories than to supporting particular moral decisions.
A key insight from cognitive science is that (real human) concepts are not defined by sets of predicates, as is often assumed in artificial intelligence research. Instead, careful experiments by Eleanor Rosch and others have shown that most concepts are defined by prototypes, and that the more a percept looks like the prototype, the more it is considered to be an instance of that concept; thus (real human) concepts are inherently fuzzy, in the sense of not having well-defined boundaries. For example, the concept of "bird" (for most Americans) is defined by the prototype of a robin. Moreover, many concepts have extensions from some "core" meaning that are given by metaphors; these are called metaphorical extensions. For example, "up" has a basic spatial meaning, but by metaphorical extension also has meanings that relate to administrative hierarchy, mood, and much more (e.g., "He's way up in management." and "I'm feeling really up this morning."). The concepts used in (real world) moral reasoning are no exception to this general pattern, again as shown in Mark Johnson's book. These observations constitute an argument against any kind of absolute moral law theory, based on the "bridging assumption" that the actual use of a good moral theory should be natural to ordinary humans. As with the socio-biological approach, much more could be said here, but again we don't have the time.
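The contrast between predicate-defined and prototype-defined concepts can be sketched computationally. The following toy model is an illustration of graded category membership, not a claim about how Rosch's experiments were actually modeled; the feature sets are invented for the example.

```python
# Sketch of prototype-based (graded) category membership: an instance's
# degree of membership in a concept is its feature overlap with the
# concept's prototype, rather than a yes/no test of defining predicates.

PROTOTYPE_BIRD = {"flies", "sings", "small", "feathers", "lays eggs"}

def membership(instance_features, prototype=PROTOTYPE_BIRD):
    """Fraction of the prototype's features that the instance shares,
    giving a fuzzy membership degree between 0.0 and 1.0."""
    return len(instance_features & prototype) / len(prototype)

robin   = {"flies", "sings", "small", "feathers", "lays eggs"}
penguin = {"feathers", "lays eggs", "swims", "large"}

print(membership(robin))    # 1.0 -- matches the prototype exactly
print(membership(penguin))  # 0.4 -- still a bird, but a less typical one
```

The point of the sketch is that a penguin is not excluded from "bird" by failing a defining predicate; it is simply a less central member than a robin, which is exactly the fuzziness that predicate-based definitions cannot express.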
The material in this section has not yet appeared in any book on computer ethics, and I would guess it will be some time before it does, due to its inherent complexity and difficulty, despite its strong affinities with information technology. So you are getting a sneak preview of material that I think will be increasingly important in the twenty-first century.
5.4 Professional Codes of Ethics
The (extract from the) article "Codes of professional ethics," by Ronald Anderson, Deborah Johnson et al. (in Computerization and Controversy, ed. Rob Kling, pp. 876-877) begins with
Historically, professional associations have viewed codes of ethics as mechanisms to establish their status as a profession or as a means to regulate their membership and thereby convince the public that they deserve to be self-regulating. Self-regulation depends on ways to deter unethical behavior of the members, and a code, combined with an ethics review board, was seen as the solution. Codes of ethics have tended to list possible violations and threaten sanctions for such violations.
Examples of organizations that have successfully taken such an approach include the American Medical Association and (for the legal profession) the various state bar associations. But even a brief glance at the ACM Code of Ethics will reveal that it does not follow such a model. Anderson, Johnson et al. note that the original 1972 ACM Code of Ethics took a similar form, but "had difficulties implementing an ethics review system." In fact, the new (1992) code is based on voluntary compliance, with no sanctions except possible termination of membership in the ACM, which has very little bite; moreover, there does not appear to be any procedure for initiating such a termination.
It is interesting to ask why there is such a significant difference from the approach that has been so successful for other professional societies. A good starting point can be found in the fine article "Confronting ethical issues of systems design in a web of social relationships" by Ina Wagner (in Computerization and Controversy, ed. Rob Kling, pp. 889-902), in her discussion of the differences in workplace power between doctors and computer scientists. Computer scientists have little control over their schedules, and little say in what they work on, or how they do it (except of course insofar as they become managers instead). Likewise, our professional societies have little political influence or legal power. Of course, the medical and legal professions have roots going back thousands of years, and have been well organized for centuries. This seems to suggest that the near term outlook is dim for significant improvements in the enforcement of professional ethics in computer science.
The ACM Code of Ethics has a clearly Kantian tone. First, notice that it uses the Kantian term "imperative" for its rules. Second, notice that most of the rules have exactly the kind of form that could be justified by Kantian universalization. This is especially apparent with the first group of rules (1.1 - 1.8), particularly the first four, and the Kantian influence can also be seen in the brief justifications for the rules often given at the very beginning of each corresponding section of the so-called Guidelines. The introduction to the code also refers to derivation of rules from more general principles; possibly there is an ACM document somewhere that makes this more explicit.
5.5 Case Studies
The article "Confronting ethical issues of systems design in a web of social relationships" by Ina Wagner (in Computerization and Controversy, ed. Rob Kling, pp. 889-902) is a case study concerning the design of an automated scheduling system for use of operating theatres in a hospital. Many interesting moral observations are made, some of which may even be a bit shocking. For example, Wagner found that it was the values of surgeons that controlled scheduling, and that among these values was to perform as many maximally complex operations as possible, since that confers status within their social group. The value system of nurses, which includes spending time consoling patients, plays no role in scheduling. Wagner also raises questions about her own ethical responsibilities as a computer scientist doing a requirements analysis in this situation: for example, is it her responsibility to write requirements based on a democratic vision of how the hospital operates, in which the interests of nurses have equal weight with those of surgeons? Although such an approach is recommended by the participatory design methodology to which she subscribes, it is utterly unrealistic, and so unlikely to be implemented. Also there are important but difficult privacy issues involved in running an effective scheduling system. The surgeons do not want to make their personal schedules available, whereas on the contrary, nurses have no choice at all in scheduling their time. This excellent article is well worth reading very carefully, and preferably more than once.
The article "Power in systems design," by Bo Dahlbom and Lars Mathiassen (in Computerization and Controversy, ed. Rob Kling, pp. 903-906) is a brief case study of an ethical issue, namely the unauthorized inclusion of a feature into a system by a programmer who wanted to improve the privacy of a certain class of users. The article raises some important questions but fails to provide definitive answers. It also appears to advocate what we call social determinism, but which the authors call social constructionism; they also rail against what they call technological somnambulism (after Langdon Winner).
....... more to come here ......