CSE268D: Social Aspects of Technology and Science
Notes for the Fourth Meeting
4. Science and Technology (continued)
        (This discussion continues from the notes for the third meeting.)

As you might expect, things get much more complicated in the 20th century. We will be able to cover only a small part of this large territory, a little bit here, and then an even smaller bit later on.

The scientists of the classical era were all inspired by the certainty of mathematical results, and by their amazing applicability to the physical world. That Newton's physics applies to the planets, to ballistics (cannon balls, etc.), to steam trains, and more, seemed to confirm this. Even quantum mechanics supports this view, since no other physical theory has ever been shown to be accurate to so many decimal places. The physicist Eugene Wigner called this the unreasonable effectiveness of mathematics, wondering what it can mean about the world that mathematics can describe so many aspects of it so very well.

One result of this was that other subjects sought to achieve the same precision and deductive rigor as physics, by employing mathematical methods themselves. Another result was that some philosophers decided that the ideal kind of knowledge was scientific knowledge, and that other kinds of knowledge were inferior. In the 1920s, the so-called Vienna Circle (which included Rudolf Carnap, Moritz Schlick, Kurt Gödel (the great logician), and Hans Reichenbach, and which was influenced by the early work of Ludwig Wittgenstein) developed logical positivism, which held that the only meaningful sentences are those that are empirically verifiable, or else are logical truths (which are necessarily tautological). Therefore all metaphysics is nonsense, including religion; art and ethics are nonsense; all this should be discarded. Their so-called verifiability criterion came under attack from many quarters, especially in the later work of Wittgenstein, and it now has few adherents. (Pythagoras (circa 572-510 BC) maintained that the world actually is mathematical, giving evidence from music and geometry, but few have been willing to go so far in more recent times.)

However, the influence of logical positivism lives on in so-called analytic philosophy, which is the dominant school in the US and Britain. And for society as a whole, the view called modernism can be seen as coherent with logical positivism. Although the term is used in many different ways by different people, roughly speaking modernism calls for homogeneity of society, interchangeability of workers, mass consumerism in the media and in physical goods (which are called "commodities"), predictability, and rationality. Society is composed of autonomous rational consumers. Science is considered to support modernism. We are said to live in "modern times".

High school science textbooks (and even many college textbooks) give an outline of the scientific method that looks something like the following: (1) state a hypothesis H; (2) devise an experimental test for H; (3) carry out the experiment; (4) analyze the data and either confirm or disconfirm H. It is often said that this leads to an ever-growing body of sound empirical knowledge.

Thomas Kuhn introduced a different way to conceptualize scientific progress in his famous book The Structure of Scientific Revolutions, using the notions of paradigm, crisis, revolution, and paradigm shift. See the readings for 21 October for details. But please note that Kuhn's own version differs from that of some of his interpreters, and the fact that Kuhn introduced these ideas does not necessarily mean that his versions are better; on the contrary, just as Newton's physics is better than Galileo's, it seems more likely that (at least some of) the later interpretations are better. Once again, I really want to encourage you to think it through for yourself.

As an example, we can consider the Ptolemaic paradigm vs. the Copernican paradigm for the heavens, noting that the Ptolemaic paradigm is deeply entwined with the Aristotelian world view, which in turn is deeply entwined with Catholic theology. As Kuhn notes, the Copernican paradigm was not initially better than the Ptolemaic, which in fact gave more accurate results, until Kepler later realized that the orbits of the planets are ellipses rather than circles.

Contrary to the high school model (and most philosophy of science until recently), experiments are not purely objective determinations of fact, but rather are theory-laden, in the sense that they only make sense in the context of some particular theory. No one would think of trying to measure how long it takes objects to fall without first having some theoretical context, including (for example) notions of length and time; more generally, experiments only make sense within particular paradigms. Galileo's experiments made sense in terms of his opposition to the Aristotelian paradigm, and of his own fledgling, more quantitative theories. It should also be noted that theories are underdetermined with respect to data: any given set of experiments can always be explained in more than one way.

Experiments are also value-laden, because they are always embedded in a paradigm, and paradigms are value-laden: they involve a community with shared values, which determine what is and is not worth pursuing, what are good and bad results, what counts as data, what counts as theory, and even what counts as a problem.

An important point about successive paradigms follows from this: they are incomparable (Kuhn's term is "incommensurable"), in the sense that using the values of one paradigm to criticize another is going to give misleading results at best, and in general is just plain wrong. For example, Aristotle's physics was not really about "motion" in the same sense as Galileo's. Nevertheless, it is quite usual for each paradigm to give a rational reconstruction of its preceding paradigm, reevaluating the older achievements in terms of its own values. This makes for shorter, more coherent textbooks, but it also makes for bad history.

As a result of the incomparability of paradigms, it is not correct to say that a later paradigm is better than an earlier paradigm in absolute terms, although of course it is better in its own terms. Moreover, a current paradigm is very likely to be more coherent with the values of the current culture. It is this, plus the rational reconstruction of earlier paradigms, and the fact that progress does occur within a paradigm during normal science (that is, until a crisis appears), that supports the myth of steady progress. That is, standing within our own culture and within some current paradigm, we are genuinely entitled to say that things have progressed. But we should also realize that this is relative to a set of values that is not absolute. For example, the Nazis no doubt saw things getting better and better during the 1930s, relative to their own values.

This example emphasizes that we should not give in to the total moral relativism that is found in some quarters. For example, I am quite willing to say that taking life is bad, while still recognizing that this is not a value that everyone shares at every point in time, or interprets in the same way that I would.

Paradigms are naturally conservative, in that there is great reluctance to overturn fundamental values, paradigmatic experiments, etc.; this makes sense because these define the paradigm. Rather, things that don't fit are seen as puzzles to be worked on and solved, and if after long effort they still don't fit, then they are ostracized as anomalies. If some field is too willing to change its own fundamentals, then it will not be seen as scientific, but rather as disorganized and chaotic, and therefore as pre-paradigmatic.

Statistics plays a fundamental role in most sciences today, because it is well known that measurements are always somewhat inaccurate, and that repeated measurements are needed to improve accuracy. Furthermore, it's not enough to just compute an average and proclaim "Well, that looks close enough to me". Indeed, statistics has become a very sophisticated subject, and we will just skim a few main points here. First, a statistic is a function that computes a value summarizing some dataset. Statistics have their own probability distributions, and have a certain likelihood of giving misleading values. So an experimenter should ensure that the probability of drawing a false conclusion from a statistic is small.
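To make this concrete, here is a small Python sketch; the "true value" and noise level are invented purely for illustration. It shows that a statistic (here, the sample mean) has its own distribution, and that its spread is much smaller than the spread of individual measurements, which is why repetition helps:

    import random
    import statistics

    random.seed(0)
    TRUE_VALUE = 9.81   # pretend we are measuring g (made-up setup)
    NOISE_SD = 0.05     # assumed spread of a single measurement

    def one_experiment(n=30):
        """Take n noisy measurements and return their mean (a statistic)."""
        data = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n)]
        return statistics.mean(data)

    # The sample mean is itself a random quantity: repeat the whole
    # experiment many times and look at how the means spread out.
    means = [one_experiment() for _ in range(1000)]
    print("spread of one measurement:", NOISE_SD)
    print("spread of the sample mean:", round(statistics.stdev(means), 4))
    # The second number comes out near NOISE_SD / sqrt(30), i.e. about
    # 0.009: averaging 30 measurements shrinks the uncertainty.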

The standard approach is called hypothesis testing: there is a so-called null hypothesis, which says that the effect you are looking for is absent; you then ask how probable data like yours would be if the null hypothesis were true, and if that probability is small enough, you reject the null hypothesis in favor of the hypothesis you are testing. This corresponds to the dictum that you can never prove hypotheses in science, but only disprove them. Karl Popper is famous for his doctrine that only falsifiable assertions can be scientific (in part, this was an attack on the logical positivists). But science as it is practiced often takes a looser approach than all this discussion might suggest; e.g., consider cosmology, where controlled experiments are impossible (since we only have one universe).
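As a concrete (and deliberately simple) illustration, here is a Python sketch of a permutation test, one elementary form of hypothesis testing; the two groups of numbers are invented. Under the null hypothesis the treatment does nothing, so the group labels are arbitrary; we shuffle the labels many times and ask how often chance alone produces a difference as large as the one observed:

    import random
    import statistics

    random.seed(1)

    # Invented data: outcomes for a treated and an untreated group.
    treated = [5.1, 5.6, 4.9, 5.8, 5.3, 5.7]
    untreated = [4.8, 5.0, 4.6, 5.1, 4.7, 4.9]
    observed = statistics.mean(treated) - statistics.mean(untreated)

    # Shuffle the pooled data; any difference between the two halves
    # is then due to chance alone (the null hypothesis by construction).
    pooled = treated + untreated
    hits, TRIALS = 0, 10000
    for _ in range(TRIALS):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
        if diff >= observed:
            hits += 1

    print("observed difference:", round(observed, 3))
    print("p-value:", hits / TRIALS)
    # A small p-value means data like ours would be surprising if the
    # null hypothesis were true, so we reject it; note that this never
    # "proves" the treatment works, in line with Popper's point.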

Especially in the social sciences and medicine, statistical tests often determine the degree to which variables are correlated or covariant, that is, the degree to which they vary together. In many cases the objective is to determine whether or not one variable "causes" another. For example, cigarettes and cancer have been clearly shown to covary, but this does not in itself prove that cigarettes cause cancer; it might be that some other factor predisposes people to both cancer and cigarettes. In the 1950s, when attacks on the cigarette manufacturers began, exactly this argument was made in the courts, and at that time, it won! Now we know more about the underlying mechanisms, so the situation is very different, and the argument that tests do not prove causation cannot prevail. (There are also many examples where absolutely false causal inferences have been drawn from statistics, so the cigarette example should not be taken as paradigmatic! See the reading on statistics in medicine.)
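The "hidden third factor" argument is easy to demonstrate in a simulation. In the following Python sketch (an invented toy model, not real medical data), a hidden factor z drives both x and y, so x and y covary strongly even though neither causes the other:

    import random
    import statistics

    random.seed(2)

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # A hidden factor z influences both x and y; x and y have no
    # causal connection to each other at all.
    z = [random.gauss(0, 1) for _ in range(1000)]
    x = [zi + random.gauss(0, 0.5) for zi in z]
    y = [zi + random.gauss(0, 0.5) for zi in z]

    print("correlation(x, y) =", round(pearson(x, y), 2))
    # Prints roughly 0.8: strong covariation, produced entirely by the
    # confounder z, not by any causal link between x and y.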

In passing, I would like to mention that, shockingly to many people, probabilities enter into the very foundations of quantum mechanics; QM does not directly predict outcomes, but only probability distributions of outcomes; and furthermore, the Heisenberg uncertainty principle says that attempting to measure one variable (say position) more accurately will cause another variable (such as momentum) to become more uncertain than it was before. So absolute certainty is no longer something that modern science can promise.
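For reference, the position-momentum form of the uncertainty principle is standardly written (in LaTeX notation) as

    \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}

where \Delta x and \Delta p are the uncertainties (standard deviations) of position and momentum, and \hbar is Planck's constant divided by 2 pi. Since the product is bounded below, squeezing \Delta x down forces \Delta p up, and vice versa.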

All this gives us some idea of the history and philosophy of science, but still not much of an idea about how science and technology are related. One obvious point is that technology provides infrastructure for science. The huge experiments of modern physics are also huge engineering projects, e.g., consider the Stanford Linear Accelerator (SLAC), with its one mile of magnets accelerating particles in an incredibly straight line; whole teams worked on designing, testing and building just these very special magnets; another whole team used lasers to ensure the linear alignment of the beam.

And of course, it is also said that science underlies technology. For example, Newton's optics was used in working with the laser beams that aligned the magnets at SLAC. Finally, it is said that technology converts the abstract truths of science into tangible benefits for society. For example, what is learned about particle physics at SLAC will help us build better bombs to defend democracy, and eventually even better consumer products. But is the picture really so simple as this?

        (The class notes continue in the notes for the fifth meeting.)


Maintained by Joseph Goguen
Last modified 25 October 1998