ISU - FOI IDS 189.11- Burke Chapter

Worlds Without End

When Einstein made the great conceptual leap that changed physics and with it the understanding of the fundamental nature of matter and the way the universe worked, he said that it came to him as if in a dream. He saw himself riding on a beam of light and concluded that if he were to do so, light would appear to be static. This concept was against all the laws of physics at the time, and it brought Einstein to the realisation that light was the one phenomenon whose speed was constant under all conditions and for all observers. This led him directly to the concept of relativity.

Einstein's dreamlike experience is echoed by other descriptions of the same kind of event. August Kekulé, the discoverer of the benzene ring which typifies the mechanism or structure by which groups of atoms join to form molecules that can be added to other molecules, wrote of gazing into the fire and seeing in the flames a ring of atoms like a serpent eating its own tail. Newton is supposed to have had his revelation when watching an apple fall to earth. Archimedes, so the story goes, leapt out of his bath crying 'Eureka!' as he realised the meaning of displacement. Gutenberg described the idea of the printing press as 'coming like a ray of light'. Wallace came to the theory of evolution in a 'delirium'. Each of them experienced the flash of insight that comes at the moment of discovery.

This act of mystical significance in which man uncovers yet another secret of nature is at the very heart of science. Through discovery man has broadened and deepened his control over the elements, explored the far reaches of the solar system, laid bare the forces holding together the building-blocks of existence. With each discovery the condition of the human race has changed in some way for the better, as new understanding has brought more enlightened modes of thought and action, and new techniques have enhanced the material quality of life.

Each step forward has been characterised by an addition or refinement to the body of knowledge which has changed the view of society regarding the universe as a whole. As the knowledge changed, so did the view.

With the arrival in northern Europe in the twelfth century of the Greek and Arab sciences and the logical system of thought contained in the writings of Aristotle, saved from loss in the Muslim texts, the mould in which life had been cast for at least seven hundred years broke. Before the texts arrived man's view of life and the universe was unquestioning, mystical, passive. Nature was transient, full of decay, ephemeral, not worth investigation. The truth lay not in the world around, which decomposed, but in the sky, where the stars which wheeled in eternal perfection were the divine plan written in light. If man looked for inspiration at all he looked backwards, to the past, to the work of giants. The new Arab knowledge changed all this.

Whereas with St Augustine man had said, 'Credo ut intelligam' (I come to understanding only through belief), he now began to say, 'Intelligo ut credam' (belief can come only through understanding). New skills in the logical analysis of legal texts led to a rational, scholastic system of thought which subjected nature to examination.

The new, logical approach encouraged empiricism. Man's individual experience of the world was now considered valuable. As the questioning grew, stimulated by the flood of information arriving from the Arab world, knowledge became institutionalized with the establishment of the European universities, where students were taught to think investigatively. The first tentative steps towards science were taken by Theodoric of Freiburg and Roger Bacon. Man had become a rational thinker, confident and above all forward-looking.

A century later another Arab was to change Europe again when his theories on optics were rediscovered. Alhazen's views, disseminated in Florence by Toscanelli, brought perspective geometry to the humanist thinkers of the early Renaissance, thus providing them with the means of escape from Aristotle. Aristotle's universe of concentric crystal spheres, hierarchical in nature, was filled with objects each of which was unique, created individually by God. The only significant characteristic of each object was its 'essence', the unique nature of the object that provided its particular traits. All objects existed only in relation to the centre of the universe, so their representation in art had no perspective. Each was assigned a certain theological importance and was depicted accordingly. Saints were big; people were small. Each object existed only as a part of God's mysterious plan, and as such could not be measured in any comparative, realistic way. This was especially true of the stars.

Perspective geometry provided the tool with which to measure anything, at any distance. It made possible the creation of physical forms of expression, including architecture, according to proportionate scales. Balance and harmony became the standard of excellence. As the new system of measurement spread, it was applied to the planet. Unknown areas of the earth could be scaled and more easily examined. The universe lay open to exploration: the New World was discovered. In the new philosophy, nature could be described in terms of measurement which related all things to a common standard.

In the middle of the fifteenth century a German goldsmith called Gutenberg superseded memory with the printing press. In the earlier, oral world which the press helped to destroy, daily life had been intensely parochial. Knowledge and awareness of the continuity of social institutions had rested almost solely on the ability of the old to recall past events and customs. Elders were the source of authority. The need for extensive use of memory made poetry the carrier of most information, for merchants as much as for university students. In this world all experience was personal: horizons were small, the community was inward-looking. What existed in the outside world was a matter of hearsay.

Printing brought a new kind of isolation, as the communal experience diminished. But the technology also brought greater contact with the world outside. The rate of change accelerated. With printing came the opportunity to exchange information without the need for physical encounter. Above all, indexing permitted cross-reference, a prime source of change. The 'fact' was born, and with it came specialisation and the beginning of a vicarious form of experience common to us all today.

The Copernican revolution brought a fundamental change in the attitude to nature. The Aristotelian cosmos it supplanted had consisted of a series of concentric crystal spheres, each carrying a planet, while the outermost carried the fixed stars. Observation had shown that the heavenly bodies appeared to circle the earth unceasingly and unchangingly, so Aristotle made them perfect and incorruptible, in contrast to earth, where things decayed and died. Natural terrestrial motion was rectilinear, because objects fell straight to earth. In the sky all motion was circular.

The two forms of existence, earthly and celestial, were incommensurable. Everything that happened in the cosmos was initiated by the Prime Mover, God, whose direct intervention was necessary to maintain the system. At the centre of it all was the earth and man, fashioned by God in His own image.

Copernicus shattered this view of the cosmos. He placed the earth in solar orbit and opened the way to an infinite universe. Man was no longer the centre of all. The cosmic hierarchy that had given validity to the social structure was gone. Nature was open to examination and was discovered to operate according to mathematical laws. Planets and apples obeyed the same force of gravity; Newton wrote equations that could be used to predict behaviour. Modern science was born, and with it the confident individualism of the modern world. In a clockwork universe we now held the key.

In the eighteenth century the world found a new form of energy which gave us the ability to change the physical shape of the environment and released us from reliance on the weather. Until then, all life had been dependent on agricultural output. Land was the fundamental means of exchange and source of power. Society was divided into small agricultural or fishing communities in which the relationship between worker and master was patriarchal. Workers owed labour to their master, who was in turn responsible for their welfare. People consumed what they produced. Most communities were self-sufficient, while political power lay in the hands of those who owned the most land. Populations rose and fell according to the effect of weather on crops, and life took the form of cycles of feast and growth alternating with starvation and high death rates.

This self-balancing structure was radically changed by the introduction of steam power. Society became predominantly urban. Relationships were defined in terms of cash. The emergence of industrial capitalism brought the first forms of class struggle as the new means of production generated material wealth and concentrated it in the hands of the entrepreneurial few. Consumerism was born of mass-production, as were the major ideological and political divisions of the modern world.

Before the early years of the nineteenth century the nature of disease was unknown, except as a list of symptoms each of which was the manifestation of the single 'disease' that attacked each body separately and produced individual effects. In this situation the doctor treated the patient as the patient dictated. Each practitioner used idiosyncratic remedies, all of which were claimed to be the panacea for all forms of the disease.

The rise of surgeons to positions of responsibility during the wars of the French Revolution and the use of recently developed probability theory combined to produce a new concept of disease as a localised phenomenon. Statistical surveys established the nature and course of the disease and the efficacy of treatment. In the new medical practice the bedside manner gave way to hospital techniques and a consequent loss of involvement on the part of the patient in the diagnosis and treatment of his ailment.

As medical technology advanced it became unnecessary to consult the patient at all. Information on the nature of his illness was collected at first without his active participation, and later without his knowledge or understanding. Along with these changes came the great medical discoveries of the nineteenth century and dramatic improvements in personal and public health. By the end of the century the doctor had assumed his modern role of unquestioned and objective arbiter. Patients had become numbers.

The biblical version of history reigned until the middle of the nineteenth century. The six days of Creation and the garden of Eden were regarded as matters of historical fact. The age of the earth was established by biblical chronology at approximately six thousand years. The Bible was also the definitive text of geological history. The flood was an event which accounted for the discovery of extinct organisms. The purpose of natural history was only to elaborate God's Grand Design. Taxonomy, the listing and naming of all parts of nature, was the principal aim of this endeavour. The patterns which these lists revealed would form God's original plan, unchanged since Creation.

The discovery of more fossils as well as geological evidence of a hitherto unsuspected span of history led to the theory of evolution. The cosmic view became a materialist one. Man, it seemed, was made of the same stuff as the rest of nature. It was an accident of circumstance, rather than purposeful design, which ensured survival. The universe was in constant change. Progress and optimism became the new watchwords. Man, like the rest of nature, could be improved because society obeyed biological evolutionary laws. The new discipline of sociology would study and apply these laws.

From the Middle Ages to the end of the nineteenth century the cosmological view had changed only once, as the Aristotelian system gave way to Newton's clockwork universe. All objects were now seen to obey the law of gravity. Time and space were universal and absolute. All matter moved in straight lines, affected only by gravity or impact.

With the investigation of the electromagnetic phenomenon, Newton's world fell apart. The new force curved; it took time to propagate through space. The universe was a structure based on probability and statistics, an uncertain cosmos. Absolutes no longer existed. Quantum mechanics, relativity, electronics and nuclear physics emerged from the new view.

In the light of the above we would appear to have made progress. We have advanced from magic and ritual to reason and logic; from superstitious awe to instrumental confidence; from localised ignorance to generalised knowledge; from faith to science; from subsistence to comfort; from disease to health; from mysticism to materialism; from mechanistic determinism to optimistic uncertainty. We live in the best of all possible worlds, at this latest stage in the ascent of man. Each of us has more power at a fingertip than any Roman emperor. Of the scientists who gave us that power, more are alive today than in the whole of history. It seems that, barring mishaps and temporary setbacks, the way ahead lies inevitably onward and upward towards even further discovery and innovation, as we draw closer to the ultimate truths of the universe that science can reveal.

The generator of this accumulation of knowledge over the centuries, science, seems at first glance to be unique among mankind's activities. It is objective, making use of methods of investigation and proof that are impartial and exacting. Theories are constructed and then tested by experiment. If the results are repeatable and cannot be falsified in any way, they survive. If not, they are discarded. The rules are rigidly applied. The standards by which science judges its work are universal. There can be no special pleading in the search for the truth: the aim is simply to discover how nature works and to use that information to enhance our intellectual and physical lives. The logic that directs the search is rational and ineluctable at all times and in all circumstances. This quality of science transcends the differences which in other fields of endeavour make one period incommensurate with another, or one cultural expression untranslatable in another context. Science knows no contextual limitations. It merely seeks the truth.

But which truth? At different times in the past, reality was observed differently. Different societies coexisting in the modern world have different structures of reality. Within those structures, past and present, forms of behaviour reveal the cultural idiosyncrasy of a particular geographical or social environment. Eskimos have a large number of words for 'snow'. South American gauchos describe horse-hides in more subtle ways than can any other nationality. The personal space of an Arab, the closest distance he will permit between himself and a stranger, is much smaller than that of a Scandinavian.

Even at the individual level, perceptions of reality are unique and autonomous. Each one of us has his own mental structure of the world by which he may recognise new experiences. In a world today so full of new experiences, this ability is necessary for survival. But by definition, the structure also provides the user with hypotheses about events before they are experienced. The events then fit the hypothesis, or are rejected as being unrecognisable and without meaning. Without the structure, in other words, there can be no reality.

This is true at the basic neurophysiological level. Visual perception consists of energetic particles bouncing off an object or coming from a light source to strike the rods and cones in the retina of the eye. The impact releases a chemical which starts a wave of depolarisation along the neurons that form graduated networks behind the eye. The signal is routed along the optic nerve to the brain. At this point it consists merely of a complex series of changes in electrical potential.

A very large number of these signals arrive in the visual field of the brain, where the object is 'seen'. It is at this point that the object first takes on an identity for the brain. It is the brain which sees, not the eye. The pattern of signals activates neurons whose job it is to recognise each specific signal. The cognition or comprehending of the signal pattern as an object occurs because the pattern fits an already existing structure. Reality, in one sense, is in the brain before it is experienced, or else the signals would make no sense.

The brain imposes visual order on chaos by grouping sets of signals, rearranging them, or rejecting them. Reality is what the brain makes it. The same basic mechanism functions for the other senses. This imposition of the hypothesis on an experience is what causes optical illusions. It also modifies all forms of perception at all levels of complexity. To quote Wittgenstein once more: 'You see what you want to see.'

All observation of the external world is, therefore, theory-laden. The world would be chaos if this were not so. A good example of this is the case of the visual illusion formed of black and white blobs, illustrated here. [See book, pp. 308-9].

When the illustration is orderly but ambiguous, the preferred view, or Gestalt, will choose between the available alternatives. In the examples above, even though the Gestalt may be switched, enabling the observer to see the alternative version, only one of the alternatives can be seen at a time.

Perception is also coloured by value judgments. In the Necker cube illustration, the word which has unpleasant connotations will be seen on the rear wall of the cube, while the one with acceptable overtones will appear on the front.

Observation is similarly dependent upon context in the case of specialist data, where the illustration will have meaning only to the initiate. Terrain which, to a geographer, is recognisable on a map, will appear to the amateur only as a series of lines. The tracks left by particles fragmenting in a bubble chamber are meaningful only to a physicist.

In all cases of perception, from the most basic to the most sophisticated, the meaning of the experience is recognised by the observer according to a horizon of expectation within which the experience will be expected to fall. Anything which does not do so will be rejected as meaningless or irrelevant. If you believe that the universe is made of omelette, you design instruments to find traces of intergalactic egg. In such a structure, phenomena such as planets or black holes would be rejected. This is not as far-fetched as it may seem. The structure, or Gestalt, controls all perceptions and all actions. It is a complete version of what reality is supposed to be. It must be so if the individual or group is to function as a decision-making entity. Each must have a valid structure of reality by which to live. All that can accurately be said about a man who thinks he is a poached egg is that he is in the minority.

The structure therefore sets the values, bestows meaning, determines the morals, ethics, aims, limitations and purpose of life. It imposes on the external world the contemporary version of reality. The answer therefore to the question, "Which truth does science seek?" can only be, "The truth defined by the contemporary structure."

The structure represents a comprehensive view of the entire environment within which all human activity takes place. It thus directs the efforts of science in every detail. In all areas of research, from the cosmic to the sub-atomic, the structure indicates the best means of solving the puzzles which are themselves designated by the structure as being in need of solution. It provides a belief system, a guide and, above all, an explanation of everything in existence. It places the unknown in an area defined by expectation and therefore more accessible to exploration. It offers a set of routines and procedures for every possible eventuality during the course of investigation. Science progresses by means of these guidelines at all times, in every case, everywhere.

The first of the guidelines is the most general. It defines what the cosmos is and how it functions. All cultures in history have had their own cosmogonies. In pre-Greek times these were predominantly mythological in nature, dealing with the origins of the universe, usually in anthropomorphic terms, with gods and animals of supernatural power.

The Aristotelian cosmology held longest sway in Western culture, lasting over two thousand years. Aristotle based his system on common-sense observations. The stars were seen to circle the earth regularly and unchangingly every night. Five planets moved against this general wheeling movement of the stars, as did the moon. During the day the sun circled the earth in the same direction. Aristotle placed these celestial objects on a series of concentric spheres circling the earth.

These observations served as the basis for an overview of all existence. God had set the spheres in motion. Each object, like the planets, had its natural place. On earth this place was as low as the object could get. Everything in existence, therefore, had its preferred position in an immense, complex and unchanging hierarchy that ranged from inanimate rocks up through plants and animals to man, heavenly beings and finally God, the Prime Mover.

The cosmic order dictated that the universal hierarchy be mirrored in the social order in which every member of society had a designated place. The cosmology conditioned science in various ways. Astronomy was expected only to account for the phenomena, not to seek unnecessary explanations. It was for this reason that the Chinese, whose structure had no block concerning the possibility of change in the sky, made regular observations and developed a sophisticated astronomy centuries before the West.

The static nature of Aristotle's universe precluded change and transformation, so the science of dynamics was unnecessary. Since each object was unique in its 'essence' and desires, there could be no common forms of behaviour or natural laws which applied equally to all objects.

By the middle of the nineteenth century a different cosmology reigned. The Anglican Church was committed to the biblical record, the Mosaic version of the history of the earth involving six days of Creation, the garden of Eden and an extremely young planet. The Church strongly opposed the new geological speculation by James Hutton and Charles Lyell regarding the extreme age of the earth. This opposition took various forms including support for a professorial chair in geology at Oxford, initially given to the diluvialist William Buckland in an effort to promote views more in tune with ecclesiastical sentiment. It was ultimately this clerical interference which was to cause a split in the geological ranks. The breakaway group, keen to remove the study of the evolutionary implications of geology from the influence of the Church, established the new and independent scientific discipline of biology.

In our own day, the opposing 'big bang' and 'steady state' theories of cosmic origin influence scientific effort because they have generated sub-disciplines within physics and chemistry which are dedicated to finding supportive evidence for each view.

All cosmologies by their very form dictate the nature, purpose and, if any, the direction of movement of the universe. The epic work of Linnaeus in the middle of the eighteenth century to create a taxonomic structure in which all plants and animals would fit was spurred by a Newtonian desire to discover the Grand Design he believed was in the mind of God when He had started a clockwork universe at the time of Creation. By identifying and naming all forms of plants and animals in this unchanging and harmonious universe, thus laying bare the totality of God's work, Linnaeus considered that he had completed the work of science.

By the middle of the nineteenth century the view had changed. According to the cosmic theory implicit in Darwin's Origin of Species, the universe was dynamic and evolutionary, and contained organisms capable of change from one form to another. Some Darwinists, such as the German Ernst Haeckel, were of the opinion that organic forms of life had evolved from inorganic material early in the earth's history.

In the third quarter of the century the eminent biologist Thomas Huxley found what he took to be a fossil in a mud sample taken from the sea-bed ten years earlier by the crew of the Challenger during the first round-the-world oceanographic survey. Obedient to Haeckel's theory that at some time in the past there had been a life form which was half-organic, half-inorganic, Huxley identified the fossil as the missing organism and named it Bathybius haeckelii. Some years later, Bathybius was revealed to be an artefact created by the effect of the preservative fluid on the mud in the sample. In the interim, however, it had served to confirm a key element in a wide-ranging cosmic theory.

A major scientific step was taken in the field of agricultural chemistry in the nineteenth century, due also to the view that natural processes were dynamic and directional. In 1840 Baron Justus von Liebig published the results of his work on plant and soil chemistry, which he based on a balance-sheet theory of nature. Ideas about agriculture had supposed the ultimate source of plant nutrition to be humus, of which the soil was presumed to be an inexhaustible source. Technical methods were developed to exploit this for maximum profit, on a field-by-field basis.

Liebig believed the contemporary economic theories of Adam Smith and others that the market was a natural regulator and that supply and demand were the balancing influences that kept an economy healthy. At the end of the eighteenth century, however, the balance of society was in danger on account of the exploding population generated by the Industrial Revolution. The increase in population threatened to overwhelm traditional methods of food production. Malthus had drawn attention to the disparity between the rates of increase of crop yield and the rate of expansion of the population:

Population, when unchecked, goes on doubling itself every twenty-five years, or increases in geometrical ratio. . . whereas the means of subsistence, under the circumstances the most favourable to industry, could not possibly be made to increase faster than in an arithmetic ratio.
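The two rates Malthus contrasts can be sketched numerically. The starting figures and the fixed increment below are arbitrary assumptions chosen only to show the shape of the divergence, not data from the text.

```python
# Illustrative sketch of Malthus's contrast (all values are assumptions):
# population doubles every twenty-five years (geometrical ratio), while
# subsistence grows by a fixed amount per period (arithmetical ratio).
population = 10    # say, millions of people
subsistence = 10   # food for 10 million, in the same units

for generation in range(6):  # six 25-year periods
    years = generation * 25
    print(f"year {years:3}: population {population:4}, subsistence {subsistence:3}")
    population *= 2          # "doubling itself every twenty-five years"
    subsistence += 10        # increasing only "in an arithmetic ratio"
```

After little more than a century the geometric series has outrun the arithmetic one roughly tenfold, which is precisely the disparity Malthus was pointing to.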

The balance-sheet model in which Liebig believed led him to approach the problem of agricultural yield expecting to find a general mechanism of cyclic balance in the supply and demand of plants which was being disturbed by high-yield, intensive farming methods. He looked for an overall mechanism. He burned straw, hay and fruit, and discovered by analysing the ash that any area of land supporting any kind of vegetation produced the same quantity of carbon in its plants, irrespective of the type of plant or soil. The plants, he thought, must be taking carbon from the air, not the soil. Their hydrogen obviously came from rain-water. The copious presence of ammonia in the sap of every plant told him that this must be the source of nitrogen for the plant and that it too came from rain-water.

He found that each plant required a specific amount of alkaline material to neutralise its own acids, and that it would grow most where those alkalis were plentiful and least where they were scarce. Supplementing these minerals should therefore save soil from exhaustion and increase yields without damaging the natural cycle. Artificial fertiliser was the outcome of Liebig's adaptation of economic theories to nature.

In the general structure of nature, whether the cosmos is seen to be static or the subject of linear or cyclic change, boundaries are indicated within which investigation of nature may be conducted. Research beyond those boundaries will be defined as useless, unnecessary, or counter-productive.

In the 1860s the new non-Euclidean geometry of Bernhard Riemann and Hermann von Helmholtz appeared in Britain, where it met strong opposition because of the implications it held for the accepted view at the time of how reality could be described. The new geometry questioned the validity of Euclidean geometry as a true and accurate means of describing the universe.

Non-Euclidean geometry described what the universe would look like, for instance, to two-dimensional beings living on the surface of a sphere. In their curved space the internal angles of a triangle would add up to more than 180 degrees. Indeed, the sum would vary according to the curvature of the sphere.
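The claim can be made concrete with a standard result: on a sphere of radius R, a triangle's angle sum exceeds 180 degrees by its area divided by R squared (the 'spherical excess'). The octant triangle used below, with one vertex at the pole and two on the equator, is a textbook example and an assumption of this sketch, not one taken from the text.

```python
import math

def angle_sum_degrees(area, radius):
    """Angle sum of a spherical triangle of the given area, in degrees."""
    excess = area / radius**2            # spherical excess, in radians
    return 180 + math.degrees(excess)

R = 1.0
octant_area = 4 * math.pi * R**2 / 8     # one eighth of the sphere's surface
print(angle_sum_degrees(octant_area, R))  # 270.0 — three right angles
```

On a more gently curved (larger) sphere the same triangle would have a smaller excess, which is why the sum varies with the curvature.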

This concept struck at the classical Newtonian view of a three-dimensional cosmos in which one of the absolutes was that it conformed to Euclidean geometry. To question this view, as non-Euclidean geometry did, was to question the received model of God's creation. Disbelief in this would undermine Christian society, and, worse, limit the ability of science to represent the real world, as Euclidean geometry was supposed, uniquely, to do.

A similar limitation on research occurred as a result of James Maxwell's demonstration in 1873 that light waves were not the only form of electromagnetic radiation and that there should be others. Heinrich Hertz discovered the existence of radio waves in 1887, and investigation continued, culminating in the work of David Edward Hughes and Marconi at the end of the century, when the first radio transmissions crossed the Atlantic. Throughout this time scientists continued to try to identify radio emissions from the sun, without success.

However, in 1902 Max Planck's theory of radiation appeared to show that all extraterrestrial radio emissions would, in principle, be so weak as to be undetectable. This view was held so strongly that no further investigations were conducted for thirty years. Then, in 1930, the Bell Telephone Company commissioned one of their employees, Karl Jansky, to find out why the new car radios suffered from static. Jansky set up radio antennae, and heard a steady hiss coming from the direction of the Milky Way. Radio astronomy had thus been born thirty years late, due to the power of the Planck structure of radiation behaviour.

Limitations were also imposed by the political structure that reigned after the French Revolution. Mathematics and physics were deemed to be too closely allied to the elitist ideologies of the pre-revolutionary Enlightenment, and were banned. Chemistry, on the other hand, dealing as it did with such things as bleaching agents, gunpowder and general technical processes, was felt to be closer to the life of the common man, and as such received encouragement and financial assistance.

Galileo's view had hit similar obstacles in 1612, two years after he had gained instant fame as a result of the publication of his telescopic observations. At the time Galileo was engaged in an argument about why objects floated on water. This apparently innocuous matter was to raise a tide of opposition to his views that would eventually engulf him. It began with an argument between Galileo and two professors at Pisa about the properties of cold. In particular, the argument centred on the behaviour of ice, which floated.

Galileo's opponents, quoting Aristotle, said that ice floated because of its broad, flat shape, which was unable to overcome the resistance of the water and sink to the bottom. Lodovico delle Colombe produced supporting evidence in the form of flat slivers and small balls of ebony, both of the same weight: the slivers, he claimed, floated, while the balls sank. Galileo, in a reply published in 1612 and replete with experimental observation, argued that what mattered was the heaviness of the object. If it were heavier than the water it displaced, it would sink; if not, it would float. The shape was irrelevant. The ebony slivers, Galileo argued, had floated only because they had not been completely wetted.

This apparently harmless piece of wrangling, written in Italian rather than Latin, almost immediately ran to four editions and caused Galileo immense harm. The fact was that he had struck at the roots of Aristotelianism. If Aristotle were wrong in one aspect, the entire fabric of his system of nature must lie open to question. Early seventeenth-century Catholic society rested on Aristotelian foundations. In questioning belief in the system and obedience to its concept of a hierarchy subject to the Church, Galileo was attacking the very fabric of society. The Discourse on Floating Bodies was politically and theologically revolutionary in its implications, and as such was to be suppressed.

It is these structure-generated limitations on the freedom of action in science which set the boundaries beyond which it is unsafe to go. Within those boundaries the structure also dictates what research is to be considered socially or philosophically desirable.

In the England of the 1660s there was considerable fear of a return to the chaos and bloodshed of the recent Civil War, the first domestic revolution the country had ever experienced, which had also involved the execution of a king who had claimed, like all others before him, to rule by Divine Right. Surrounded by a predominantly Catholic and antagonistic Europe, England's prosperity and strength had at all costs to be built up. In 1660 the Royal Society was founded. Its mandate was to encourage experimental science to produce inventions and techniques which would aid the development and extension of trade and industry, make England richer, and provide jobs for the discontented poor.

One of the founders of the Royal Society was Robert Boyle, a confirmed experimentalist and leader of the empirical school of natural science. Boyle rejected the Aristotelian and scholastic view dominant on the Continent which held that logical argument was sufficient proof of a case. For Boyle any theory that could not be experimentally observed and tested was not proven.

One of the major topics of scientific argument at the time was the vacuum. Aristotelians denied its existence because, they said, "Nature abhors a vacuum." They believed this was why water could be sucked up a tube, whereas Boyle claimed that it was due to the effects of air pressure on the surface of the liquid at the bottom pushing the water into the vacuum created by suction. However, Boyle's stance in favour of the vacuum was taken for other than scientific reasons.

If the universe were filled with matter, as Aristotle had said, there would be no room for a vacuum. If this were the case there could be nowhere for immaterial forms such as angels and human souls to reside. Without souls and angels, and the entire hierarchy of a God-ordered universe, all authority could be questioned, including that of the king, God's Vicar on Earth. This view would open the way to the kind of sectarian fanaticism that had almost destroyed the country during the Civil War and the Commonwealth.

The increasingly powerful school of pagan naturalism, which held that the water filled the tube because it, like all other natural forms, 'knew' that nature abhorred a vacuum, gave all nature a conscious purpose equal with that of the human race and thus denied God's special relationship with man. In such a world there could be no authority, no stability, no hierarchy and, above all, no monarch. Finally, if experimental proof of the existence of the vacuum were to be ignored, the entire value of empirical science to industry and therefore to England's prosperity and safety would be placed in jeopardy. Establishment of the existence of a vacuum was a social and political necessity.

Scientists themselves can determine the value of their own work within the contemporary structure. In the 1930s a small group of physicists, including Max Delbruck and Leo Szilard, decided that the discipline of physics was unlikely to provide interesting problems worth solving in the foreseeable future. The field of biology seemed to present more opportunity, being relatively untouched by the physical methods which they were used to employing. There followed a migration by a large group of physicists to the discipline of biology. Their arrival created a new kind of biology which incorporated techniques and ideas from physics. The new discipline became known as molecular biology. Research in biology was thereafter seen to be necessary in order to keep physicists involved in interesting work, rather than springing from a stimulus internal to the science itself.

Strategic and political considerations in the late nineteenth century were brought to bear on medical research. At the time it had become urgently necessary to establish a policy regarding the control of malaria. The British, working in far-flung malaria-ridden posts of the Empire, urgently needed the means to prevent or cure the disease if imperial administration were to continue to function.

The problem was approached in two ways. Ronald Ross advocated preventive methods concerned with public health. He visited Equatorial Africa, and while in Sierra Leone ordered garbage to be removed, ponds and all stagnant water to be drained, water containers to be covered, and pest-breeding areas to be sprayed with kerosene or stripped of undergrowth. This sanitarian approach would, he argued, make malarial zones safe for administrators and local population alike.

A different view was held by Patrick Manson, who stressed the need for scientific research. He believed that increased knowledge about the origins and course of tropical diseases would in the end prove more effective than control.

Several committees were formed to decide the matter. The scientific approach won, not through any theoretical or experimental evidence one way or the other, but because of its social implications. The setting up of a School of Tropical Medicine would enhance the scientific reputation of overseas doctors and thereby increase the systematic control and exploitation of the colonies. It was also less expensive than the public health approach, and inasmuch as it reflected a progressive view of the problem it was in tune with the generally optimistic imperial ethic of the time. The scientific method of approach would also enhance the position of those working in the discipline, endowing tropical medicine with some of the kudos of the older sciences. For these social and political reasons, rather than through a desire to ameliorate tropical conditions, the committees, themselves formed predominantly of professional scientists, decided against Ross, whose view did not fit the accepted structure.

Sometimes an entirely new area of specialisation may be generated by socially desirable goals. At the beginning of the nineteenth century the city of Edinburgh was feeling the first effects of industrialisation. It had avoided them for longer than most major British cities and regarded itself as the non-industrial intellectual capital of the north. As the numbers of the working class and petite bourgeoisie grew, the aristocrats and professionals moved out to the New Town, widening the divide between the social classes.

The newly affluent merchant class, denied entrance to the faculties, clubs and institutions of the city, felt their isolation from positions of power very strongly. By 1817 they had their own newspaper, The Scotsman. In defiance of the Scottish intellectual establishment, which they saw as exclusive, scholastic and interested in knowledge for its own sake, these social rejects supported the entirely new 'science' of phrenology.

The study of the skull had originated in Germany with two physicians who had trained in Vienna, Franz Gall and Johann Spurzheim. They argued that the brain was the organ of the mind, with different faculties located in different areas of its surface, and that an excess or deficiency in any one of these faculties could be detected by bumps or hollows in the skull immediately over these particular areas. It was, therefore, possible to determine a person's level of endowment in all the faculties by examining his head.

The city rapidly became a centre for the practice of phrenology. Thirty-three separate mental faculties in the brain had been identified by George Combe, the Edinburgh lawyer who was the leading proponent of the new science. Combe's 'faculties' included amativeness (propensity to love), cleverness, educability, wisdom, sense of purpose, forethought, vanity, tendency to steal, instinct for murder, memory, aggression, numeracy, poetry, and so on.

The Scottish phrenologists found a large and receptive audience among the lower-middle and working classes. At the height of its popularity, phrenology attracted hundreds to the lectures held in the Cowgate Chapel. In 1820 the Phrenological Institute was established. It numbered only one academic from the university. The phrenologists were regarded as dangerous social reformers: they agitated for better treatment of the insane, for education of the working class, criminal law reform, more enlightened colonial policy, improved working conditions in factories, and of course for a change in their own social status.

All these arguments were based on the belief that phrenology offered the opportunity to study the mechanisms of character and intelligence. This would provide the information necessary for any progressive social programme. The ameliorative effect of reforms in education, conditions at work, sanitation and the general environment could be observed directly, by scientific methods, on the skull of the 'improved' citizen.

By the end of the nineteenth century interest in phrenology had waned, but not before it had spurred brain research well beyond contemporary necessity. There was no particular need for medicine, and surgery in particular, to examine the brain at this time, and little practical use to which the resulting knowledge could have been put. But the phrenologists' claims focused attention on brain function and structure, which over the following forty years led to the major early neurophysiological discoveries by Ramon y Cajal and others.

Before any such research can be carried out, it is necessary first to establish the existence of the phenomena to be investigated. Evidence must be gathered. But this evidence is accepted or rejected according to the value placed on it by the structure.

At the beginning of this century the accepted view of natural history was Darwinian. The only flaw in Darwin's theory, however, was that it lacked evidence of an intermediary species between ape and man. If the 'missing link' could be found, the theory would be complete.

In February 1912 a country solicitor and amateur palaeontologist called Charles Dawson wrote to the keeper of the Department of Geology at the British Museum, Arthur Woodward, to say that he had made an extraordinary fossil find in a gravel quarry in Sussex. It was, said Dawson, an unusually thick skull which might prove to be the oldest human remains yet found, since it had been discovered in a layer dating from the Pleistocene period. Further digging at the quarry revealed several fossilised animal teeth whose type -- mastodon, hippopotamus, and so on -- confirmed the dating of the skull. Close by the teeth were found flint tools worked by human hand.

Later in the same year a jawbone was found in the quarry. The discovery rocked the palaeontological establishment because the jaw appeared to be that of an ape, even though it had two molars which were worn down in a pattern that could only be produced by a freely moving jaw. Humans have freely moving jaws, whereas apes do not. The position of the jawbone in the quarry suggested strongly that it had come from the skull. Since the Darwinian model presumed that evolution would first enhance the skull and later the jaw, this was evidently the 'missing link' everybody was looking for. The excitement was intense.

All that prevented complete identification was a missing canine tooth. If one could be found also showing evidence of human wear and if it did not project above the level of the other teeth, then, however apelike the jawbone, the entire skull would be human. Models were made of what the canine should look like. On 30 August 1913, near the site of the jawbone, a tooth was found. It fitted the predictions exactly. There was jubilation. Here, indeed, was the link between ape and man foretold by Darwin. When a second, similar skull and jaw were found in 1915, two miles from the original site, the last remnants of doubt dissolved.

From the mid 1920s on, fossil men were discovered in Africa, Java and China. All of them, however, revealed developments opposite to those shown by the Sussex discovery. Their brain cavity was still apelike, whereas their features had evolved. By 1944 it was concluded that there had been two distinct evolutionary lines leading to man; but the only examples from one of the lines were the Sussex skulls. Confusion reigned.

Then newly developed fluorine tests were made on the Sussex bones and it was discovered that they were a fraud. The bones were probably medieval in origin. What is more, the skull and jaw had been stained with iron to give the appearance of great age. The molars had been filed down to simulate human wear, and the canine tooth had also been filed and painted brown. The animal bones found near the skull were discovered to be from various parts of the world and from animals which would never have congregated in one spot at any time in history. The skull of Piltdown Man was a hoax.

The fact that at the time of the discoveries techniques were available to identify iron-staining, filing and above all the presence of oil paint, shows how strongly the structure of an evolutionary line which was expected to contain a 'missing link' influenced the acceptance of fraudulent evidence. Even the presence in the gravel pit of a fossilised elephant bone carved in the shape of a cricket bat failed to alert the experts! Their expectations, structured by the contemporary palaeontological model, prevented any objective assessment of the evidence.

Sometimes, too, evidence is deliberately rejected because its source or style does not conform to accepted standards. In 1769 three 'thunderstones' were submitted by three different sources for examination by the French Academy of Sciences. They were reputed to have fallen from the sky. Chemical analysis revealed them to be surprisingly similar, but the account of their origin was rejected. This was because of the prevailing view of meteors, whose composition was disputed by scientists although their existence was not. Meteors were seen by scientists. Falling stones were seen by peasants, or at best local clerics. In the days before the French Revolution such sources of evidence were to be ignored.

Scientific proof during the Enlightenment had been uniquely established by observation and experiment on the part of a scientific community which had chosen to isolate itself from the rest of society. Disorganised and for the most part amateur, it was jealous of its dilettante status. The acceptance of evidence from members of the lower classes would endanger that status.

Meteorites, or falling stones, were also part of folklore and as such were to be discredited, even when they occurred in conjunction with the appearance of a well-observed meteor. When a large meteorite fall occurred near La Grange de la Juliac, in southern France, it was witnessed by over three hundred people including the mayor and the town's lawyer, but because no scientists were present their report was dismissed as fiction. The usual objection was that the observers had suffered from an optical illusion.

By 1801, however, enough 'stones' were being chemically examined for the French scientific community to feel that the matter was in reliable professional hands. Then, and only then, were new reports of meteorite falls taken seriously. In 1803 a giant fall at L'Aigle, near Paris, was examined by Jean-Baptiste Biot, of the Institut de France, and declared to be celestial in origin. The revolution had enhanced the position of the common man in France, and with scientists controlling the means of finding and examining the evidence, meteorites could now be accepted as a genuine phenomenon. Final proof of the meteorites' origin came when scientific analysis of the stones revealed a composition of nickel and iron not found anywhere on earth.

By the time the structure had changed sufficiently for meteorites to be accepted for what they were, the same structure also dictated the use of scientific analysis using instruments in specific ways in order to establish whether the stones were of earthly or unearthly composition. The researchers were looking for the presence or absence of predicted data.

When evidence has been accepted or rejected and the existence of a phenomenon established, the structure again dictates the next step. It provides the means for examining the phenomenon and a guide to expected data. Any data presented in this way will be acceptable, since the instruments used will have been designed to find only those data which, according to the structure, are needed for confirmation. Any data considered to be extraneous to the event will be disregarded.

In England in the late nineteenth century, for instance, a time when it was thought that electromagnetic radiation exerted pressure, William Crookes constructed a radiometer to measure the pressure. He pivoted a number of tiny vanes on a vertical axis in a glass bulb from which all the air had been extracted. The side of the vanes facing the radiation sources was painted black, because it was known that radiation affected dark surfaces more than bright ones. Sure enough, when the device was exposed to sunlight, the vanes spun away from the light. The more intense the light, the faster the spin. The radiation was evidently causing pressure on the vanes, as predicted. The instrument was so sensitive that it was used in turn to detect and measure stellar radiation.

However, some time later it was shown that the cause of the rotation was not radiation pressure at all. The vanes spun because the light heated the small amount of gas present in the near-perfect vacuum; along the edges of the vanes, unequal heating would result in gas creep towards the hotter parts of the vane, where the gas would condense, causing a rise in pressure. It was this inequality of gas pressure which caused the vanes to spin. In response to theoretical expectations the radiometer produced the right results for the wrong reasons.

Galileo used the same technique on a different occasion. In Venice, in 1609, he made his first telescopic observations and came to the heretical and dangerous conclusion that Copernicus had been right and that the earth did indeed circle the sun. Through the telescope, which he predicted would show him whether or not what Aristotle had said about the universe was true, he saw what he took to be evidence that the earth was not the centre of the solar system. The quality of image which the telescope provided was extremely poor, full of aberrations and distortions. Galileo drew pictures of what he saw: the satellites of Jupiter, the phases of Venus and the surface of the moon with its mountains and 'seas'. Of all these, the satellites best proved his case. The pictures he had drawn of the moon seen through the telescope looked inaccurate even to the naked eye.

When Galileo showed his critics what the telescope had revealed, he did so in a specific way. He first showed them how it magnified distant objects such as carved lettering on a building, or ships at sea. These were familiar sights and the telescope did indeed show them more clearly. Then Galileo pointed his telescope at the sky, where the detail it would show was entirely unfamiliar. However, was it not evident to all that the telescope magnified objects? It was a brilliant non sequitur and Galileo's opponents said so. But there was no terrestrial standard which could be used to judge what the telescope showed in the sky. Knowing this, Galileo took advantage of the prior acceptance of telescopic powers by those who had been prepared to look through the telescope and who were, therefore, already predisposed to his view. He concentrated their attention on the entirely incomparable satellites, and played down the image of the moon, where the inadequacy of the instrument was clearly visible and would have undermined his argument.

One example of how data were regarded as extraneous occurred in 1663, when Otto von Guericke became interested in the way some substances were attractive when rubbed. One such material was sulphur. Guericke moulded a sulphur ball and rubbed it as it spun. His intention was to further the investigations carried out earlier by William Gilbert, the English doctor whose work on magnets, published at the beginning of the century, had stimulated experimental studies of attractiveness. According to Gilbert the earth was a giant magnet holding everything to its surface by magnetic attraction. Johannes Kepler had shown that this form of attraction kept the planets in orbit round the sun. Magnetism, according to the physical structure of nature at the time, was the basic phenomenon holding everything together.

Using the sulphur ball as his instrument, Guericke measured its attractiveness in all environments and under all conditions. He noticed that besides exhibiting attraction while it was being rubbed, the sulphur ball also made a crackling noise and gave off a spark. But the instrument had been designed only to investigate magnetism. It could not, therefore, according to the experimental structure, provide significant data on any other phenomena outside Guericke's investigations, so he ignored the sparks, mentioning them only briefly at the end of a lengthy work. It was fortunate that he did so, because his observation spurred the later work which was to lead to the discovery of electricity.

By the time a decision has been made that data are to be found within a particular cosmic structure, ordered and moving in a particular way due to mechanisms which operate in modes defined by the structure itself, which also delineates approved forms of research into phenomena that can be properly identified from reputable evidence and examined with instruments designed specifically to examine them -- by this time it is the instruments themselves, or the expectations of the researchers, which will give meaning to the data. These data have no 'objective' meaning, in the sense of representing information about nature which has been discovered by a passive and disinterested process. Every stage of the investigation until this point has been shaped by the preceding stage. Thus, the instrument is constructed to find only one kind of data. The meaning of the data revealed by measurement or observation of the phenomenon is already inferred by everything which has gone before.

Callipers were produced in the latter half of the nineteenth century for the purpose of measuring human skulls with great accuracy. In the phrenological structure of human character, the bigger the bump on the skull, the more active the relevant part of the brain. The bigger the bulge in that part of the skull covering the frontal lobes of the cerebral cortex, the greater the genius. Intellect became defined in inches. The myth persists today that a large, domed head and a high forehead are signs of intelligence.

A similar value-loaded interpretation of data occurred in regard to the geological formations known as the 'parallel roads' of Glen Roy in Scotland, which were explained differently by Darwin and other geologists. According to the chosen geological structure, the strata could reveal that land had been elevated above the sea, or that the sea had receded exposing them, or that they had been formed by glacial lakes, or non-glacial lakes. Without any of these hypotheses with which to interpret the 'roads', they did not exist at all.

In late nineteenth-century astronomy, both the canals of Mars and the Red Spot on Jupiter were given equal prominence as phenomena which could be measured and whose existence confirmed an entire set of predictions about each planet and about the solar system as a whole. In the case of the Martian canals, the meaning was clear. They showed without doubt the presence at some time of an advanced civilisation on the red planet. Eventually it was found that the canals were artifacts produced by the limitations of the small-aperture telescope.

On some occasions the theory-laden prediction of what the data will show is so strong that absence of the expected results casts doubt not on the theoretical structure but on the observational technique itself. Albert Michelson and Edward Morley failed to find interference fringes produced by the returning split beam of light which they were using to measure the effect of the ether. This result flabbergasted them. Since the model for the behaviour of electromagnetic radiation demanded that there be an ether, through which the radiation could propagate, the possibility that the experiment had been successful in showing that the ether did not exist was out of the question. For Michelson and Morley and many other scientists of the time, the experiment had simply failed because they had used the wrong experimental technique.

When the theoretical structure does strongly indicate the need for evidence of a predicted phenomenon, the data will have meaning even if there are no data. In the last decade of the nineteenth century French attempts to decentralise culture and industry had poured millions of francs into provincial capitals such as Nancy, in eastern France. By 1896 Nancy's new laboratories, and especially the privately funded Electrotechnical Institute, stood as evidence of the scientific ascendancy of the city. Staff at the new university laboratories were under intense pressure to produce results that would justify the city's improved status and the years of investment by central government. Throughout the country there was also a feeling that French science was generally in decline and that a spectacular success was called for in order to boost its reputation.

Around 1900 there was a general surge of interest in psychological and spiritual phenomena. Telepathy and suggestion were researched. There appeared to be links between the action of the nerves and electricity. Nancy had a distinguished psychiatric unit at which Freud studied.

Elsewhere in Europe Wilhelm Rontgen discovered X-rays in 1895, and a year later Antoine Becquerel identified radioactivity. By 1900 alpha, beta and gamma rays had been found. More were expected. In 1903 a distinguished physicist called Rene Blondlot, who was a member of the French Academy of Sciences and a senior figure at Nancy University, announced his discovery of another ray. In honour of his city he called it the N-ray.

Blondlot had found the new form of radiation while looking at the behaviour of polarised X-rays. He had seen that the new rays, which penetrated aluminium, increased the brightness of an electric spark. The rays were also refracted by a prism and it was known that X-rays could not be refracted in this way. Since the scientific community expected new rays to be found, Blondlot's work immediately attracted dozens of young graduates keen to make their name in this new field.

Within three years three hundred papers had been written on the subject, and doctoral theses were being prepared. Not only did the rays traverse material opaque to light, but, extraordinarily, they were given off by the muscles of the human body. Moreover, N-rays heightened perception and they were produced by the human nervous system particularly during intellectual exertion. Was there a relationship between the mysterious N-rays and the psyche? In 1904 Blondlot was awarded the prestigious Prix Lecomte by the Academy of Sciences.

The crucial stage in the experiment proving the existence of N-rays was the brightening of the spark, which Blondlot always insisted had to be feeble. The trouble was that nobody outside the city of Nancy could see differences in the brightness. In September 1904 an American Professor of Physics, R. W. Wood, arrived in Nancy and Blondlot demonstrated the effect for him. Wood, too, was unable to see changes in the spark. He had previously noted that with the equipment then available the natural variation in any spark's brightness could be as much as 25 per cent. Spark brightness was obviously a dubious criterion of measurement. It was when Blondlot used a prism to refract and split the N-rays so as to show the spread of their wavelength that Wood decided to act. While his French hosts were busy in the dark, Wood removed the prism. The demonstrators continued to see the N-rays. Wood published his story the same month. No more N-rays were observed. The discipline collapsed as quickly as it had appeared.

There was never any suggestion that Blondlot was a charlatan. He and his colleagues were victims of the expectation that N-rays would be discovered and when they built instruments to see the rays, they saw them. For a short time this non-existent phenomenon resisted the most stringent tests and methods known to science.

At every level of its operation, from the cosmos to the laboratory bench, the structure controls observation and investigation. Each stage of research is carried out in response to a prediction based on a hypothesis about what the result will be. Failure to obtain that result is usually dismissed as an experimental failure. Every attempt is made to accommodate anomalies by a minor adjustment to the mechanism of the structure, as was the case with Ptolemy's epicycles or Descartes' vortices. In this way the structure remains essentially intact, as it must do if there is to be continuity and balance in the investigation of nature.

As has been seen, however, the structure contains within it systems which operate at every level to lead the investigator to the most detailed of analyses, and it is often at that level that anomalies occur which cannot be accommodated without a complete change in the contemporary structure.

One such event is described graphically by one of the researchers involved. At the end of 1966 Walter Pitman was looking at new profiles of the magnetic state of certain areas of the ocean floor, when, as he said, "It hit me like a hammer . . . in retrospect we were lucky to strike a place where there were no hindrances. . . . We didn't get profiles quite that perfect from any other place. There were no irregularities to distract or deceive us."

This was one of the rare moments in the history of knowledge when a structure was about to change. It was all the more exciting because the previous structure had been successfully resistant to alteration for over fifty years as it fended off various attempts to reorder the mechanism by which the continents were thought to have arrived at their present position.

In the last century the belief was that while the surface of the earth was subject to relatively constant vertical movement, after an initial cooling or contracting period the land-masses had remained in the positions in which they now were. So some old sea basins were now mountains, and mountain ranges were now on the sea-bed. In the 1860s, however, certain similarities between three-hundred-million-year-old fossils in the coal beds of Europe and those of North America caused Antonio Snider-Pellegrini to postulate that the continents had originally fitted together in one giant land-mass.

In 1915 a German meteorologist called Alfred Wegener went into more detail. The coastlines of Africa and North and South America looked as if they had once fitted together. There were striking geological similarities in areas that would, in this scenario, once have been contiguous: the Cape Mountains of South Africa and the Sierras of Buenos Aires; three major geological folds that continued from North America to Europe; the huge gneiss plateaux of Brazil and Africa. Many identical fossils dating from before the palaeozoic era (and virtually none from later) were recovered from South America and Africa.

For Wegener these and other questions could only be explained by the fact that the continents had once been joined and had since parted. He described the continents as being like giant icebergs of silicon and aluminium 'floating' on a sea of heavier basaltic material that formed the ocean floors. They had simply drifted apart.

The proposal was greeted with universal scorn. Wegener was not a geologist. There was no known mechanism which would propel the continents. The softer land-masses could not 'plough' through the harder ocean floor. The problems he had posed were pseudo-problems. The bio-geographical similarities of the fossils were evidently attributable to the fact that ancient land bridges, now sunk, had once connected the continents, or to seeds and spores being carried on the wind across the sea. In any case, the continents did not fit exactly. The questions Wegener had raised were thus answered satisfactorily within the terms of the contemporary structure, and for thirty years no further serious defence of his view was attempted.

By the 1950s developments in an apparently unrelated field caused a reappraisal. Newly invented magnetometers had shown that the earth had a magnetic field which was parallel to the axis of rotation. Moreover, studies showed that rocks retained their original magnetic orientation, and that over aeons, changes had occurred, either in the position of the magnetic poles or in the position of the rocks as indicated by their residual magnetism. If movement accounted for the present magnetic orientation, India must have migrated north, and England had also moved north while rotating clockwise.

Ten years later the science of oceanography had altered the accepted view of the sea-bed. An extensive system of mid-ocean ridges had been discovered throughout the world. Running through the ridges were rift valleys with associated narrow earthquake zones. The ridges were also shown to have unusually high heat flows along their crests. They were obviously related to some kind of continuous activity in the ocean floor.

In 1960 magnetic analysis of the areas parallel to some of the ridges showed alternating strips of high and low residual magnetic intensity. At the same time oceanographers were shocked to discover that the sediments on the sea-bed were extremely thin, especially at the ridges. Moreover, no sediments older than the relatively young cretaceous period were found in core samples. The sea-bed was both younger and thinner than expected.

In June 1963 it was established that the polarity of the earth's magnetic field had undergone periodic reversals throughout history. Two researchers, Vine and Matthews, proposed that if hot material was coming to the surface at the sea-bed ridges and spreading outwards, a process which would explain everything so far observed, then the flow, as it started and stopped, ought to have left strips on either side of the ridge, each laid down during a different period of the earth's alternating magnetic field. The strips should therefore show alternately polarised residual magnetism.

In 1966 several magnetic profiles were made of the Pacific-Antarctic ridge. They confirmed the new view. The ocean floor was spreading outward from the ridges, and it was this mechanism which had slowly pushed the continents apart. This was the only structure that could accommodate all the new data. What is more, it explained other anomalies. If the sea-bed were spreading it would encounter the continent edges and be forced back downwards. This would account for the earthquake zones along the Californian coastline and the mountain-building activity in the north-west of the United States.

The new structure presupposed that the earth's surface was composed of a number of tectonic plates, floating on a spherical, molten subsurface. The emergence of plate tectonics revolutionised the entire field of geophysics, and opened the door to a new set of structures and controls by which research is now conducted within the new version of how the earth functions. The old structure has been replaced.

Each structure must, by definition, be a complete version of what reality, or one aspect of it, is supposed to be. It is the contemporary truth. But as has been seen, structures are replaced. Aristotle gives way to Copernicus who gives way to Newton who is replaced by Einstein. Lavoisier and Priestley destroy the concept of pneumatic chemistry and the mystery 'quality', phlogiston, in order to replace it with a chemistry based on combustion. The use of perspective geometry challenges the theological rules for interaction with the intangible physical world by making it measurable. Nineteenth-century geology does away with the biblical record of history.

In most cases, each structure is generated by circumstances that are not directly related to the scientific field itself. Often the pressure for change will come from outside the discipline. Whatever the cause, however, it will be seen that the initial cosmological structure sets the overall pattern of reality within which other structures work. They, in turn, define the areas of research to be covered. These areas demand specialist forms of investigation that then discover anomalies which the overall structure cannot accommodate, and so change occurs. But the theories, discoveries, equations, laws, procedures, instruments, as well as the judgmental systems used to assess the results of investigation are all defined by their context, all part of the structure.

The composition of our present structure is based on previous structures. Ours is the latest in a series of structural changes which has less to do with what has been discovered of reality than how views of reality have altered from one structure to another. For scientific activity has been influenced by factors within the overall structure that may have had little to do with the supposedly autonomous activities of science.

During the First World War, scientists in Germany looked forward to a postwar period in which science and technology would grow and prosper with increasing prestige, financial support and high social status. They expected Germany to win the war. The sudden and catastrophic defeat by the Allies, as well as the imposition of what were regarded as humiliating terms of surrender, caused a fundamental change in German thinking which was profoundly to affect one aspect of science above all.

German belief in order and a rational world had been shaken by the defeat. Mistakes had been made, and the nation felt a strong desire for strengthened unity to counter the general feeling of despair. Survival and recovery seemed to need a philosophy that emphasised the organic, the emotional, the irrational wellsprings of human life rather than what was seen as the cause of defeat, the 'dead hand' of the old mechanistic view. Science had taken things apart, reduced them to fragments and imposed laws that were deterministic, rather than offering hope and unity. For Germany, the Newtonian view was judged responsible for failure. It was to be rejected.

Within a few years of the war, educational reforms brought a drastic reduction in the teaching of mathematics and physics in schools. The hostility to science was palpable. The Prussian Secretary of Education, Carl Becker, said: 'The basic evil is the overvaluing of the purely intellectual. . . We must acquire again reverence for the irrational.'

The continuing economic and political problems of the years between 1918 and 1930 brought on a sense of crisis. Feelings were intensified by the overwhelming success of Oswald Spengler's Decline of the West, almost universally read by German intellectuals. In the book Spengler defined the kind of knowledge Germany needed if it were to survive. Each culture, he held, was autonomous and separate, with its own forms of knowledge. There were no universal criteria by which to judge truth. A sense of 'destiny' was essential to the health of a nation. It would provide an irrational, inner sense of truth which should dispense with the destructive views of science, that looked to cause and effect to explain the universe. Exact science could never be objective. Causality was dangerous and destructive. It had failed Germany.

This universal hostility to the causal view permeated every aspect of German life. Those who supported it would lose financial support, grants, positions. The repudiation of 'causality' was unique to the German sphere. It preceded the emergence of a new 'non-causal' view in German science, which regarded the operation of the universe as a matter not of cause and effect but of chance and probability. With Erwin Schrödinger and Werner Heisenberg and the 'principle of uncertainty' at the heart of quantum physics came the end of experimental certainty. The observer altered the universe in the act of observing it. There was no causal reality to be observed.

Quantum physics might have developed elsewhere, later. The fact is that it developed in Weimar Germany in a social and intellectual environment that specifically encouraged a view of physics which did not naturally evolve out of the previous physics structure. Quantum theory is to a great extent the child of Germany's military defeat.

Even the birth of an entire scientific discipline can be due to factors that have little to do with the advance of knowledge. At one time medicine was in direct and unfavourable competition with astrology. As late as 1600 astrology was dominant. Both practices took the form of theoretical systems from which physical effects could be deduced. Both presented themselves as 'scientific'. Both attempted to explain the working of disease. Medicine relied almost exclusively on bleeding and purging, practices which killed more often than they cured. Few genuinely effective herbal remedies were used by doctors. Compared with this, astrology offered less risk and as good a chance of cure.

Astrology was not regulated by law: anyone could practise. Astrologers catered to the majority, a cross-section of the adult population, principally in country areas. They dealt with general problems defined by their clients, such as pregnancy, adultery, impotence, careers, and so on. In its use of herbal remedies astrology, unlike medicine, was remarkably efficacious. Astrologers were, however, ranked only as craftsmen.

Medicine on the other hand was elitist, predominantly urban, practised by a smaller, more coherent group which was attempting to develop professional forms of regulation and control with the aim of excluding non-members and of better controlling the market. Medicine fitted the contemporary view of the use of knowledge, for although it was largely incapable of curing people, it concentrated on classifying and labelling what was observed. Medicine also complied with the prevailing mode of thought in its concentration on the individual, whereas astrologers conducted few individual sessions.

As science became increasingly institutionalised during the Restoration, medicine more easily fitted its constraints than the anarchic, disorganised practice of astrology. Even then, however, neither discipline could claim to be more efficacious than the other. There were no breakthroughs in the ability to cure which would explain the triumph of medicine over astrology. But by 1700 astrology had lost its influence and support. The 'medical' view of disease had become the accepted model for reasons that had much to do with the ability of the physicians to organise, as well as the fact that their procedures fitted the overall model -- and virtually nothing to do with the scientific superiority of their methods over those of astrology.

The whole of Western experimental science had similarly unscientific beginnings. In medieval France, the arrival of advanced Aristotelian logic together with the entire corpus of Hellenistic scientific knowledge led thinkers like Pierre Abelard to approach matters of faith with a new eye. Logic would aid in strengthening faith by making belief comprehensible. Abelard and others used the new dialectic technique to consider contradictory elements in the Bible with a view to reconciling them in some form of synthesis. The logical end to this activity was apparent in the work of the late scholastics, such as Theodoric of Freiburg, Roger Bacon and Bishop Grosseteste, all of whom subjected nature to the same dialectical inquiry. In doing so, they effectively initiated modern scientific reasoning and removed what we would call science from the domain and control of theology. The investigation of nature in the West, then, had its origins in those very attempts to enhance a faith which itself claimed that the investigation of nature was meaningless and without value.

The basic mode of Western thought is itself born of a singular model, developed by the Greeks. Initially, Ionian Greeks found themselves in precarious circumstances that could only be survived through greater understanding and control of some aspects of their uncertain environment. In seeking to dominate their surroundings they took systems such as Egyptian pyramid-building techniques and first adapted them to the needs of navigation, later developing them to the level of complexity where geometry became the matrix, the pattern of all possible shapes, with which to examine and give order to the cosmos.

The rules which evolved for the use of this model were derived from the nature of geometry and the system of thought it imposed. Logic and reason sprang from the use of angles and lines. These tools became the basic instruments of Western thought: indeed, Aristotle's system of logic was referred to as the Organon (the tool). With it we were set on the rationalist road to the view that knowledge gained through the use of the model was the only knowledge worth having. Science began its fight to supplant myth and magic on the grounds that it provided more valid explanations of nature.

Yet myths and magic rituals and religious beliefs attempt the same task. Science produces a cosmogony as a general structure to explain the major questions of existence. So do the Edda and Gilgamesh epics, and the belief in Creation and the garden of Eden. Myths provide structures which give cause-and-effect reasons for the existence of phenomena. So does science. Rituals use secret languages known only to the initiates who have passed ritual tests and who follow the strictest rules of procedure which are essential if the magic is to work. Science operates in the same way. Myths confer stability and certainty because they explain why things happen or fail to happen, as does science. The aim of the myth is to explain existence, to provide a means of control over nature, and to give to us all comfort and a sense of place in the apparent chaos of the universe. This is precisely the aim of science.

Science, therefore for all the reasons above, is not what it appears to be. It is not objectively impartial, since every observation it makes of nature is impregnated with theory. Nature is so complex and so random that it can only be approached with a systematic tool that presupposes certain facts about it. Without such a pattern it would be impossible to find an answer to questions even as simple as 'What am I looking at?'

The structure is institutionalised and given permanence by the educational system. Agreement on the structure is efficient: it saves investigators from having to go back to first principles each time. The theory of the structure dictates what 'facts' shall be, and all values and assessments of results are internal to the structure. Since theory 'creates' facts, and facts prove the theory, the argument of science is circular. Commitment to the theory is essential to orderly progress. The unknown can only be examined by first being defined in terms of the structure.

The implications of this are that, since the structure of reality changes over time, science can only answer contemporary questions about a reality defined in contemporary terms and investigated with contemporary tools. Logic is shaped by the values of the time; for Abelard it is revealed truth, for Galileo experimental evidence. Language, too, changes: in the fifteenth century 'earth' means 'fixed, unmoving'; in the eighteenth century 'electric' implies 'liquid'; 'space' before Georg Riemann is two-dimensional. Method is similarly dependent upon context: dialectic argument is replaced by empirical observation which is replaced by statistical probability. Science learns from mistakes only because they are deemed as such by the new structure.

In spite of its claims, science offers no method or universal explanation of reality adequate for all time. The search for the truth, the 'discovery of nature's secrets', as Descartes put it, is an idiosyncratic search for temporary truth. One truth is replaced by another. The fact that over time science has provided a more complex picture of nature is not in itself final proof that we live by the best, most accurate model so far.

The knowledge acquired through the use of any structure is selective. There are no standards or beliefs guiding the search for knowledge which are not dependent on the structure. Scientific knowledge, in sum, is not necessarily the clearest representation of what reality is; it is the artifact of each structure and its tool. Discovery is invention. Knowledge is man-made.

If this is so, then all views at all times are equally valid. There is no metaphysical, super-ordinary, final, absolute reality. There is no special direction to events. The universe is what we say it is. When theories change, the universe changes. The truth is relative.

This relativist view is generally shunned. It is supposed by the Left to dilute commitment and by the Right to leave society defenceless. In fact it renders everybody equally responsible for the structure adopted by the group. If there is no privileged source of truth, all structures are equally worth assessment and equally worth toleration. Relativism neutralises the views of extremists of all kinds. It makes science accountable to the society from which its structure springs. It urges care in judgment through awareness of the contextual nature of the judgmental values themselves.

A relativist approach might well use the new electronic data systems to provide a structure unlike any which has gone before. If structural change occurs most often through the juxtaposition of so-called 'facts' in a novel way, then the systems might offer the opportunity to evaluate not the facts which are, at the present rate of change, obsolete by the time they come to public consciousness, but the relationships between facts: the constants in the way they interact to produce change. Knowledge would then properly include the study of the structure itself.

Such a system would permit a type of 'balanced anarchy' in which all interests could be represented in a continuous reappraisal of the social requirements for knowledge, and the value judgments to be applied in directing the search for that knowledge. The view that this would endanger the position of the expert by imposing on his work the judgment of the layman ignores the fact that science has always been the product of social needs, consciously expressed or not. Science may well be a vital part of human endeavour, but for it to retain the privilege which it has gained over the centuries of being in some measure unaccountable would be to render both science itself and society a disservice. It is time that knowledge became more accessible to those to whom it properly belongs.


Copyright © 1997: Illinois State University