11 Thomas Kuhn and the Dialogue Between Historians and Philosophers of Science

William Shea

DOI

10.34663/9783945561119-13

Citation

Shea, William (2016). Thomas Kuhn and the Dialogue Between Historians and Philosophers of Science. In: Shifting Paradigms: Thomas S. Kuhn and the History of Science. Berlin: Max-Planck-Gesellschaft zur Förderung der Wissenschaften.

I am, for example, acutely aware of the difficulties created by saying that when Aristotle and Galileo looked at swinging stones, the first saw constrained fall, the second a pendulum. The same difficulties are presented in an even more fundamental form by the opening sentences of this section: though the world does not change with a change of paradigm, the scientist afterwards works in a different world. Nevertheless, I am convinced that we must learn to make sense of statements that at least resemble these. (Kuhn 1970, 121)

Introduction: When Language Rebels

When Thomas Kuhn began to write The Structure of Scientific Revolutions, language model epistemology had just been smuggled out of departments of philosophy and linguistics and lobbed like grenades into unsuspecting departments of history of science. The traditional ties between language and reality external to language were threatened on the ground that language is the very structure of mental life and no meta-language can ever stand outside itself to observe reality external to itself. Thomas Kuhn thought that the problem of translation from one language to another is mirrored in the problem of interpreting one scientific worldview in terms of a different scientific worldview. The difficulty is compounded by the fact that, whereas members of one linguistic community generally recognize that other communities may have their own, equally valid languages, the members of a given scientific tradition usually consider that theirs alone is genuinely scientific. Consider, for instance, Sir Peter Medawar’s scathing review of Teilhard de Chardin’s Phenomenon of Man:

Some reviewers hereabouts have called it the Book of the Year—one, the Book of the Century. Yet the greater part of it, I shall show, is nonsense, tricked out with a variety of tedious metaphysical conceits, and its author can be excused of dishonesty only on the grounds that before deceiving others he has taken great pains to deceive himself. The Phenomenon of Man cannot be read without a feeling of suffocation, a gasping and flailing around for sense. There’s an argument in it, to be sure—a feeble argument, abominably expressed. (Medawar 1983, 242)

Medawar’s intemperate outburst was the result of his deep conviction that there is one scientific method, that he knew what it was, and that no one should dare to suggest there could be another! Such statements, usually couched in a blander tone, were common in the heyday of logical positivism, the philosophy of science that dominated the scene from the eve of the Second World War to the early 1960s. Logical positivists recognized different languages but, like Medawar, they believed there was a clear demarcation between cognitively significant and cognitively meaningless expressions. But locating the demarcation line soon proved difficult, and historical studies revealed that, when found, it had a way of shifting regardless of the pronouncements of logically minded philosophers or philosophically aspiring scientists.

Structure dealt a blow to facile generalizations about the nature of science and ushered in a period of soul-searching that shows no sign of abating fifty years later. The first section of this paper pays tribute to the memory of Thomas Kuhn and discusses his stimulating ideas about the quirks of language; the second section examines how historians and philosophers of science have tried to interact.

The Challenge of Translation

The quest for the scientific method that underpins all scientific research has proved as elusive as the search for a universal grammar that underlies all languages. Kuhn never disavowed his belief that a scientific revolution marks a break between two incommensurable points of view, but after the publication of his work he relentlessly sought a way of moving from the perspective of one group to that of a different one. Whereas a gestalt switch was the analogy invoked in Structure, Kuhn came to favor a comparison with the acquisition of a foreign language by a culturally and socially sensitive anthropologist. Neither incommensurability nor untranslatability need debar us from the understanding of scientific texts if we have the required intelligence, determination (and modesty) to live with them and to learn from them. What Kuhn would not grant is that understanding implies total comprehension. The constellation of theoretical concepts, practical insights and mathematical techniques that cluster around the key notion in a given body of scientific knowledge cannot be fully evoked by even the best translation into a different system.

The case is analogous to that of poetry. A good French translation of Intimations of Immortality can capture most of Wordsworth’s ideas. It may even recreate the atmosphere of the poem, but in order to do this it will have to forgo literal translation for literary creation. Kuhn stressed that we cannot translate an older scientific text simply by enriching the contemporary lexicon. A word alone, even a family of words, will not do. A scientific revolution is like a landslide: it moves whole layers of the lexicon to different places where they soon acquire their former deceptive naturalness and apparent permanence even though they no longer support the same superstructure. The delicate problem is the nature of the landslide: is it merely epistemological (i.e., a feature of our language about the world) or is it ontological (i.e., a feature of the structure of reality) as well? Kuhn sometimes wrote as though the structure of the world changes with each lexical shift, but he nonetheless maintained that we can use two different lexicons to describe the same phenomenal reality. It is difficult not to suspect that what Kuhn was groping for was an updated version of the Kantian noumenon/phenomenon dichotomy, although he framed his discussion in terms of access to manifold worlds of words.

Members of various linguistic communities organize the world in ways that need not be identical, and Kuhn even contemplated the abyss of saying that they need not overlap before withdrawing from an assertion that would preclude the possibility of the partial knowledge he wished to defend. Kuhn was no black-hole epistemologist. He did not believe that we are sealed in a linguistic house of mirrors even if he chose, at times, to use dazzling lights. What he conveyed to us is a vivid sense of the fact that the connection between the verbal signifier and the mental thing signified is more understandable and easier to describe than the connection of either with the world we revealingly qualify as “out there.” Kuhn has sometimes been branded as an anti-realist, but it seems to me that he avoided this pitfall with the same kind of instinctive little lurch of faith that takes us out of bed every morning confident that the floor will be where we left it.

Observations are never made in a cognitive void and even the most apparently factual report comes to us tinged with anticipations and shrouded in some conceptual garb. This theory-ladenness, however, is neither as permanent nor as objectionable as it may sound. To say that I cannot get something without an instrument is not the same thing as stating that it cannot be reached. There is a kind of purity that is just another word for nakedness! Kuhn himself gave an excellent account of various ways in which such terms as force, mass and weight can be acquired. He offers a cautionary tale about the perils of trying “to straighten out the facts” before “getting the facts straight”—in other words, of doing philosophy of science without history of science.

For Kuhn, the worlds of science, arts and philosophy are coterminous; several strands are intertwined and there is a constant interchange of information at the boundaries. Kuhn was aware of cross-fertilizations that may have been startling when they occurred or baffling to a later age but that make excellent sense when construed with historical sensitivity. Consider, for instance, Emanuel Swedenborg’s desire to explain the decrease of longevity since biblical times. This seems an unlikely stepping-stone to cosmological theories about the gradual slowing down of the axial rotation of the earth, yet it stimulated research in unsuspected ways. The rough outline of what was later called the Kant-Laplace Cosmogony was formulated by Kant in his Allgemeine Naturgeschichte und Theorie des Himmels in 1755, a work in which Kant devotes several pages to the inhabitants of the planets of the solar system whose “natures become more and more perfect and complete in proportion to the remoteness of their dwelling-place from the sun” (Kant 1960, 386).

Kant’s interest in extraterrestrials was aroused by his reading of Swedenborg’s Arcana Coelestia, an eight-volume commentary on Genesis and Exodus that appeared between 1749 and 1756. The first volume of the commentary on Exodus, which was published in 1753, contains a description of Swedenborg’s communications with the inhabitants of the Moon, Mercury, Venus and Mars. The second volume, published the following year, deals with the people on Jupiter and Saturn. The recent discovery that the period of rotation of Jupiter is ten hours compared to the Earth’s twenty-four was submitted by Kant as an indication of the superior ability of the Jovians: in five hours of daylight they achieve as much as earthlings in twelve! Kant’s science fiction blends the possible world of Swedenborg with the actual world of Newton in what for him, and many of his contemporaries, was a seamless robe. This may sound paradoxical but it suggests how historians and philosophers could get their act together.

History as a Safeguard Against Anachronisms

From the vantage point of any particular moment in the development of science, what happened before the discovery of the current method can easily be misunderstood. There is a natural tendency—conscious or unconscious—to mould great scientists of the past into the image of present-day scientists. Galileo is a particularly striking case of this kind of attempt. Let me borrow a couple of examples that Kuhn found interesting. The first comes from what was for a long time the standard English translation of Galileo’s Two New Sciences. It has Galileo say that he “discovered by experiment some properties of motion that are worth knowing and which have not hitherto been observed or demonstrated” (Galilei 1914, 153). The words “by experiment” are absent from the original Italian version (Galilei 1890–1909c, 190). The translators, Henry Crew and Alfonso de Salvio, obviously believed that by adding those two words they were merely making explicit what Galileo intended to convey. The result, of course, is to alter the very thrust of his argument, but the translators did not see this because they equated good science with experimentation, and they had no doubt that Galileo was a good scientist (I mean a scientist, 1914 vintage, when the translation appeared). The rapid development of the experimental sciences led to a distortion of Galileo’s views in his own century. This can be seen in a passage from the first English translation of Galileo’s Dialogue on the Great World Systems by Thomas Salusbury in 1661. The context is a discussion of the path that a stone would follow if it were released from the mast of a moving ship. The Aristotelian Simplicio claims that the stone will not strike the deck at the foot of the mast but some distance behind since the ship will have moved forward during the time the stone fell. Galileo’s spokesman, Salviati, denies this and insists that it will strike the deck at the foot of the mast whether the ship is moving or at rest.
When cross-examined, Salviati admits that he has not performed the experiment, and Simplicio asks why he should believe him rather than the reputable authors who held the opposite view. Salviati’s rejoinder is translated as, “I am assured that the effect will ensue as I tell you; for it is necessary that it should” (Galilei 1661, 126). The original Italian reads: “Io senza esperienza son sicuro che l’effetto seguirà come vi dico” (Galilei 1890–1909b, 171). Salusbury, perhaps unwittingly, left out the crucial senza esperienza (“without any experiment”). Writing at the time of the founding of the Royal Society, he saw Galileo as a scientist for whom only experiment counted. Two and a half centuries later, Crew and de Salvio, implicitly subscribing to the fashionable positivist interpretation of science, made Galileo think as they believed he must have.

What is interesting is not so much the attempt to foist an empiricist philosophy of science on Galileo as the fact, noted by Kuhn, that we are able to spot these occurrences, not because we went over earlier translations of Galileo with a fine-tooth comb but because we are familiar enough with Galileo’s thought processes to spot incongruity when we come across it. Foreign languages can be learned; so can alien scientific methods. Just as a linguist who has mastered French recognizes a wrong gender, so a historian of science will pick out an anachronistic interpretation. He can never be sure that he has detected all the slips any more than the linguist can be certain that he has identified all the grammatical mistakes, but both practitioners, in their different ways, can become sufficiently adept to rule out gross misinterpretations. In other words, they get it right most of the time, and this is as much as can be hoped for within the realm of human communications.

A deeper or, at least, a thornier problem is posed by the ambiguities that are almost always bound up with an early formulation of a new law. It is not only that there are many possible worlds, but that each world is open to several possible interpretations. Here again, the easy solution is the anachronistic one; the ascription to one man of the process that began long before him and was probably not completed until long after. A distinguished scientist and philosopher like Ernst Mach taught that Galileo, virtually single-handedly, founded the new science of mechanics, created the notion of force and discovered “the so-called law of inertia, according to which a body not under the influence of forces, i.e. of special circumstances that change motion, will retain forever its velocity (and direction)” (Mach 1960, 169).

Galilean scholarship has swung the other way since Mach, and we now believe that Galileo is better understood as bringing a long process that began in the Middle Ages to its culmination.1 The realization that rectilinear motion is a state and not a process is a seventeenth-century achievement that cannot immediately be seen as having much in common with Aristotelian physics where motion in a straight line requires an external mover. But the principle of inertia did not spring Minerva-like from a single scientific head. Between Aristotelian mechanics and Newtonian dynamics we find a transitional phase in the theory developed by such thinkers as John Buridan and Nicole Oresme in the fourteenth century. Remaining within the tradition of Aristotelian physics inasmuch as it looked for a cause of motion, the impetus theory moved in the direction of the modern view by making impetus an impressed (i.e., internalized) and incorporeal force, and by considering the speed and the quantity of matter of a body as a measure of its strength. This theory encouraged a fresh approach to traditional problems by removing long-standing conceptual barriers. For instance, the Aristotelians had rejected outright the notion that the Earth could rotate on the grounds that a strong wind would be set up in the direction opposite to the Earth’s motion. But if the air could receive an impetus and be carried around with the Earth, then the motion of the Earth itself became a distinct possibility in the real world of science and not merely in the world of science fiction. Likewise, by making the cause of motion an internal, impressed force, the impetus theory opened a new world to scientific speculation. Since air was no longer the cause of motion, as in the Platonic or the Aristotelian account, motion in a void was no longer ruled out, and it became possible to think of the idealized case of a body moving in a perfect void, i.e., in the complete absence of any impeding force. 
Furthermore, by explaining all cases of motion in terms of one kind of cause, impetus, it removed the Aristotelian dichotomy between natural and constrained motion and provided the basis for a uniform interpretation of all motion, be it celestial or terrestrial. The fact that medieval scholars were able to question some of the fundamental tenets of the Aristotelian tradition in which they operated should be borne in mind. The Aristotelian-Scholastic cosmology was an intricate web of sophisticated concepts that discouraged thinking in certain other ways. The void, for instance, appeared self-contradictory, and the motion of the Earth physically impossible, but the reasons for setting up these limitations were clearly stated in Aristotle, and they were explicitly recognized and criticized by writers like Oresme and Buridan.

The word inertia in its technical sense was not introduced by Galileo, but rather by Kepler, who conceived of matter as characterized by “sluggishness,” namely an innate tendency to rest. Any lump or piece of matter comes to rest unless acted upon by some force. An important consequence of this view is that a body comes to rest not only whenever but wherever a force ceases to be applied to it. What moves the planets is the motive force emanating from the Sun. Were it to cease, the planets would come to a standstill. If we turn to Galileo we find statements that have a much more modern ring, such as, “Furthermore we may note that any degree of speed found in a moving body is, by its nature, indelibly impressed when the causes of acceleration or retardation are removed as is only the case on a horizontal plane” (Galilei 1890–1909c, 243) or, “Consider a body projected along a horizontal plane from which all impediments have been removed; it is clear, from what has been more fully stated in the preceding pages, that this body will move along this plane with a motion that is uniform and perpetual, provided the plane extends to infinity” (Galilei 1890–1909c, 268).

Mach took such statements to be identical with Newton’s law of inertia. On closer inspection, however, we see that Galileo had not travelled that far. In Galileo’s physics, all horizontal planes are small sections of the circumference of the Earth and the motion that endures is not rectilinear but circular. The Newtonian analysis of planetary motion as compounded of a linear inertial component and a descent towards the center is absent from Galileo’s perspective because his belief that inertial motion is circular led him to claim that bodies on a rotating Earth would behave exactly like bodies on a stationary one. In his Lectures on the Sunspots of 1613, he wrote:

If all external impediments are removed, a heavy body placed on a spherical surface, which is concentric with the Earth, will be indifferent to rest and to movement toward any part of the horizon. And it will maintain itself in that state in which it has once been placed [...] Thus a ship, for instance, having once received some impetus through the tranquil sea, would move continually around our globe without ever stopping. (Galilei 1890–1909a, 134–135)

Galileo did not fly in the face of tradition, but he restated the common belief in the perennial nature of uniform circular motion in such a way that it invited consideration of motion along horizontal planes and further investigation of the concept of state of motion. What may have been a passing remark in the text I have just quoted became a general principle in Descartes’ Principles of Philosophy, where the “first law of nature” stipulates that uniform motion, like rest, is conserved because it is a state and not a process. A second and distinct “law of nature” adds that this motion is rectilinear, as becomes “the simplicity and immutability of the operation whereby God conserves motion in matter” (Descartes 1966–1974, 63).

Newton encountered the concept of a state of motion in Descartes and the continuing dialogue with his predecessor can be seen in the very title of his masterpiece, Philosophiae Naturalis Principia Mathematica, which repeats the title of Descartes’ own work with two notable additions: the Principles are now said to be mathematical, and the philosophy natural (namely what we now call physics). The transformation of the title is but a sign of the profound change that the Cartesian law of inertia underwent in Newton’s hands. Descartes’ two laws are fused into one, and inertia is seen as resulting from the nature of matter rather than stemming directly from the metaphysical attribute of God.

Just as there is a continuous and intelligible path from Buridan to Galileo, so there is one from Galileo to Newton, but we must be wary of ascribing to Galileo insights that were arrived at only by working out implications, which he himself did not contemplate, let alone analyze. Four changes were necessary to convert Galileo’s concept into Newton’s first law of motion. The notion of inertia had to be: (1) recognized as playing a fundamental role in motion, (2) seen as implying rectilinearity, (3) extended from terrestrial to celestial phenomena and (4) associated with quantity of matter or mass. The first three steps were taken by Descartes, the fourth awaited Newton. Of course, once the principle of inertia had been clearly formulated in the Principia Mathematica along with Newton’s remark that Galileo had used it (Newton 1999, 424), no one could ever again turn to the Two New Sciences without reading into Galileo’s words the correct Newtonian implications. But for Galileo himself these allegedly obvious consequences had not yet entered the realm of possibility.

In The Equilibrium Controversy, Jürgen Renn and Peter Damerow have recently enhanced our knowledge of Galileo’s contribution by working out the implications of the law of the lever.2 By means of the principle of virtual velocities, Galileo extended the law of the lever to the simple machines and even to problems of hydrostatics. In all instances, the governing principle is the equality of the product mv at one end of the lever to that at the other. The momento (moment) of the lever thus easily transforms itself into the momento (momentum) of the moving body. A possibility of serious ambiguity is built into the lever, and Galileo, together with the whole century following him, slips into it unaware. Since both ends of the lever move in identical time without acceleration, it is immaterial whether one uses the virtual velocities of the two weights or their virtual displacements. Velocities must be in the same proportion as displacements, and when Galileo states the general principle of the lever, he does so in terms of velocity, although he often uses the word displacement. Renn and Damerow show how it is all too easy to forget that the equivalence holds only for the lever and analogous instances in which a mechanical connection ensures that each body moves for the same time, and in which, because of equilibrium, the motion involved is virtual motion, not accelerated motion. The case of free fall is not, of course, identical to the conditions of equilibrium because the times involved are not identical and because two separate, accelerated motions take place.3 If there is an equality of the product weight × distance (that is, in our terms, work), there cannot be an equality of momentum (mv) but rather of kinetic energies (½mv²). From the ambiguity of the lever springs the controversy between quantity of motion and vis viva in which the second half of the seventeenth century was to engage.
This second phase has been studied by a number of distinguished historians of science, for instance, Richard S. Westfall in Force in Newton’s Physics. What was lacking until the publication of The Equilibrium Controversy was a clear understanding of the historical and conceptual background to Galileo’s endeavors.
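
The distinction at the heart of this ambiguity can be stated compactly in modern notation. The following sketch is my own reconstruction, not Renn and Damerow’s formulation:

```latex
% Lever in equilibrium: both ends move for the same time t without
% acceleration, so virtual velocities and virtual displacements are
% proportional (v_1 = s_1/t, v_2 = s_2/t), and the two statements of
% the law of the lever coincide:
\[ W_1 s_1 = W_2 s_2 \quad\Longleftrightarrow\quad W_1 v_1 = W_2 v_2 \]
% Free fall: a body of weight W = mg falling through height h reaches
% speed v with v^2 = 2gh, so W h = mgh = (1/2) m v^2. Equality of work
% (weight times height) therefore entails equality of kinetic
% energies, not of momenta:
\[ W_1 h_1 = W_2 h_2 \quad\Longleftrightarrow\quad
   \tfrac{1}{2} m_1 v_1^{2} = \tfrac{1}{2} m_2 v_2^{2} \]
```

The equivalence of velocities and displacements thus breaks down precisely where the motion is accelerated rather than virtual.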

The extensive research that led to The Equilibrium Controversy began in 2006 when the Max Planck Institute for the History of Science acquired a copy of Giovanni Benedetti’s Diversarum speculationum mathematicarum et physicarum liber that appeared in 1585. This book comprises several treatises including one which contains a critique of a section of the Aristotelian On Mechanics that was much discussed at the time. While Benedetti’s book is in itself an important source for understanding the struggles of early modern engineer-scientists with the ancient attitudes of mechanical knowledge, this specific copy is of special value because it contains handwritten marginal notes by Guidobaldo del Monte. Benedetti was influenced by earlier writers and more specifically by his master Tartaglia who had himself borrowed and modified material taken from the thirteenth-century Jordanus of Nemore whom he edited. The importance of Jordanus is illustrated by the fact that Guidobaldo del Monte not only read but annotated his copy of Jordanus. Renn and Damerow do not merely make the relevant material available; they offer a masterly survey of the development of mechanical knowledge from its origins in antiquity to the dawn of classical mechanics in the late Renaissance (Renn and Damerow 2012, 39–167). They stress that the development of technology owes much to challenging objects such as labor-saving machinery, ballistics, the stability of buildings and the performance of ships on the high seas. As a consequence a multiplicity of different pathways emerged. Renn and Damerow caution us against the danger of treating the results of these different approaches as if they were pieces of a puzzle that can be combined into a coherent whole. Strictly speaking, the solutions proposed in preclassical mechanics make use of alien concepts, such as natural and violent tendencies, which are incompatible with those of modern science.

A crucial problem was the exact relation between the key concepts of center of gravity and positional heaviness. Guidobaldo del Monte was proud to have reconciled the Archimedean theory of equilibrium, based on the concept of center of gravity, with the Aristotelian understanding of weight as tending to the center of the world. This reconciliation was embodied in what he saw as his greatest discovery: the realization that both an ideal balance and what he called a cosmological balance remain in indifferent equilibrium. Benedetti had claimed that, while such an indifferent equilibrium holds under terrestrial circumstances, it is impossible for a cosmological balance. This challenged Guidobaldo’s synthesis, and while Benedetti’s conclusion is in accordance with later classical physics, the controversy could not be settled with the arguments available at the time. In this sense, it was the equilibrium controversy more than its resolution that spurred the further developments of physics.

The Underlying Philosophical Stance

Philosophers of science clearly need historians of science if they are to avoid anachronisms, but historians of science can also learn from philosophers of science. I believe that Kuhn saw at least three ways in which philosophical considerations can prove useful to historians, namely (a) by elucidating the interpretive frameworks and the concepts employed, (b) by analyzing underlying methodological assumptions and (c) by clarifying the meaning of models and theories. I shall say a word about each aspect.

If history is to rise above a mere collection of anecdotes, it must be written from some point of view and with some unifying theme. It is here that the philosopher has a contribution to make by supplying some distinctive perspective, such as Kuhn’s view about paradigms, normal science and revolutions. The two examples that were discussed above concerning the falsification of Galileo’s text by well-meaning translators make it abundantly clear that no one can completely escape the climate of intellectual opinion prevalent in his own day. Unfortunately, historians only too often employ frameworks without thinking about them. They are left to operate as tacit assumptions, and are dangerous because they are not drawn out into the open and scrutinized for what they really are. The same can be said of key concepts, and this raises an important issue. Historians of science must immerse themselves in the writing of scientists of previous ages if they are to understand what they were actually up to. But it would be a futile exercise if their program of total immersion led them to lose their bearings in the world in which they actually live. Immersion is only profitable if it leads to eventual emergence into the contemporary setting with an enhanced ability to translate the past into terms that are meaningful for a present-day audience. The historian aims at recapturing the past not in order to live in the past, but in order to interpret it to those who cannot read its lessons first hand. Were the historian to divest himself of his twenty-first century frame of reference to the point of acquiring the full panoply of, say, Aristotelian thought, he would no longer be a historian but a living intellectual fossil.

Clarifying the Nature of the Argument

The philosopher of science can also cast light on the cogency of scientific reasoning. It is not enough to determine with historical accuracy what premises were employed to understand a scientific argument used in the past. To see the value of the argument one has to know whether the premises entail the conclusion or make it probable in the light of the evidence available at the time. The philosopher of science should be able, by virtue of his logical training, to examine the relations between the premises and the conclusions.

No one will deny that it is of intrinsic interest to discover whether an argument actually employed by a scientist of the past is cogent, but some might deny that this is history of science. The historian, it could be said, should ponder what the argument is, not whether it is any good. But this would be a narrow and ultimately stultifying approach. One of the most interesting questions in intellectual history is the determination of the value of arguments at the time when they were formulated. It is a task that requires the skills of both the philosopher and the historian of science, since we have to assess both the validity of the logical procedure and the nature of the evidence at hand. In this domain philosophical analysis can clearly complement the historian’s craft.

Any effort to reconstruct the past must be accompanied by a critical examination of what, in the light of hindsight, we know to have actually been the case. For instance, in investigating the models of Maxwell, Kelvin, FitzGerald, Helmholtz and others, it is important to recognize the nature and thrust of the methodological assumptions that guided nineteenth-century physicists.4 In his paper on physical lines of force, published in 1861, Maxwell proposed a model of the electromagnetic field with the aid of certain assumptions, for example, that electromagnetic phenomena are due to the existence of matter under certain conditions of motion or pressure in every part of the magnetic field and not to action at a distance. Likewise, he took for granted that there is inequality of pressure in the magnetic field that is produced by vortices. What he does not discuss is the ontological status of these assumptions, in plainer words, the reality that he ascribed to them. Was he saying that the electromagnetic field is really composed of the elements he described? Was he merely drawing an analogy with the mechanical system? Or, rather, was he showing what the electromagnetic field would be like if it operated on purely mechanical principles without claiming that this was necessarily the case?

The Role of Models

One need only raise these questions to realize that they are important if we are to understand what Maxwell was actually doing. The philosopher of science may be in a position to help the historian to ponder the various ways in which the term model is used, and I shall say a few words about three main kinds of models, which I take to be mechanical, theoretical and imaginary.

Mechanical models offer three-dimensional physical representations of objects such that, by considering them, we are able to learn some facts about the original objects of study. The simplest of these are the tinkertoy models of molecules or of the solar system found in museums. They may be bigger or smaller than the original, and they may represent only those characteristics that a scientist is interested in. In this case, they may serve as an analog of the original, as, for instance, when Maxwell represented the electric field by describing an imaginary incompressible fluid flowing through tubes of variable section. The analogous properties here are the electrostatic force and the velocity of the imaginary fluid, which both vary inversely as the square of the distance from their sources, and the potential of the electric field and the pressure of the fluid, both inversely proportional to the distance. A model, in this sense, is an object distinct from the one that it represents. This is not the case with the next category.
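The correspondence Maxwell exploited can be set out schematically. The following is a modern reconstruction, not Maxwell’s own notation: $E$ stands for the electrostatic force, $\phi$ for the potential, $v$ for the velocity of the imaginary fluid issuing from a point source, and $p$ for its pressure.

```latex
% Maxwell's analogy between electrostatics and an imaginary
% incompressible fluid (schematic modern reconstruction):
E(r) \;\propto\; \frac{1}{r^{2}}
  \quad\longleftrightarrow\quad
v(r) \;\propto\; \frac{1}{r^{2}}
\qquad\text{(force $\leftrightarrow$ fluid velocity)}
\\[1ex]
\phi(r) \;\propto\; \frac{1}{r}
  \quad\longleftrightarrow\quad
p(r) \;\propto\; \frac{1}{r}
\qquad\text{(potential $\leftrightarrow$ pressure)}
```

The point of the analogy is that the two systems obey formally identical laws, so results established for the fluid can be transferred to the field without any claim that the field is a fluid.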

Theoretical models, like the billiard-ball model of a gas, Bohr’s model of the atom, the corpuscular model of light or the shell model of the atomic nucleus, do not refer to a physical object distinct from the one being modeled but to a set of assumptions about the object that is itself under scrutiny.5 For instance, the billiard-ball model is a set of assumptions according to which molecules in a gas exert only contact forces on one another, travel in straight lines except at the instant of collision, are small in size compared to average intermolecular distances, and so on. These theoretical models can be further characterized. First, they describe an object or system by attributing to it an inner structure or a mechanism that is intended to account for certain features of the object or system. In the case of the billiard-ball model, a molecular structure is ascribed to gases in order to explain observed relationships of pressure, volume, temperature, entropy, etc. Second, they are treated as useful approximations, not exhaustive explanations. The billiard-ball model assumes that the only intermolecular forces are contact forces and thus ignores non-contact attractive and repulsive forces. This is useful in allowing a number of important relationships to be derived and in suggesting how the kinetic theory might be expanded. Third, a theoretical model is set in the broader context of a more comprehensive theory. In the billiard-ball model, the behavior of the molecules always complies with Newton’s laws.
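The explanatory payoff of the billiard-ball assumptions can be illustrated by the standard kinetic-theory result they license. This is a textbook reconstruction, not a formula discussed in the text: $P$ is pressure, $V$ volume, $N$ the number of molecules, $m$ their mass, $\langle v^{2} \rangle$ the mean square speed, $k$ Boltzmann’s constant and $T$ temperature.

```latex
% Pressure exerted by N elastically colliding point molecules
% (standard kinetic-theory derivation from the model's assumptions):
P V = \tfrac{1}{3}\, N m \langle v^{2} \rangle
\\[1ex]
% Comparison with the empirical ideal-gas law PV = NkT then
% identifies temperature with mean molecular kinetic energy:
\tfrac{1}{2}\, m \langle v^{2} \rangle = \tfrac{3}{2}\, k T
```

This is precisely the kind of relationship the text describes: the postulated inner mechanism accounts for the observed macroscopic regularities of pressure, volume and temperature.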

The third group of models, imaginary ones, refers to a set of assumptions about a system that are supposed to show what the system could be like if it were to satisfy certain conditions but for which no factual claims are made. An example is Poincaré’s model of a non-Euclidean world in which a number of assumptions are made such as that the temperature is greater at the center and gradually decreases as one moves towards the circumference where it is absolute zero, that bodies contract as they recede from the center and, as they move, achieve instant thermal equilibrium with their environment. This model satisfies the postulates of Lobachevskian geometry but Poincaré does not claim that such a physical world exists or that if a Lobachevskian world occurred it would necessarily be the one he describes. Such imaginary models serve the purpose of showing that certain assumptions, which may otherwise be thought self-contradictory, are at least consistent.
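Poincaré’s assumptions can be stated compactly. The following is a standard reconstruction of his thought experiment, with $R$ the radius of the spherical world and $r$ the distance from its center:

```latex
% Poincaré's imaginary non-Euclidean world (standard reconstruction):
% temperature is greatest at the center and falls to absolute zero
% at the boundary,
T(r) \;\propto\; R^{2} - r^{2}, \qquad 0 \le r \le R,
\\[1ex]
% and every body's linear dimensions shrink in proportion to the
% local temperature, so measuring rods contract toward the edge:
\ell(r) \;\propto\; R^{2} - r^{2}.
```

Because rods shrink without limit near the boundary, inhabitants who survey their world with such rods would find it infinite, and the geometry they would construct satisfies Lobachevsky’s postulates rather than Euclid’s; yet no factual claim is made that any such world exists.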

Armed with these distinctions, the historian can probe deeper into the status of Maxwell’s mechanical assumptions. Until this is known it will be impossible to proceed to the analysis of Maxwell’s argument. It is crucial to know whether Maxwell was actually ascribing the mechanical structure he described to the electromagnetic field, or whether this was simply intended as a description of an analog or of a possible mechanism if the field were purely mechanical. Unless we know what his model was intended to do, there is no way we can assess the validity of his reasoning. Nineteenth-century physicists did not explicitly distinguish three uses of the term model, but this does not mean that we cannot derive enlightenment from looking at their work with clearer concepts. If we consider the various models that were proposed in the nineteenth century, we readily see that it helps to bear these distinctions in mind. It is reasonably clear that in his 1861 paper, Maxwell was proposing an imaginary model of the electromagnetic field. He was saying what this field could be like if it were purely mechanical, but he was not claiming that it is actually like this or even that it is purely mechanical. Kelvin’s celebrated mechanical contraptions, by contrast, were mechanical models in the first sense, while FitzGerald’s proposition of 1899, according to which ether is a fluid, can be classified as a theoretical model.

Degrees of Likeness

There is much contemporary fuzzy thinking about the meaning of theories. Although Kuhn was right in stressing that the framework of a given hypothesis determines to a large extent what questions can be raised and what views can be suggested about a particular problem, he did not manage to explain how different theories can be contrasted and appraised. On his view, one is practically driven to describe scientific change in revolutionary terms, to speak, for instance, of the “overthrow” of Aristotelian mechanics or the “victory” over phlogiston. As a result, theories seem “incommensurable” and their change can no longer be rendered intelligible in rational terms. This relativism is not, however, the outcome of an investigation of actual science and its history; it is merely a logical consequence of a narrow presupposition about the meaning of scientific terms. Positivists held that if the terms do not retain precisely the same meaning over the history of their incorporation into more general theories, then these theories cannot be compared, and the similarities they exhibit must be considered, at best, as superficial and, at worst, as deceptive and misleading. This claim rests on the assumption that two expressions or sets of expressions must either have exactly the same meaning or must be completely different. The only possibility left open by this rigid dichotomy of meanings is that history of science, since it is not a simple process of development by accumulation, must be a completely noncumulative process of replacement.

The inherent weakness of this position turns out to be its retention of a positivistic concept of meaning. If anything, the revolution is not radical enough! In spite of his spirited attack on the positivistic view that theories are parasitic on “observations,” Kuhn nonetheless approached problems with that distinction in mind. He applied the old classification to a new purpose in a daring way by inverting the respective roles of the two members of the classical distinction: it was now the “theory” that determined the meaning and acceptability of the “observation” rather than the other way around. Observations were now so embedded in a particular theory that they lost any identity of their own, and ceased to be comparable. But this did not solve the problem of meaning: it simply replaced the theory of meaning invariance with the doctrine of incommensurable meanings. An alternative is to consider meanings as similar or analogous: comparable in some respects while differing in others. The difficulty in this interpretation lies in the concept of similarity or degrees of likeness of meanings. It is here that much more work needs to be done, and an indication of the urgency of the task is the proliferation of works on the use of metaphors, beginning with the book by George Lakoff and Mark Johnson (2003).

Globalization and the Quest for the Underlying Unity of History

An innovative perspective on how knowledge and history interact can be found in a book recently edited by Jürgen Renn (2012). The central theme is that there is only one history of human knowledge. There may have been many false starts, and there were probably many new and promising beginnings that were thwarted, wasted or simply forgotten, but there is a stream of cumulative discoveries that can be seen from a global perspective. Knowledge, whether scientific, technological or cultural, is now shared globally. But was this always the case? If we are tempted to say, “No,” we may wish to pause after having been reminded of the rapid spread of the wheel in prehistory or of Roman law to such diverse areas as the Byzantine Empire and Ethiopia.

Globalization has been much discussed in relation to capital and labour, markets and finance, politics and military power, but it involves knowledge in many other significant ways, and the homogenization and universalization that are characteristics of globalization are fraught with dangers as well as opportunities. On the one hand, there is the threat of a standardization of mass culture that would result in a “dumbing down” of linguistic subtlety, political awareness and moral sensitivity. On the other hand, there is the opportunity of creating a richer network of social relations where diverse belief systems and political institutions would become complementary and could provide a stimulus for devising a more humane society on a worldwide scale.

Comprehensive globalization results from a number of factors such as the migration of populations, the spread of technologies, the dissemination of religious ideas and the emergence of multilingualism. These factors each have their own dynamics and history, and it is the study of their interconnection that enables us to see globalization at work. Historians of science have often focused on who made a discovery and when it occurred rather than on how it was rendered possible by the context in which it emerged. In other words, they privileged innovation over transmission and transformation. Renn redresses the balance by examining how knowledge is disseminated, enhanced and occasionally debased. For instance, the transfer of knowledge necessary for producing tools requires a framework of ideas that must be acquired. The late Peter Damerow, who was one of the driving forces behind the globalization project, was able to show how the powerful tools of writing and arithmetic were constructed and how they rendered possible the transmission of knowledge beyond the immediacy of verbal communication.

If systems of knowledge are essential to the organization of epistemic networks in a given social and cultural context, their subsequent restructuring is also of paramount importance. A particularly striking instance is the elaboration of Aristotelian natural philosophy, first in a theological milieu in the Middle Ages, and later in the wake of the scientific revolution in the seventeenth century. The outcome did not leave unaffected the intrinsic structure of Aristotelianism but created hybrids that changed the overall history of knowledge.

The relations between specifically scientific knowledge and socio-economic growth are clearly of importance. It was mainly in Europe that science and engineering became bedfellows and that a new class of scientist-engineers began to assimilate the know-how of craftsmen. This led them, in turn, to question the theories they had inherited. But we may well ask: Why is science reproducible and transportable? It can be argued that it is not because of any methodological principle, but because it focuses on means. The successful expansion of science within Europe created a model that was exported worldwide, including the replication of institutional settings and canons of what constitutes knowledge. Science grew at an astonishing rate and travelled at an unprecedented pace. This was largely due to networks that introduced a connectivity that had once been assured by other bodies such as wealthy patrons, religious societies, universities and scientific academies. The rise of a new and highly mobile class of engineers was decisive. As their contribution to the solution of practical problems increased, so did their personal prestige along with that of science. Local knowledge has generally been challenged, and frequently ousted, by globalization, but there are several instances in which it was preserved and served to shape the way new knowledge was perceived and integrated into different cultural traditions. Historians and philosophers of science must engage in a renewed dialogue over the significance of these changes. Thomas Kuhn would have considered them challenging, hence welcome. We should follow suit.

References

Bordoni, S. (2008). Crossing the Boundaries between Matter and Energy: Integration between Discrete and Continuous Theoretical Models in the Late Nineteenth-Century British Electromagnetism.

Damerow, P., G. Freudenthal, P. McLaughlin, J. Renn (2004). Exploring the Limits of Preclassical Mechanics: A Study of Conceptual Development in Early Modern Science; Free Fall and Compounded Motion in the Work of Descartes, Galileo, and Beeckman. New York: Springer.

Galilei, Galileo (1661). The Systeme of the World: In Four Dialogues, Wherein the Two Grand Systemes of Ptolomy and Copernicus Are Largely Discoursed Of. London: William Leybourne.

- (1890–1909a). Le Opere di Galileo Galilei. Florence: Tip. di G. Barbèra.

- (1890–1909b). Le Opere di Galileo Galilei. Florence: Tip. di G. Barbèra.

- (1890–1909c). Le Opere di Galileo Galilei. Florence: Tip. di G. Barbèra.

- (1914). Two New Sciences.

Heilbron, J. L., T. S. Kuhn (1969). The Genesis of the Bohr Atom. Historical Studies in the Physical Sciences 1: 211–290.

Kuhn, T. S. (1970). The Structure of Scientific Revolutions. Chicago: The University of Chicago Press.

Lakoff, G., M. Johnson (2003). Metaphors We Live By. Chicago: The University of Chicago Press.

Mach, E. (1960). The Science of Mechanics. La Salle, Illinois: Open Court.

Medawar, P. (1983). Pluto’s Republic. Oxford: Oxford University Press.

Renn, J. (2012). The Globalization of Knowledge in History. MPIWG, Berlin: Edition Open Access.

Renn, J., P. Damerow (2012). The Equilibrium Controversy: Guidobaldo del Monte’s Critical Notes on the Mechanics of Jordanus and Benedetti and Their Historical and Conceptual Background. MPIWG, Berlin: Edition Open Access.

Renn, J., S. Rieger, D. Giulini (2000). Hunting the White Elephant: When and How did Galileo Discover the Law of Fall? Science in Context 13(3–4): 299–419.

Footnotes

1 In what follows I rely heavily on the excellent studies in Damerow et al. (2004).

2 See Renn and Damerow (2012).

3 See Renn, Rieger, and Giulini (2000).

4 See Bordoni (2008).

5 See Heilbron and Kuhn (1969).