PHILOSOPHY
By
Bertrand Russell
NEW YORK
W · W · NORTON & COMPANY, INC.
Publishers
Copyright, 1927,
BERTRAND RUSSELL
Published in Great Britain under the title “An Outline of Philosophy”
PRINTED IN THE UNITED STATES OF AMERICA
FOR THE PUBLISHERS BY THE VAN REES PRESS
CONTENTS

I. Philosophic Doubts

PART I. MAN FROM WITHOUT
II. Man and His Environment
III. The Process of Learning in Animals and Infants
IV. Language
V. Perception Objectively Regarded
VI. Memory Objectively Regarded
VII. Inference as a Habit
VIII. Knowledge Behaviouristically Considered

PART II. THE PHYSICAL WORLD
IX. The Structure of the Atom
X. Relativity
XI. Causal Laws in Physics
XII. Physics and Perception
XIII. Physical and Perceptual Space
XIV. Perception and Physical Causal Laws
XV. The Nature of Our Knowledge of Physics

PART III. MAN FROM WITHIN
XVI. Self-observation
XVII. Images
XVIII. Imagination and Memory
XIX. The Introspective Analysis of Perception
XX. Consciousness?
XXI. Emotion, Desire, and Will
XXII. Ethics

PART IV. THE UNIVERSE
XXIII. Some Great Philosophies of the Past
XXIV. Truth and Falsehood
XXV. The Validity of Inference
XXVI. Events, Matter, and Mind
XXVII. Man’s Place in the Universe
[CHAPTER I]
PHILOSOPHIC DOUBTS
Perhaps it might be expected that I should begin with a definition of “philosophy”, but, rightly or wrongly, I do not propose to do so. The definition of “philosophy” will vary according to the philosophy we adopt; all that we can say to begin with is that there are certain problems, which certain people find interesting, and which do not, at least at present, belong to any of the special sciences. These problems are all such as to raise doubts concerning what commonly passes for knowledge; and if the doubts are to be answered, it can only be by means of a special study, to which we give the name “philosophy”. Therefore the first step in defining “philosophy” is the indication of these problems and doubts, which is also the first step in the actual study of philosophy. There are some among the traditional problems of philosophy that do not seem to me to lend themselves to intellectual treatment, because they transcend our cognitive powers; such problems I shall not deal with. There are others, however, as to which, even if a final solution is not possible at present, yet much can be done to show the direction in which a solution is to be sought, and the kind of solution that may in time prove possible.
Philosophy arises from an unusually obstinate attempt to arrive at real knowledge. What passes for knowledge in ordinary life suffers from three defects: it is cocksure, vague, and self-contradictory. The first step towards philosophy consists in becoming aware of these defects, not in order to rest content with a lazy scepticism, but in order to substitute an amended kind of knowledge which shall be tentative, precise, and self-consistent. There is of course another quality which we wish our knowledge to possess, namely comprehensiveness: we wish the area of our knowledge to be as wide as possible. But this is the business of science rather than of philosophy. A man does not necessarily become a better philosopher through knowing more scientific facts; it is principles and methods and general conceptions that he should learn from science if philosophy is what interests him. The philosopher’s work is, so to speak, at the second remove from crude fact. Science tries to collect facts into bundles by means of scientific laws; these laws, rather than the original facts, are the raw material of philosophy. Philosophy involves a criticism of scientific knowledge, not from a point of view ultimately different from that of science, but from a point of view less concerned with details and more concerned with the harmony of the whole body of special sciences.
The special sciences have all grown up by the use of notions derived from common sense, such as things and their qualities, space, time, and causation. Science itself has shown that none of these common-sense notions will quite serve for the explanation of the world; but it is hardly the province of any special science to undertake the necessary reconstruction of fundamentals. This must be the business of philosophy. I want to say, to begin with, that I believe it to be a business of very great importance. I believe that the philosophical errors in common-sense beliefs not only produce confusion in science, but also do harm in ethics and politics, in social institutions, and in the conduct of everyday life. It will be no part of my business, in this volume, to point out these practical effects of a bad philosophy: my business will be purely intellectual. But if I am right, the intellectual adventures which lie before us have effects in many directions which seem, at first sight, quite remote from our theme. The effect of our passions upon our beliefs forms a favourite subject of modern psychologists; but the converse effect, that of our beliefs upon our passions, also exists, though it is not such as an old-fashioned intellectualist psychology would have supposed. Although I shall not discuss it, we shall do well to bear it in mind, in order to realise that our discussions may have bearings upon matters lying outside the sphere of pure intellect.
I mentioned a moment ago three defects in common beliefs, namely, that they are cocksure, vague, and self-contradictory. It is the business of philosophy to correct these defects so far as it can, without throwing over knowledge altogether. To be a good philosopher, a man must have a strong desire to know, combined with great caution in believing that he knows; he must also have logical acumen and the habit of exact thinking. All these, of course, are a matter of degree. Vagueness, in particular, belongs, in some degree, to all human thinking; we can diminish it indefinitely, but we can never abolish it wholly. Philosophy, accordingly, is a continuing activity, not something in which we can achieve final perfection once for all. In this respect, philosophy has suffered from its association with theology. Theological dogmas are fixed, and are regarded by the orthodox as incapable of improvement. Philosophers have too often tried to produce similarly final systems: they have not been content with the gradual approximations that satisfied men of science. In this they seem to me to have been mistaken. Philosophy should be piecemeal and provisional like science; final truth belongs to heaven, not to this world.
The three defects which I have mentioned are interconnected, and by becoming aware of any one we may be led to recognise the other two. I will illustrate all three by a few examples.
Let us take first the belief in common objects, such as tables and chairs and trees. We all feel quite sure about these in ordinary life, and yet our reasons for confidence are really very inadequate. Naive common sense supposes that they are what they appear to be, but that is impossible, since they do not appear exactly alike to any two simultaneous observers; at least, it is impossible if the object is a single thing, the same for all observers. If we are going to admit that the object is not what we see, we can no longer feel the same assurance that there is an object; this is the first intrusion of doubt. However, we shall speedily recover from this set-back, and say that of course the object is “really” what physics says it is.[1] Now physics says that a table or a chair is “really” an incredibly vast system of electrons and protons in rapid motion, with empty space in between. This is all very well. But the physicist, like the ordinary man, is dependent upon his senses for the existence of the physical world. If you go up to him solemnly and say, “would you be so kind as to tell me, as a physicist, what a chair really is”, you will get a learned answer. But if you say, without preamble: “Is there a chair there?” he will say: “Of course there is; can’t you see it?” To this you ought to reply in the negative. You ought to say, “No, I see certain patches of colour, but I don’t see any electrons or protons, and you tell me that they are what a chair consists of”. He may reply: “Yes, but a large number of electrons and protons close together look like a patch of colour”. “What do you mean by ‘look like’?” you will then ask. He is ready with an answer. He means that light-waves start from the electrons and protons (or, more probably, are reflected by them from a source of light), reach the eye, have a series of effects upon the rods and cones, the optic nerve, and the brain, and finally produce a sensation. But he has never seen an eye or an optic nerve or a brain, any more than he has seen a chair; he has only seen patches of colour which, he says, are what eyes “look like”. That is to say, he thinks that the sensation you have when (as you think) you see a chair, has a series of causes, physical and psychological, but all of them, on his own showing, lie essentially and forever outside experience. Nevertheless, he pretends to base his science upon observation. Obviously there is here a problem for the logician, a problem belonging not to physics, but to quite another kind of study. This is a first example of the way in which the pursuit of precision destroys certainty.
[1] I am not thinking here of the elementary physics to be found in a school text-book; I am thinking of modern theoretical physics, more particularly as regards the structure of atoms, as to which I shall have more to say in later chapters.
The physicist believes that he infers his electrons and protons from what he perceives. But the inference is never clearly set forth in a logical chain, and, if it were, it might not look sufficiently plausible to warrant much confidence. In actual fact, the whole development from common-sense objects to electrons and protons has been governed by certain beliefs, seldom conscious, but existing in every natural man. These beliefs are not unalterable, but they grow and develop like a tree. We start by thinking that a chair is as it appears to be, and is still there when we are not looking. But we find, by a little reflection, that these two beliefs are incompatible. If the chair is to persist independently of being seen by us, it must be something other than the patch of colour we see, because this is found to depend upon conditions extraneous to the chair, such as how the light falls, whether we are wearing blue spectacles, and so on. This forces the man of science to regard the “real” chair as the cause (or an indispensable part of the cause) of our sensations when we see the chair. Thus we are committed to causation as an a priori belief without which we should have no reason for supposing that there is a “real” chair at all. Also, for the sake of permanence we bring in the notion of substance: the “real” chair is a substance, or collection of substances, possessed of permanence and the power to cause sensations. This metaphysical belief has operated, more or less unconsciously, in the inference from sensations to electrons and protons. The philosopher must drag such beliefs into the light of day, and see whether they still survive. Often it will be found that they die on exposure.
Let us now take up another point. The evidence for a physical law, or for any scientific law, always involves both memory and testimony. We have to rely both upon what we remember to have observed on former occasions, and on what others say they have observed. In the very beginnings of science, it may have been possible sometimes to dispense with testimony; but very soon every scientific investigation began to be built upon previously ascertained results, and thus to depend upon what others had recorded. In fact, without the corroboration of testimony we should hardly have had much confidence in the existence of physical objects. Sometimes people suffer from hallucinations, that is to say, they think they perceive physical objects, but are not confirmed in this belief by the testimony of others. In such cases, we decide that they are mistaken. It is the similarity between the perceptions of different people in similar situations that makes us feel confident of the external causation of our perceptions; but for this, whatever naive beliefs we might have had in physical objects would have been dissipated long ago. Thus memory and testimony are essential to science. Nevertheless, each of these is open to criticism by the sceptic. Even if we succeed, more or less, in meeting his criticism, we shall, if we are rational, be left with a less complete confidence in our original beliefs than we had before. Once more, we shall become less cocksure as we become more accurate.
Both memory and testimony lead us into the sphere of psychology. I shall not at this stage discuss either beyond the point at which it is clear that there are genuine philosophical problems to be solved. I shall begin with memory.
Memory is a word which has a variety of meanings. The kind that I am concerned with at the moment is the recollection of past occurrences. This is so notoriously fallible that every experimenter makes a record of the result of his experiment at the earliest possible moment: he considers the inference from written words to past events less likely to be mistaken than the direct beliefs which constitute memory. But some time, though perhaps only a few seconds, must elapse between the observation and the making of the record, unless the record is so fragmentary that memory is needed to interpret it. Thus we do not escape from the need of trusting memory to some degree. Moreover, without memory we should not think of interpreting records as applying to the past, because we should not know that there was any past. Now, apart from arguments as to the proved fallibility of memory, there is one awkward consideration which the sceptic may urge. Remembering, which occurs now, cannot possibly—he may say—prove that what is remembered occurred at some other time, because the world might have sprung into being five minutes ago, exactly as it then was, full of acts of remembering which were entirely misleading. Opponents of Darwin, such as Edmund Gosse’s father, urged a very similar argument against evolution. The world, they said, was created in 4004 B.C., complete with fossils, which were inserted to try our faith. The world was created suddenly, but was made such as it would have been if it had evolved. There is no logical impossibility about this view. And similarly there is no logical impossibility in the view that the world was created five minutes ago, complete with memories and records. This may seem an improbable hypothesis, but it is not logically refutable.
Apart from this argument, which may be thought fantastic, there are reasons of detail for being more or less distrustful of memory. It is obvious that no direct confirmation of a belief about a past occurrence is possible, because we cannot make the past recur. We can find confirmation of an indirect kind in the revelations of others and in contemporary records. The latter, as we have seen, involve some degree of memory, but they may involve very little, for instance when a shorthand report of a conversation or speech has been made at the time. But even then, we do not escape wholly from the need of memory extending over a longer stretch of time. Suppose a wholly imaginary conversation were produced for some criminal purpose, we should depend upon the memories of witnesses to establish its fictitious character in a law-court. And all memory which extends over a long period of time is very apt to be mistaken; this is shown by the errors invariably found in autobiographies. Any man who comes across letters which he wrote many years ago can verify the manner in which his memory has falsified past events. For these reasons, the fact that we cannot free ourselves from dependence upon memory in building up knowledge is, prima facie, a reason for regarding what passes for knowledge as not quite certain. The whole of this subject of memory will be considered more carefully in later chapters.
Testimony raises even more awkward problems. What makes them so awkward is the fact that testimony is involved in building up our knowledge of physics, and that, conversely, physics is required in establishing the trustworthiness of testimony. Moreover, testimony raises all the problems connected with the relation of mind and matter. Some eminent philosophers, e.g. Leibniz, have constructed systems according to which there would be no such thing as testimony, and yet have accepted as true many things which cannot be known without it. I do not think philosophy has quite done justice to this problem, but a few words will, I think, show its gravity.
For our purposes, we may define testimony as noises heard, or shapes seen, analogous to those which we should make if we wished to convey an assertion, and believed by the hearer or seer to be due to someone else’s desire to convey an assertion. Let us take a concrete instance: I ask a policeman the way, and he says, “Fourth turn to the right, third to the left.” That is to say, I hear these sounds, and perhaps I see what I interpret as his lips moving. I assume that he has a mind more or less like my own, and has uttered these sounds with the same intention as I should have had if I had uttered them, namely to convey information. In ordinary life, all this is not, in any proper sense, an inference; it is a belief which arises in us on the appropriate occasion. But if we are challenged, we have to substitute inference for spontaneous belief, and the more the inference is examined the more shaky it looks.
The inference that has to be made has two steps, one physical and one psychological. The physical inference is of the sort we considered a moment ago, in which we pass from a sensation to a physical occurrence. We hear noises, and think they proceed from the policeman’s body. We see moving shapes, and interpret them as physical motions of his lips. This inference, as we saw earlier, is in part justified by testimony; yet now we find that it has to be made before we can have reason to believe that there is any such thing as testimony. And this inference is certainly sometimes mistaken. Lunatics hear voices which other people do not hear; instead of crediting them with abnormally acute hearing, we lock them up. But if we sometimes hear sentences which have not proceeded from a body, why should this not always be the case? Perhaps our imagination has conjured up all the things that we think others have said to us. But this is part of the general problem of inferring physical objects from sensations, which, difficult as it is, is not the most difficult part of the logical puzzles concerning testimony. The most difficult part is the inference from the policeman’s body to his mind. I do not mean any special insult to policemen; I would say the same of politicians and even of philosophers.
The inference to the policeman’s mind certainly may be wrong. It is clear that a maker of wax-works could make a life-like policeman and put a gramophone inside him, which would cause him periodically to tell visitors the way to the most interesting part of the exhibition at the entrance to which he would stand. They would have just the sort of evidence of his being alive that is found convincing in the case of other policemen. Descartes believed that animals have no minds, but are merely complicated automata. Eighteenth-century materialists extended this doctrine to men. But I am not now concerned with materialism; my problem is a different one. Even a materialist must admit that, when he talks, he means to convey something, that is to say, he uses words as signs, not as mere noises. It may be difficult to decide exactly what is meant by this statement, but it is clear that it means something, and that it is true of one’s own remarks. The question is: Are we sure that it is true of the remarks we hear, as well as of those we make? Or are the remarks we hear perhaps just like other noises, merely meaningless disturbances of the air? The chief argument against this is analogy: the remarks we hear are so like those we make that we think they must have similar causes. But although we cannot dispense with analogy as a form of inference, it is by no means demonstrative, and not infrequently leads us astray. We are therefore left, once more, with a prima facie reason for uncertainty and doubt.
This question of what we mean ourselves when we speak brings me to another problem, that of introspection. Many philosophers have held that introspection gave the most indubitable of all knowledge; others have held that there is no such thing as introspection. Descartes, after trying to doubt everything, arrived at “I think, therefore I am”, as a basis for the rest of knowledge. Dr. John B. Watson, the behaviourist, holds, on the contrary, that we do not think, but only talk. Dr. Watson, in real life, gives as much evidence of thinking as anyone does, so if he is not convinced that he thinks, we are all in a bad way. At any rate, the mere existence of such an opinion as his, on the part of a competent philosopher, must suffice to show that introspection is not so certain as some people have thought. But let us examine this question a little more closely.
The difference between introspection and what we call perception of external objects seems to me to be connected, not with what is primary in our knowledge, but with what is inferred. We think, at one time, that we are seeing a chair; at another, that we are thinking about philosophy. The first we call perception of an external object; the second we call introspection. Now we have already found reason to doubt external perception, in the full-blooded sense in which common-sense accepts it. I shall consider later what there is that is indubitable and primitive in perception; for the moment, I shall anticipate by saying that what is indubitable in “seeing a chair” is the occurrence of a certain pattern of colours. But this occurrence, we shall find, is connected with me just as much as with the chair; no one except myself can see exactly the pattern that I see. There is thus something subjective and private about what we take to be external perception, but this is concealed by precarious extensions into the physical world. I think introspection, on the contrary, involves precarious extensions into the mental world: shorn of these, it is not very different from external perception shorn of its extensions. To make this clear, I shall try to show what we know to be occurring when, as we say, we think about philosophy.
Suppose, as the result of introspection, you arrive at a belief which you express in the words: “I am now believing that mind is different from matter”. What do you know, apart from inferences, in such a case? First of all, you must cut out the word “I”: the person who believes is an inference, not part of what you know immediately. In the second place, you must be careful about the word “believing”: I am not now concerned with what this word should mean in logic or theory of knowledge; I am concerned with what it can mean when used to describe a direct experience. In such a case, it would seem that it can only describe a certain kind of feeling. And as for the proposition you think you are believing, namely, “mind is different from matter”, it is very difficult to say what is really occurring when you think you believe it. It may be mere words, pronounced, visualised, or in auditory or motor images. It may be images of what the words “mean”, but in that case it will not be at all an accurate representation of the logical content of the proposition. You may have an image of a statue of Newton “voyaging through strange seas of thought alone”, and another image of a stone rolling downhill, combined with the words “how different!” Or you may think of the difference between composing a lecture and eating your dinner. It is only when you come to expressing your thought in words that you approach logical precision.
Both in introspection and in external perception, we try to express what we know in words.
We come here, as in the question of testimony, upon the social aspect of knowledge. The purpose of words is to give the same kind of publicity to thought as is claimed for physical objects. A number of people can hear a spoken word or see a written word, because each is a physical occurrence. If I say to you, “mind is different from matter”, there may be only a very slight resemblance between the thought that I am trying to express and the thought which is aroused in you, but these two thoughts have just this in common, that they can be expressed by the same words. Similarly, there may be great differences between what you and I see when, as we say, we look at the same chair; nevertheless we can both express our perceptions by the same words.
A thought and a perception are thus not so very different in their own nature. If physics is true, they are different in their correlations: when I see a chair, others have more or less similar perceptions, and it is thought that these are all connected with light-waves coming from the chair, whereas, when I think a thought, others may not be thinking anything similar. But this applies also to feeling a toothache, which would not usually be regarded as a case of introspection. On the whole, therefore, there seems no reason to regard introspection as a different kind of knowledge from external perception. But this whole question will concern us again at a later stage.
As for the trustworthiness of introspection, there is again a complete parallelism with the case of external perception. The actual datum, in each case, is unimpeachable, but the extensions which we make instinctively are questionable. Instead of saying, “I am believing that mind is different from matter”, you ought to say, “certain images are occurring in a certain relation to each other, accompanied by a certain feeling”. No words exist for describing the actual occurrence in all its particularity; all words, even proper names, are general, with the possible exception of “this”, which is ambiguous. When you translate the occurrence into words, you are making generalisations and inferences, just as you are when you say “there is a chair”. There is really no vital difference between the two cases. In each case, what is really a datum is unutterable, and what can be put into words involves inferences which may be mistaken.
When I say that “inferences” are involved, I am saying something not quite accurate unless carefully interpreted. In “seeing a chair”, for instance, we do not first apprehend a coloured pattern, and then proceed to infer a chair: belief in the chair arises spontaneously when we see the coloured pattern. But this belief has causes not only in the present physical stimulus, but also partly in past experience, partly in reflexes. In animals, reflexes play a very large part; in human beings, experience is more important. The infant learns slowly to correlate touch and sight, and to expect others to see what he sees. The habits which are thus formed are essential to our adult notion of an object such as a chair. The perception of a chair by means of sight has a physical stimulus which affects only sight directly, but stimulates ideas of solidity and so on through early experience. The inference might be called “physiological”. An inference of this sort is evidence of past correlations, for instance between touch and sight, but may be mistaken in the present instance; you may, for instance, mistake a reflection in a large mirror for another room. Similarly in dreams we make mistaken physiological inferences. We cannot therefore feel certainty in regard to things which are in this sense inferred, because, when we try to accept as many of them as possible, we are nevertheless compelled to reject some for the sake of self-consistency.
We arrived a moment ago at what we called “physiological inference” as an essential ingredient in the common-sense notion of a physical object. Physiological inference, in its simplest form, means this: given a stimulus S, to which, by a reflex, we react by a bodily movement R, and a stimulus S′ with a reaction R′, if the two stimuli are frequently experienced together, S will in time produce R′.[2] That is to say, the body will act as if S′ were present. Physiological inference is important in theory of knowledge, and I shall have much to say about it at a later stage. For the present, I have mentioned it partly to prevent it from being confused with logical inference, and partly in order to introduce the problem of induction, about which we must say a few preliminary words at this stage.
[2] E.g. if you hear a sharp noise and see a bright light simultaneously often, in time the noise without the light will cause your pupils to contract.
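The rule stated above can be put, anachronistically, in the idiom of a modern programmer. The following sketch is no part of Russell's text: the class, its names, and the fixed co-occurrence threshold are all illustrative inventions, intended only to make the S, R, S′, R′ schema concrete.

```python
# An illustrative sketch of "physiological inference": given an innate
# reflex S' -> R', if S and S' are frequently experienced together,
# then S alone comes to produce R'.  All names here (Organism, the
# threshold of 3 pairings, the stimulus labels) are invented for the
# example, not drawn from the text.

class Organism:
    def __init__(self, reflexes, threshold=3):
        self.reflexes = dict(reflexes)  # innate stimulus -> reaction
        self.pairings = {}              # (s, s2) -> co-occurrence count
        self.threshold = threshold      # pairings needed to form a habit

    def experience(self, stimuli):
        """Present a set of simultaneous stimuli; return the reactions."""
        # Record every ordered pair of co-occurring stimuli.
        for s in stimuli:
            for s2 in stimuli:
                if s != s2:
                    key = (s, s2)
                    self.pairings[key] = self.pairings.get(key, 0) + 1
        reactions = set()
        for s in stimuli:
            # Innate reflex: S produces R directly.
            if s in self.reflexes:
                reactions.add(self.reflexes[s])
            # Conditioned habit: S has often accompanied S', so S
            # now produces R' as well.
            for s2, r2 in self.reflexes.items():
                if self.pairings.get((s, s2), 0) >= self.threshold:
                    reactions.add(r2)
        return reactions

# The footnote's example: noise and light together, often enough,
# and then the noise alone contracts the pupils.
subject = Organism({"light": "contract-pupils"})
for _ in range(3):
    subject.experience({"noise", "light"})
print(subject.experience({"noise"}))  # the light's reflex, from noise alone
```

The body here "acts as if S′ were present" in exactly Russell's sense: nothing is deduced, a habit is merely triggered, which is why such inference can mislead in the mirror or the dream.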
Induction raises perhaps the most difficult problem in the whole theory of knowledge. Every scientific law is established by its means, and yet it is difficult to see why we should believe it to be a valid logical process. Induction, in its bare essence, consists of the argument that, because A and B have been often found together and never found apart, therefore, when A is found again, B will probably also be found. This exists first as a “physiological inference”, and as such is practised by animals. When we first begin to reflect, we find ourselves making inductions in the physiological sense, for instance, expecting the food we see to have a certain kind of taste. Often we only become aware of this expectation through having it disappointed, for instance if we take salt thinking it is sugar. When mankind took to science, they tried to formulate logical principles justifying this kind of inference. I shall discuss these attempts in later chapters; for the present, I will only say that they seem to me very unsuccessful. I am convinced that induction must have validity of some kind in some degree, but the problem of showing how or why it can be valid remains unsolved. Until it is solved, the rational man will doubt whether his food will nourish him, and whether the sun will rise tomorrow. I am not a rational man in this sense, but for the moment I shall pretend to be. And even if we cannot be completely rational, we should probably all be the better for becoming somewhat more rational than we are. At the lowest estimate, it will be an interesting adventure to see whither reason will lead us.
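The bare inductive rule just stated, and its fragility, can likewise be rendered as a toy sketch. Again, nothing below is from the text: the function and its sample data are invented solely to exhibit the rule "often together, never apart, therefore expect together again", and the way a single contrary instance defeats it.

```python
# A toy rendering of bare induction: if A and B have often been found
# together and never found apart, expect B on the next occurrence of A.
# The function and the data are illustrative inventions.

def expect_b_given_a(observations, a, b):
    """observations: a list of sets of co-present features."""
    together = sum(1 for obs in observations if a in obs and b in obs)
    apart = sum(1 for obs in observations if a in obs and b not in obs)
    return together > 0 and apart == 0

# Twenty meals in which white granules have always tasted sweet:
meals = [{"white-granules", "sweet"}] * 20
print(expect_b_given_a(meals, "white-granules", "sweet"))  # True

# One salt-for-sugar surprise, and the expectation is defeated:
meals = meals + [{"white-granules", "salty"}]
print(expect_b_given_a(meals, "white-granules", "sweet"))  # False
```

The sketch shows only what the argument form licenses and withholds; it does not, of course, touch the philosophical problem of why any number of past conjunctions should warrant the expectation at all.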
The problems we have been raising are none of them new, but they suffice to show that our everyday views of the world and of our relations to it are unsatisfactory. We have been asking whether we know this or that, but we have not yet asked what “knowing” is. Perhaps we shall find that we have had wrong ideas as to knowing, and that our difficulties grow less when we have more correct ideas on this point. I think we shall do well to begin our philosophical journey by an attempt to understand knowing considered as part of the relation of man to his environment, forgetting, for the moment, the fundamental doubts with which we have been concerned. Perhaps modern science may enable us to see philosophical problems in a new light. In that hope, let us examine the relation of man to his environment with a view to arriving at a scientific view as to what constitutes knowledge.
PART I
MAN FROM WITHOUT
[CHAPTER II]
MAN AND HIS ENVIRONMENT
If our scientific knowledge were full and complete, we should understand ourselves and the world and our relation to the world. As it is, our understanding of all three is fragmentary. For the present, it is the third question, that of our relation to the world, that I wish to consider, because this brings us nearest to the problems of philosophy. We shall find that it will lead us back to the other two questions, as to the world and as to ourselves, but that we shall understand both these better if we have considered first how the world acts upon us and how we act upon the world.
There are a number of sciences which deal with Man. We may deal with him in natural history, as one among the animals, having a certain place in evolution, and related to other animals in ascertainable ways. We may deal with him in physiology, as a structure capable of performing certain functions, and reacting to the environment in ways of which some, at least, can be explained by chemistry. We may study him in sociology, as a unit in various organisms, such as the family and the state. And we may study him, in psychology, as he appears to himself. This last gives what we may call an internal view of man, as opposed to the other three, which give an external view. That is to say, in psychology we use data which can only be obtained when the observer and the observed are the same person, whereas in the other ways of studying Man all our data can be obtained by observing other people. There are different ways of interpreting this distinction, and different views of its importance, but there can be no doubt that there is such a distinction. We can remember our own dreams, whereas we cannot know the dreams of others unless they tell us about them. We know when we have toothache, when our food tastes too salty, when we are remembering some past occurrence, and so on. All these events in our lives other people cannot know in the same direct way. In this sense, we all have an inner life, open to our own inspection but to no one else’s. This is no doubt the source of the traditional distinction of mind and body: the body was supposed to be that part of us which others could observe, and the mind that part which was private to ourselves. The importance of the distinction has been called in question in recent times, and I do not myself believe that it has any fundamental philosophical significance. But historically it has played a dominant part in determining the conceptions from which men set out when they began to philosophise, and on this account, if on no other, it deserves to be borne in mind.
Knowledge, traditionally, has been viewed from within, as something which we observe in ourselves rather than as something which we can see others displaying. When I say that it has been so viewed, I mean that this has been the practice of philosophers; in ordinary life, people have been more objective. In ordinary life, knowledge is something which can be tested by examinations, that is to say, it consists in a certain kind of response to a certain kind of stimulus. This objective way of viewing knowledge is, to my mind, much more fruitful than the way which has been customary in philosophy. I mean that, if we wish to give a definition of “knowing”, we ought to define it as a manner of reacting to the environment, not as involving something (a “state of mind”) which only the person who has the knowledge can observe. It is because I hold this view that I think it best to begin with Man and his environment, rather than with those matters in which the observer and the observed must be the same person. Knowing, as I view it, is a characteristic which may be displayed in our reactions to our environment; it is therefore necessary first of all to consider the nature of these reactions as they appear in science.
Let us take some everyday situation. Suppose you are watching a race, and at the appropriate moment you say, “they’re off”. This exclamation is a reaction to the environment, and is taken to show knowledge if it is made at the same time as others make it. Now let us consider what has been really happening, according to science. The complication of what has happened is almost incredible. It may conveniently be divided into four stages: first, what happened in the outside world between the runners and your eyes; secondly, what happened in your body from your eyes to your brain; thirdly, what happened in your brain; fourthly, what happened in your body from your brain to the movements of your throat and tongue which constituted your exclamation. Of these four stages, the first belongs to physics, and is dealt with in the main by the theory of light; the second and fourth belong to physiology; the third, though it should theoretically also belong to physiology, belongs in fact rather to psychology, owing to our lack of knowledge as to the brain. The third stage embodies the results of experience and learning. It is responsible for the fact that you speak, which an animal would not do, and that you speak English, which a Frenchman would not do. This immensely complicated occurrence is, nevertheless, about the simplest example of knowledge that could possibly be given.
For the moment, let us leave on one side the part of this process which happens in the outside world and belongs to physics. I shall have much to say about it later, but what has to be said is not altogether easy, and we will take less abstruse matters first. I will merely observe that the event which we are said to perceive, namely the runners starting, is separated by a longer or shorter chain of events from the event which happens at the surface of our eyes. It is this last that is what is called the “stimulus”. Thus the event that we are said to perceive when we see is not the stimulus, but an anterior event related to it in a way that requires investigation. The same applies to hearing and smell, but not to touch or to perception of states of our own body. In these cases, the first of the above four stages is absent. It is clear that, in the case of sight, hearing and smell, there must be a certain relation between the stimulus and the event said to be perceived, but we will not now consider what this relation must be. We will consider, rather, the second, third, and fourth stages in an act of perceptive knowledge. This is the more legitimate as these stages always exist, whereas the first is confined to certain senses.
The second stage is that which proceeds from the sense-organ to the brain. It is not necessary for our purposes to consider exactly what goes on during this journey. A purely physical event—the stimulus—happens at the boundary of the body, and has a series of effects which travel along the afferent nerves to the brain. If the stimulus is light, it must fall on the eye to produce the characteristic effects; no doubt light falling on other parts of the body has effects, but they are not those that distinguish vision. Similarly, if the stimulus is sound, it must fall on the ear. A sense-organ, like a photographic plate, is responsive to stimuli of a certain sort: light falling on the eye has effects which are different for different wave-lengths, intensities, and directions. When the events in the eye due to incident light have taken place, they are followed by events in the optic nerve, leading at last to some occurrence in the brain—an occurrence which varies with the stimulus. The occurrence in the brain must be different for different stimuli in all cases where we can perceive differences. Red and yellow, for instance, are distinguishable in perception; therefore the occurrences along the optic nerve and in the brain must have a different character when caused by red light from what they have when caused by yellow light. But when two shades of colour are so similar that they can only be distinguished by delicate instruments, not by perception, we cannot be sure that they cause occurrences of different characters in the optic nerve and brain.
When the disturbance has reached the brain, it may or may not cause a characteristic set of events in the brain. If it does not, we shall not be what is called “conscious” of it. For to be “conscious” of seeing yellow, whatever else it may be, must certainly involve some kind of cerebral reaction to the message brought by the optic nerve. It may be assumed that the great majority of messages brought to the brain by the afferent nerves never secure any attention at all—they are like letters to a government office which remain unanswered. The things in the margin of the field of vision, unless they are in some way interesting, are usually unnoticed; if they are noticed, they are brought into the centre of the field of vision unless we make a deliberate effort to prevent this from occurring. These things are visible, in the sense that we could be aware of them if we chose, without any change in our physical environment or in our sense-organs; that is to say, only a cerebral change is required to enable them to cause a reaction. But usually they do not provoke any reaction; life would be altogether too wearing if we had to be always reacting to everything in the field of vision. Where there is no reaction, the second stage completes the process, and the third and fourth stages do not arise. In that case, there has been nothing that could be called “perception” connected with the stimulus in question.
To us, however, the interesting case is that in which the process continues. In this case there is first a process in the brain, of which the nature is as yet conjectural, which travels from the centre appropriate to the sense in question to a motor centre. From these there is a process which travels along an efferent nerve, and finally results in a muscular event causing some bodily movement. In our illustration of the man watching the beginning of a race, a process travels from the part of the brain concerned with sight to the part concerned with speech; this is what we called the third stage. Then a process travels along the efferent nerves and brings about the movements which constitute saying “they’re off”; this is what we called the fourth stage.
Unless all four stages exist, there is nothing that can be called “knowledge”. And even when they are all present, various further conditions must be satisfied if there is to be “knowledge”. But these observations are premature, and we must return to the analysis of our third and fourth stages.
The third stage is of two sorts, according as we are concerned with a reflex or with a “learned reaction”, as Dr. Watson calls it. In the case of a reflex, if it is complete at birth, a new-born infant or animal has a brain so constituted that, without the need of any previous experience, there is a connection between a certain process in the afferent nerves and a certain other process in the efferent nerves. A good example of a reflex is sneezing. A certain kind of tickling in the nose produces a fairly violent movement having a very definite character, and this connection exists already in the youngest infants. Learned reactions, on the other hand, are such as only occur because of the effect of previous occurrences in the brain. One might illustrate by an analogy which, however, would be misleading if pressed. Imagine a desert in which no rain has ever fallen, and suppose that at last a thunderstorm occurs in it; then the course taken by the water will correspond to a reflex. But if rain continues to fall frequently, it will form watercourses and river valleys; when this has occurred, the water runs away along pre-formed channels, which are attributable to the past “experience” of the region. This corresponds to “learned reactions”. One of the most notable examples of learned reactions is speech: we speak because we have learned a certain language, not because our brain had originally any tendency to react in just that way. Perhaps all knowledge, certainly nearly all, is dependent upon learned reactions, i.e., upon connections in the brain which are not part of man’s congenital equipment but are the result of events which have happened to him.
To distinguish between learned and unlearned responses is not always an easy task. It cannot be assumed that responses which are absent during the first weeks of life are all learned. To take the most obvious instance; sexual responses change their character to a greater or less extent at puberty, as a result of changes in the ductless glands, not as a result of experience. But this instance does not stand alone: as the body grows and develops, new modes of response come into play, modified, no doubt, by experience, but not wholly due to it. For example: a new-born baby cannot run, and therefore does not run away from what is terrifying, as an older child does. The older child has learned to run, but has not necessarily learned to run away; the stimulus in learning to run may have never been a terrifying object. It would therefore be a fallacy to suppose that we can distinguish between learned and unlearned responses by observing what a new-born infant does, since reflexes may come into play at a later stage. Conversely, some things which a child does at birth may have been learned, when they are such as it could have done in the womb—for example, a certain amount of kicking and stretching. The whole distinction between learned and unlearned responses, therefore, is not so definite as we could wish. At the two extremes we get clear cases, such as sneezing on the one hand and speaking on the other; but there are intermediate forms of behaviour which are more difficult to classify.
This is not denied even by those who attach most importance to the distinction between learned and unlearned responses. In Dr. Watson’s Behaviorism (p. 103) there is a “Summary of Unlearned Equipment”, which ends with the following paragraph:
“Other activities appear at a later stage—such as blinking, reaching, handling, handedness, crawling, standing, sitting-up, walking, running, jumping. In the great majority of these later activities it is difficult to say how much of the act as a whole is due to training or conditioning. A considerable part is unquestionably due to the growth changes in structure, and the remainder is due to training and conditioning.” (Watson’s italics.)
It is not possible to make a logically sharp distinction in this matter; in certain cases we have to be satisfied with something less exact. For example, we might say that those developments which are merely due to normal growth are to count as unlearned, while those which depend upon special circumstances in the individual biography are to count as learned. But take, say, muscular development: this will not take place normally unless the muscles are used, and if they are used they are bound to learn some of the skill which is appropriate to them. And some things which must certainly count as learned, such as focussing with the eyes, depend upon circumstances which are normal and must be present in the case of every child who is not blind. The whole distinction, therefore, is one of degree rather than of kind; nevertheless it is valuable.
The value of the distinction between learned and unlearned reactions is connected with the laws of learning, to which we shall come in the next chapter. Experience modifies behaviour according to certain laws, and we may say that a learned reaction is one in the formation of which these laws have played a part. For example: children are frightened of loud noises from birth, but are not at first frightened of dogs; after they have heard a dog barking loudly, they may become frightened of dogs, which is a learned reaction. If we knew enough about the brain, we could make the distinction precise, by saying that learned reactions are those depending upon modifications of the brain other than mere growth. But as it is, we have to judge by observations of bodily behaviour, and the accompanying modifications in the brain are assumed on a basis of theory rather than actually observed.
The essential points, for our purposes, are comparatively simple. Man or any other animal, at birth, is such as to respond to certain stimuli in certain specific ways, i.e. by certain kinds of bodily movements; as he grows, these ways of responding change, partly as the mere result of developing structure, partly in consequence of events in his biography. The latter influence proceeds according to certain laws, which we shall consider, since they have much to do with the genesis of “knowledge”.
But—the indignant reader may be exclaiming—knowing something is not a bodily movement, but a state of mind, and yet you talk to us about sneezing and such matters. I must ask the indignant reader’s patience. He “knows” that he has states of mind, and that his knowing is itself a state of mind. I do not deny that he has states of mind, but I ask two questions: First, what sort of thing are they? Secondly, what evidence can he give me that he knows about them? The first question he may find very difficult; and if he wants, in his answer, to show that states of mind are something of a sort totally different from bodily movements, he will have to tell me also what bodily movements are, which will plunge him into the most abstruse part of physics. All this I propose to consider later on, and then I hope the indignant reader will be appeased. As to the second question, namely, what evidence of his knowledge another man can give me, it is clear that he must depend upon speech or writing, i.e. in either case upon bodily movements. Therefore whatever knowledge may be to the knower, as a social phenomenon it is something displayed in bodily movements. For the present I am deliberately postponing the question of what knowledge is to the knower, and confining myself to what it is for the external observer. And for him, necessarily, it is something shown by bodily movements made in answer to stimuli—more specifically, to examination questions. What else it may be I shall consider at a later stage.
However we may subsequently add to our present account by considering how knowledge appears to the knower, that will not invalidate anything that we may arrive at by considering how knowledge appears to the external observer. And there is something which it is important to realise, namely, that we are concerned with a process in which the environment first acts upon a man, and then he reacts upon the environment. This process has to be considered as a whole if we are to discuss what knowledge is. The older view would have been that the effect of the environment upon us might constitute a certain kind of knowledge (perception), while our reaction to the environment constituted volition. These were, in each case, “mental” occurrences, and their connection with nerves and brain remained entirely mysterious. I think the mystery can be eliminated, and the subject removed from the realm of guesswork, by starting with the whole cycle from stimulus to bodily movement. In this way, knowing becomes something active, not something contemplative. Knowing and willing, in fact, are merely aspects of the one cycle, which must be considered in its entirety if it is to be rightly understood.
A few words must be said about the human body as a mechanism. It is an inconceivably complicated mechanism, and some men of science think that it is not explicable in terms of physics and chemistry, but is regulated by some “vital principle” which makes its laws different from those of dead matter. These men are called “vitalists”. I do not myself see any reason to accept their view, but at the same time our knowledge is not sufficient to enable us to reject it definitely. What we can say is that their case is not proved, and that the opposite view is, scientifically, a more fruitful working hypothesis. It is better to look for physical and chemical explanations where we can, since we know of many processes in the human body which can be accounted for in this way, and of none which certainly cannot. To invoke a “vital principle” is to give an excuse for laziness, when perhaps more diligent research would have enabled us to do without it. I shall therefore assume, as a working hypothesis, that the human body acts according to the same laws of physics and chemistry as those which govern dead matter, and that it differs from dead matter, not by its laws, but by the extraordinary complexity of its structure.
The movements of the human body may, none the less, be divided into two classes, which we may call respectively “mechanical” and “vital”. As an example of the former, I should give the movement of a man falling from a cliff into the sea. To explain this, in its broad features, it is not necessary to take account of the fact that the man is alive; his centre of gravity moves exactly as that of a stone would move. But when a man climbs up a cliff, he does something that dead matter of the same shape and weight would never do; this is a “vital” movement. There is in the human body a lot of stored chemical energy in more or less unstable equilibrium; a very small stimulus can release this energy, and cause a considerable amount of bodily movement. The situation is analogous to that of a large rock delicately balanced on the top of a conical mountain; a tiny shove may send it thundering down into the valley, in one direction or another according to the direction of the shove. So if you say to a man “your house is on fire”, he will start running; although the stimulus contained very little energy, his expenditure of energy may be tremendous. He increases the available energy by panting, which makes his body burn up faster and increases the energy due to combustion; this is just like opening the draft in a furnace. “Vital” movements are those that use up this energy which is in unstable equilibrium. It is they alone that concern the bio-chemist, the physiologist, and the psychologist. The others, being just like the movements of dead matter, may be ignored when we are specially concerned with the study of Man.
Vital movements have a stimulus which may be inside or outside the body, or both at once. Hunger is a stimulus inside the body, but hunger combined with the sight of good food is a double stimulus, both internal and external. The effect of a stimulus may be, in theory, according to the laws of physics and chemistry, but in most cases this is, at present, no more than a pious opinion. What we know from observation is that behaviour is modified by experience, that is to say, that if similar stimuli are repeated at intervals they produce gradually changing reactions. When a bus conductor says, “Fares, please”, a very young child has no reaction, an older child gradually learns to look for pennies, and, if a male, ultimately acquires the power of producing the requisite sum on demand without conscious effort. The way in which our reactions change with experience is a distinctive characteristic of animals; moreover it is more marked in the higher than in the lower animals, and most marked of all in Man. It is a matter intimately connected with “intelligence”, and must be investigated before we can understand what constitutes knowledge from the standpoint of the external observer; we shall be concerned with it at length in the next chapter.
Speaking broadly, the actions of all living things are such as tend to biological survival, i.e. to the leaving of a numerous progeny. But when we descend to the lowest organisms, which have hardly anything that can be called individuality, and reproduce themselves by fission, it is possible to take a simpler view. Living matter, within limits, has the chemical peculiarity of being self-perpetuating, and of conferring its own chemical composition upon other matter composed of the right elements. One spore falling into a stagnant pond may produce millions of minute vegetable organisms; these, in turn, enable one small animal to have myriads of descendants living on the small plants; these, in turn, provide life for larger animals, newts, tadpoles, fishes, etc. In the end there is enormously more protoplasm in that region than there was to begin with. This is no doubt explicable as a result of the chemical constitution of living matter. But this purely chemical self-preservation and collective growth is at the bottom of everything else that characterises the behaviour of living things. Every living thing is a sort of imperialist, seeking to transform as much as possible of its environment into itself and its seed. The distinction between self and posterity is one which does not exist in a developed form in asexual unicellular organisms; many things, even in human life, can only be completely understood by forgetting it. We may regard the whole of evolution as flowing from this “chemical imperialism” of living matter. Of this, Man is only the last example (so far). He transforms the surface of the globe by irrigation, cultivation, mining, quarrying, making canals and railways, breeding certain animals, and destroying others; and when we ask ourselves, from the standpoint of an outside observer, what is the end achieved by all these activities, we find that it can be summed up in one very simple formula: to transform as much as possible of the matter on the earth’s surface into human bodies. Domestication of animals, agriculture, commerce, industrialism have been stages in this process. When we compare the human population of the globe with that of other large animals and also with that of former times, we see that “chemical imperialism” has been, in fact, the main end to which human intelligence has been devoted. Perhaps intelligence is reaching the point where it can conceive worthier ends, concerned with the quality rather than the quantity of human life. But as yet such intelligence is confined to minorities, and does not control the great movements of human affairs. Whether this will ever be changed I do not venture to predict. And in pursuing the simple purpose of maximising the amount of human life, we have at any rate the consolation of feeling at one with the whole movement of living things from their earliest origin on this planet.
[CHAPTER III]
THE PROCESS OF LEARNING IN ANIMALS AND INFANTS
In the present chapter I wish to consider the processes by which, and the laws according to which, an animal’s original repertoire of reflexes is changed into a quite different set of habits as a result of events that happen to it. A dog learns to follow his master in preference to anyone else; a horse learns to know his own stall in the stable; a cow learns to come to the cow-shed at milking time. All these are acquired habits, not reflexes; they depend upon the circumstances of the animals concerned, not merely upon the congenital characteristics of the species. When I speak of an animal “learning” something, I shall include all cases of acquired habits, whether or not they are useful to the animal. I have known horses in Italy “learn” to drink wine, which I cannot believe to have been a desirable habit. A dog may “learn” to fly at a man who has ill-treated it, and may do so with such regularity and ferocity as to lead to its being killed. I do not use learning in any sense involving praise, but merely to denote modification of behaviour as the result of experience.
The manner in which animals learn has been much studied in recent years, with a great deal of patient observation and experiment. Certain results have been obtained as regards the kinds of problems that have been investigated, but on general principles there is still much controversy. One may say broadly that all the animals that have been carefully observed have behaved so as to confirm the philosophy in which the observer believed before his observations began. Nay, more, they have all displayed the national characteristics of the observer. Animals studied by Americans rush about frantically, with an incredible display of hustle and pep, and at last achieve the desired result by chance. Animals observed by Germans sit still and think, and at last evolve the solution out of their inner consciousness. To the plain man, such as the present writer, this situation is discouraging. I observe, however, that the type of problem which a man naturally sets to an animal depends upon his own philosophy, and that this probably accounts for the differences in the results. The animal responds to one type of problem in one way and to another in another; therefore the results obtained by different investigators, though different, are not incompatible. But it remains necessary to remember that no one investigator is to be trusted to give a survey of the whole field.
The matters with which we shall be concerned in this chapter belong to behaviourist psychology, and in part to pure physiology. Nevertheless, they seem to me vital to a proper understanding of philosophy, since they are necessary for an objective study of knowledge and inference. I mean by an “objective” study one in which the observer and the observed need not be the same person; when they must be identical, I call the study “subjective.” For the present we are concerned with what is required for understanding “knowledge” as an objective phenomenon. We shall take up the question of the subjective study of knowledge at a later stage.
The scientific study of learning in animals is a very recent growth; it may almost be regarded as beginning with Thorndike’s Animal Intelligence, which was published in 1911. Thorndike invented the method which has been adopted by practically all subsequent American investigators. In this method an animal is separated from food, which he can see or smell, by an obstacle which he may overcome by chance. A cat, say, is put in a cage having a door with a handle which he may by chance push open with his nose. At first the cat makes entirely random movements, until he gets his result by a mere fluke. On the second occasion, in the same cage, he still makes some random movements, but not so many as on the first occasion. On the third occasion he does still better, and before long he makes no useless movements. Nowadays it has become customary to employ rats instead of cats, and to put them in a model of the Hampton Court maze rather than in a cage. They take all sorts of wrong turnings at first, but after a time they learn to run straight out without making any mistake. Dr. Watson gives averages for nineteen rats, each of which was put into the maze repeatedly, with food outside where the rat could smell it. In all the experiments care was taken to make sure that the animal was very hungry. Dr. Watson says: “The first trial required on the average over seventeen minutes. During this time the rat was running around the maze, into blind alleys, running back to the starting point, starting for the food again, biting at the wires around him, scratching himself, smelling this spot and that on the floor. Finally he got to the food. He was allowed only a bite. Again he was put back into the maze. The taste of the food made him almost frantic in his activity. He dashed about more rapidly. The average time for the group on the second trial is only a little over seven minutes; on the fourth trial not quite three minutes; from this point to the twenty-third trial the improvement is very gradual.” On the thirtieth trial the time required, on the average, was about thirty seconds.[3] This set of experiments may be taken as typical of the whole group of studies to which it belongs.
[3] Watson, Behaviorism, pp. 169–70.
Thorndike, as a result of experiments with cages and mazes, formulated two “provisional laws,” which are as follows:
“The Law of Effect is that: of several responses made to the same situation, those which are accompanied or closely followed by satisfaction to the animal will, other things being equal, be more firmly connected with the situation, so that, when it recurs, they will be more likely to recur; those which are accompanied or closely followed by dissatisfaction to the animal will, other things being equal, have their connections with that situation weakened, so that, when it recurs, they will be less likely to recur. The greater the satisfaction or discomfort, the greater the strengthening or weakening of the bond.
“The Law of Exercise is that: Any response to a situation will, other things being equal, be more strongly connected with the situation in proportion to the number of times it has been connected with that situation and to the average vigour and duration of the connections.”
We may sum up these two laws, roughly, in the two statements: First, an animal tends to repeat what has brought it pleasure; second, an animal tends to repeat what it has often done before. Neither of these laws is at all surprising, but, as we shall see, there are difficulties in the theory that they are adequate to account for the process of learning in animals.
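The two laws, as summed up above, can be pictured as numerical "bonds" between a situation and its possible responses. The following sketch is an editorial illustration only, not Thorndike's own formulation; the class names and the numerical constants are arbitrary assumptions made for the sake of the example.

```python
# Illustrative sketch of Thorndike's two "provisional laws" as
# connection strengths ("bonds") between a situation and responses.
# The constants here are assumptions, not Thorndike's.

class Situation:
    def __init__(self, responses):
        # each response starts with the same bond to the situation
        self.bonds = {r: 1.0 for r in responses}

    def effect(self, response, satisfaction):
        # Law of Effect: satisfaction strengthens the bond, discomfort
        # (negative satisfaction) weakens it, in proportion to its size
        self.bonds[response] += satisfaction

    def exercise(self, response, vigour=0.1):
        # Law of Exercise: each repetition strengthens the connection
        # a little, in proportion to its vigour
        self.bonds[response] += vigour

    def most_likely(self):
        # the most strongly bonded response is the likeliest to recur
        return max(self.bonds, key=self.bonds.get)

cage = Situation(["scratch", "bite_wires", "push_handle"])
cage.effect("push_handle", satisfaction=2.0)   # this act led to food
cage.effect("bite_wires", satisfaction=-1.0)   # this one led to nothing
cage.exercise("push_handle")                   # and it has been repeated
print(cage.most_likely())
```

On this toy model the cat of the earlier example comes, after a few trials, to push the handle rather than bite the wires, which is all the two laws jointly assert.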
Before going further there is a theoretical point to be cleared up. Thorndike, in his first law, speaks of satisfaction and discomfort, which are terms belonging to subjective psychology. We cannot observe whether an animal feels satisfaction or feels discomfort; we can only observe that it behaves in ways that we have become accustomed to interpret as signs of these feelings. Thorndike’s law, as it stands, does not belong to objective psychology, and is not capable of being experimentally tested. This, however, is not so serious an objection as it looks. Instead of speaking of a result that brings satisfaction, we can merely enumerate the results which, in fact, have the character which Thorndike mentions, namely, that the animal tends to behave so as to make them recur. The rat in the maze behaves so as to get the cheese, and when an act has led him to the cheese once, he tends to repeat it. We may say that this is what we mean when we say that the cheese “gives satisfaction”, or that the rat “desires” the cheese. That is to say, we may use Thorndike’s “Law of Effect” to give us an objective definition of desire, satisfaction, and discomfort. The law should then say: there are situations such that animals tend to repeat acts which have led to them; these are the situations which the animal is said to “desire” and in which it is said to “find satisfaction”. This objection to Thorndike’s first law is, therefore, not very serious, and need not further trouble us.
Dr. Watson considers one principle alone sufficient to account for all animal and human learning, namely, the principle of “learned reactions.” This principle may be stated as follows:
When the body of an animal or human being has been exposed sufficiently often to two roughly simultaneous stimuli, the earlier of them alone tends to call out the response previously called out by the other.
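The principle can be made concrete with a small sketch, modelled on Watson's dog whose left thigh was touched whenever food was given. This is an editorial illustration; in particular, the threshold standing in for "sufficiently often" is an arbitrary assumption, since neither Watson nor the text fixes a number.

```python
# Sketch of the principle of "learned reactions": after enough roughly
# simultaneous pairings, the earlier stimulus alone calls out the
# response formerly called out by the other.  PAIRINGS_NEEDED is an
# assumed stand-in for "sufficiently often".

PAIRINGS_NEEDED = 5

class Animal:
    def __init__(self):
        self.reflexes = {"food": "salivate"}  # unconditioned reflex
        self.pairings = {}                    # counts of paired stimuli

    def present_together(self, earlier, later):
        # e.g. a touch on the left thigh (earlier) followed by food (later)
        if later in self.reflexes:
            key = (earlier, later)
            self.pairings[key] = self.pairings.get(key, 0) + 1
            if self.pairings[key] >= PAIRINGS_NEEDED:
                # the substitute stimulus now calls out the old response
                self.reflexes[earlier] = self.reflexes[later]

    def respond(self, stimulus):
        return self.reflexes.get(stimulus)

dog = Animal()
assert dog.respond("touch_left_thigh") is None
for _ in range(PAIRINGS_NEEDED):
    dog.present_together("touch_left_thigh", "food")
assert dog.respond("touch_left_thigh") == "salivate"
```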
Although I do not agree with Dr. Watson in thinking this principle alone sufficient, I do agree that it is a principle of very great importance. It is the modern form of the principle of “association”. The “association of ideas” has played a great part in philosophy, particularly in British philosophy. But it now appears that this is a consequence of a wider and more primitive principle, namely, the association of bodily processes. It is this wider principle that is asserted above. Let us see what is the nature of the evidence in its favour.
Our principle becomes verifiable over a much larger field than the older principle owing to the fact that it is movements, not “ideas”, that are to be associated. Where animals are concerned, ideas are hypothetical, but movements can be observed; even with men, many movements are involuntary and unconscious. Yet animal movements and unconscious involuntary human movements are just as much subject to the law of association as the most conscious ideas. Take, e.g. the following example (Watson, p. 33). The pupil of the eye expands in darkness and contracts in bright light; this is an involuntary and unconscious action of which we only become aware by observing others. Now take some person and repeatedly expose him to bright light at the same moment that you ring an electric bell. After a time the electric bell alone will cause his pupils to contract. As far as can be discovered, all muscles behave in this way. So do glands where they can be tested. It is said that a brass band can be reduced to silence by sucking a lemon in front of it, owing to the effect upon the salivary glands of its members; I confess that I have never verified this statement. But you will find the exact scientific analogue for dogs in Watson, p. 26. You arrange a tube in a dog’s mouth so that saliva drops out at a measurable rate. When you give the dog food it stimulates the flow of saliva. At the same moment you touch his left thigh. After a certain length of time the touch on the left thigh will produce just as much saliva without the food as with it. The same sort of thing applies to emotions, which depend upon the ductless glands. Children at birth are afraid of loud noises, but not of animals. Watson took a child eleven months old, who was fond of a certain white rat; twice at the moment when the child touched the rat, a sudden noise was made just behind the child’s head. 
This was enough to cause fear of the rat on subsequent occasions, no doubt owing to the fact that the adrenal gland was now stimulated by the substitute stimulus, just like the salivary glands in the dog or the trumpet player. The above illustrations show that “ideas” are not the essential units in association. It seems that not merely is “mind” irrelevant, but even the brain is less important than was formerly supposed. At any rate, what is known experimentally is that the glands and muscles (both striped and unstriped) of the higher animals exhibit the law of transfer of response, i.e. when two stimuli have often been applied together, one will ultimately call out the response which formerly the other called out. This law is one of the chief bases of habit. It is also obviously essential to our understanding of language: the sight of a dog calls up the word “dog”, and the word “dog” calls up some of the responses appropriate to a real dog.
There is, however, another element in learning, besides mere habit. This is the element dealt with by Thorndike’s “Law of Effect.” Animals tend to repeat acts which have pleasant consequences, and to avoid such as have unpleasant consequences. But, as we saw a moment ago, “pleasant” and “unpleasant” are words which we cannot verify by objective observation. What we can verify by observation is that an animal seeks situations which in fact have had certain results, and avoids situations which in fact have had certain other results. Moreover, broadly speaking, the animal seeks results which tend to survival of itself or its offspring, and avoids results which tend in the opposite direction. This, however, is not invariable. Moths seek flames and men seek drink, though neither is biologically useful. It is only approximately, in situations long common, that animals are so adjusted to their environment as to act in a way which is advantageous from a biological standpoint. In fact, biological utility must never be employed as an explanation, but only noticed as a frequent characteristic, of the ways in which animals behave.
Dr. Watson is of the opinion that Thorndike’s “Law of Effect” is unnecessary. He first suggests that only two factors are called for in the explanation of habit, namely, frequency and recency. Frequency is covered by Thorndike’s “Law of Exercise”, but recency, which is almost certainly a genuine factor, is not covered by Thorndike’s two laws. That is to say, when a number of random movements have finally resulted in success, the more recent of these movements are likely to be repeated earlier, on a second trial, than the earlier ones. But Dr. Watson finally abandons this method of dealing with habit-formation in favour of the one law of “conditioned reflexes” or “learned reactions”. He says (Behaviorism, p. 166):
“Only a few psychologists have been interested in the problem. Most of the psychologists, it is to be regretted, have even failed to see that there is a problem. They believe habit formation is implanted by kind fairies. For example, Thorndike speaks of pleasure stamping in the successful movement and displeasure stamping out the unsuccessful movements. Most of the psychologists talk, too, quite volubly about the formation of new pathways in the brain, as though there were a group of tiny servants of Vulcan there who run through the nervous system with hammer and chisel digging new trenches and deepening old ones. I am not sure that the problem when phrased in this way is a soluble one. I feel that there must come some simpler way of envisaging the whole process of habit formation or else it may remain insoluble. Since the advent of the conditioned reflex hypothesis in psychology with all of the simplifications (and I am often fearful that it may be an oversimplification!) I have had my own laryngeal processes [i.e. what others call “thoughts”] stimulated to work upon this problem from another angle.”
I agree with Dr. Watson that the explanations of habit-formation which are usually given are very inadequate, and that few psychologists have realised either the importance or the difficulty of the problem. I agree also that a great many cases are covered by his formula of the conditioned reflex. He relates a case of a child who once touched a hot radiator, and afterward avoided it for two years. He adds: “If we should keep our old habit terminology, we should have in this example a habit formed by a single trial. There can be then in this case no ‘stamping in of the successful movement’ and ‘no stamping out of the unsuccessful movement.’” On the basis of such examples, he believes that the whole of habit-formation can be derived from the principle of the conditioned reflex, which he formulates as follows (p. 168):
Stimulus X will not now call out reaction R; stimulus Y will call out reaction R (unconditioned reflex); but when stimulus X is presented first and then Y (which does call out R) shortly thereafter, X will thereafter call out R. In other words, stimulus X becomes ever thereafter substituted for Y.
This law is so simple, so important, and so widely true that there is a danger lest its scope should be exaggerated, just as, in the eighteenth century, physicists tried to explain everything by means of gravitation. But when considered as covering all the ground, it seems to me to suffer from two opposite defects. In the first place, there are cases where no habit is set up, although by the law it should be. In the second place, there are habits which, so far as we can see at present, have a different genesis.
To take the first point first: the word “pepper” does not make people sneeze, though according to the law it should.[4] Words which describe succulent foods will make the mouth water; voluptuous words will have some of the effect that would be produced by the situations they suggest; but no words will produce sneezes or the reactions appropriate to tickling. In the diagram given by Dr. Watson (p. 106), there are four reflexes which appear to be not sources of conditioned reflexes, namely sneezing, hiccoughing, blinking, and the Babinski reflex; of these, however, blinking, it is suggested (p. 99) may be really itself a conditioned reflex. There may be some quite straightforward explanation of the fact that some reactions can be produced by substitute stimuli while others cannot, but none is offered. Therefore the law of the conditioned reflex, as formulated, is too wide, and it is not clear what is the principle according to which its scope should be restricted.
[4] Dr. Watson apparently entertains hopes of teaching babies to sneeze when they see the pepper box, but he has not yet done so. See Behaviorism, p. 90.
The second objection to Dr. Watson’s law of habit, if valid, is more important than the first; but its validity is more open to question. It is contended that the acts by which solutions of problems are obtained are, in cases of a certain kind, not random acts leading to success by mere chance, but acts proceeding from “insight”, involving a “mental” solution of the problem as a preliminary to the physical solution. This is especially the view of those who advocate Gestaltpsychologie or the psychology of configuration. We may take, as typical of their attitude on the subject of learning, Köhler’s Mentality of Apes. Köhler went to Tenerife with certain chimpanzees in the year 1913; owing to the war he was compelled to remain with them until 1917, so that his opportunities for study were extensive. He complains of the maze and cage problems set by American investigators that they are such as cannot be solved by intelligence. Sir Isaac Newton himself could not have got out of the Hampton Court maze by any method except trial and error. Köhler, on the other hand, set his apes problems which could be solved by what he calls “insight”. He would hang up a banana[5] out of reach, and leave boxes in the neighbourhood so that by standing on the boxes the chimpanzees could reach the fruit. Sometimes they had to pile three or even four boxes on top of each other before they could achieve success. Then he would put the banana outside the bars of the cage, leaving a stick inside, and the ape would get the banana by reaching for it with the stick. On one occasion, one of them, named Sultan, had two bamboo sticks, each too short to reach the banana; after vain efforts followed by a period of silent thought, he fitted the smaller into the hollow of the other, and so manufactured one stick which was long enough. It seems, however, from the account, that he first fitted the two together more or less accidentally, and only then realised that he had found a solution. 
Nevertheless, his behaviour when he had once realised that one stick could be made by joining the two was scarcely Watsonian: there was no longer anything tentative, but a definite triumph, first in anticipation and then in action. He was so pleased with his new trick that he drew a number of bananas into his cage before eating any of them. He behaved, in fact, as capitalists have behaved with machinery.
[5] Called by Köhler “the objective,” because the word “banana” is too humble for a learned work. The pictures disclose the fact that “the objective” was a mere banana.
Köhler says: “We can, from our own experience, distinguish sharply between the kind of conduct which, from the very beginning, arises out of a consideration of the characteristics of a situation, and one that does not. Only in the former case do we speak of insight, and only that behaviour of animals definitely appears to us intelligent which takes account from the beginning of the lie of the land, and proceeds to deal with it in a smooth continuous course. Hence follows this characteristic: to set up as the criterion of insight, the appearance of a complete solution with reference to the whole lay-out of the field.”
Genuine solutions of problems, Köhler says, do not improve by repetition; they are perfect on the first occasion, and, if anything, grow worse by repetition, when the excitement of discovery has worn off. The whole account that Köhler gives of the efforts of his chimpanzees makes a totally different impression from that of the rats in mazes, and one is forced to conclude that the American work is somewhat vitiated by confining itself to one type of problem, and drawing from that one type conclusions which it believes to be applicable to all problems of animal learning. It seems that there are two ways of learning, one by experience, and the other by what Köhler calls “insight”. Learning by experience is possible to most vertebrates, though rarely, so far as is known, to invertebrates. Learning by “insight”, on the contrary, is not known to exist in any animals lower than the anthropoid apes, though it would be extremely rash to assert that it will not be revealed by further observations on dogs or rats. Unfortunately, some animals—for instance, elephants—may be extremely intelligent, but the practical difficulty and expense of experimentation with them is so great that we are not likely to know much about them for some time to come. However, the real problem is already sufficiently definite in Köhler’s book: it is the analysis of “insight” as opposed to the method of the conditioned reflex.
Let us first be clear as to the nature of the problem, when described solely in terms of behaviour. A hungry monkey, if sufficiently near to a banana, will perform acts such as, in circumstances to which it has been accustomed, have previously enabled it to obtain bananas. This fits well with either Watson or Thorndike, so far. But if these familiar acts fail, the animal will, if it has been long without food, is in good health, and is not too tired, proceed to other acts which have never hitherto produced bananas. One may suppose, if one wishes to follow Watson, that these new acts are composed of a number of parts, each of which, on some former occasion, has occurred in a series which ended with the obtaining of the banana. Or one may suppose—as I think Thorndike does—that the acts of the baffled animal are random acts, so that the solution emerges by pure chance. But even in the first hypothesis, the element of chance is considerable. Let us suppose that the acts A, B, C, D, E, have each, on a former occasion, been part of a series ending with success, but that now for the first time it is necessary to perform them all, and in the right order. It is obvious that, if they are only combined by chance, the animal will be lucky if it performs them all in the right order before dying of hunger.
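The arithmetic behind that last remark is easily made explicit. If the five acts are combined purely by chance, only one ordering out of 5! succeeds; treating each trial as an independent random ordering (an idealisation, of course), the expected number of trials before success follows at once.

```python
# The chance-combination arithmetic: one ordering in 5! succeeds, so
# under the idealisation that each trial is an independent random
# ordering, the expected number of trials is the reciprocal of that
# probability (the mean of a geometric distribution).

from math import factorial

acts = 5
orderings = factorial(acts)      # 120 possible orders of A, B, C, D, E
p_success = 1 / orderings        # chance of the right order on one trial
expected_trials = 1 / p_success  # = 120 trials, on average

print(orderings, expected_trials)
```

A hungry animal allowed one trial a day would thus wait, on average, some four months for its banana, which is the force of the remark about dying of hunger.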
But Köhler maintains that to anyone watching his chimpanzees it was obvious they did not obtain “a composition of the solution out of chance parts”. He says (pp. 199–200):
“It is certainly not a characteristic of the chimpanzee, when he is brought into an experimental situation, to make any chance movements, out of which, among other things, a non-genuine solution could arise. He is very seldom seen to attempt anything which would have to be considered accidental in relation to the situation (excepting, of course, if his interest is turned away from the objective to other things). As long as his efforts are directed to the objective, all distinguishable stages of his behaviour (as with human beings in similar situations) tend to appear as complete attempts at solutions, none of which appears as the product of accidentally arrayed parts. This is true, most of all, of the solution which is finally successful. Certainly it often follows upon a period of perplexity or quiet (often a period of survey), but in real and convincing cases, the solution never appears in a disorder of blind impulses. It is one continuous smooth action, which can be resolved into its parts only by the imagination of the onlooker; in reality they do not appear independently. But that in so many ‘genuine’ cases as have been described, these solutions as wholes should have arisen from mere chance, is an entirely inadmissible supposition.”
Thus we may take it as an observed fact that, so far as overt behaviour is concerned, there are two objections to the type of theory with which we began, when considered as covering the whole field. The first objection is that in cases of a certain kind, the solution appears sooner than it should according to the doctrine of chances; the second is that it appears as a whole, i.e. that the animal, after a period of quiescence, suddenly goes through the right series of actions smoothly, and without hesitation.
Where human beings are concerned, it is difficult to obtain such good data as in the case of animals. Human mothers will not allow their children to be starved, and then shut up in a room containing a banana which can only be reached by putting a chair on the table and a footstool on the chair, and then climbing up without breaking any bones. Nor will they permit them to be put into the middle of a Hampton Court maze, with their dinner getting cold outside. Perhaps in time the State will perform these experiments with the children of political prisoners, but as yet, perhaps fortunately, the authorities are not sufficiently interested in science. One can observe, however, that human learning seems to be of both sorts, namely that described by Watson and that described by Köhler. I am persuaded that speech is learnt by the Watsonian method, so long as it is confined to single words: often the trial and error, in later stages, proceeds sotto voce, but it takes place overtly at first, and in some children until their speech is quite correct. The speaking of sentences, however, is already more difficult to explain without bringing in the apprehension of wholes which is the thing upon which Gestaltpsychologie lays stress. In the later stages of learning, the sort of sudden illumination which came to Köhler’s chimpanzees is a phenomenon with which every serious student must be familiar. One day, after a period of groping bewilderment, the schoolboy knows what algebra is all about. In writing a book, my own experience—which I know is fairly common, though by no means universal—is that for a time I fumble and hesitate, and then suddenly I see the book as a whole, and have only to write it down as if I were copying a completed manuscript.
If these phenomena are to be brought within the scope of behaviourist psychology, it must be by means of “implicit” behaviour. Watson makes much use of this in the form of talking to oneself, but in apes it cannot take quite this form. And it is necessary to have some theory to explain the success of “implicit” behaviour, whether we call it “thought” or not. Perhaps such a theory can be constructed on Watson’s lines, but it has certainly not yet been constructed. Until the behaviourists have satisfactorily explained the kind of discovery which appears in Köhler’s observations, we cannot say that their thesis is proved. This is a matter which will occupy us again at a later stage; for the present let us preserve an open mind.
[CHAPTER IV]
LANGUAGE
The subject of language is one which has not been studied with sufficient care in traditional philosophy. It was taken for granted that words exist to express “thoughts,” and generally also that “thoughts” have “objects” which are what the words “mean”. It was thought that, by means of language, we could deal directly with what it “means”, and that we need not analyse with any care either of the two supposed properties of words, namely that of “expressing” thoughts and that of “meaning” things. Often when philosophers intended to be considering the objects meant by words they were in fact considering only the words, and when they were considering words they made the mistake of supposing, more or less unconsciously, that a word is a single entity, not, as it really is, a set of more or less similar events. The failure to consider language explicitly has been a cause of much that was bad in traditional philosophy. I think myself that “meaning” can only be understood if we treat language as a bodily habit, which is learnt just as we learn football or bicycling. The only satisfactory way to treat language, to my mind, is to treat it in this way, as Dr. Watson does. Indeed, I should regard the theory of language as one of the strongest points in favour of behaviourism.
Man has various advantages over the beasts, for example, fire, clothing, agriculture, and tools—not the possession of domestic animals, for ants have them. But more important than any of these is language. It is not known how or when language arose, nor why chimpanzees do not speak. I doubt if it is even known whether writing or speech is the older form of language. The pictures made in caves by the Cro-Magnon men may have been intended to convey a meaning, and may have been a form of writing. It is known that writing developed out of pictures, for that happened in historical times; but it is not known to what extent pictures had been used in pre-historic times as a means of giving information or commands. As for spoken language, it differs from the cries of animals in being not merely an expression of emotion. Animals have cries of fear, cries expressing pleasure in the discovery of food, and so on, and by means of these cries they influence each other’s actions. But they do not appear to have any means of expressing anything except emotions, and then only emotions which they are actually feeling. There is no evidence that they possess anything analogous to narrative. We may say, therefore, without exaggeration, that language is a human prerogative, and probably the chief habit in which we are superior to the “dumb” animals.
There are three matters to be considered in beginning the study of language. First: what words are, regarded as physical occurrences; secondly, what are the circumstances that lead us to use a given word; thirdly, what are the effects of our hearing or seeing a given word. But as regards the second and third of these questions, we shall find ourselves led on from words to sentences and thus confronted with fresh problems perhaps demanding rather the methods of Gestaltpsychologie.
Ordinary words are of four kinds: spoken, heard, written, and read. It is of course largely a matter of convention that we do not use words of other kinds. There is the deaf-and-dumb language; a Frenchman’s shrug of the shoulders is a word; in fact, any kind of externally perceptible bodily movement may become a word, if social usage so ordains. But the convention which has given the supremacy to speaking is one which has a good ground, since there is no other way of producing a number of perceptibly different bodily movements so quickly or with so little muscular effort. Public speaking would be very tedious if statesmen had to use the deaf-and-dumb language, and very exhausting if all words involved as much muscular effort as a shrug of the shoulders. I shall ignore all forms of language except speaking, hearing, writing, and reading, since the others are relatively unimportant and raise no special psychological problems.
A spoken word consists of a series of movements in the larynx and the mouth, combined with breath. Two closely similar series of such movements may be instances of the same words, though they may also not be, since two words with different meanings may sound alike; but two such series which are not closely similar cannot be instances of the same word. (I am confining myself to one language.) Thus a single spoken word, say “dog,” is a certain set of closely similar series of bodily movements, the set having as many members as there are occasions when the word “dog” is pronounced. The degree of similarity required in order that the occurrence should be an instance of the word “dog” cannot be specified exactly. Some people say “dawg”, and this must certainly be admitted. A German might say “tok”, and then we should begin to be doubtful. In marginal cases, we cannot be sure whether a word has been pronounced or not. A spoken word is a form of bodily behaviour without sharp boundaries, like jumping or hopping or running. Is a man running or walking? In a walking-race the umpire may have great difficulty in deciding. Similarly there may be cases where it cannot be decided whether a man has said “dog” or “dock”. A spoken word is thus at once general and somewhat vague.
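The vagueness of the word's boundary can be pictured with a crude similarity measure. Edit-based string similarity on rough phonetic spellings is my own illustrative stand-in for "closely similar series of bodily movements", and the threshold chosen is an arbitrary assumption; the point is only that any such classification must draw a line somewhere through a continuum.

```python
# Illustrative only: string similarity as a stand-in for similarity of
# articulatory movements.  The threshold of 0.5 is an assumption; the
# marginal cases fall near it, which is the point of the example.

from difflib import SequenceMatcher

def similarity(a, b):
    # ratio of matching material to total length, between 0 and 1
    return SequenceMatcher(None, a, b).ratio()

THRESHOLD = 0.5  # assumed: above this, count as an instance of the word

for utterance in ["dog", "dawg", "tok"]:
    ratio = similarity("dog", utterance)
    verdict = "same word" if ratio >= THRESHOLD else "doubtful"
    print(utterance, round(ratio, 2), verdict)
```

"Dawg" comes out just above the line and "tok" well below it; but nothing in the measure itself tells us where the line ought to go, which is precisely the umpire's difficulty in the walking-race.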
We usually take for granted the relation between a word spoken and a word heard. “Can you hear what I say?” we ask, and the person addressed says “yes”. This is of course a delusion, a part of the naive realism of our unreflective outlook on the world. We never hear what is said; we hear something having a complicated causal connection with what is said. There is first the purely physical process of sound-waves from the mouth of the speaker to the ear of the hearer, then a complicated process in the ear and nerves, and then an event in the brain, which is related to our hearing of the sound in a manner to be investigated later, but is at any rate simultaneous with our hearing of the sound. This gives the physical causal connection between the word spoken and the word heard. There is, however, also another connection of a more psychological sort. When a man utters a word, he also hears it himself, so that the word spoken and the word heard become intimately associated for anyone who knows how to speak. And a man who knows how to speak can also utter any word he hears in his own language, so that the association works equally well both ways. It is because of the intimacy of this association that the plain man identifies the word spoken with the word heard, although in fact the two are separated by a wide gulf.
In order that speech may serve its purpose, it is not necessary, as it is not possible, that heard and spoken words should be identical, but it is necessary that when a man utters different words the heard words should be different, and when he utters the same word on two occasions the heard word should be approximately the same on the two occasions. The first of these depends upon the sensitiveness of the ear and its distance from the speaker; we cannot distinguish between two rather similar words if we are too far off from the man who utters them. The second condition depends upon uniformity in the physical conditions, and is realised in all ordinary circumstances. But if the speaker were surrounded by instruments which were resonant to certain notes but not to certain others, some tones of voice might carry and others might be lost. In that case, if he uttered the same word with two different intonations, the hearer might be quite unable to recognise the sameness. Thus the efficacy of speech depends upon a number of physical conditions. These, however, we will take for granted, in order to come as soon as possible to the more psychological parts of our topic.
Written words differ from spoken words in being material structures. A spoken word is a process in the physical world, having an essential time-order; a written word is a series of pieces of matter, having an essential space-order. As to what we mean by “matter”, that is a question with which we shall have to deal at length at a later stage. For the present it is enough to observe that the material structures which constitute written words, unlike the processes that constitute spoken words, are capable of enduring for a long time—sometimes for thousands of years. Moreover, they are not confined to one neighbourhood, but can be made to travel about the world. These are the two great advantages of writing over speech. This, at least, has been the case until recently. But with the coming of radio, writing has begun to lose its pre-eminence: one man can now speak to multitudes spread over a whole country. Even in the matter of permanence, speech may become the equal of writing. Perhaps, instead of legal documents, we shall have gramophone records, with voice signatures by the parties to the contract. Perhaps, as in Wells’s The Sleeper Awakes, books will no longer be printed but merely arranged for the gramophone. In that case the need for writing may almost cease to exist. However, let us return from these speculations to the world of the present day.
The word read, as opposed to the written or printed word, is just as evanescent as the word spoken or heard. Whenever a written word, exposed to light, is in a suitable spatial relation to a normal eye, it produces a certain complicated effect upon the eye; the part of this process which occurs outside the eye is investigated by the science of light, whereas the part that occurs in the eye belongs to physiological optics. There is then a further process, first in the optic nerve and afterwards in the brain; the process in the brain is simultaneous with vision. What further relation it has to vision is a question as to which there has been much philosophical controversy; we shall return to it at a later stage. The essence of the matter, as regards the causal efficacy of writing, is that the act of writing produces quasi-permanent material structures which, throughout the whole of their duration, produce closely similar results upon all suitably placed normal eyes; and as in the case of speaking, different written words lead to different read words, and the same word written twice leads to the same read word—again with obvious limitations.
So much for the physical side of language, which is often unduly neglected. I come now to the psychological side, which is what really concerns us in this chapter.
The two questions we have to answer, apart from the problems raised by sentences as opposed to words, are: First, what sort of behaviour is stimulated by hearing a word? And secondly, what sort of occasion stimulates us to the behaviour that consists in pronouncing a word? I put the questions in this order because children learn to react to the words of others before they learn to use words themselves. It might be objected that, in the history of the race, the first spoken word must have preceded the first heard word, at least by a fraction of a second. But this is not very relevant, nor is it certainly true. A noise may have meaning to the hearer, but not to the utterer; in that case it is a heard word but not a spoken word. (I shall explain what I mean by “meaning” shortly.) Friday’s footprint had “meaning” for Robinson Crusoe but not for Friday. However that may be, we shall do better to avoid the very hypothetical parts of anthropology that would be involved, and take up the learning of language as it can be observed in the human infant of the present day. And in the human infant as we know him, definite reactions to the words of others come much earlier than the power of uttering words himself.
A child learns to understand words exactly as he learns any other process of bodily association. If you always say “bottle” when you give a child his bottle, he presently reacts to the word “bottle”, within limits, as he formerly reacted to the bottle. This is merely an example of the law of association which we considered in the [preceding chapter]. When the association has been established, parents say that the child “understands” the word “bottle”, or knows what the word “means”. Of course the word does not have all the effects that the actual bottle has. It does not exert gravitation, it does not nourish, it cannot bump on to the child’s head. The effects which are shared by the word and the thing are those which depend upon the law of association or “conditioned reflexes” or “learned reactions”. These may be called “associative” effects or “mnemic” effects—the latter name being derived from Semon’s book Mneme,[6] in which he traces all phenomena analogous to memory to a law which is, in effect, not very different from the law of association or “conditioned reflexes”.
[6] London: George Allen & Unwin, Ltd.
It is possible to be a little more precise as to the class of effects concerned. A physical object is a centre from which a variety of causal chains emanate. If the object is visible to John Smith, one of the causal chains emanating from it consists first of light-waves (or light-quanta) which travel from the object to John Smith’s eye, then of events in his eye and optic nerve, then of events in his brain, and then (perhaps) of a reaction on his part. Now mnemic effects belong only to events in living tissue; therefore only those effects of the bottle which happen either inside John Smith’s body, or as a result of his reaction to the bottle, can become associated with his hearing the word “bottle”. And even then only certain events can be associated: nourishment happens in the body, yet the word “bottle” cannot nourish. The law of conditioned reflexes is subject to ascertainable limitations, but within its limits it supplies what is wanted to explain the understanding of words. The child becomes excited when he sees the bottle; this is already a conditioned reflex, due to experience that this sight precedes a meal. One further stage in conditioning makes the child grow excited when he hears the word “bottle”. He is then said to “understand” the word.
We may say, then, that a person understands a word which he hears if, so far as the law of conditioned reflexes is applicable, the effects of the word are the same as those of what it is said to “mean”. This of course only applies to words like “bottle”, which denote some concrete object or some class of concrete objects. To understand a word such as “reciprocity” or “republicanism” is a more complicated matter, and cannot be considered until we have dealt with sentences. But before considering sentences we have to examine the circumstances which make us use a word, as opposed to the consequences of hearing it used.
Saying a word is more difficult than understanding it, except in the case of a few simple sounds which infants make before they know that they are words, such as “ma-ma” and “da-da”. These two are among the many random sounds that all babies make. When a child says “ma-ma” by chance in the presence of his mother, she thinks he knows what this noise means, and she shows pleasure in ways that are agreeable to the infant. Gradually, in accordance with Thorndike’s law of effect, he acquires the habit of making this noise in the presence of his mother, because in these circumstances the consequences are pleasant. But it is only a very small number of words that are acquired in this way. The great majority of words are acquired by imitation, combined with the association between thing and word which the parents deliberately establish in the early stages (after the very first stage). It is obvious that using words oneself involves something over and above the association between the sound of the word and its meaning. Dogs understand many words, and infants understand far more than they can say. The infant has to discover that it is possible and profitable to make noises like those which he hears. (This statement must not be taken quite literally, or it would be too intellectualistic.) He would never discover this if he did not make noises at random, without the intention of talking. He then gradually finds that he can make noises like those which he hears, and in general the consequences of doing so are pleasant. Parents are pleased, desired objects can be obtained, and—perhaps most important of all—there is a sense of power in making intended instead of accidental noises. But in this whole process there is nothing essentially different from the learning of mazes by rats.
It resembles this form of learning, rather than that of Köhler’s apes, because no amount of intelligence could enable the child to find out the names of things—as in the case of the mazes, experience is the only possible guide.
When a person knows how to speak, the conditioning proceeds in the opposite direction to that which operates in understanding what others say. The reaction of a person who knows how to speak, when he notices a cat, is naturally to utter the word “cat”; he may not actually do so, but he will have a reaction leading towards this act, even if for some reason the overt act does not take place. It is true that he may utter the word “cat” because he is “thinking” about a cat, not actually seeing one. This, however, as we shall see in a moment, is merely one further stage in the process of conditioning. The use of single words, as opposed to sentences, is wholly explicable, so far as I can see, by the principles which apply to animals in mazes.
Certain philosophers who have a prejudice against analysis contend that the sentence comes first and the single word later. In this connection they always allude to the language of the Patagonians, which their opponents, of course, do not know. We are given to understand that a Patagonian can understand you if you say “I am going to fish in the lake behind the western hill”, but that he cannot understand the word “fish” by itself. (This instance is imaginary, but it represents the sort of thing that is asserted.) Now it may be that Patagonians are peculiar—indeed they must be, or they would not choose to live in Patagonia. But certainly infants in civilized countries do not behave in this way, with the exception of Thomas Carlyle and Lord Macaulay. The former never spoke before the age of three, when, hearing his younger brother cry, he said, “What ails wee Jock?” Lord Macaulay “learned in suffering what he taught in song”, for, having spilt a cup of hot tea over himself at a party, he began his career as a talker by saying to his hostess, after a time, “Thank you, Madam, the agony is abated”. These, however, are facts about biographers, not about the beginnings of speech in infancy. In all children that have been carefully observed, sentences come much later than single words.
Children, at first, are limited as to their power of producing sounds, and also by the paucity of their learned associations. I am sure the reason why “ma-ma” and “da-da” have the meaning they have is that they are sounds which infants make spontaneously at an early age, and are therefore convenient as sounds to which the elders can attach meaning. In the very beginning of speech there is not imitation of grownups, but the discovery that sounds made spontaneously have agreeable results. Imitation comes later, after the child has discovered that sounds can have this quality of “meaning”. The type of skill involved is throughout exactly similar to that involved in learning to play a game or ride a bicycle.
We may sum up this theory of meaning in a simple formula. When, through the law of conditioned reflexes, A has come to be a cause of C, we will call A an “associative” cause of C, and C an “associative” effect of A. We shall say that, to a given person, the word A, when he hears it, “means” C, if the associative effects of A are closely similar to those of C; and we shall say that the word A, when he utters it, “means” C, if the utterance of A is an associative effect of C, or of something previously associated with C. To put the matter more concretely, the word “Peter” means a certain person if the associative effects of hearing the word “Peter” are closely similar to those of seeing Peter, and the associative causes of uttering the word “Peter” are occurrences previously associated with Peter. Of course as our experience increases in complexity this simple schema becomes obscured and overlaid, but I think it remains fundamentally true.
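The schema lends itself, anachronistically, to a toy model. The sketch below is an editorial illustration, not anything in the text: it treats the law of association as a simple lookup of learned pairings, so that a heard word “means” a thing when the two have been conditioned to the same effects. All names (`Learner`, `condition`, `react`) are invented for the example.

```python
# Toy model of "associative" cause and effect: a stimulus is paired
# with a response by conditioning, and a word "means" a thing when
# word and thing have closely similar associative effects.

class Learner:
    def __init__(self):
        self.assoc = {}  # stimulus -> learned response

    def condition(self, stimulus, response):
        """Pair a stimulus with a response (the law of association)."""
        self.assoc[stimulus] = response

    def react(self, stimulus):
        """Return the learned response, or None if none has formed."""
        return self.assoc.get(stimulus)


child = Learner()
# Seeing Peter excites certain reactions; the heard word "Peter" is
# conditioned to excite closely similar reactions (passive meaning).
child.condition("sight of Peter", "excitement")
child.condition('heard word "Peter"', "excitement")

# On this schema the word means Peter because its associative effects
# match those of Peter himself.
assert child.react('heard word "Peter"') == child.react("sight of Peter")
```

The model captures only the passive half of the theory; active meaning would run the lookup in the opposite direction, from occurrences associated with Peter to the utterance of the word.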
There is an interesting and valuable book by Messrs. C. K. Ogden and I. A. Richards, called The Meaning of Meaning. This book, owing to the fact that it concentrates on the causes of uttering words, not on the effects of hearing them, gives only half the above theory, and that in a somewhat incomplete form. It says that a word and its meaning have the same causes. I should distinguish between active meaning, that of the man uttering the word, and passive meaning, that of the man hearing the word. In active meaning the word is associatively caused by what it means or something associated with this; in passive meaning, the associative effects of the word are approximately the same as those of what it means.
On behaviourist lines, there is no important difference between proper names and what are called “abstract” or “generic” words. A child learns to use the word “cat”, which is general, just as he learns to use the word “Peter”, which is a proper name. But in actual fact “Peter” really covers a number of different occurrences, and is in a sense general. Peter may be near or far, walking or standing or sitting, laughing or frowning. All these produce different stimuli, but the stimuli have enough in common to produce the reaction consisting of the word “Peter”. Thus there is no essential difference, from a behaviourist point of view, between “Peter” and “man”. There are more resemblances between the various stimuli to the word “Peter” than between those to the word “man”, but this is only a difference of degree. We have not names for the fleeting particular occurrences which make up the several appearances of Peter, because they are not of much practical importance; their importance, in fact, is purely theoretic and philosophical. As such, we shall have a good deal to say about them at a later stage. For the present, we notice that there are many occurrences of Peter, and many occurrences of the word “Peter”; each, to the man who sees Peter, is a set of events having certain similarities. More exactly, the occurrences of Peter are causally connected, whereas the occurrences of the word “Peter” are connected by similarity. But this is a distinction which need not concern us yet.
General words such as “man” or “cat” or “triangle” are said to denote “universals”, concerning which, from the time of Plato to the present day, philosophers have never ceased to debate. Whether there are universals, and, if so, in what sense, is a metaphysical question, which need not be raised in connection with the use of language. The only point about universals that needs to be raised at this point is that the correct use of general words is no evidence that a man can think about universals. It has often been supposed that, because we can use a word like “man” correctly, we must be capable of a corresponding “abstract” idea of man, but this is quite a mistake. Some reactions are appropriate to one man, some to another, but all have certain elements in common. If the word “man” produces in us the reactions which are common but no others, we may be said to understand the word “man”. In learning geometry, one acquires the habit of avoiding special interpretations of such a word as “triangle”. We know that, when we have a proposition about triangles in general, we must not think specially of a right-angled triangle or any one kind of triangle. This is essentially the process of learning to associate with the word what is associated with all triangles; when we have learnt this, we understand the word “triangle”. Consequently there is no need to suppose that we ever apprehend universals, although we use general words correctly.
Hitherto we have spoken of single words, and among these we have considered only those that can naturally be employed singly. A child uses single words of a certain kind before constructing sentences; but some words presuppose sentences. No one would use the word “paternity” until after using such sentences as “John is the father of James”; no one would use the word “causality” until after using such sentences as “the fire makes me warm”. Sentences introduce new considerations, and are not quite so easily explained on behaviourist lines. Philosophy, however, imperatively demands an understanding of sentences, and we must therefore consider them.
As we found earlier, all infants outside Patagonia begin with single words, and only achieve sentences later. But they differ enormously in the speed with which they advance from the one to the other. My own two children adopted entirely different methods. My son first practised single letters, then single words, and only achieved correct sentences of more than three or four words at the age of two years and three months. My daughter, on the contrary, advanced very quickly to sentences, in which there was hardly ever an error. At the age of eighteen months, when supposed to be sleeping, she was overheard saying to herself: “Last year I used to dive off the diving-board, I did.” Of course “last year” was merely a phrase repeated without understanding. And no doubt the first sentences used by children are always repetitions, unchanged, of sentences they have heard used by others. Such cases raise no new principle not involved in the learning of words. What does raise a new principle is the power of putting together known words into a sentence which has never been heard, but which expresses correctly what the infant wishes to convey. This involves the power to manipulate form and structure. It does not of course involve the apprehension of form or structure in the abstract, any more than the use of the word “man” involves apprehension of a universal. But it does involve a causal connection between the form of the stimulus and the form of the reaction. An infant very soon learns to be differently affected by the statement “cats eat mice” from the way he would be affected by the statement “mice eat cats”; and not much later he learns to make one of these statements rather than the other. In such a case, the cause (in hearing) or the effect (in speaking) is a whole sentence. It may be that one part of the environment is sufficient to cause one word, while another is sufficient to cause another, but it is only the two parts in their relation that can cause the whole sentence.
Thus wherever sentences come in we have a causal relation between two complex facts, namely the fact asserted and the sentence asserting it; the facts as wholes enter into the cause-and-effect relation, which cannot be explained wholly as compounded of relations between their parts. Moreover, as soon as the child has learned to use correctly relational words, such as “eat”, he has become capable of being causally affected by a relational feature of the environment, which involves a new degree of complexity not required for the use of ordinary nouns.
Thus the correct use of relational words, i.e. of sentences, involves what may be correctly termed “perception of form”, i.e. it involves a definite reaction to a stimulus which is a form. Suppose, for example, that a child has learnt to say that one thing is “above” another when this is in fact the case. The stimulus to the use of the word “above” is a relational feature of the environment, and we may say that this feature is “perceived” since it produces a definite reaction. It may be said that the relation above is not very like the word “above”. That is true; but the same is true of ordinary physical objects. A stone, according to the physicists, is not at all like what we see when we look at it, and yet we may be correctly said to “perceive” it. This, however, is to anticipate. The definite point which has emerged is that, when a person can use sentences correctly, that is a proof of sensitiveness to formal or relational stimuli.
The structure of a sentence asserting some relational fact, such as “this is above that”, or “Brutus killed Cæsar”, differs in an important respect from the structure of the fact which it asserts. Above is a relation which holds between the two terms “this” and “that”; but the word “above” is not a relation. In the sentence the relation is the temporal order of the words (or the spatial order, if they are written), but the word for the relation is itself as substantial as the other words. In inflected languages, such as Latin, the order of the words is not necessary to show the “sense” of the relation; but in uninflected languages this is the only way of distinguishing between “Brutus killed Cæsar” and “Cæsar killed Brutus”. Words are physical phenomena, having spatial and temporal relations; we make use of these relations in our verbal symbolisation of other relations, chiefly to show the “sense” of the relation, i.e. whether it goes from A to B or from B to A.
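The point that word order, in an uninflected language, is what carries the “sense” of a relation can be put in a small sketch (again an editorial illustration with invented names, not anything from the text): the same three words, reordered, assert different facts.

```python
# In an uninflected language the temporal (or spatial) order of the
# words carries the "sense" of the relation: which term is agent and
# which is patient.

def parse(sentence):
    """Read off the relation and its ordered terms from word order."""
    agent, verb, patient = sentence.split()
    return (verb, agent, patient)

# The same vocabulary, differently ordered, asserts different facts.
assert parse("Brutus killed Caesar") == ("killed", "Brutus", "Caesar")
assert parse("Caesar killed Brutus") == ("killed", "Caesar", "Brutus")
assert parse("Brutus killed Caesar") != parse("Caesar killed Brutus")
```

Note that the relation itself appears in the output only as another word, `"killed"`, while the direction of the relation is recovered purely from position; this is the asymmetry between relation and relation-word that the paragraph describes.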
A great deal of the confusion about relations which has prevailed in practically all philosophies comes from the fact, which we noticed just now, that relations are indicated, not by other relations, but by words which, in themselves, are just like other words. Consequently, in thinking about relations, we constantly hover between the unsubstantiality of the relation itself and the substantiality of the word. Take, say, the fact that lightning precedes thunder. If we were to express this by a language closely reproducing the structure of the fact, we should have to say simply: “lightning, thunder”, where the fact that the first word precedes the second means that what the first word means precedes what the second word means. But even if we adopted this method for temporal order, we should still need words for all other relations, because we could not without intolerable ambiguity symbolise them also by the order of our words. All this will be important to remember when we come to consider the structure of the world, since nothing but a preliminary study of language will preserve us from being misled by language in our metaphysical speculations.
Throughout this chapter I have said nothing about the narrative and imaginative uses of words; I have dealt with words in connection with an immediate sensible stimulus closely connected with what they mean. The other uses of words are difficult to discuss until we have considered memory and imagination. In the present chapter I have confined myself to a behaviouristic explanation of the effects of words heard as stimuli, and the causes of words spoken when the words apply to something sensibly present. I think we shall find that other uses of words, such as the narrative and imaginative, involve only new applications of the law of association. But we cannot develop this theme until we have discussed several further psychological questions.
[CHAPTER V]
PERCEPTION OBJECTIVELY REGARDED
It will be remembered that the task upon which we are at present engaged is the definition of “knowledge” as a phenomenon discoverable by an outside observer. When we have said what we can from this objective standpoint, we will ask ourselves whether anything further, and if so what, is to be learnt from the subjective standpoint, in which we take account of facts which can only be discovered when the observer and the observed are the same person. But for the present we will resolutely confine ourselves to those facts about a human being which another human being can observe, together with such inferences as can be drawn from these facts.
The word “knowledge” is very ambiguous. We say that Watson’s rats “know” how to get out of mazes, that a child of three “knows” how to talk, that a man “knows” the people with whom he is acquainted, that he “knows” what he had for breakfast this morning, and that he “knows” when Columbus first crossed the ocean. French and German are less ambiguous, since each has two words for different kinds of “knowing”, which we tend to confuse in our thoughts because we confuse them in our language. I shall not attempt as yet to deal with knowledge in general, but rather with certain less general concepts which would ordinarily be included under “knowledge”. And first of all I will deal with perception—not as it appears to the perceiver, but as it can be tested by an outside observer.
Let us try, first, to get a rough preliminary view of the sort of thing we are going to mean by “perception”. One may say that a man “perceives” anything that he notices through his senses. This is not a question of the sense-organs alone, though they are a necessary condition. No man can perceive by sight what is not in his field of vision, but he may look straight at a thing without perceiving it. I have frequently had the experience—supposed to be characteristic of philosophers—of looking everywhere for my spectacles although they were before my eyes when my search began. We cannot therefore tell what a man is perceiving by observing his sense-organs alone, though they may enable us to know that he is not perceiving something. The observer can only know that a man is perceiving something if the man reacts in some appropriate manner. If I say to a man “please pass the mustard” and he thereupon passes it, it is highly probable that he perceived what I said, although it may of course be a mere coincidence that he passed it at that moment. But if I say to him “the telephone number you want is 2467” and he proceeds to call that number, the odds against his doing so by mere chance are very great—roughly 10,000 to 1. And if a man reads aloud out of a book, and I look over his shoulder and perceive the same words, it becomes quite fantastic to suppose that he does not perceive the words he is uttering. We can thus in many cases achieve practical certainty as to some of the things that other people are perceiving.
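The figure of 10,000 to 1 is simply the count of four-digit telephone numbers. Assuming, for the sake of the arithmetic, that each of the 10,000 possible four-digit strings is equally likely to be dialled at random, the calculation runs:

```python
# A four-digit number has 10 possibilities for each digit, so there
# are 10**4 equally likely strings, of which exactly one is correct.
combinations = 10 ** 4
probability = 1 / combinations

print(combinations)  # 10000 possible four-digit numbers
print(probability)   # 0.0001 chance of hitting 2467 by coincidence
```

Hence the odds against a mere coincidence are, as the text says, roughly 10,000 to 1.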
Perception is a species of a wider genus, namely sensitivity. Sensitivity is not confined to living things; in fact it is best exemplified by scientific instruments. A material object is said to be “sensitive” to such and such a stimulus, if, when the stimulus is present, it behaves in a way noticeably different from that in which it behaves in the absence of the stimulus. A photographic plate is sensitive to light, a barometer is sensitive to pressure, a thermometer to temperature, a galvanometer to electric current, and so on. In all these cases, we might say, in a certain metaphorical sense, that an instrument “perceives” the stimulus to which it is sensitive. We do not in fact say so; we feel that perception involves something more than we find in scientific instruments. What is this something more?
The traditional answer would be: consciousness. But this answer, right or wrong, is not what we are seeking at the moment, because we are considering the percipient as he appears to an outside observer, to whom his “consciousness” is only an inference. Is there anything in perception as viewed from without that distinguishes it from the sensitivity of a scientific instrument?
There is, of course, the fact that human beings are sensitive to a greater variety of stimuli than any instrument. Each separate sense-organ can be surpassed by something made artificially sensitive to its particular stimulus. Photographic plates can photograph stars that we cannot see; clinical thermometers register differences of temperature that we cannot feel; and so on. But there is no way of combining a microscope, a microphone, a thermometer, a galvanometer, and so on, into a single organism which will react in an integral manner to the combination of all the different stimuli that affect its different “sense-organs”. This, however, is perhaps only a proof that our mechanical skill is not so great as it may in time become. It is certainly not enough to define the difference between a dead instrument and a living body.
The chief difference—perhaps the only one from our present point of view—is that living bodies are subject to the law of association or of the “conditioned reflex”. Consider, for instance, an automatic machine. It has a reflex which makes it sensitive to pennies, in response to which it gives up chocolate. But it never learns to give up chocolate on merely seeing a penny, or hearing the word “penny”. If you kept it in your house, and said “Abracadabra” to it every time you inserted a penny, it would not in the end be moved to action by the mere word “Abracadabra”. Its reflexes remain unconditioned, as do some of ours, such as sneezing. But with us sneezing is peculiar in this respect—hence its unimportance. Most of our reflexes can be conditioned, and the conditioned reflex can in turn be conditioned afresh, and so on without limit. This is what makes the reactions of the higher animals, and especially of man, so much more interesting and complicated than the reactions of machines. Let us see whether this one law will suffice to distinguish perception from other forms of sensitivity.
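The contrast between the machine and the living body can be put in a toy simulation (an editorial sketch under crude assumptions, not Russell’s own example in code): the machine’s reflex never changes, while a conditionable learner, after repeated pairing, comes to respond to the accompanying signal alone.

```python
# Contrast between an unconditioned reflex and a conditionable one.

def vending_machine(stimulus):
    # Unconditioned: responds to the penny and to nothing else, ever.
    return "chocolate" if stimulus == "penny" else None

class Conditionable:
    def __init__(self):
        self.triggers = {"penny"}  # the original, unlearned reflex

    def expose(self, *stimuli):
        # Crude stand-in for conditioning: a signal that repeatedly
        # accompanies an effective stimulus becomes a trigger itself.
        if any(s in self.triggers for s in stimuli):
            self.triggers.update(stimuli)

    def respond(self, stimulus):
        return "chocolate" if stimulus in self.triggers else None


learner = Conditionable()
for _ in range(3):
    learner.expose("penny", "Abracadabra")  # pair the word with the penny

assert vending_machine("Abracadabra") is None          # the machine never learns
assert learner.respond("Abracadabra") == "chocolate"   # the learner now reacts
```

The further point in the paragraph, that a conditioned reflex can itself be conditioned afresh without limit, is also captured: a second call to `expose` pairing `"Abracadabra"` with some new signal would make that signal a trigger in turn.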
The variability in a human being’s responses to a given stimulus has given rise to the traditional distinction between cognition and volition. When one’s rich uncle comes for a visit, smiles are the natural response; after he has lost his money, a colder demeanour results from the new conditioning. Thus the reaction to the stimulus has come to be divided into two parts, one purely receptive and sensory, the other active and motor. Perception, as traditionally conceived, is, so to speak, the end term of the receptive-sensory part of the reaction, while volition (in its widest sense) is the first term of the active-motor part of the reaction. It was possible to suppose that the receptive part of the reaction would be always the same for the same stimulus, and that the difference due to experience would only arise in the motor part. The last term of the passive part, as it appears to the person concerned, was called “sensation”. But in fact the influence of the law of conditioned reflexes goes much deeper than this theory supposed. As we saw, the contraction of the pupil, which is normally due to bright light, can be conditioned so as to result from a loud noise. What we see depends largely upon muscular adjustments of the eyes, which we make quite unconsciously. But apart from the contraction of the pupil only one of them is a true reflex, namely turning the eyes towards a bright light. This is a movement which children can perform on the day of their birth; I know this, not merely from personal observation, but also, what is more, from the text-books. But new-born infants cannot follow a moving light with their eyes, nor can they focus or accommodate. As a consequence, the purely receptive part of their reaction to visual objects, in so far as this reaction is visual, is different from that of adults or older children, whose eye muscles adjust themselves so as to see clearly.
But here again all sorts of factors enter in. Innumerable objects are in our field of vision, but only some (at most) are interesting to us. If some one says “look, there’s a snake”, we adjust our eyes afresh and obtain a new “sensation”. Then, when the purely visual part is finished, there are stimulations, by association, of other centres in the brain. There are pictures, in Köhler’s book, of apes watching other apes on the top of insecure piles of boxes, and the spectators have their arms raised in sympathetic balancing movements. Any one who watches gymnastics or skilful dancing is liable to experience sympathetic muscular contractions. Any visual object that we might be touching will stimulate incipient touch reactions, but the sun, moon, and stars do not.
Conversely, visual reactions may be stimulated through association with other stimuli. When motor-cars were still uncommon, I was walking one day with a friend when a tire punctured in our neighbourhood with a loud report. He thought it was a revolver, and averred that he had seen the flash. In dreams, this sort of mechanism operates uncontrolled. Some stimulus—say the noise of the maid knocking at the door—becomes interpreted in fantastic ways which are governed by association. I remember once dreaming that I was in an inn in the country in Germany and was wakened by a choir singing outside my window. Finally I really woke, and found that a spring shower was making a very musical noise on the roof. At least, I heard a very musical noise, and now re-interpreted it as a shower on the roof. This hypothesis I confirmed by looking out of the window. In waking life we are critical of the interpretative hypotheses that occur to us, and therefore do not make such wild mistakes as in dreams. But the creative, as opposed to the critical, mechanism is the same in waking life as it is in dreams: there is always far more richness in the experience than the sensory stimulus alone would warrant. All adaptation to environment acquired during the life of the individual might be regarded as learning to dream dreams that succeed rather than dreams that fail. The dreams we have when we are asleep usually end in a surprise: the dreams we have in waking life are less apt to do so. Sometimes they do, as when pride goes before a fall; but in that case they are regarded as showing maladjustment, unless there is some large external cause, such as an earthquake. One might say that a person properly adapted to his environment is one whose dreams never end in the sort of surprise that would wake him up. In that case, he will think that his dreams are objective reality. 
But if modern physics is to be believed, the dreams we call waking perceptions have only a very little more resemblance to objective reality than the fantastic dreams of sleep. They have some truth, but only just so much as is required to make them useful.
Until we begin to reflect, we unhesitatingly assume that what we see is really “there” in the outside world, except in such cases as reflections in mirrors. Physics and the theory of the way in which perceptions are caused show that this naive belief cannot be quite true. Perception may, and I think does, enable us to know something of the outer world, but it is not the direct revelation that we naturally suppose it to be. We cannot go into this question adequately until we have considered what the philosopher has to learn from physics; I am merely giving, by anticipation, the reasons for regarding perception as a form of reaction to the environment, displayed in some bodily movement, rather than as a form of knowledge. When we have considered further what constitutes knowledge, we may find that perception is, after all, a form of knowledge, but only because knowledge is not quite what we naturally suppose it to be. For the present, let us stick to the view of perception that can be obtained by the external observer, i.e. as something displayed in the manner of reacting to the environment.
From the point of view of the external observer, perception is established just like any other causal correlation. We observe that, whenever a certain object stands in a certain spatial relation to a man’s body, the man’s body makes a certain movement or set of movements; we shall then say that the man “perceives” the object. So the new-born baby turns its eyes slowly towards a bright light which is not in the centre of the field of vision; this entitles us to say that the baby “perceives” the light. If he is blind, his eyes do not move in this way. A bird flying about in a wood does not bump into the branches, whereas in a room it will bump into the glass of the window. This entitles us to say that the bird perceives the branches but not the glass. Do we “perceive” the glass or do we merely know that it is there? This question introduces us to the complications produced by association. We know by experience, from the sense of touch, that there is usually glass in window-frames; thus it makes us react to the window-frames as if we could see the glass. But sometimes there is no glass, and still we shall perhaps behave as if there were. If this can happen, it shows that we do not perceive the glass, since our reaction is the same whether there is glass or not. If, however, the glass is coloured, or slightly distorting, or not perfectly clean, a person accustomed to glass will be able to distinguish a frame containing glass from one which has none. In that case it is more difficult to decide whether we are to say that he “perceives” the glass or not. It is certain that perception is affected by experience. A person who can read perceives print where another would not. A musician perceives differences between notes which to an untrained ear are indistinguishable. People unaccustomed to the telephone cannot understand what they hear in it; but this is perhaps not really a case in point.
The difficulty we are considering arises from the fact that a human body, unlike a scientific instrument, is perpetually changing its reaction to a given stimulus, under the influence of the law of association. Moreover, the human body is always doing something. How, then, are we to know whether what it is doing is the result of a given stimulus or not? In most cases, however, this difficulty is not very serious, particularly when we are dealing with people old enough to speak. When you go to the oculist he asks you to read a number of letters growing gradually smaller; at some point you fail. Where you have succeeded, he knows that you have perceived enough to make out what letter it is. Or you take a pair of compasses and press the points into a man’s back, asking him if he feels two pricks or only one. He may say one when the two points are near together; if he is on his guard against this error he may say two when in fact there is only one. But if the points are sufficiently far apart he will never make a mistake. That is to say, the bodily movement consisting in pronouncing the word “two” will invariably result from a certain stimulus. (Invariably, I mean, for a given subject on a given day.) This entitles us to say that the man can perceive that there are two points provided they are not too near together. Or you say: “What can you see on the horizon?” One man says, “I see a ship”. Another says, “I see a steamer with two funnels”. A third says, “I see a Cunarder going from Southampton to New York”. How much of what these three people say is to count as perception? They may all three be perfectly right in what they say, and yet we should not concede that a man can “perceive” that the ship is going from Southampton to New York. This, we should say, is inference. But it is by no means easy to draw a line; some things which are, in an important sense, inferential, must be admitted to be perceptions. The man who says “I see a ship” is using inference. Apart from experience, he only sees a queerly shaped dark dot on a blue background. Experience has taught him that that sort of dot “means” a ship; that is to say, he has a conditioned reflex which causes him to utter, aloud or to himself, the word “ship” when his eye is stimulated in a certain way. To disentangle what is due to experience, and what not, in the perceptions of an adult, is a hopeless task. Practically, if a word comes without previous verbal intermediaries, the ordinary man would include what the word means in the perception, while he would not do so if he arrives at the word after verbal preliminaries, overt or internal. But this is itself a question of familiarity. Show a child a pentagon, and he will have to count the sides to know how many there are; but after a little experience of geometrical figures, the word “pentagon” will arise without any previous words. And in any case such a criterion is theoretically worthless. The whole affair is a matter of degree, and we cannot draw any sharp line between perception and inference. As soon as this is realised, our difficulties are seen to be purely verbal and therefore unimportant.
It will be observed that we are not attempting at present to say what constitutes perception, but only what kind of behaviour on the part of a person whom we are observing will justify us in saying that he has perceived this or that feature of his environment. I suggest that we are justified in saying that a man “perceives” such a feature if, throughout some such period as a day, there is some bodily act which he performs whenever that feature is present, but not at any other time. This condition is clearly sufficient, but not necessary—that is to say, there may be perception even when it is not fulfilled. A man’s reaction may change through conditioning, even in so short a period as a day. Again, there may be a reaction, but one which is too slight to be observable; in this case the criterion of perception is theoretically satisfied, but not practically, since no one can know that it is. We often have evidence later on that something was perceived, although at the moment there was no discoverable reaction. I have frequently known children repeat afterwards some remark which, at the time, they seemed not to have heard. This sort of case affords another kind of evidence of perception, namely, the evidence afforded by a delayed response. Some people will sit silent and impassive in a company of talkers, giving no evidence that they are listening; yet they may go home and write down the conversation verbatim in their journals. These are the typical writers of memoirs. More remarkable still, I know one man—a man of genius, it is true—who talks incessantly, who yet, after meeting a total stranger, knows exactly what the stranger would have said if he had been given the chance. How this is managed, I do not know; but such a man is rightly called “perceptive”.
Obviously, in dealing with human beings old enough to talk, words afford the best evidence of perception. A man’s verbal responses to perceptive situations do not change much after the first few years of life. If you see a kingfisher, and at the same moment your companion says “there’s a kingfisher”, that is pretty conclusive evidence that he saw it. But, as this case illustrates, our evidence that some one else has perceived something always depends upon our own perceptions. And our own perceptions are known to us in a different way from that in which the perceptions of others are known to us. This is one of the weak spots in the attempt at a philosophy from the objective standpoint. Such a philosophy really assumes knowledge as a going concern, and takes for granted the world which a man derives from his own perceptions. We cannot tackle all our philosophical problems by the objective method, but it is worth while to proceed with it as far as it will take us. This whole question of perception will have to be attacked afresh from a different angle, and we shall then find reason to regard the behaviouristic standpoint as inadequate, though valid so far as it goes. We have still, however, a long road to go before we shall be driven to consider the subjective standpoint; more particularly, we have to define “knowledge” and “inference” behaviouristically, and then, making a new start, to consider what modern physics makes of “matter”. But for the moment there are still some things to be said about perception from the objective standpoint.
It will be seen that, according to our criterion of perception, an object perceived need not be in contact with the percipient’s body. The sun, moon, and stars are perceived according to the above criterion. In order, however, that an object not in contact with the body should be perceived, there are physical as well as physiological conditions to be fulfilled. There must be some physical process which takes place at the surface of the body when the object in question is suitably situated, but not otherwise; and there must be sense-organs capable of being affected by such a process. There are, as we know from physics, many processes which fulfil the necessary physical conditions, but fail to affect us through the inadequacy of our sense-organs. Waves of a certain sort make sound, but waves of exactly the same sort become inaudible if they are too short. Waves of a certain sort make light, but if they are too long or too short they are invisible. The waves used in wireless are of the same sort as those that make light, but are too long. There is no reason a priori why we should not be aware of wireless messages through our senses, without the need of instruments. X-rays are also of the same sort as those that make light, but in this case they are too short to be seen. They might render the objects from which they come visible, if we had a different sort of eye. We are not sensitive to magnetism, unless it is enormously powerful; but if we had more iron in our bodies, we might have no need of the mariner’s compass. Our senses are a haphazard selection of those that the nature of physical processes renders possible; one may suppose that they have resulted from chance variation and the struggle for existence.
It is important to observe that our perceptions are very largely concerned with form or shape or structure. This is the point emphasised by what is called “Gestaltpsychologie”, or psychology of form. Reading is a case in point. Whether we read black letters on white paper or white letters on a blackboard is a matter which we hardly notice; it is the forms of the letters that affect us, not their colour or their size (so long as they remain legible). In this matter, the sense of sight is pre-eminent, although blind men (and others to a less degree) can acquire a good knowledge of form by the sense of touch.
Another point of importance about our perceptions is that they give us, within limits, a knowledge of temporal sequence. If you say to a man “Brutus killed Cæsar”, and then “Cæsar killed Brutus”, the difference between the two statements is likely to be perceived by him if he is listening; in the one case he will say “of course”, in the other “nonsense”, which is evidence of his having different perceptions in the two cases, according to our definition. Further, if you ask him what the difference is, he can tell you that it is a difference in the order of the words. Thus time-order within a short period of time is clearly perceptible.
The objective method, which we have been applying in this chapter, is the only possible one in studying the perceptions of animals or of infants before they can talk. Many animals too low in the scale of evolution to have eyes are yet sensitive to light, in the sense that they move towards it or move away from it. Such animals, according to our criterion, perceive light, though there is no reason to suppose that they perceive colour or visual form or anything beyond the bare presence of light. We can perceive the bare presence of light when our eyes are shut; perhaps one may imagine their sensitiveness to be more or less analogous in its limitations.
It is not to be supposed, in any case, that “perceiving” an object involves knowing what it is like. That is quite another matter. We shall see later that certain inferences, of a highly abstract character, can be drawn from our perceptions to the objects perceived; but these inferences are at once difficult and not quite certain. The idea that perception, in itself, reveals the character of objects, is a fond delusion, and one, moreover, which it is very necessary to overcome if our philosophy is to be anything more than a pleasant fairy-tale.
[CHAPTER VI]
MEMORY OBJECTIVELY REGARDED
We are concerned in these chapters with what we can know about other men by merely observing their behaviour. In this chapter, I propose to consider everything that would commonly be called “memory”, in so far as it can be made a matter of external observation. And perhaps it may be as well, at this point, to state my own view of the question of “behaviourism”. This philosophy, of which the chief protagonist is Dr. John B. Watson, holds that everything that can be known about man is discoverable by the method of external observation, i.e. that none of our knowledge depends, essentially and necessarily, upon data in which the observer and the observed are the same person. I do not fundamentally agree with this view, but I think it contains much more truth than most people suppose, and I regard it as desirable to develop the behaviourist method to the fullest possible extent. I believe that the knowledge to be obtained by this method, so long as we take physics for granted, is self-contained, and need not, at any point, appeal to data derived from introspection, i.e. from observations which a man can make upon himself but not upon any one else. Nevertheless, I hold that there are such observations and that there is knowledge which depends upon introspection. What is more, I hold that data of this kind are required for a critical exposition of physics, which behaviourism takes for granted. I shall, therefore, after setting forth the behaviourist view of man, proceed to a scrutiny of our knowledge of physics, returning thence to man, but now as viewed from within. Then, finally, I shall attempt to draw conclusions as to what we know of the universe in general.
The word “memory” or “remembering” is commonly used in a number of different senses, which it is important to distinguish. More especially, there is a broad sense, in which the word applies to the power of repeating any habitual act previously learnt, and a narrow sense, in which it applies only to recollection of past events. It is in the broad sense that people speak of a dog remembering his master or his name, and that Sir Francis Darwin spoke of memory in plants. Samuel Butler used to attribute the sort of behaviour that would usually be called instinctive to memory of ancestral experience, and evidently he was using the word “memory” in its widest possible sense. Bergson, on the contrary, dismisses “habit-memory” as not true memory at all. True memory, for him, is confined to the recollection of a past occurrence, which, he maintains, cannot be a habit, since the event remembered only occurred once. The behaviourist maintains that this contention is mistaken, and that all memory consists in the retention of a habit. For him, therefore, memory is not something requiring special study, but is merged into the study of habit. Dr. Watson says: “The behaviourist never uses the term ‘memory’. He believes that it has no place in an objective psychology.” He proceeds to give instances, beginning with a white rat in a maze. On the first occasion, he says, it took this rat forty minutes to get out of the maze, but after thirty-five trials he learnt to get out in six seconds, without taking any wrong turnings. He was then kept away from the maze for six months, and on being put in it again he got out in two minutes, with six mistakes. He was just as good as he had been before at the twentieth trial. We have here a measure of the extent to which the habit of the maze had been retained. A similar experiment with a monkey showed even more retentiveness. He was put into a problem box which at first took him twenty minutes to open, but at the twentieth trial he opened it in two seconds. He was then kept away from it for six months, and on being put back in it he opened it in four seconds.
With human beings, we know that many of the habits we learn are retained through long periods of disuse—skating, bicycling, swimming, golf, etc., are familiar instances. Perhaps Dr. Watson goes a trifle too far when he says: “If a poor shot or an inexpert golfer tells you that he was good five years ago but that lack of practice has made him poor, don’t believe him; he never was good!” At any rate, this is not the belief of violinists and pianists, who consider it essential to practise every day. But even if it be somewhat of an exaggeration, it is certainly true that we retain bodily habits pretty well. Some, such as swimming, seem to be more completely retained than others. The power of talking a foreign language, for example, is one which is greatly impaired by disuse. The whole matter is quantitative, and easily tested by experiment.
But memory in the sense of recollection of past events, if it can be explained as a habit, will have, one might suppose, to be a verbal habit. As to this, Dr. Watson says:
“What the man on the street ordinarily means by an exhibition of memory is what occurs in some such situation as this: An old friend comes to see him, after many years’ absence. The moment he sees this friend, he says: ‘Upon my life! Addison Sims of Seattle! I haven’t seen you since the World’s Fair in Chicago. Do you remember the gay parties we used to have in the Wilderness Hotel? Do you remember the Midway? Do you remember ... etc.,’ ad infinitum. The psychology of this process is so simple that it seems almost an insult to your intelligence to discuss it, and yet a good many of the behaviourists’ kindly critics have said that behaviourism cannot adequately explain memory. Let us see if this is a fact.”
He goes on to say that during the period, long ago, when the man on the street was seeing Mr. Sims, they formed verbal and manual habits towards one another, so that “finally, just the sight of the man, even after months of absence, would call out not only the old verbal habits, but many other types of bodily and visceral responses.”
He sums up: “By ‘memory’, then, we mean nothing except the fact that when we meet a stimulus again after an absence, we do the old habitual thing (say the old words and show the old visceral—emotional—behaviour) that we learned to do when we were in the presence of that stimulus in the first place.”
This theory is preferable to ordinary psychological theories in many ways. In the first place, it is not an attempt to treat memory as some sort of mystical “faculty”, and does not suppose that we are always remembering everything that we should remember if a suitable stimulus were applied. It is concerned with the causation of specific acts of remembering, these acts being all externally observable. I do not see any good reason to question it. Bergson’s contention that the recollection of a unique occurrence cannot be explained by habit is clearly fallacious. There are many instances, both with animals and with human beings, of a habit becoming firmly established through one experience. It is, therefore, quite possible that a stimulus associated with a previous occurrence should set going a train of bodily events which, in turn, produce words describing that occurrence. There is here, however, a difficulty. The memory of a past occurrence cannot be a verbal habit, except when the occurrence has been frequently related. When Watson’s man on the street says “Do you remember the Midway”, he is not using words that have become habitual; very likely he never used these words before. He is using words which a verbal habit associates with an event that is now happening in him, and the event is called up by a habit associated with Mr. Sims. So at least we must suppose, if we accept Watson’s view. But this diminishes the plausibility and the verifiability of his view. It is not our actual language that can be regarded as habitual, but only what our words express. In repeating a poem we have learned by heart, the language is habitual, but not so when we recount a past incident in words we never used before. In this case, it is not the actual words that we repeat, but only their meaning. The habitual element, therefore, if it really accounts for the recollection, must not be sought in words.
This is something of a difficulty in the Watsonian theory of language. When a rat learns a maze, it learns certain definite bodily movements; so do we when we learn by heart. But I may say to one person, “I met Mr. Jones in the train to-day”, and to another “Joseph was in the 9.35 this morning.” With the exception of the words “in the”, these two sentences have nothing verbally in common, yet they may relate the same fact, and I may use either indifferently when I recall the fact. Thus my recollection is certainly not a definite verbal habit. Yet words are the only overt bodily movements by which I make known my recollections to other people. If the behaviourist tells me that my recollection is bodily habit, and begins by telling me that it is a verbal habit, he can be driven by such instances to the admission that it must be some other kind of habit. If he says this, he is abandoning the region of observable fact, and taking refuge in hypothetical bodily movements invoked to save a theory. But these are hardly any better than “thoughts.”
This question is more general than the problem of memory. Many different forms of words may be used to express the same “meaning”, and there seems no reason in mere habit to account for the fact that we sometimes use one form of words and sometimes another when we “think” of that which all the various forms of words express. The association seems to go, not direct from stimulus to words, but from stimulus to “meaning” and thence to words expressing the “meaning”. You may, for instance, be quite unable to recollect whether you were told “Jacob is older than Joseph”, or “Joseph is younger than Jacob”, though you may remember quite definitely that you were told the fact which both these forms of words express. Again, if you are learning, say, a proof of a mathematical theorem, you do not learn by heart what the book says, unless you are a very bad mathematician; you learn, as people say, to “understand” the proof, and then you can reproduce it in symbols quite different from those in the book. It is such facts, among others, that make it difficult to explain the mechanism of association, whether in memory or in “thought” in general, if we assume that words, or even sentences, are the terms associated.
Perhaps, however, the theory as to the “meaning” of words which we developed in an earlier chapter may help us out of the difficulty. We defined the “meaning” of a word by means of its associations; therefore, if two words are synonyms, they have the same associations; and any stimulus which calls up one may also call up the other. The question which of two synonyms we use will then depend upon some extraneous circumstance.
This is all very well so far as single words are concerned; it would account satisfactorily, for instance, for the fact that I call a man sometimes by his surname and sometimes by his Christian name. But it is hardly so adequate when we come to the question of sentences. To revert to the illustration of a moment ago, in response to the stimulus “Did anything happen on your journey?” you may say either “I met Mr. Jones in the train to-day”, or “Joseph was in the 9.35 this morning”, or any one of an indefinite number of other sentences expressive of the same occurrence. Are we to suppose that, while you were in the train, you were rehearsing all these different sentences to yourself, so that each of them became firmly associated with the words “journey to-day”? Clearly such a supposition would be absurd. Yet all the separate words of your sentence have many other associations; it is only the sentence as a whole that is associated with your journey. You have met other people besides Mr. Jones; you have had other contacts with Mr. Jones besides meeting him this morning; “train” and “to-day” equally are appropriate to other occurrences that you might relate. Thus it has to be the whole sentence that is the associative unit, and yet the sentence may never have been in your head before. It seems clear that it is possible to state in words something that you remember, although you never put it into words before. Suppose I say “What did you have for breakfast to-day?” Probably you will be able to tell me, though it is very likely that you have not given names to the things you ate until this moment.
This whole matter is connected with the distinction between sentences and single words, which we found important when we were discussing language. But even when we confine ourselves to single words, there are difficulties in Dr. Watson’s view. Cases are alleged in which children, after learning to speak, can recall incidents which occurred before they could speak, and describe them in correct words. This would show that the memory had persisted in a non-verbal form throughout the period before they learned to speak, and had only subsequently found verbal expression. Such extreme incidents are rare and might be questioned, but in a less extreme form it ought not to be difficult to obtain examples of the same sort of thing. Suppose, for example, that a young child hurt his wrist badly before he knew the word “wrist”, and that some time afterwards he learnt it; I should not be surprised if he could relate that he had hurt his wrist. Such instances, however, would not refute the essence of Watson’s theory. He would allow “visceral” memory, for example, and the association with the word “wrist” might be grafted on to this. The real difficulty in Dr. Watson’s view, to my mind, is the fact that our sentence may vary verbally as much as it likes so long as it retains the same “meaning”, and that we clearly do not rehearse to ourselves beforehand all the possible sentences having the “meaning” in question.
It should be realised that behaviourism loses much of its attractiveness if it is compelled to postulate movements that no one can observe and that there is no other reason to assume. Dr. Broad, in his book on The Mind and its Place in Nature, distinguishes between “molar” and “molecular” behaviourism: the former assumes only such bodily movements as can be observed, while the latter allows and utilises hypothetical minute movements, more especially in the brain. Now here we must make a distinction. Physics believes in a large number of phenomena which are too minute to be observed even with the strongest microscope, and if physics is at all correct, there must be minute movements in all parts of a human body, of a sort which we can never hope to see. We cannot reasonably demand of the behaviourist that he should abstain from an hypothesis which physics asserts for very good reasons. And in the process which leads from stimulus to reaction there are bound to be small occurrences in the brain, which, though they cannot be observed, are essential to the physiological explanation of what occurs. But when the behaviourist assumes small occurrences for which there is no ground in physics, and which are needed solely in order to safeguard his theory, he is in a less strong position. Dr. Watson asserts, for instance, that whenever we “think” there are small movements in the larynx which are beginnings of the movements we should make as if we spoke words out loud. It may be that this is true; certainly I am not prepared to deny it. But I am not prepared to say that it must be true merely because, if it were not, behaviourism would be false. We do not know in advance that behaviourism is true; we have to find out whether it will explain observed facts. Whenever it has to postulate something unobserved merely in order to avoid a refutation, it weakens its case. And if it maintains, as, from Dr. Watson’s language, it seems to do, that we only remember an occurrence by forming a verbal habit in connection with it, then it is obliged to postulate much implicit use of words of which we have no evidence.
To sum up this discussion. While it is quite possible, by behaviourist methods, to ascertain whether a person remembers a past occurrence or not, unless he is deliberately obstructing the observer, and while much memory can be quite adequately explained as habit, there do seem to be great difficulties in the view that memory consists entirely of habit, at least in the case of the recollection of an event. These difficulties seem insuperable if we suppose memory to be essentially a verbal habit. They are not insuperable if we postulate sufficient minute unobservable bodily movements. We have not considered whether they can be overcome by introducing data derived from introspection, since we wish, for the present, to maintain a strictly objective attitude to human behaviour. The introspective discussion of memory will be taken up at a later stage.
[CHAPTER VII]
INFERENCE AS A HABIT
In this chapter, we are concerned with inference as it can be observed when practised by some one else. Inference is supposed to be a mark of intelligence and to show the superiority of men to machines. At the same time, the treatment of inference in traditional logic is so stupid as to throw doubt on this claim, and syllogistic inference, which was taken as the type from Aristotle to Bacon (exclusive), is just the sort of thing that a calculating machine could do better than a professor. In syllogistic inference, you are supposed to know already that all men are mortal and that Socrates is a man; hence you deduce, what you never suspected before, that Socrates is mortal. This form of inference does actually occur, though very rarely. The only instance I have ever heard of was supplied by Dr. F. C. S. Schiller. He once produced a comic number of the philosophical periodical Mind, and sent copies to various philosophers, among others to a certain German, who was much puzzled by the advertisements. But at last he argued: “Everything in this book is a joke, therefore the advertisements are jokes”. I have never come across any other case of new knowledge obtained by means of a syllogism. It must be admitted that, for a method which dominated logic for two thousand years, this contribution to the world’s stock of information cannot be considered very weighty.
The inferences that we actually make in daily life differ from those of syllogistic logic in two respects, namely, that they are important and precarious, instead of being trivial and safe. The syllogism may be regarded as a monument to academic timidity: if an inference might be wrong, it was dangerous to draw it. So the mediæval monks, in their thinking as in their lives, sought safety at the expense of fertility.
With the Renaissance, a more adventurous spirit came into the world, but at first, in philosophy, it only took the form of following Greeks other than Aristotle, and more especially Plato. It is only with Bacon and Galileo that the inductive method arrived at due recognition: with Bacon as a programme which was largely faulty, but with Galileo as something which actually led to brilliant results, namely, the foundation of modern mathematical physics. Unfortunately, when the pedants got hold of induction, they set to work to make it as tame and scholastic as deduction had been. They searched for a way of making it always lead to true results, and in so doing robbed it of its adventurous character. Hume turned upon them with sceptical arguments, proving quite conclusively that if an induction is worth making it may be wrong. Thereupon Kant deluged the philosophic world with muddle and mystery, from which it is only now beginning to emerge. Kant has the reputation of being the greatest of modern philosophers, but to my mind he was a mere misfortune.
Induction, as it appears in the text-books, consists, roughly speaking, in the inference that, because A and B have been found often together and never apart, therefore they are probably always together, and either may be taken as a sign of the other. I do not wish, at this stage, to examine the logical justification of this form of argumentation; for the present, I am considering it as a practice, which we can observe in the habits of men and animals.
As a practice, induction is nothing but our old friend, the law of conditioned reflexes or of association. A child touches a knob that gives him an electric shock; after that, he avoids touching the knob. If he is old enough to speak he may state that the knob hurts when it is touched; he has made an induction based on a single instance. But the induction will exist as a bodily habit even if he is too young to speak, and it occurs equally among animals, provided they are not too low in the scale. The theories of induction in logic are what Freudians call a “rationalisation”; that is to say, they consist of reasons invented afterwards to prove that what we have been doing is sensible. It does not follow that they are bad reasons; in view of the fact that we and our ancestors have managed to exist since the origin of life, our behaviour and theirs must have been fairly sensible, even if we and they were unable to prove that it was. This, however, is not the point that concerns us at present. What concerns us at present is the fact that verbal induction is a late development of induction in behaviour, which is nothing more or less than the principle of “learned reactions”.
This principle, as the reader will remember, states that, if a certain event calls out a certain response, and if another event is experienced just before it, or at the same moment, in time that other event will tend to call out the response which, originally, only the first event would call out. This applies both to muscles and to glands; it is because it applies to glands that words are capable of causing emotions. Moreover, we cannot set limits to the length of the chain of associations that may be established. If you hold an infant’s limbs, you call out a rage reaction; this appears to be an “unlearned reaction”. If you, and no one else, repeatedly hold an infant’s limbs, the mere sight of you will call out a rage reaction after a time. When the infant learns to talk your name may have the same effect. If, later, he learns that you are an optician, he may come to hate all opticians; this may lead him to hate Spinoza because he made spectacles, and thence he may come to hate metaphysicians and Jews. For doing so he will no doubt have the most admirable reasons, which will seem to him to be his real ones; he will never suspect the process of conditioning by which he has in fact arrived at his enthusiasm for the Ku Klux Klan. This is an example of conditioning in the emotional sphere; but it is rather in the muscular sphere that we must seek the origin of the practice of induction.
Domestic animals which are habitually fed by a certain person will run towards that person as soon as they see him. We say that they expect food, and in fact their behaviour is very like what it would be if they saw food. But really we have only an example of “conditioning”: they have often seen first the farmer and then the food, so that in time they react to the farmer as they originally reacted to the food. Infants soon learn to react to the sight of the bottle, although at first they only react to the touch of it. When they can speak, the same law makes them say “dinner” when they hear the dinner-bell. It is quite unnecessary to suppose that they first think “that bell means dinner”, and then say “dinner”. The sight of dinner (by previous “learned reaction”) causes the word “dinner”: the bell frequently precedes the sight of dinner; therefore in time the bell produces the word “dinner”. It is only subsequent reflection, probably at a much later age, that makes the child say “I knew dinner was ready because I heard the bell”. Long before he can say this, he is acting as if he knew it. And there is no good reason for denying that he knows it, when he acts as if he did. If knowledge is to be displayed by behaviour, there is no reason to confine ourselves to verbal behaviour as the sole kind by which knowledge can manifest itself.
The situation, stated abstractly, is as follows. Originally, stimulus A produced reaction C; now stimulus B produces it, as a result of association. Thus B has become a “sign” of A, in the sense that it causes the behaviour appropriate to A. All sorts of things may be signs of other things, but with human beings words are the supreme example of signs. All signs depend upon some practical induction. Whenever we read or hear a statement, its effect upon us depends upon induction in this sense, since the words are signs of what they mean, in the sense that we react to them, in certain respects, as we should to what they stand for. If some one says to you “your house is on fire”, the effect upon you is practically the same as if you saw the conflagration. You may, of course, be the victim of a hoax, and in that case your behaviour will not be such as achieves any purpose you have in view. This risk of error exists always, since the fact that two things have occurred together in the past cannot prove conclusively that they will occur together in the future.
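For the modern reader, the abstract scheme just stated, in which stimulus A originally produces reaction C and stimulus B, through association, comes to produce it also, may be put in the form of a short computational sketch. The class name, the threshold of three pairings, and the particular stimuli are illustrative assumptions only, not anything asserted in the text.

```python
# A minimal sketch of the associative scheme described above: stimulus A
# innately produces reaction C; after B has repeatedly accompanied A,
# B alone comes to produce C, and so becomes a "sign" of A.
# The threshold of three pairings is an illustrative assumption.

class Organism:
    PAIRINGS_NEEDED = 3  # assumed number of joint occurrences to fix a habit

    def __init__(self, innate):
        # innate: dict mapping an unconditioned stimulus to its reaction
        self.innate = dict(innate)
        self.learned = {}   # conditioned stimulus -> reaction
        self.pairings = {}  # (new stimulus, effective stimulus) -> count

    def experience(self, stimuli):
        """Present a set of simultaneous stimuli; return the reactions."""
        reactions = []
        for s in stimuli:
            if s in self.innate:
                reactions.append(self.innate[s])
            elif s in self.learned:
                reactions.append(self.learned[s])
        # a stimulus paired often enough with an effective one becomes a sign
        for s in stimuli:
            for t in stimuli:
                if s != t and t in self.innate:
                    key = (s, t)
                    self.pairings[key] = self.pairings.get(key, 0) + 1
                    if self.pairings[key] >= self.PAIRINGS_NEEDED:
                        self.learned[s] = self.innate[t]
        return reactions

animal = Organism({"sight of food": "run to trough"})
for _ in range(3):
    animal.experience({"sight of farmer", "sight of food"})
# the farmer has now become a sign of food:
print(animal.experience({"sight of farmer"}))  # ['run to trough']
```

The farm animals of the preceding paragraph behave in just this way: having often seen first the farmer and then the food, they come in time to react to the farmer as they originally reacted to the food.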
Scientific induction is an attempt to regularise the above process, which we may call “physiological induction”. It is obvious that, as practised by animals, infants, and savages, physiological induction is a frequent source of error. There is Dr. Watson’s infant who induced, from two examples, that whenever he saw a certain rat there would be a loud noise. There is Edmund Burke, who induced from one example (Cromwell) that revolutions lead to military tyrannies. There are savages who argue, from one bad season, that the arrival of a white man causes bad crops. The inhabitants of Siena, in 1348, thought that the Black Death was a punishment for their pride in starting to build too large a cathedral. Of such examples there is no end. It is very necessary, therefore, if possible, to find some method by which induction can be practised so as to lead, in general, to correct results. But this is a problem of scientific method, with which we will not yet concern ourselves.
What does concern us at present is the fact that all inference, of the sort that really occurs, is a development of this one principle of conditioning. In practice, inference is of two kinds, one typified by induction, the other by mathematical reasoning. The former is by far the more important, since, as we have seen, it covers all use of signs and all empirical generalisations as well as the habits of which they are the verbal expression. I know that, from the traditional standpoint, it seems absurd to talk of inference in most cases of this sort. For example, you find it stated in the paper that such and such a horse has won the Derby. According to my own use of words, you practise an induction when you arrive thence at the belief that that horse has won. The stimulus consists of certain black marks on white paper—or perhaps on pink paper. This stimulus is only connected with horses and the Derby by association, yet your reaction is one appropriate to the Derby. Traditionally, there was only inference where there was a “mental process”, which, after dwelling upon the “premisses”, was led to assert the “conclusion” by means of insight into their logical connection. I am not saying that the process which such words as the above are intended to describe never takes place; it certainly does. What I am saying is that, genetically and causally, there is no important difference between the most elaborate induction and the most elementary “learned reaction”. The one is merely a developed form of the other, not something radically different. And our determination to believe in the results of inductions, even if, as logicians, we see no reason to do so, is really due to the potency of the principle of association; it is an example—perhaps the most important example—of what Dr. Santayana calls “animal faith”.
The question of mathematical reasoning is more difficult. I think we may lay it down that, in mathematics, the conclusion always asserts merely the whole or part of the premisses, though usually in new language. The difficulty of mathematics consists in seeing that this is so in particular cases. In practice, the mathematician has a set of rules according to which his symbols can be manipulated, and he acquires technical skill in working according to the rules in the same sort of way as a billiard-player does. But there is a difference between mathematics and billiards: the rules of billiards are arbitrary, whereas in mathematics some at least are in some sense “true”. A man cannot be said to understand mathematics unless he has “seen” that these rules are right. Now what does this consist of? I think it is only a more complicated example of the process of understanding that “Napoleon” and “Bonaparte” refer to the same person. To explain this, however, we must revert to what was said, in the [chapter on “Language”], about the understanding of form.
Human beings possess the power of reacting to form. No doubt some of the higher animals also possess it, though to nothing like the same extent as men do; and all animals, except a few of the most intelligent species, appear to be nearly devoid of it. Among human beings, it differs greatly from one individual to another, and increases, as a rule, up to adolescence. I should take it as what chiefly characterises “intellect”. But let us see, first, in what the power consists.
When a child is being taught to read, he learns to recognise a given letter, say H, whether it is large or small, black or white or red. However it may vary in these respects his reaction is the same: he says “H”. That is to say, the essential feature in the stimulus is its form. When my boy, at the age of just under three, was about to eat a three-cornered piece of bread and butter, I told him it was a triangle. (His slices were generally rectangular.) Next day, unprompted, he pointed to triangular bits in the pavement of the Albert Memorial, and called them “triangles”. Thus the form of the bread and butter, as opposed to its edibility, its softness, its colour, etc., was what had impressed him. This sort of thing constitutes the most elementary kind of reaction to form.
Now “matter” and “form” can be placed, as in the Aristotelian philosophy, in a hierarchy. From a triangle we can advance to a polygon, thence to a figure, thence to a manifold of points. Then we can go on and turn “point” into a formal concept, meaning “something that has relations which resemble spatial relations in certain formal respects”. Each of these is a step away from “matter” and further into the region of “form”. At each stage the difficulty increases. The difficulty consists in having a uniform reaction (other than boredom) to a stimulus of this kind. When we “understand” a mathematical expression, that means that we can react to it in an appropriate manner, in fact, that it has “meaning” for us. This is also what we mean by “understanding” the word “cat”. But it is easier to understand the word “cat”, because the resemblances between different cats are of a sort which causes even animals to have a uniform reaction to all cats. When we come to algebra, and have to operate with x and y, there is a natural desire to know what x and y really are. That, at least, was my feeling: I always thought the teacher knew what they really were, but would not tell me. To “understand” even the simplest formula in algebra, say (x + y)² = x² + 2xy + y², is to be able to react to two sets of symbols in virtue of the form which they express, and to perceive that the form is the same in both cases. This is a very elaborate business, and it is no wonder that boys and girls find algebra a bugbear. But there is no novelty in principle after the first elementary perceptions of form. And perception of form consists merely in reacting alike to two stimuli which are alike in form but very different in other respects. 
For, when we can do that, we can say, on the appropriate occasion, “that is a triangle”; and this is enough to satisfy the examiner that we know what a triangle is, unless he is so old-fashioned as to expect us to reproduce the verbal definition, which is of course a far easier matter, in which, with patience, we might teach even a parrot to succeed.
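Returning to the algebraic example above: the claim that, in mathematics, the conclusion asserts merely the whole or part of the premisses in new language can be exhibited in the identity itself, each line of whose verification is only a restatement, in other symbols, of the line before.

```latex
\begin{aligned}
(x+y)^2 &= (x+y)(x+y) \\
        &= x(x+y) + y(x+y) \\
        &= x^2 + xy + yx + y^2 \\
        &= x^2 + 2xy + y^2 .
\end{aligned}
```

To "see" that each step is legitimate is to react to the two sides in virtue of their common form, which is the process described in the text.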
The meanings of complex mathematical symbols are always fixed by rules in relation to the meaning of simpler symbols; thus their meanings are analogous to those of sentences, not to those of single words. What was said earlier about the understanding of sentences applies, therefore, to any group of symbols which, in mathematics, will be declared to have the same meaning as another group, or part of that meaning.
We may sum up this discussion by saying that mathematical inference consists in attaching the same reactions to two different groups of signs, whose meanings are fixed by convention in relation to their constituent parts, whereas induction consists, first, in taking something as a sign of something else, and later, when we have learned to take A as a sign of B, in taking A as also a sign of C. Thus the usual cases of induction and deduction are distinguished by the fact that, in the former, the inference consists in taking one sign as a sign of two different things, while in the latter the inference consists in taking two different signs as signs of the same thing. This statement is a little too antithetical to be taken as an exact expression of the whole truth in the matter. What is true, however, is that both kinds of inferences are concerned with the relation of a sign to what it signifies, and therefore come within the scope of the law of association.
[CHAPTER VIII]
KNOWLEDGE BEHAVIOURISTICALLY CONSIDERED
The word “knowledge”, like the word “memory”, is avoided by the behaviourist. Nevertheless there is a phenomenon commonly called “knowledge”, which is tested behaviouristically in examinations. I want to consider this phenomenon in this chapter, with a view to deciding whether there is anything in it that the behaviourist cannot deal with adequately.
It will be remembered that, in [Chapter II], we were led to the view that knowledge is a characteristic of the complete process from stimulus to reaction, or even, in the cases of sight and hearing, from an external object to a reaction, the external object being connected with the stimulus by a chain of physical causation in the outer world. Let us, for the moment, leave on one side such cases as sight and hearing, and confine ourselves, for the sake of definiteness, to knowledge derived from touch.
We can observe touch producing reactions in quite humble animals, such as worms and sea anemones. Are we to say that they have “knowledge” of what they touch? In some sense, yes. Knowledge is a matter of degree. When it is regarded in a purely behaviouristic manner, we shall have to concede that it exists, in some degree, wherever there is a characteristic reaction to a stimulus of a certain kind, and this reaction does not occur in the absence of the right kind of stimulus. In this sense, “knowledge” is indistinguishable from “sensitivity”, which we considered in connection with perception. We might say that a thermometer “knows” the temperature, and that a compass “knows” the direction of the magnetic north. This is the only sense in which, on grounds of observation, we can attribute knowledge to animals that are low in the scale. Many animals, for example, hide themselves when exposed to light, but as a rule not otherwise. In this, however, they do not differ from a radiometer. No doubt the mechanism is different, but the observed molar motion has similar characteristics. Wherever there is a reflex, an animal may be said, in a sense, to “know” the stimulus. This is, no doubt, not the usual sense of “knowledge”, but it is the germ out of which knowledge in the usual sense has grown, and without which no knowledge would be possible.
Knowledge in any more advanced sense is only possible as a result of learning, in the sense considered in [Chapter III]. The rat that has learned the maze “knows” the way out of it; the boy who has learned certain verbal reactions “knows” the multiplication table. Between these two cases there is no important difference. In both cases, we say that the subject “knows” something because he reacts in a manner advantageous to himself, in which he could not react before he had had certain experiences. I do not think, however, that we ought to use such a notion as “advantageous” in connection with knowledge. What we can observe, for instance, with the rat in the maze, is violent activity until the food is reached, followed by eating when it is reached; also a gradual elimination of acts which do not lead to the food. Where this sort of behaviour is observed, we may say that it is directed towards the food, and that the animal “knows” the way to the food when he gets to it by the shortest possible route.
But if this view is right, we cannot define any knowledge acquired by learning except with reference to circumstances toward which an animal’s activity is directed. We should say, popularly, that the animal “desires” such circumstances. “Desire”, like “knowledge”, is capable of a behaviouristic definition, and it would seem that the two are correlative. Let us, then, spend a little time on the behaviouristic treatment of “desire”.
The best example of desire, from this point of view, is hunger. The stimulus to hunger is a certain well-ascertained bodily condition. When in this condition, an animal moves restlessly; if he sees or smells food, he moves in a manner which, in conditions to which he is accustomed, would bring him to the food; if he reaches it, he eats it, and if the quantity is sufficient he then becomes quiescent. This kind of behaviour may be summarised by saying that a hungry animal “desires” food. It is behaviour which is in various ways different from that of inanimate matter, because restless movements persist until a certain condition is realised. These movements may or may not be the best adapted to realising the condition in question. Every one knows about the pike that was put on one side of a glass partition, with minnows on the other side. He continually bumped his nose on the glass, and after six weeks gave up the attempt to catch them. When, after this, the partition was removed, he still refrained from pursuing them. I do not know whether the experiment was tried of leaving a possibility of getting to the minnows by a roundabout route. To have learned to take a roundabout route would perhaps have required a degree of intelligence beyond the capacity of fishes; this is a matter, however, which offers little difficulty to dogs or monkeys.
What applies to hunger applies equally to other forms of “desire”. Every animal has a certain congenital apparatus of “desires”; that is to say, in certain bodily conditions he is stimulated to restless activities which tend towards the performance of some reflex, and if a given situation is often repeated the animal arrives more and more quickly at the performance of the reflex. This last, however, is only true of the higher animals; in the lower, the whole process from beginning to end is reflex, and can therefore only succeed in normal circumstances. The higher animals, and more especially men, have a larger proportion of learning and a smaller proportion of reflexes in their behaviour, and are therefore better able to adapt themselves to new circumstances. The helplessness of infants is a necessary condition for the adaptability of adults; infants have fewer useful reflexes than the young of animals, but have far more power of forming useful habits, which can be adapted to circumstances and are not fatally fixed from birth. This fact is intimately connected with the superiority of the human intelligence above that of the brutes.
Desire is extremely subject to “conditioning”. If A is a primitive desire and B has on many occasions been a means to A, B comes to be desired in the same sense in which A was previously desired. It may even happen, as in misers, that the desire for B completely displaces the desire for A, so that B, when attained, is no longer used as a means to A. This, however, is more or less exceptional. In general, the desire for A persists, although the desire for B has a more or less independent life.
The “conditioning” of primitive desires in human beings is the source of much that distinguishes our life from that of animals. Most animals only seek food when they are hungry; they may, then, die of starvation before they find it. Men, on the contrary, must have early acquired pleasure in hunting as an art, and must have set out on hunting expeditions before they were actually hungry. A further stage in the conditioning of hunger came with domestic animals; a still further stage with agriculture. Nowadays, when a man sets to work to earn his living, his activity is still connected, though not very directly, with hunger and the other primitive desires that can be gratified by means of money. These primitive desires are still, so to speak, the power station, though their energy is widely distributed to all sorts of undertakings that seem, at first sight, to have no connection with them. Consider “freedom” and the political activities it inspires; this is derivable, by “conditioning”, from the emotion of rage which Dr. Watson observed in infants whose limbs are not “free”. Again we speak of the “fall” of empires and of “fallen” women; this is connected with the fear which infants display when left without support.
After this excursion into the realm of desire, we can now return to “knowledge”, which, as we saw, is a term correlative to “desire”, and applicable to another feature of the same kind of activity. We may say, broadly, that a response to a stimulus of the kind involving desire in the above sense shows “knowledge” if it leads by the quickest or easiest route to the state of affairs which, in the above sense, is behaviouristically the object of desire. Knowledge is thus a matter of degree: the rat, during its progressive improvements in the maze, is gradually acquiring more and more knowledge. Its “intelligence quotient”, so far as that particular task is concerned, will be the ratio of the time it took on the first trial to the time it takes now to get out of the maze. Another point, if our definition of knowledge is accepted, is, that there is no such thing as purely contemplative knowledge: knowledge exists only in relation to the satisfaction of desire, or, as we say, in the capacity to choose the right means to achieve our ends.
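The ratio just proposed can be put in figures. The times below are invented purely for illustration; nothing in the text fixes them.

```python
# The "intelligence quotient" for the maze task, as defined above: the ratio
# of the time taken on the first trial to the time taken now.
# The figures are invented for illustration only.

def maze_quotient(first_trial_seconds, current_seconds):
    """Ratio of first-trial time to current time; grows as the rat improves."""
    return first_trial_seconds / current_seconds

# a rat that first took 300 seconds and now escapes in 20:
print(maze_quotient(300, 20))  # 15.0
```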
But can such a definition as the above really stand? Does it represent at all the sort of thing that would commonly be called knowledge? I think it does in the main, but there is need of some discussion to make this clear.
In some cases, the definition is obviously applicable. These are the cases that are analogous to the rat in the maze, the consideration of which led us to our definition. Do you “know” the way from Trafalgar Square to St. Pancras? Yes, if you can walk it without taking any wrong turnings. In practice, you can give verbal proof of such knowledge, without actually having to walk the distance; but that depends upon the correlation of names with streets, and is part of the process of substituting words for things. There may, it is true, come doubtful cases. I was once on a bus in Whitehall, and my neighbour asked “What street is this?” I answered him, not without surprise at his ignorance. He then said, “What building is that?” and I replied “The Foreign Office”. To this he retorted, “but I thought the Foreign Office was in Downing Street”. This time, it was his knowledge that surprised me. Should we say that he knew where the Foreign Office is? The answer is yes or no according to his purpose. From the point of view of sending a letter to it, he knew; from the point of view of walking to it, he did not know. He had, in fact, been a British Consul in South America, and was in London for the first time.
But now let us come to cases less obviously within the scope of our definition. The reader “knows” that Columbus crossed the ocean in 1492. What do we mean by saying that he “knows” this? We mean, no doubt, primarily that writing down this statement is the way to pass examinations, which is just as useful to us as getting out of the maze is to the rat. But we do not mean only this. There is historical evidence of the fact, at least I suppose there is. The historical evidence consists of printed books and manuscripts. Certain rules have been developed by historians as to the conditions in which statements in books or manuscripts may be accepted as true, and the evidence in our case is (I presume) in accordance with these rules. Historical facts often have importance in the present; for example, wills, or laws not yet repealed. The rules for weighing historical evidence are such as will, in general, bring out self-consistent results. Two results are self-consistent when, in relation to a desire to which both are relevant, they enjoin the same action, or actions which can form part of the one movement towards the goal. At Coton, near Cambridge, there is (or was in my time) a signpost with two arms pointing in diametrically opposite directions, and each arm said “To Cambridge”. This was a perfect example of self-contradiction, since the two arms made statements representing exactly opposite actions. And this case illustrates why self-contradiction is to be avoided. But the avoidance of self-contradiction makes great demands upon us; Hegel and Bradley imagined that we could know the nature of the universe by means of this principle alone. In this they were pretty certainly mistaken, but nevertheless a great deal of our “knowledge” depends upon this principle to a greater or less extent.
Most of our knowledge is like that in a cookery book, maxims to be followed when occasion arises, but not useful at every moment of every day. Since knowledge may be useful at any time, we get gradually, through conditioning, a general desire for knowledge. The learned man who is helpless in practical affairs is analogous to the miser, in that he has become absorbed in a means. It should be observed, also, that knowledge is neutral as among different purposes. If you know that arsenic is a poison, that enables you equally to avoid it if you wish to remain in health, and to take it if you wish to commit suicide. You cannot judge from a man’s conduct in relation to arsenic whether he knows that it is a poison or not, unless you know his desires. He may be tired of life, but avoid arsenic because he has been told that it is a good medicine; in this case, his avoidance of it is evidence of lack of knowledge.
But to return to Columbus: surely, the reader will say, Columbus really did cross the Atlantic in 1492, and that is why we call this statement “knowledge”. This is the definition of “truth” as “correspondence with fact”. I think there is an important element of correctness in this definition, but it is an element to be elicited at a later stage, after we have discussed the physical world. And it has the defect—as pragmatists have urged—that there seems no way of getting at “facts” and comparing them with our beliefs: all that we ever reach consists of other beliefs. I do not offer our present behaviouristic and pragmatic definition of “knowledge” as the only possible one, but I offer it as the one to which we are led if we wish to regard knowledge as something causally important, to be exemplified in our reactions to stimuli. This is the appropriate point of view when we are studying man from without, as we have been doing hitherto.
There is, however, within the behaviourist philosophy, one important addition to be made to our definition. We began this chapter with sensitivity, but we went on to the consideration of learned reactions, where the learning depended upon association. But there is another sort of learning—at least it is prima facie another sort—which consists of increase of sensitivity. All sensitivity in animals and human beings must count as a sort of knowledge; that is to say, if an animal behaves, in the presence of a stimulus of a certain kind, as it would not behave in the absence of that stimulus, then, in an important sense, it has “knowledge” as regards the stimulus. Now it appears that practice—e.g. in music—very greatly increases sensitivity. We learn to react differently to stimuli which only differ slightly; what is more, we learn to react to differences. A violin-player can react with great precision to an interval of a fifth; if the interval is very slightly greater or less, his behaviour in tuning is influenced by the difference from a fifth. And as we have already had occasion to notice, we become, through practice, increasingly sensitive to form. All this increased sensitivity must count as increase of knowledge.
But in saying this we are not saying anything inconsistent with our earlier definition of knowledge. Sensitivity is essential to choosing the right reaction in many cases. To take the cookery-book again; when it says “take a pinch of salt”, a good cook knows how much to take, which is an instance of sensitivity. Accurate scientific observation, which is of great practical importance, depends upon sensitivity. And so do many of our practical dealings with other people: if we cannot “feel” their moods, we shall be always getting at cross purposes.
The extent to which sensitivity is improved by practice is astonishing. Town-bred people do not know whether the weather is warm or cold until they read the weather reports in the paper. An entomologist perceives vastly more beetles in the course of a country walk than other people do. The subtlety with which connoisseurs can distinguish among wines and cigars is the despair of youths who wish to become men of the world. Whether this increase of sensitivity can be accounted for by the law of association, I do not know. In many cases, probably, it can, but I think sensitiveness to form, which is the essential element in the more difficult forms of abstract thought as well as in many other matters, cannot be regarded as derivative from the law of association, but is more analogous to the development of a new sense. I should therefore include improvement in sensitivity as an independent element in the advancement of knowledge. But I do so with some hesitation.
The above discussion does not pretend to cover the whole of the ground that has to be covered in discussing the definition of “knowledge”. There are other points of view, which are also necessary to a complete consideration of the question. But these must wait until, after considering the physical world, we come to the discussion of man as viewed from within.
PART II
THE PHYSICAL WORLD
[CHAPTER IX]
THE STRUCTURE OF THE ATOM
In all that we have said hitherto on the subject of man from without, we have taken a common-sense view of the material world. We have not asked ourselves: what is matter? Is there such a thing, or is the outside world composed of stuff of a different kind? And what light does a correct theory of the physical world throw upon the process of perception? These are questions which we must attempt to answer in the following chapters. And in doing so the science upon which we must depend is physics. Modern physics, however, is very abstract, and by no means easy to explain in simple language. I shall do my best, but the reader must not blame me too severely if, here and there, he finds some slight difficulty or obscurity. The physical world, both through the theory of relativity and through the most recent doctrines as to the structure of the atom, has become very different from the world of everyday life, and also from that of scientific materialism of the eighteenth-century variety. No philosophy can ignore the revolutionary changes in our physical ideas that the men of science have found necessary; indeed it may be said that all traditional philosophies have to be discarded, and we have to start afresh with as little respect as possible for the systems of the past. Our age has penetrated more deeply into the nature of things than any earlier age, and it would be a false modesty to over-estimate what can still be learned from the metaphysicians of the seventeenth, eighteenth and nineteenth centuries.
What physics has to say about matter, and the physical world generally, from the standpoint of the philosopher, comes under two main heads: first, the structure of the atom; secondly, the theory of relativity. The former was, until recently, the less revolutionary philosophically, though the more revolutionary in physics. Until 1925, theories of the structure of the atom were based upon the old conception of matter as indestructible substance, although this was already regarded as no more than a convenience. Now, owing chiefly to two German physicists, Heisenberg and Schrödinger, the last vestiges of the old solid atom have melted away, and matter has become as ghostly as anything in a spiritualist seance. But before tackling these newer views, it is necessary to understand the much simpler theory which they have displaced. This theory does not, except here and there, take account of the new doctrines on fundamentals that have been introduced by Einstein, and it is much easier to understand than relativity. It explains so much of the facts that, whatever may happen, it must remain a stepping-stone to a complete theory of the structure of the atom; indeed, the newer theories have grown directly out of it, and could hardly have arisen in any other way. We must therefore spend a little time in giving a bare outline, which is the less to be regretted as the theory is in itself fascinating.
The theory that matter consists of “atoms”, i.e. of little bits that cannot be divided, is due to the Greeks, but with them it was only a speculation. The evidence for what is called the atomic theory was derived from chemistry, and the theory itself, in its nineteenth-century form, was mainly due to Dalton. It was found that there were a number of “elements”, and that other substances were compounds of these elements. Compound substances were found to be composed of “molecules”, each molecule being composed of “atoms” of one substance combined with “atoms” of another or of the same. A molecule of water consists of two atoms of hydrogen and one atom of oxygen; they can be separated by electrolysis. It was supposed, until radio-activity was discovered, that atoms were indestructible and unchangeable. Substances which were not compounds were called “elements”. The Russian chemist Mendeleev discovered that the elements can be arranged in a series by means of progressive changes in their properties; in his time, there were gaps in this series, but most of them have since been filled by the discovery of new elements. If all the gaps were filled, there would be 92 elements; actually the number known is 87, or, including three about which there is still some doubt, 90. The place of an element in this series is called its “atomic number”. Hydrogen is the first, and has the atomic number 1; helium is the second, and has the atomic number 2; uranium is the last, and has the atomic number 92. Perhaps in the stars there are elements with higher atomic numbers, but so far none has been actually observed.
The discovery of radio-activity necessitated new views as to “atoms”. It was found that an atom of one radio-active element can break up into an atom of another element and an atom of helium, and that there is also another way in which it can change. It was found also that there can be different elements having the same place in the series; these are called “isotopes”. For example, when radium disintegrates it gives rise, in the end, to a kind of lead, but this is somewhat different from the lead found in lead-mines. A great many “elements” have been shown by Dr. F. W. Aston to be really mixtures of isotopes, which can be sorted out by ingenious methods. All this, but more especially the transmutation of elements in radio-activity, led to the conclusion that what had been called “atoms” were really complex structures, which could change into atoms of a different sort by losing a part. After various attempts to imagine the structure of an atom, physicists were led to accept the view of Sir Ernest Rutherford, which was further developed by Niels Bohr.
In this theory, which, in spite of recent developments, remains substantially correct, all matter is composed of two sorts of units, electrons and protons. All electrons are exactly alike, and all protons are exactly alike. All protons carry a certain amount of positive electricity, and all electrons carry an equal amount of negative electricity. But the mass of a proton is about 1835 times that of an electron: it takes 1835 electrons to weigh as much as one proton. Protons repel each other, and electrons repel each other, but an electron and a proton attract each other. Every atom is a structure consisting of electrons and protons. The hydrogen atom, which is the simplest, consists of one proton with one electron going round it as a planet goes round the sun. The electron may be lost, and the proton left alone; the atom is then positively electrified. But when it has its electron, it is, as a whole, electrically neutral, since the positive electricity of the proton is exactly balanced by the negative electricity of the electron.
The second element, helium, has already a much more complicated structure. It has a nucleus, consisting of four protons, and two electrons very close together, and in its normal state it has two planetary electrons going round the nucleus. But it may lose either or both of these, and it is then positively electrified.
All the later elements consist, like helium, of a nucleus composed of protons and electrons, and a number of planetary electrons going round the nucleus. There are more protons than electrons in the nucleus, but the excess is balanced by the planetary electrons when the atom is unelectrified. The number of protons in the nucleus gives the “atomic weight” of the element: the excess of protons over electrons in the nucleus gives the “atomic number”, which is also the number of planetary electrons when the atom is unelectrified. Uranium, the last element, has 238 protons and 146 electrons in the nucleus, and when unelectrified it has 92 planetary electrons. The arrangement of the planetary electrons in atoms other than hydrogen is not accurately known, but it is clear that, in some sense, they form different rings, those in the outer rings being more easily lost than those nearer the nucleus.
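The bookkeeping just described can be set out as a short sketch. Note that this is the pre-neutron model of the nucleus presented in the text (all the protons plus some electrons inside the nucleus), which was superseded after the discovery of the neutron; the figures for uranium are those given above.

```python
# The pre-neutron (Rutherford-Bohr era) model described in the text:
# the nucleus holds all the protons plus some electrons, and the
# planetary electrons balance the remainder in a neutral atom.

def atom(nuclear_protons, nuclear_electrons):
    """Return (atomic_weight, atomic_number, planetary_electrons)."""
    atomic_weight = nuclear_protons                      # protons in nucleus
    atomic_number = nuclear_protons - nuclear_electrons  # net nuclear charge
    planetary = atomic_number                            # electrons in orbit, if unelectrified
    return atomic_weight, atomic_number, planetary

# Uranium, as given in the text: 238 protons and 146 electrons in the nucleus.
weight, number, planetary = atom(238, 146)
print(weight, number, planetary)   # 238 92 92
```

The same function applied to hydrogen, with one proton and no nuclear electrons, gives atomic weight 1, atomic number 1, and one planetary electron, as in the text.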
I come now to what Bohr added to the theory of atoms as developed by Rutherford. This was a most curious discovery, introducing, in a new field, a certain type of discontinuity which was already known to be exhibited by some other natural processes. No adage had seemed more respectable in philosophy than “natura non facit saltum”, Nature makes no jumps. But if there is one thing more than another that the experience of a long life has taught me, it is that Latin tags always express falsehoods; and so it has proved in this case. Apparently Nature does make jumps, not only now and then, but whenever a body emits light, as well as on certain other occasions. The German physicist Planck was the first to demonstrate the necessity of jumps. He was considering how bodies radiate heat when they are warmer than their surroundings. Heat, as has long been known, consists of vibrations, which are distinguished by their “frequency”, i.e. by the number of vibrations per second. Planck showed that, for vibrations having a given frequency, not all amounts of energy are possible, but only those having to the frequency a ratio which is a certain quantity h multiplied by 1 or 2 or 3 or some other whole number, in practice always a small whole number. The quantity h is known as “Planck’s constant”; it has turned out to be involved practically everywhere where measurement is delicate enough to know whether it is involved or not. It is such a small quantity that, except where measurement can reach a very high degree of accuracy, the departure from continuity is not appreciable.[7]
[7] The dimensions of h are those of “action”, i.e. energy multiplied by time, or moment of momentum, or mass multiplied by length multiplied by velocity. Its magnitude is about 6.55 × 10⁻²⁷ erg secs.
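In modern notation, Planck’s rule says that a vibration of frequency ν can carry only those energies whose ratio to the frequency is a whole-number multiple of h:

```latex
\frac{E}{\nu} = n h, \quad \text{i.e.} \quad E = n h \nu, \qquad n = 1, 2, 3, \ldots
```

It is the smallness of h that makes this discontinuity imperceptible except to the most delicate measurement.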
Bohr’s great discovery was that this same quantity h is involved in the orbits of the planetary electrons in atoms, and that it limits the possible orbits in ways for which nothing in Newtonian dynamics had prepared us, and for which, so far, there is nothing in relativity-dynamics to account. According to Newtonian principles, an electron ought to be able to go round the nucleus in any circle with the nucleus in the centre, or in any ellipse with the nucleus in a focus; among possible orbits, it would select one or another according to its direction and velocity. But in fact only certain out of all these orbits occur. Those that occur are among those that are possible on Newtonian principles, but are only an infinitesimal selection from among these. It will simplify the explanation if we confine ourselves, as Bohr did at first, to circular orbits; moreover we will consider only the hydrogen atom, which has one planetary electron and a nucleus consisting of one proton. To define the circular orbits that are found to be possible, we proceed as follows: multiply the mass of the electron by the circumference of its orbit, and this again by the velocity of the electron; the result will always be h or 2h, or 3h, or some other small exact multiple of h, where h, as before, is “Planck’s constant”. There is thus a smallest possible orbit, in which the above product is h; the radius of the next orbit, in which the above product is 2h, will have a length four times this minimum; the next, nine times; the next, sixteen times; and so on through the “square numbers” (i.e. those got by multiplying a number by itself). Apparently no other circular orbits than these are possible in the hydrogen atom. Elliptic orbits are possible, and these again introduce exact multiples of h: but we need not, for our purposes, concern ourselves with them.
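Bohr’s rule, as stated above, can be checked by a little arithmetic. Taking mass × circumference × velocity = nh together with the electrical attraction supplying the force that keeps the electron in its circle, one finds that the radius of the nth orbit is n² times a smallest radius. The sketch below does this with modern SI values of the constants, which are of course not figures from the text:

```python
import math

# Bohr's quantization rule: mass x velocity x circumference = n * h.
# Combined with the Coulomb attraction supplying the centripetal force
# (m v^2 / r = k e^2 / r^2), this gives r_n = n^2 * h^2 / (4 pi^2 m k e^2),
# so the radii run through the square numbers, as stated in the text.
h = 6.626e-34      # Planck's constant, J*s (modern value)
m = 9.109e-31      # electron mass, kg
k = 8.988e9        # Coulomb constant, N*m^2/C^2
e = 1.602e-19      # elementary charge, C

def bohr_radius(n):
    return n**2 * h**2 / (4 * math.pi**2 * m * k * e**2)

a0 = bohr_radius(1)
print(f"smallest orbit: {a0:.3e} m")                      # about 5.3e-11 m
print([round(bohr_radius(n) / a0) for n in range(1, 5)])  # [1, 4, 9, 16]
```

The smallest radius comes out at roughly half an ångström, and the succeeding radii stand to it in the ratios 1, 4, 9, 16, the square numbers of the text.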
When a hydrogen atom is left to itself, if the electron is in the minimum orbit it will continue to rotate in that orbit so long as nothing from outside disturbs it; but if the electron is in any of the larger possible orbits, it may sooner or later jump suddenly to a smaller orbit, either the minimum or one of the intermediate possible orbits. So long as the electron does not change its orbit, the atom does not radiate energy, but when the electron jumps to a smaller orbit, the atom loses energy, which is radiated out in the form of a light-wave. This light-wave is always such that its energy divided by its frequency is exactly h. The atom may absorb energy from without, and it does so by the electron jumping to a larger orbit. It may then afterwards, when the external source of energy is removed, jump back to the smaller orbit; this is the cause of fluorescence, since, in doing so, the atom gives out energy in the form of light.
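The rule that the emitted light has energy divided by frequency exactly equal to h can be illustrated numerically. The energies of the orbits themselves (about −13.6 electron-volts divided by the square of the orbit number, in modern figures) are not given in the text and are assumed here for the sake of the sketch:

```python
# The rule "energy divided by frequency is exactly h", applied to a jump of
# the hydrogen electron from the second orbit down to the smallest.
# The level energies E_n = -13.6 eV / n^2 follow from Bohr's model; the
# 13.6 eV figure and the conversion factor are modern values, assumed here.
h = 6.626e-34        # Planck's constant, J*s
eV = 1.602e-19       # joules per electron-volt

def level_energy(n):
    return -13.6 * eV / n**2     # energy of the n-th circular orbit

radiated = level_energy(2) - level_energy(1)    # energy lost by the atom
frequency = radiated / h                        # energy / frequency = h
print(f"frequency: {frequency:.3e} Hz")         # about 2.47e15 Hz
```

The resulting frequency, about 2.47 × 10¹⁵ vibrations per second, lies in the ultra-violet, and is in fact the brightest line of the hydrogen spectrum in that region.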
The same principles, with greater mathematical complications, apply to the other elements. There is, however, with some of the latest elements, a phenomenon which cannot have any analogue in hydrogen, and that is radio-activity. When an atom is radio-active, it emits rays of three kinds, called respectively α-rays, β-rays, and γ-rays. Of these, the γ-rays are analogous to light, but of much higher frequencies, or shorter wave-lengths; we need not further concern ourselves with them. The α-rays and β-rays, on the contrary, are important as our chief source of knowledge concerning the nuclei of atoms. It is found that the α-rays consist of helium nuclei, while the β-rays consist of electrons. Both come out of the nucleus, since the atom after radio-activity disruption is a different element from what it was before. But no one knows just why the nucleus disintegrates when it does, nor why, in a piece of radium, for example, some atoms break down while others do not.
The three principal sources of our knowledge concerning atoms have been the light they emit, X-rays and radio-activity. As everyone knows, when the light emitted by a glowing gas is passed through a prism, it is found to consist of well-defined lines of different colours, which are characteristic for each element, and constitute what is called its “spectrum”. The spectrum extends beyond the range of visible light, both into the infra-red and into the ultra-violet. In the latter direction, it extends right into the region of X-rays, which are only ultra-ultra-violet light. By means of crystals, it has been found possible to study X-ray spectra as exactly as those of ordinary light. The great merit of Bohr’s theory was that it explained why elements have the spectra they do have, which had, before, been a complete mystery. In the cases of hydrogen and positively electrified helium, the explanation, particularly as extended by the German physicist Sommerfeld, gave the most minute numerical agreement between theory and observation; in other cases, mathematical difficulties made this completeness impossible, but there was every reason to think that the same principles were adequate. This was the main reason for accepting Bohr’s theory; and certainly it was a very strong one. It was found that visible light enabled us to study the outer rings of planetary electrons, X-rays enabled us to study the inner rings, and radio-activity enabled us to study the nucleus. For the latter purpose, there are also other methods, more particularly Rutherford’s “bombardment”, which aims at breaking up nuclei by firing projectiles at them, and sometimes succeeds in making a hit in spite of the smallness of the target.
The theory of atomic structure that has just been outlined, like everything in theoretical physics, is capable of expression in mathematical formulæ; but like many things in theoretical physics, it is also capable of expression in the form of an imaginative picture. But here, as always, it is necessary to distinguish sharply between the mathematical symbols and the pictorial words. The symbols are pretty sure to be right, or nearly so; the imaginative picture, on the other hand, should not be taken too seriously. When we consider the nature of the evidence upon which the above theory of the atom is based, we can see that the attempt to make a picture of what goes on has led us to be far more concrete than we have any right to be. If we want to assert only what we have good reason to believe, we shall have to abandon the attempt to be concrete about what goes on in the atom, and say merely something like this: An atom with its electrons is a system characterised by certain integers, all small, and all capable of changing independently. These integers are the multiples of h involved. When any of them changes to a smaller integer, energy of a definite amount is emitted, and its frequency will be obtained by dividing the energy by h. When any of the integers concerned changes to a larger integer, energy is absorbed, and again the amount absorbed is definite. But we cannot know what goes on when the atom is neither absorbing nor radiating energy, since then it has no effects in surrounding regions; consequently all evidence as to atoms is as to their changes, not as to their steady states.
The point is not that the facts do not fit with the hypothesis of the atom as a planetary system. There are, it is true, certain difficulties which afford empirical grounds for the newer theory which has superseded Bohr’s, and which we shall shortly consider. But even if no such grounds existed, it would be obvious that Bohr’s theory states more than we have a right to infer from what we can observe. Of theories that state so much, there must be an infinite number that are compatible with what is known, and it is only what all of these have in common that we are really entitled to assert. Suppose your knowledge of Great Britain were entirely confined to observing the people and goods that enter and leave the ports; you could, in that case, invent many theories as to the interior of Great Britain, all of which would agree with all known facts. This is an exact analogy. If you delimit in the physical universe any region, large or small, not containing a scientific observer, all scientific observers will have exactly the same experiences whatever happens inside this region, provided it does not affect the flow of energy across the boundary of the region. And so, if the region contains one atom, any two theories which give the same results as to the energy that the atom radiates or absorbs are empirically indistinguishable, and there can be no reason except simplicity for preferring one of them to the other. On this ground, even if on no other, prudence compels us to seek a more abstract theory of the atom than that which we owe to Rutherford and Bohr.
The newer theory has been put forward mainly by two physicists already mentioned, Heisenberg and Schrödinger, in forms which look different, but are in fact mathematically equivalent. It is as yet an impossible task to describe this theory in simple language, but something can be said to show its philosophical bearing. Broadly speaking, it describes the atom by means of the radiations that come out of it. In Bohr’s theory, the planetary electrons are supposed to describe orbits over and over again while the atom is not radiating; in the newer theory, we say nothing at all as to what happens at these times. The aim is to confine the theory to what is empirically verifiable, namely radiations; as to what there is where the radiations come from, we cannot tell, and it is scientifically unnecessary to speculate. The theory requires modifications in our conception of space, of a sort not yet quite clear. It also has the consequence that we cannot identify an electron at one time with an electron at another, if, in the interval, the atom has radiated energy. The electron ceases altogether to have the properties of a “thing” as conceived by common sense; it is merely a region from which energy may radiate.
On the subject of discontinuity, there is disagreement between Schrödinger and other physicists. Most of them maintain that quantum changes—i.e. the changes that occur in an atom when it radiates or absorbs energy—must be discontinuous. Schrödinger thinks otherwise. This is a matter in debate among experts, as to which it would be rash to venture an opinion. Probably it will be decided one way or other before very long.
The main point for the philosopher in the modern theory is the disappearance of matter as a “thing”. It has been replaced by emanations from a locality—the sort of influences that characterise haunted rooms in ghost stories. As we shall see in the next chapter, the theory of relativity leads to a similar destruction of the solidity of matter, by a different line of argument. All sorts of events happen in the physical world, but tables and chairs, the sun and moon, and even our daily bread, have become pale abstractions, mere laws exhibited in the successions of events which radiate from certain regions.
[CHAPTER X]
RELATIVITY
We have seen that the world of the atom is a world of revolution rather than evolution: the electron which has been moving in one orbit hops quite suddenly into another, so that the motion is what is called “discontinuous”, that is to say, the electron is first in one place and then in another, without having passed over any intermediate places. This sounds like magic, and there may be some way of avoiding such a disconcerting hypothesis. At any rate, nothing of the sort seems to happen in the regions where there are no electrons and protons. In these regions, so far as we can discover, there is continuity, that is to say, everything goes by gradual transitions, not by jumps. The regions in which there are no electrons and protons may be called “æther” or “empty space” as you prefer: the difference is only verbal. The theory of relativity is especially concerned with what goes on in these regions, as opposed to what goes on where there are electrons and protons. Apart from the theory of relativity, what we know about these regions is that waves travel across them, and that these waves, when they are waves of light or electromagnetism (which are identical), behave in a certain fashion set forth by Maxwell in certain formulæ called “Maxwell’s equations”. When I say we “know” this, I am saying more than is strictly correct, because all we know is what happens when the waves reach our bodies. It is as if we could not see the sea, but could only see the people disembarking at Dover, and inferred the waves from the fact that the people looked green. It is obvious, in any case, that we can only know so much about the waves as is involved in their having such-and-such causes at one end and such-and-such effects at the other. What can be inferred in this way will be, at best, something wholly expressible in terms of mathematical structure. 
We must not think of the waves as being necessarily “in” the æther or “in” anything else; they are to be thought of merely as progressive periodic processes, whose laws are more or less known, but whose intrinsic character is not known and never can be.
The theory of relativity has arisen from the study of what goes on in the regions where there are no electrons and protons. While the study of the atom has led us to discontinuities, relativity has produced a completely continuous theory of the intervening medium—far more continuous than any theory formerly imagined. At the moment, these two points of view stand more or less opposed to each other, but no doubt before long they will be reconciled. There is not, even now, any logical contradiction between them; there is only a fairly complete lack of connection.
For philosophy, far the most important thing about the theory of relativity is the abolition of the one cosmic time and the one persistent space, and the substitution of space-time in place of both. This is a change of quite enormous importance, because it alters fundamentally our notion of the structure of the physical world, and has, I think, repercussions in psychology. It would be useless, in our day, to talk about philosophy without explaining this matter. Therefore I shall make the attempt, in spite of some difficulty.
Common-sense and pre-relativity physicists believed that, if two events happen in different places, there must always be a definite answer, in theory, to the question whether they were simultaneous. This is found to be a mistake. Let us suppose two persons A and B a long way apart, each provided with a mirror and means of sending out light-signals. The events that happen to A still have a perfectly definite time-order, and so have those that happen to B; the difficulty comes in connecting A’s time with B’s. Suppose A sends a flash to B, B’s mirror reflects it, and it returns to A after a certain time. If A is on the earth and B on the sun, the time will be about sixteen minutes. We shall naturally say that the time when B received the light-signal is half way between the times when A sent it out and received it back. But this definition turns out to be not unambiguous; it will depend upon how A and B are moving relatively to each other. The more this difficulty is examined, the more insuperable it is seen to be. Anything that happens to A after he sends out the flash and before he gets it back is neither definitely before nor definitely after nor definitely simultaneous with the arrival of the flash at B. To this extent, there is no unambiguous way of correlating times in different places.
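The signalling procedure just described can be set out in a few lines. The reflection at B is *defined* to occur half way between A’s emission and A’s reception of the returning flash; the Earth–Sun distance and the speed of light are modern values, used here only to check the “about sixteen minutes” figure:

```python
# A sends a flash to B's mirror and times the round trip on his own clock;
# the reflection at B is then defined to occur half way between emission
# and return.  Earth-Sun distance and light speed are modern values.
c = 2.998e8          # speed of light, m/s
d = 1.496e11         # mean Earth-Sun distance, m

t_send = 0.0
t_return = 2 * d / c                       # round-trip time by A's clock
t_reflection = (t_send + t_return) / 2     # the half-way definition

print(f"round trip: {t_return / 60:.1f} minutes")   # about 16.6 minutes
```

The whole difficulty of the text lies in the fact that this half-way definition gives different results according as A and B are moving relatively to one another; the arithmetic above fixes only A’s own reckoning.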
The notion of a “place” is also quite vague. Is London a “place”? But the earth is rotating. Is the earth a “place”? But it is going round the sun. Is the sun a “place”? But it is moving relatively to the stars. At best you could talk of a place at a given time; but then it is ambiguous what is a given time, unless you confine yourself to one place. So the notion of “place” evaporates.
We naturally think of the universe as being in one state at one time and in another at another. This is a mistake. There is no cosmic time, and so we cannot speak of the state of the universe at a given time. And similarly we cannot speak unambiguously of the distance between two bodies at a given time. If we take the time appropriate to one of the two bodies, we shall get one estimate; if the time of the other, another. This makes the Newtonian law of gravitation ambiguous, and shows that it needs restatement, independently of empirical evidence.
Geometry also goes wrong. A straight line, for example, is supposed to be a certain track in space whose parts all exist simultaneously. We shall now find that what is a straight line for one observer is not a straight line for another. Therefore geometry ceases to be separable from physics.
The “observer” need not be a mind, but may be a photographic plate. The peculiarities of the “observer” in this region belong to physics, not to psychology.
So long as we continue to think in terms of bodies moving, and try to adjust this way of thinking to the new ideas by successive corrections, we shall only get more and more confused. The only way to get clear is to make a fresh start, with events instead of bodies. In physics, an “event” is anything which, according to the old notions, would be said to have both a date and a place. An explosion, a flash of lightning, the starting of a light-wave from an atom, the arrival of the light-wave at some other body, any of these would be an “event”. Some strings of events make up what we regard as the history of one body; some make up the course of one light-wave; and so on. The unity of a body is a unity of history—it is like the unity of a tune, which takes time to play, and does not exist whole in any one moment. What exists at any one moment is only what we call an “event”. It may be that the word “event”, as used in physics, cannot be quite identified with the same word as used in psychology; for the present we are concerned with “events” as the constituents of physical processes, and need not trouble ourselves about “events” in psychology.
The events in the physical world have relations to each other which are of the sort that have led to the notions of space and time. They have relations of order, so that we can say that one event is nearer to a second than to a third. In this way we can arrive at the notion of the “neighbourhood” of an event: it will consist roughly speaking of all the events that are very near the given event. When we say that neighbouring events have a certain relation, we shall mean that the nearer two events are to each other, the more nearly they have this relation, and that they approximate to having it without limit as they are taken nearer and nearer together.
Two neighbouring events have a measurable quantitative relation called “interval”, which is sometimes analogous to distance in space, sometimes to lapse of time. In the former case it is called space-like, in the latter time-like. The interval between two events is time-like when one body might be present at both—for example, when both are parts of the history of your body. The interval is space-like in the contrary case. In the marginal case between the two, the interval is zero; this happens when both are parts of one light-ray.
The interval between two neighbouring events is something objective, in the sense that any two careful observers will arrive at the same estimate of it. They will not arrive at the same estimate for the distance in space or the lapse of time between the two events, but the interval is a genuine physical fact, the same for all. If a body can travel freely from one event to the other, the interval between the two events will be the same as the time between them as measured by a clock travelling with the body. If such a journey is physically impossible, the interval will be the same as the distance as estimated by an observer to whom the two events are simultaneous. But the interval is only definite when the two events are very near together; otherwise the interval depends upon the route chosen for travelling from the one event to the other.
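The classification of intervals described in the last two paragraphs can be sketched as follows. The sign convention (squared interval = c²·(time separation)² minus (space separation)²) is one of the two in common use, and units are chosen with c = 1 purely for simplicity:

```python
# Squared interval between two neighbouring events, in the convention
# s^2 = c^2 * dt^2 - dx^2: positive means time-like, negative space-like,
# zero ("null") when both events lie on a single light-ray.
# Units are chosen with c = 1 (e.g. seconds and light-seconds).

def classify(dt, dx, c=1.0):
    s2 = (c * dt)**2 - dx**2
    if s2 > 0:
        return "time-like"     # one body could be present at both events
    if s2 < 0:
        return "space-like"    # no body could travel from one to the other
    return "null"              # both are parts of one light-ray

print(classify(2.0, 1.0))   # time-like
print(classify(1.0, 2.0))   # space-like
print(classify(1.0, 1.0))   # null
```

In the time-like case the square root of s² is just the time a clock carried from the one event to the other would record; in the space-like case the square root of −s² is the distance as estimated by an observer to whom the two events are simultaneous, as stated in the text.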
Four numbers are needed to fix the position of an event in the world; these correspond to the time and the three dimensions of space in the old reckoning. These four numbers are called the co-ordinates of the event. They may be assigned on any principle which gives neighbouring co-ordinates to neighbouring events; subject to this condition, they are merely conventional. For example, suppose an aeroplane has had an accident. You can fix the position of the accident by four numbers: latitude, longitude, altitude above sea-level, and Greenwich Mean Time. But you cannot fix the position of the explosion in space-time by means of less than four numbers.
Everything in relativity-theory goes (in a sense) from next to next; there are no direct relations between distant events, such as distance in time or space. And of course there are no forces acting at a distance; in fact, except as a convenient fiction, there are no “forces” at all. Bodies take the course which is easiest at each moment, according to the character of space-time in the particular region where they are; this course is called a geodesic.
Now it will be observed that I have been speaking freely of bodies and motion, although I said that bodies were merely certain strings of events. That being so, it is of course necessary to say what strings of events constitute bodies, since not all continuous strings of events do so, not even all geodesics. Until we have defined the sort of thing that makes a body, we cannot legitimately speak of motion, since this involves the presence of one body on different occasions. We must therefore set to work to define what we mean by the persistence of a body, and how a string of events constituting a body differs from one which does not. This topic will occupy the next chapter.
But it may be useful, as a preliminary, to teach our imagination to work in accordance with the new ideas. We must give up what Whitehead admirably calls the “pushiness” of matter. We naturally think of an atom as being like a billiard-ball; we should do better to think of it as like a ghost, which has no “pushiness” and yet can make you fly. We have to change our notions both of substance and of cause. To say that an atom persists is like saying that a tune persists. If a tune takes five minutes to play, we do not conceive of it as a single thing which exists throughout that time, but as a series of notes, so related as to form a unity. In the case of the tune, the unity is æsthetic; in the case of the atom, it is causal. But when I say “causal” I do not mean exactly what the word naturally conveys. There must be no idea of compulsion or “force”, neither the force of contact which we imagine we see between billiard-balls nor the action at a distance which was formerly supposed to constitute gravitation. There is merely an observed law of succession from next to next. An event at one moment is succeeded by an event at a neighbouring moment, which, to the first order of small quantities, can be calculated from the earlier event. This enables us to construct a string of events, each, approximately, growing out of a slightly earlier event according to an intrinsic law. Outside influences only affect the second order of small quantities. A string of events connected, in this way, by an approximate intrinsic law of development is called one piece of matter. This is what I mean by saying that the unity of a piece of matter is causal. I shall explain this notion more fully in later chapters.
[CHAPTER XI]
CAUSAL LAWS IN PHYSICS
In the last chapter we spoke about the substitution of space-time for space and time, and the effect which this has had in substituting strings of events for “things” conceived as substances. In this chapter we will deal with cause and effect as they appear in the light of modern science. It is at least as difficult to purge our imagination of irrelevances in this matter as in regard to substance. The old-fashioned notion of cause appeared in dynamics as “force”. We still speak of forces just as we still speak of the sunrise, but we recognise that this is nothing but a convenient way of speaking, in the one case as in the other.
Causation is deeply embedded in language and common sense. We say that people build houses or make roads: to “build” and to “make” are both notions involving causality. We say that a man is “powerful”, meaning that his volitions are causes over a wide range. Some examples of causation seem to us quite natural, others less so. It seems natural that our muscles should obey our will, and only reflection makes us perceive the necessity of finding an explanation of this phenomenon. It seems natural that when you hit a billiard-ball with a cue it moves. When we see a horse pulling a cart, or a heavy object being dragged by a rope, we feel as if we understood all about it. It is events of this sort that have given rise to the common-sense belief in causes and forces.
But as a matter of fact the world is incredibly more complicated than it seems to common sense. When we think we understand a process—I mean by “we” the non-reflective part in each of us—what really happens is that there is some sequence of events so familiar through past experience that at each stage we expect the next stage. The whole process seems to us peculiarly intelligible when human desires enter in, for example, in watching a game: what the ball does and what the players do seem “natural”, and we feel as if we quite understood how the stages succeed each other. We thus arrive at the notion of what is called “necessary” sequence. The text-books say that A is the cause of B if A is “necessarily” followed by B. This notion of “necessity” seems to be purely anthropomorphic, and not based upon anything that is a discoverable feature of the world. Things happen according to certain rules; the rules can be generalised, but in the end remain brute facts. Unless the rules are concealed conventions or definitions, no reason can be given why they should not be completely different.
To say that A is “necessarily” followed by B is thus to say no more than that there is some general rule, exemplified in a very large number of observed instances, and falsified in none, according to which events such as A are followed by events such as B. We must not have any notion of “compulsion”, as if the cause forced the effect to happen. A good test for the imagination in this respect is the reversibility of causal laws. We can just as often infer backwards as forwards. When you get a letter, you are justified in inferring that somebody wrote it, but you do not feel that your receiving it compelled the sender to write it. The notion of compulsion is just as little applicable to effects as to causes. To say that causes compel effects is as misleading as to say that effects compel causes. Compulsion is anthropomorphic: a man is compelled to do something when he wishes to do the opposite, but except where human or animal wishes come in the notion of compulsion is inapplicable. Science is concerned merely with what happens, not with what must happen.
When we look for invariable rules of sequence in nature, we find that they are not such as common sense sets up. Common sense says: thunder follows lightning, waves at sea follow wind, and so on. Rules of this sort are indispensable in practical life, but in science they are all only approximate. If there is any finite interval of time, however short, between the cause and the effect, something may happen to prevent the effect from occurring. Scientific laws can only be expressed in differential equations. This means that, although you cannot tell what may happen after a finite time, you can say that, if you make the time shorter and shorter, what will happen will be more and more nearly according to such-and-such a rule. To take a very simple case: I am now in this room; you cannot tell where I shall be in another second, because a bomb may explode and blow me sky-high, but if you take any two small fragments of my body which are now very close together, you can be sure that, after some very short finite time, they will still be very close together. If a second is not short enough, you must take a shorter time; you cannot tell in advance how short a time you may have to take, but you may feel fairly certain that there is a short enough time.
The laws of sequence in physics, apart from quantum phenomena, are of two sorts, which appeared in traditional dynamics as laws of velocity and laws of acceleration. In a very short time, the velocity of a body alters very little, and if the time is taken short enough, the change of velocity diminishes without limit. This is what, in the last chapter, we called an “intrinsic” causal law. Then there is the effect of the outer world, as it appeared in traditional dynamics, which is shown in acceleration. The small change which does occur in the velocity in a short time is attributed to surrounding bodies, because it is found to vary as they vary, and to vary according to ascertained laws. Thus we think of surrounding bodies as exerting an influence, which we call “force”, though this remains as mysterious as the influence of the stars in astrology.
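The distinction between the intrinsic first-order law and the second-order outside influence may be illustrated by a small numerical sketch in modern notation. The falling body and the constant acceleration here are purely illustrative assumptions, not part of the argument:

```python
# A causal law expressed as a differential equation: prediction from the
# intrinsic first-order law (position changes with the present velocity)
# is exact "to the first order of small quantities"; the outside
# influence (here an assumed constant acceleration g) enters only at
# second order.
g = 9.8  # illustrative constant acceleration

def prediction_error(dt):
    """Discrepancy, after an interval dt, between the exact rule and
    the intrinsic first-order prediction."""
    x0, v0 = 0.0, 1.0
    x_true = x0 + v0 * dt - 0.5 * g * dt * dt  # the exact rule
    x_first_order = x0 + v0 * dt               # intrinsic first-order law
    return abs(x_true - x_first_order)

# The discrepancy falls off as dt**2: halving the interval quarters it.
assert abs(prediction_error(0.2) - 4 * prediction_error(0.1)) < 1e-9
```

The shorter the interval, the more nearly the next state follows from the present one by the intrinsic rule; this is the sense in which scientific laws can only be expressed in differential equations.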
Einstein’s theory of gravitation has done away with this conception in so far as gravitational forces are concerned. In this theory, a planet moving round the sun is moving in the nearest approach to a straight line that the neighbourhood permits. The neighbourhood is supposed to be non-Euclidean, that is to say, to contain no straight lines such as Euclid imagined. If a body is moving freely, as the planets do, it observes a certain rule. Perhaps the simplest way to state this rule is as follows: Suppose you take any two events which happen on the earth, and you measure the time between them by ideally accurate clocks which move with the earth. Suppose some traveller on a magic carpet had meanwhile cruised about the universe, leaving the earth at the time of the first event and returning at the time of the second. By his clocks the period elapsed will be less than by the terrestrial clocks. This is what is meant by saying that the earth moves in a “geodesic”, which is the nearest approach to a straight line to be found in the region in which we live. All this is, so to speak, geometrical, and involves no “forces”. It is not the sun that makes the earth go round, but the nature of space-time where the earth is.
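In the simplest, special-relativistic approximation (neglecting the curvature of space-time near the sun), the comparison of the two clocks may be written:

```latex
% Proper time recorded by a clock moving with speed v
% relative to the terrestrial clocks:
\tau \;=\; \int_{t_1}^{t_2} \sqrt{1 - \frac{v^{2}}{c^{2}}}\; dt
\;\leq\; t_2 - t_1 .
```

Since the integrand never exceeds 1, any magic-carpet detour with v > 0 records less elapsed time than the terrestrial clocks: the geodesic is the path of greatest elapsed proper time.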
Even this is not quite correct. Space-time does not make the earth go round the sun; it makes us say the earth goes round the sun. That is to say, it makes this the shortest way of describing what occurs. We could describe it in other language, which would be equally correct, but less convenient.
The abolition of “force” in astronomy is perhaps connected with the fact that astronomy depends only upon the sense of sight. On the earth, we push and pull, we touch things, and we experience muscular strains. This all gives us a notion of “force”, but this notion is anthropomorphic. To imagine the laws of motion of heavenly bodies, think of the motions of objects in a mirror; they may move very fast, although in the mirror world there are no forces.
What we really have to substitute for force is laws of correlation. Events can be collected in groups by their correlations. This is all that is true in the old notion of causality. And this is not a “postulate” or “category”, but an observed fact—lucky, not necessary.
As we suggested before, it is these correlations of events that lead to the definition of permanent “things”. There is no essential difference, as regards substantiality, between an electron and a light-ray. Each is really a string of events or of sets of events. In the case of the light-ray, we have no temptation to think otherwise. But in the case of the electron, we think of it as a single persistent entity. There may be such an entity, but we can have no evidence that there is. What we can discover is (a) a group of events spreading outwards from a centre—say, for definiteness, the events constituting a wave of light—and attributed, hypothetically, to a “cause” in that centre; (b) more or less similar groups of events at other times, connected with the first group according to the laws of physics, and therefore attributed to the same hypothetical cause at other times. But all that we ought to assume is series of groups of events, connected by discoverable laws. These series we may define as “matter”. Whether there is matter in any other sense, no one can tell.
What is true in the old notion of causality is the fact that events at different times are connected by laws (differential equations). When there is a law connecting an event A with an event B, the two have a definite unambiguous time-order. But if the events are such that a ray of light starting from A would arrive at any body which was present at B after B had occurred, and vice versa, then there is no definite true order, and no possible causal law connecting A and B. A and B must then be regarded as separate facts of geography.
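The condition may be stated exactly in terms of the invariant interval between the two events, with Δt their separation in time and Δr their separation in space:

```latex
% Invariant interval between events A and B:
s^{2} \;=\; c^{2}\,(\Delta t)^{2} \;-\; (\Delta r)^{2}.
```

If s² > 0, every observer agrees on the order of A and B, and a causal law may connect them; if s² < 0, light from either event reaches the place of the other only after that other has occurred, there is no invariant time-order, and the two events must be treated as separate facts of geography.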
Perhaps the scope and purpose of this and the foregoing chapters may be made clearer by showing their bearing upon certain popular beliefs which may seem self-evident but are really in my opinion either false or likely to lead to falsehood. I shall confine myself to objections which have actually been made to me when trying to explain the philosophical outcome of modern physics.[8]
[8] These objections are quoted (with kind permission) from a letter written to me by a well-known engineer, Mr. Percy Griffith, who is also a writer on philosophical subjects.
“We cannot conceive of movement apart from some thing as moving.” This is, in a sense, a truism; but in the sense in which it is usually meant, it is a falsehood. We speak of the “movement” of a drama or piece of music, although we do not conceive either as a “thing” which exists complete at every moment of the performance. This is the sort of picture we must have in our minds when we try to conceive the physical world. We must think of a string of events, connected together by certain causal connections, and having enough unity to deserve a single name. We then begin to imagine that the single name denotes a single “thing”, and if the events concerned are not all in the same place, we say the “thing” has “moved.” But this is only a convenient shorthand. In the cinema, we seem to see a man falling off a skyscraper, catching hold of the telegraph wires, and reaching the ground none the worse. We know that, in fact, there are a number of different photographs, and the appearance of a single “thing” moving is deceptive. In this respect, the real world resembles the cinema.
In connection with motion one needs to emphasise the very difficult distinction between experience and prejudice. Experience, roughly, is what you see, and prejudice is what you only think you see. Prejudice tells you that you see the same table on two different occasions; you think that experience tells you this. If it really were experience, you could not be mistaken; yet a similar table may be substituted without altering the experience. If you look at a table on two different occasions, you have very similar sensations, and memory tells you that they are similar; but there is nothing to show that one identical entity causes the two sensations. If the table is in a cinema, you know that there is not such an entity, even though you can watch it changing with apparent continuity. The experience is just like that with a “real” table; so in the case of a “real” table also, there is nothing in the actual experience to show whether there is a persistent entity or not. I say, therefore: I do not know whether there is a persistent entity, but I do know that my experiences can be explained without assuming that there is. Therefore it can be no part of legitimate science to assert or deny the persistent entity; if it does either, it goes beyond the warrant of experience.
The following passage, quoted verbatim from the letter referred to, objects to what was said above about “force”:
“The concept of Force is not of physical but of psychological origin. Rightly or wrongly it arises in the most impersonal contemplation of the Stellar Universe, where we observe an infinite number of spherical bodies revolving on their own axes and gyrating in orbits round each other. Rightly or wrongly, we naturally conceive of these as having been so constituted and so maintained by some Force or Forces.”
We do not, in fact, “observe” what it is here said that we observe; all this is inferred. What we observe, in astronomy, is a two-dimensional pattern of points of light, with a few bright surfaces of measurable size when seen through the telescope (the planets), and of course the larger bright surfaces that we call the sun and moon. Most of this pattern (the fixed stars) rotates round the earth once in every twenty-three hours and fifty-six minutes. The sun rotates in varying periods, which average twenty-four hours and never depart very far from the average. The moon and planets have apparent motions which are more irregular. These are the observed facts. There is no logical impossibility about the old doctrine of spheres rotating round the earth, one for each planet and one for the stars. The modern doctrines are simpler, but not one whit more in accordance with observed facts; it is our passion for simple laws that has made us adopt them.
The last sentence of the above quotation raises some further points of interest. “Rightly or wrongly”, the writer says, “we naturally conceive of these as having been so constituted and so maintained by some Force or Forces.” I do not deny this. It is “natural”, and it is “right or wrong”—more specifically, it is wrong. “Force” is part of our love of explanations. Everyone knows about the Hindu who thought that the world does not fall because it is supported by an elephant, and the elephant does not fall because it is supported by a tortoise. When his European interlocutor said “But how about the tortoise?” he replied that he was tired of metaphysics and wanted to change the subject. “Force”, as an explanation, is no better than the elephant and the tortoise. It is an attempt to get at the “why” of natural processes, not only at the “how”. What we observe, to a limited extent, is what happens, and we can arrive at laws according to which observable things happen, but we cannot arrive at a reason for the laws. If we invent a reason, it needs a reason in its turn, and so on. “Force” is a rationalising of natural processes, but a fruitless one since “force” would have to be rationalised also.
When it is said, as it often is, that “force” belongs to the world of experience, we must be careful to understand what can be meant. In the first place, it may be meant that calculations which employ the notion of force work out right in practice. This, broadly speaking, is admitted: no one would suggest that the engineer should alter his methods, or should give up working out stresses and strains. But that does not prove that there are stresses and strains. A British medical man renders his accounts in guineas, although they have long since ceased to exist except as a name; he obtains a real payment, though he employs a fictitious coin. Similarly, the engineer is concerned with the question whether his bridge will stand: the fact of experience is that it stands (or does not stand), and the stresses and strains are only a way of explaining what sort of bridge will stand. They are as useful as guineas, but equally imaginary.
But when it is said that force is a fact of experience, there is something quite different that may be meant. It may be meant that we experience force when we experience such things as pressure or muscular exertion. We cannot discuss this contention adequately without going into the relation of physics to psychology, which is a topic we shall consider at length at a later stage. But we may say this much: if you press your finger-tip upon a hard object, you have an experience which you attribute to your finger-tip, but there is a long chain of intermediate causes in nerves and brain. If your finger were amputated you could still have the same experience by a suitable operation on the nerves that formerly connected the finger with the brain, so that the force between the finger-tip and the hard object, as a fact of experience, may exist when there is no finger-tip. This shows that force, in this sense, cannot be what concerns physics.
As the above example illustrates, we do not, in fact, experience many things that we think we experience. This makes it necessary to ask, without too much assurance, in what sense physics can be based upon experience, and what must be the nature of its entities and its inferences if it is to make good its claim to be empirically grounded. We shall begin this inquiry in the next chapter.
[CHAPTER XII]
PHYSICS AND PERCEPTION
It will be remembered that we regarded perception, in Chapter V, as a species of “sensitivity”. Sensitivity to a given feature of the environment we defined as consisting in some characteristic reaction which is exhibited whenever that feature is present, but not otherwise; this property is possessed more perfectly, in given directions, by scientific instruments than by living bodies, though scientific instruments are more selective as to the stimuli to which they will respond. We decided that what, from the standpoint of an external observer, distinguishes perception from other forms of sensitivity is the law of association or conditioned reflexes. But we also found that this purely external treatment of perception presupposes our knowledge of the physical world as a going concern. We have now to investigate this presupposition, and to consider how we come to know about physics, and how much we really do know.
According to the theory of [Chapter V], it is possible to perceive things that are not in spatial contact with the body. There must be a reaction to a feature of the environment, but that feature may be at a greater or less distance from the body of the percipient; we can even perceive the sun and stars, within the limits of the definition. All that is necessary is that our reaction should depend upon the spatial relation between our body and the feature of the environment. When our back is towards the sun, we do not see it; when our face is towards it, we do.
When we consider perception—visual or auditory—of an external event, there are three different matters to be examined. There is first the process in the outside world, from the event to the percipient’s body; there is next the process in his body, in so far as this can be known by an outside observer; lastly, there is the question, which must be faced sooner or later, whether the percipient can perceive something of the process in his body which no other observer could perceive. We will take these points in order.
If it is to be possible to “perceive” an event not in the percipient’s body, there must be a physical process in the outer world such that, when a certain event occurs, it produces a stimulus of a certain kind at the surface of the percipient’s body. Suppose, for example, that pictures of different animals are exhibited on a magic lantern to a class of children, and all the children are asked to say the name of each animal in turn. We may assume that the children are sufficiently familiar with animals to say “cat”, “dog”, “giraffe”, “hippopotamus”, etc., at the right moments. We must then suppose—taking the physical world for granted—that some process travels from each picture to the eyes of the various children, retaining throughout these journeys such peculiarities that, when the process reaches their eyes, it can in one case stimulate the word “cat” and in another the word “dog”. All this the physical theory of light provides for. But there is one interesting point about language that should be noticed in this connection. If the usual physical theory of light is correct, the various children will receive stimuli which differ greatly according to their distance and direction from the picture, and according to the way the light falls. There are also differences in their reactions, for, though they all utter the word “cat”, some say it loud, others soft, some in a soprano voice, some in a contralto. But the differences in their reactions are much less than the differences in the stimuli. This is still more the case if we consider various different pictures of cats, to all of which they respond with the word “cat”. Thus language is a means of producing responses which differ less than the stimuli do, in cases where the resemblances between the stimuli are more important to us than the differences. This fact makes us apt to overlook the differences between stimuli which produce nearly identical responses.
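The way language produces responses which differ far less than the stimuli do can be pictured as a many-to-one mapping. The following sketch is schematic, and its features and values are invented for illustration:

```python
# Language as a many-to-one mapping: stimuli differing widely in
# distance, lighting, and angle evoke verbal responses that differ
# far less.  The feature names and values are purely illustrative.
stimuli = [
    {"animal": "cat", "distance_m": 1.0, "brightness": 0.9},
    {"animal": "cat", "distance_m": 8.0, "brightness": 0.3},
    {"animal": "dog", "distance_m": 3.0, "brightness": 0.6},
]

def verbal_response(stimulus):
    # The response discards most features of the stimulus.
    return stimulus["animal"]

responses = [verbal_response(s) for s in stimuli]
assert responses == ["cat", "cat", "dog"]
# Three distinct stimuli collapse into two distinct responses:
assert len({str(s) for s in stimuli}) == 3 and len(set(responses)) == 2
```

It is just this collapsing of differences that makes us apt to overlook how much the stimuli themselves vary.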
As appears from the above, when a number of people simultaneously perceive a picture of a cat, there are differences between the stimuli to their various perceptions, and these differences must obviously involve differences in their reactions. The verbal responses may differ very little, but even the verbal responses could be made to differ by putting more complicated questions than merely “What animal is that?” One could ask: “Can the picture be covered by your thumb-nail held at arm’s length?” Then the answer would be different according as the percipient was near the picture or far off. But the normal percipient, if left to himself, will not notice such differences, that is to say, his verbal response will be the same in spite of the differences in the stimuli.
The fact that it is possible for a number of people to perceive the same noise or the same coloured pattern obviously depends upon the fact that a physical process can travel outward from a centre and retain certain of its characteristics unchanged, or very little changed. The most notable of such characteristics is frequency in a wave-motion. That, no doubt, affords a biological reason for the fact that our most delicate senses, sight and hearing, are sensitive to frequencies, which determine colour in what we see and pitch in what we hear. If there were not, in the physical world, processes spreading out from centres and retaining certain characters practically unchanged, it would be impossible for different percipients to perceive the same object from different points of view, and we should not have been able to discover that we all live in a common world.
We come now to the process in the percipient’s body, in so far as this can be perceived by an outside observer. This raises no new philosophical problems, because we are still concerned, as before, with the perception of events outside the observer’s body. The observer, now, is supposed to be a physiologist, observing, say, what goes on in the eye when light falls upon it. His means of knowing are, in principle, exactly the same as in the observation of dead matter. An event in an eye upon which light is falling causes light-waves to travel in a certain manner until they reach the eye of the physiologist. They there cause a process in the physiologist’s eye and optic nerve and brain, which ends in what he calls “seeing what happens in the eye he is observing”. But this event, which happens in the physiologist, is not what happened in the eye he was observing; it is only connected with this by a complicated causal chain. Thus our knowledge of physiology is no more direct or intimate than our knowledge of processes in dead matter; we do not know any more about our eyes than about the trees and fields and clouds that we see by means of them. The event which happens when a physiologist observes an eye is an event in him, not in the eye that he is observing.
We come now at last to the question of self-observation, which we have hitherto avoided. I say “self-observation” rather than “introspection”, because the latter word has controversial associations that I wish to avoid. I mean by “self-observation” anything that a man can perceive about himself but that others, however situated, cannot perceive about him. What follows is only preliminary, since the subject will be discussed at length in [Chapter XVI].
No one can deny that we know things about ourselves which others cannot know unless we tell them. We know when we have toothache, when we feel thirsty, what we were dreaming when we woke up, and so on. Dr. Watson might say that the dentist can know we have toothache by observing a cavity in a tooth. I will not reply that the dentist is often mistaken; this may be merely because the art of dentistry has not been sufficiently perfected. I will concede as possible, in the future, a state of odontology in which the dentist could always know whether I am feeling toothache. But even then his knowledge has a different character from mine. His knowledge is an inference, based upon the inductive law that people with such-and-such cavities suffer pain of a certain kind. But this law cannot be established by observation of cavities alone; it requires that, where these are observed, the people who have them should tell us that they feel toothache. And, more than that, they must be speaking the truth. Purely external observation can discover that people with cavities say they have toothache, but not that they have it. Saying one has toothache is a different thing from having it; if not, we could cure toothache by not talking about it, and so save our dentists’ bills. I am sure the expert opinion of dentists will agree with me that this is impossible.
To this argument, however, it might be replied that having toothache is a state of the body, and that knowing I have toothache is a response to this bodily stimulus. It will be said that, theoretically, the state of my body when I have toothache can be observed by an outsider, who can then also know that I have toothache. This answer, however, does not really meet the point. When the outside observer knows that I have toothache, not only is his knowledge based upon an inductive inference, as we have already seen, but his knowledge of the inferred term, “toothache”, must be based upon personal experience. No knowledge of dentistry could enable a man to know what toothache is if he had never felt it. If, then, toothache is really a state of the body—which, at the moment, I neither affirm nor deny—it is a state of the body which only the man himself can perceive. In a word, whoever has experienced toothache and can remember it has knowledge that cannot be possessed by a man who has never experienced toothache.
Take next our knowledge of our own dreams. Dr. Watson has not, so far as I know, ever discussed dreams, but I imagine he would say something like this: In dreams, there are probably small laryngeal movements such as, if they were greater, would lead to speech; indeed, people do sometimes cry out in dreams. There may also be stimulations of the sense-organs, which produce unusual reactions owing to the peculiar physiological condition of the brain during sleep: but all these reactions must consist of small movements, which could theoretically be seen from outside, say by some elaboration of X-ray apparatus. This is all very well, but meantime it is hypothetical, and the dreamer himself knows his dreams without all this elaborate inference. Can we say that he really knows these hypothetical small bodily movements, although he thinks he knows something else? That would presumably be Dr. Watson’s position, and it must be admitted that, with a definition of “knowledge” such as we considered in [Chapter VIII], such a view is not to be dismissed offhand as obviously impossible. Moreover, if we are to say that perception gives knowledge of the physical world, we shall have to admit that what we are perceiving may be quite different from what it seems to be. A table does not look like a vast number of electrons and protons, nor like trains of waves meeting and clashing. Yet this is the sort of thing a table is said to be by modern physicists. If, then, what seems to us to be just a table such as may be seen any day is really this odd sort of thing, it is possible that what seems to us to be a dream is really a number of movements in the brain.
This again is all very well, but there is one point which it fails to explain, namely, what is meant by “seeming”. If a dream or a table “seems” to be one sort of thing while it is “really” another, we shall have to admit that it really seems, and that what it seems to be has a reality of its own. Nay more, we only arrive at what it “really” is by an inference, valid or invalid, from what it seems to be. If we are wrong about the seeming, we must be doubly wrong about the reality, since the sole ground for asserting the table composed of electrons and protons is the table that we see, i.e. the “seeming” table. We must therefore treat “seeming” with respect.
Let us consider Dr. Watson watching a rat in a maze. He means to be quite objective, and report only what really goes on. Can he succeed? In one sense he can. He can use words about what he sees which are the same as any other scientifically trained observer will use if he watches the same rat at the same time. But Dr. Watson’s objectivity emphatically does not consist in using the same words as other people use; his vocabulary is very different from that of most psychologists. He cannot take as the sole test of truth the consensus of mankind. “Securus judicat orbis terrarum” is another example of a Latin tag which is false, and which certainly Dr. Watson would not consider true. It has happened again and again in human history that a man who said something that had never been said before turned out to be right, while the people who repeated the wise saws of their forefathers were talking nonsense. Therefore, when Dr. Watson endeavours to eliminate subjectivity in observing rats, he does not mean that he says what everybody else says. He means that he refrains from inferring anything about the rat beyond its bodily movements. This is all to the good, but I think he fails to realise that almost as long and difficult an inference is required to give us knowledge of the rat’s bodily movements as to give us knowledge of its “mind”. And what is more, the data from which we must start in order to get to know the rat’s bodily movements are data of just the sort that Dr. Watson wishes to avoid, namely private data patent to self-observation but not patent to anyone except the observer. This is the point at which, in my opinion, behaviourism as a final philosophy breaks down.
When several people simultaneously watch a rat in a maze, or any other example of what we should naturally regard as matter in motion, there is by no means complete identity between the physical events which happen at the surface of their eyes and constitute the stimuli to their perceptions. There are differences of perspective, of light and shade, of apparent size, and so on, all of which will be reproduced in photographs taken from the places where the eyes of the several observers are. These differences produce differences in the reactions of the observers—differences which a quite unthinking person may overlook, but which are familiar to every artist. Now it is contrary to all scientific canons to suppose that the object perceived, in addition to affecting us in the way of stimulus and reaction, also affects us directly by some mystical epiphany; certainly it is not what any behaviourist would care to assert. Our knowledge of the physical world, therefore, must be contained in our reaction to the stimulus which reaches us across the intervening medium; and it seems hardly possible that our reaction should have a more intimate relation to the object than the stimulus has. Since the stimulus differs for different observers, the reaction also differs; consequently, in all our perceptions of physical processes there is an element of subjectivity. If, therefore, physics is true in its broad outlines (as the above argument supposes), what we call “perceiving” a physical process is something private and subjective, at least in part, and is yet the only possible starting-point for our knowledge of the physical world.
There is an objection to the above argument which might naturally be made, but it would be in fact invalid. It may be said that we do not in fact proceed to infer the physical world from our perceptions, but that we begin at once with a rough-and-ready knowledge of the physical world, and only at a late stage of sophistication compel ourselves to regard our knowledge of the physical world as an inference. What is valid in this statement is the fact that our knowledge of the physical world is not at first inferential, but that is only because we take our percepts to be the physical world. Sophistication and philosophy come in at the stage at which we realise that the physical world cannot be identified with our percepts. When my boy was three years old, I showed him Jupiter, and told him that Jupiter was larger than the earth. He insisted that I must be speaking of some other Jupiter, because, as he patiently explained, the one he was seeing was obviously quite small. After some efforts, I had to give it up and leave him unconvinced. In the case of the heavenly bodies, adults have got used to the idea that what is really there can only be inferred from what they see; but where rats in mazes are concerned, they still tend to think that they are seeing what is happening in the physical world. The difference, however, is only one of degree, and naive realism is as untenable in the one case as in the other. There are differences in the perceptions of two persons observing the same process; there are sometimes no discoverable differences between two perceptions of the same person observing different processes, e.g. pure water and water full of bacilli. The subjectivity of our perceptions is thus of practical as well as theoretical importance.
I am not maintaining that what we primarily know is our own perceptions. This is largely a verbal question; but with the definition of knowledge given in [Chapter VIII], it will be correct to say that from the first we know external objects; the question is not as to what the objects are that we know, but rather as to how accurately we know them. Our non-inferential knowledge of an object cannot be more accurate than our reaction to it, since it is part of that reaction. And our reaction cannot be more accurate than the stimulus. “But what on earth can you mean by the ‘accuracy’ of a stimulus?” I may be asked. I mean just the same as by the accuracy of a map or a set of statistics. I mean a certain kind of correspondence. One pattern is an accurate representation of another if every element of the one can be taken as the representative of just one element of the other, and the relations that make the one set into a pattern correspond with relations making the other set into a pattern. In this sense, writing can represent speech with a certain degree of accuracy: to every spoken word a written word corresponds, and to the time-order of the spoken words the space-order of the written words corresponds. But there are inflexions and tones of voice that cannot be represented in writing, except, to some extent, by musical notation. A gramophone record is a much more accurate representation of vocal sounds than any writing can be; but even the best gramophone record fails to be completely accurate. The impression made upon an observer is very analogous to a gramophone record or a photograph, but usually less accurate owing to the influence of the law of association, and the lack of delicacy in our senses. And whatever limitations there are to the accuracy of our impressions are limitations to the accuracy of our non-inferential knowledge of the external world.
Another point: if we accept the definition of knowledge given in [Chapter VIII], which was framed so as to be as favourable as possible to behaviourism, a given reaction may be regarded as knowledge of various different occurrences. When we see Jupiter, we have, according to the definition, knowledge of Jupiter, but we also have knowledge of the stimulus at the surface of the eye, and even of the process in the optic nerve. For it is arbitrary at what point we start in the process leading to a certain event in the brain: this event, and the consequent bodily action, may be regarded as a reaction to a process starting at any earlier point. And the nearer our starting-point is to the brain, the more accurate becomes the knowledge displayed in our reaction. A lamp at the top of a tall building might produce the same visual stimulus as Jupiter, or at any rate one practically indistinguishable from that produced by Jupiter. A blow on the nose might make us “see stars”. Theoretically, it should be possible to apply a stimulus direct to the optic nerve, which should give us a visual sensation. Thus when we think we see Jupiter, we may be mistaken. We are less likely to be mistaken if we say that the surface of the eye is being stimulated in a certain way, and still less likely to be mistaken if we say that the optic nerve is being stimulated in a certain way. We do not eliminate the risk of error completely unless we confine ourselves to saying that an event of a certain sort is happening in the brain; this statement may still be true if we see Jupiter in a dream.
But, I shall be asked, what do you know about what is happening in the brain? Surely nothing. Not so, I reply. I know about what is happening in the brain exactly what naive realism thinks it knows about what is happening in the outside world. But this needs explaining, and there are other matters that must be explained first.
When the light from a fixed star reaches me, I see the star if it is night and I am looking in the right direction. The light started years ago, probably many years ago, but my reaction is primarily to something that is happening now. When my eyes are open, I see the star; when they are shut, I do not. Children discover at a fairly early age that they see nothing when their eyes are shut. They are aware of the difference between seeing and not seeing, and also of the difference between eyes open and eyes shut; gradually they discover that these two differences are correlated—I mean that they have expectations of which this is the intellectualist transcription. Again, children learn to name the colours, and to state correctly whether a thing is blue or red or yellow or what-not. They ought not to be sure that light of the appropriate wave-length started from the object. The sun looks red in a London fog, grass looks blue through blue spectacles, everything looks yellow to a person suffering from jaundice. But suppose you ask: What colour are you seeing? The person who answers, in these cases, red for the sun, blue for the grass, and yellow for the sick-room of the jaundiced patient, is answering quite truly. And in each of these cases he is stating something that he knows. What he knows in such cases is what I call a “percept”. I shall contend later that, from the standpoint of physics, a percept is in the brain; for the present, I am only concerned to say that a percept is what is most indubitable in our knowledge of the world.
To behaviourism as a metaphysic one may put the following dilemma. Either physics is valid in its main lines, or it is not. If it is not, we know nothing about the movements of matter; for physics is the result of the most serious and careful study of which the human intelligence has hitherto been capable. If, on the other hand, physics is valid in its main lines, any physical process starting either inside or outside the body will, if it reaches the brain, be different if the intervening medium is different; moreover, two processes, initially very different, may become indistinguishable as they spread and grow fainter. On both grounds, what happens in the brain is not connected quite accurately with what happens elsewhere, and our perceptions are therefore infected with subjectivity on purely physical grounds. Even, therefore, when we assume the truth of physics, what we know most indubitably through perception is not the movements of matter, but certain events in ourselves which are connected, in a manner not quite invariable, with the movements of matter. To be specific, when Dr. Watson watches rats in mazes, what he knows, apart from difficult inferences, are certain events in himself. The behaviour of the rats can only be inferred by the help of physics, and is by no means to be accepted as something accurately knowable by direct observation.
I do not in fact entertain any doubts that physics is true in its main lines. The interpretation of physical formulæ is a matter as to which a considerable degree of uncertainty is possible; but we cannot well doubt that there is an interpretation which is true roughly and in the main. I shall come to the question of interpretation later; for the present, I shall assume that we may accept physics in its broad outlines, without troubling to consider how it is to be interpreted. On this basis, the above remarks on perception seem undeniable. We are often misled as to what is happening, either by peculiarities of the medium between the object and our bodies, or by unusual states of our bodies, or by a temporary or permanent abnormality in the brain. But in all these cases something is really happening, as to which, if we turn our attention to it, we can obtain knowledge that is not misleading. At one time when, owing to illness, I had been taking a great deal of quinine, I became hypersensitive to noise, so that when the nurse rustled the newspaper I thought she was spilling a scuttle of coals on the floor. The interpretation was mistaken, but it was quite true that I heard a loud noise. It is a commonplace that a man whose leg has been amputated can still feel pains in it; here again, he does really feel the pains, and is only mistaken in his belief that they come from his leg. A percept is an observable event, but its interpretation as knowledge of this or that event in the physical world is liable to be mistaken, for reasons which physics and physiology can make fairly clear.
The subjectivity of percepts is a matter of degree. They are more subjective when people are drunk or asleep than when they are sober and awake. They are more subjective in regard to distant objects than in regard to such as are near. They may acquire various peculiar kinds of subjectivity through injuries to the brain or to the nerves. When I speak of a percept as “subjective” I mean that the physiological inferences to which it gives rise are mistaken or vague. This is always the case to some extent, but much more so in some circumstances than in others. And the sort of defect that leads to mistakes must be distinguished from the sort that leads to vagueness. If you see a man a quarter of a mile away, you can see that it is a man if you have normal eyesight, but you probably cannot tell who it is, even if in fact it is some one you know well. This is vagueness in the percept: the inferences you draw are correct so far as they go, but they do not go very far. On the other hand, if you are seeing double and think there are two men, you have a case of mistake. Vagueness, to a greater or less extent, is universal and inevitable; mistakes, on the other hand, can usually be avoided by taking trouble and by not always trusting to physiological inference. Anybody can see double on purpose, by focussing on a distant object and noticing a near one; but this will not cause mistakes, since the man is aware of the subjective element in his double vision. Similarly we are not deceived by after-images, and only dogs are deceived by gramophones.
From what has been said in this chapter, it is clear that our knowledge of the physical world, if it is to be made as reliable as possible, must start from percepts, and must scrutinise the physiological inferences by which percepts are accompanied. Physiological inference is inference in the sense that it sometimes leads to error; and physics gives reason to expect that percepts will, in certain circumstances, be more or less deceptive if taken as signs of something outside the brain. It is these facts that give a subjective cast to the philosophy of physics, at any rate in its beginnings. We cannot start cheerfully with a world of matter in motion, as to which any two sane and sober observers must agree. To some extent, each man dreams his own dream, and the disentangling of the dream element in our percepts is no easy matter. This is, indeed, the work that scientific physics undertakes to do.
[CHAPTER XIII]
PHYSICAL AND PERCEPTUAL SPACE
Perhaps there is nothing so difficult for the imagination as to teach it to feel about space as modern science compels us to think. This is the task which must be attempted in the present chapter.
We said in [Chapter XII] that we know about what is happening in the brain exactly what naive realism thinks it knows about what is happening in the world. This remark may have seemed cryptic; it must now be expanded and expounded.
The gist of the matter is that percepts, which we spoke about at the end of the last chapter, are in our heads; that percepts are what we can know with most certainty; and that percepts contain what naive realism thinks it knows about the world.
But when I say that my percepts are in my head, I am saying something which is ambiguous until the different kinds of space have been explained, for the statement is only true in connection with physical space. There is also a space in our percepts, and of this space the statement would not be true. When I say that there is space in our percepts, I mean nothing at all difficult to understand. I mean—to take the sense of sight, which is the most important in this connection—that in what we see at one time there is up and down, right and left, inside and outside. If we see, say, a circle on a blackboard, all these relations exist within what we see. The circle has a top half and a bottom half, a right-hand half and a left-hand half, an inside and an outside. Those relations alone are enough to make up a space of sorts. But the space of everyday life is filled out with what we derive from touch and movement—how a thing feels when we touch it, and what movements are necessary in order to grasp it. Other elements also come into the genesis of the space in which everybody believes who has not been troubled by philosophy; but it is unnecessary for our purposes to go into this question any more deeply. The point that concerns us is that a man’s percepts are private to himself: what I see, no one else sees; what I hear, no one else hears; what I touch, no one else touches; and so on. True, others hear and see something very like what I hear and see, if they are suitably placed; but there are always differences. Sounds are less loud at a distance; objects change their visual appearance according to the laws of perspective. Therefore it is impossible for two persons at the same time to have exactly identical percepts. It follows that the space of percepts, like the percepts, must be private; there are as many perceptual spaces as there are percipients. 
My percept of a table is outside my percept of my head, in my perceptual space; but it does not follow that it is outside my head as a physical object in physical space. Physical space is neutral and public: in this space, all my percepts are in my head, even the most distant star as I see it. Physical and perceptual space have relations, but they are not identical, and failure to grasp the difference between them is a potent source of confusion.
To say that you see a star when you see the light that has come from it is no more correct than to say that you see New Zealand when you see a New Zealander in London. Your perception when (as we say) you see a star is causally connected, in the first instance, with what happens in the brain, the optic nerve, and the eye, then with a light-wave which, according to physics, can be traced back to the star as its source. Your sensations will be closely similar if the light comes from a lamp at the top of a mast. The physical space in which you believe the “real” star to be is an elaborate inference; what is given is the private space in which the speck of light you see is situated. It is still an open question whether the space of sight has depth, or is merely a surface, as Berkeley contended. This does not matter for our purposes. Even if we admit that sight alone shows a difference between an object a few inches from the eyes and an object several feet distant, yet you certainly cannot, by sight alone, see that a cloud is less distant than a fixed star, though you may infer that it is, because it can hide the star. The world of astronomy, from the point of view of sight, is a surface. If you were put in a dark room with little holes cut in the ceiling in the pattern of the stars letting light come through, there would be nothing in your immediate visual data to show that you were not “seeing the stars”. This illustrates what I mean by saying that what you see is not “out there” in the sense of physics.
We learn in infancy that we can sometimes touch objects we see, and sometimes not. When we cannot touch them at once, we can sometimes do so by walking to them. That is to say, we learn to correlate sensations of sight with sensations of touch, and sometimes with sensations of movement followed by sensations of touch. In this way we locate our sensations in a three-dimensional world. Those which involve sight alone we think of as “external”, but there is no justification for this view. What you see when you see a star is just as internal as what you feel when you feel a headache. That is to say, it is internal from the standpoint of physical space. It is distant in your private space, because it is not associated with sensations of touch, and cannot be associated with them by means of any journey you can perform.
Your own body, as known to you through direct experience, is quite different from your own body as considered in physics. You know more about your own body than about any other through direct experience, because your own body can give you a number of sensations that no other body can, for instance all kinds of bodily pains. But you still know it only through sensations; apart from inference, it is a bundle of sensations, and therefore quite different, prima facie, from what physics calls a body.
Most of the things you see are outside what you see when (as one says) you see your own body. That is to say: you see certain other patches of colour, differently situated in visual space, and say you are seeing things outside your body. But from the point of view of physics, all that you see must count as inside your body; what goes on elsewhere can only be inferred. Thus the whole space of your sensible world with all its percepts counts as one tiny region from the point of view of physics.
There is no direct spatial relation between what one person sees and what another sees, because no two ever see exactly the same object. Each person carries about a private space of his own, which can be located in physical space by indirect methods, but which contains no place in common with another person’s private space. This shows how entirely physical space is a matter of inference and construction.
To make the matter definite, let us suppose that a physiologist is observing a living brain—no longer an impossible supposition, as it would have been formerly. It is natural to suppose that what the physiologist sees is in the brain he is observing. But if we are speaking of physical space, what the physiologist sees is in his own brain. It is in no sense in the brain that he is observing, though it is in the percept of that brain, which occupies part of the physiologist’s perceptual space. Causal continuity makes the matter perfectly evident: light-waves travel from the brain that is being observed to the eye of the physiologist, at which they only arrive after an interval of time, which is finite though short. The physiologist sees what he is observing only after the light-waves have reached his eye; therefore the event which constitutes his seeing comes at the end of a series of events which travel from the observed brain into the brain of the physiologist. We cannot, without a preposterous kind of discontinuity, suppose that the physiologist’s percept, which comes at the end of this series, is anywhere else but in the physiologist’s head.
This question is very important, and must be understood if metaphysics is ever to be got straight. The traditional dualism of mind and matter, which I regard as mistaken, is intimately connected with confusions on this point. So long as we adhere to the conventional notions of mind and matter, we are condemned to a view of perception which is miraculous. We suppose that a physical process starts from a visible object, travels to the eye, there changes into another physical process, causes yet another physical process in the optic nerve, and finally produces some effect in the brain, simultaneously with which we see the object from which the process started, the seeing being something “mental”, totally different in character from the physical processes which precede and accompany it. This view is so queer that metaphysicians have invented all sorts of theories designed to substitute something less incredible. But the whole difficulty rests upon an elementary confusion which has passed unnoticed.
To return to the physiologist observing another man’s brain: what the physiologist sees is by no means identical with what happens in the brain he is observing, but is a somewhat remote effect. From what he sees, therefore, he cannot judge whether what is happening in the brain he is observing is, or is not, the sort of event that he would call “mental”. When he says that certain physical events in the brain are accompanied by mental events, he is thinking of physical events as if they were what he sees. He does not see a mental event in the brain he is observing, and therefore supposes there is in that brain a physical process which he can observe and a mental process which he cannot. This is a complete mistake. In the strict sense, he cannot observe anything in the other brain, but only the percepts which he himself has when he is suitably related to that brain (eye to microscope, etc.). We first identify physical processes with our percepts, and then, since our percepts are not other people’s thoughts, we argue that the physical processes in their brains are something quite different from their thoughts. In fact, everything that we can directly observe of the physical world happens inside our heads, and consists of “mental” events in at least one sense of the word “mental”. It also consists of events which form part of the physical world. The development of this point of view will lead us to the conclusion that the distinction between mind and matter is illusory. The stuff of the world may be called physical or mental or both or neither, as we please; in fact, the words serve no purpose. There is only one definition of the words that is unobjectionable: “physical” is what is dealt with by physics, and “mental” is what is dealt with by psychology. When, accordingly, I speak of “physical” space, I mean the space that occurs in physics.
It is extraordinarily difficult to divest ourselves of the belief that the physical world is the world we perceive by sight and touch; even if, in our philosophic moments, we are aware that this is an error, we nevertheless fall into it again as soon as we are off our guard. The notion that what we see is “out there” in physical space is one which cannot survive while we are grasping the difference between what physics supposes to be really happening, and what our senses show us as happening; but it is sure to return and plague us when we begin to forget the argument. Only long reflection can make a radically new point of view familiar and easy.
Our illustrations hitherto have been taken from the sense of sight; let us now take one from the sense of touch. Suppose that, with your eyes shut, you let your finger-tip press against a hard table. What is really happening? The physicist says that your finger-tip and the table consist, roughly speaking, of vast numbers of electrons and protons; more correctly, each electron and proton is to be thought of as a collection of processes of radiation, but we can ignore this for our present purposes. Although you think you are touching the table, no electron or proton in your finger ever really touches an electron or proton in the table, because this would develop an infinite force. When you press, repulsions are set up between parts of your finger and parts of the table. If you try to press upon a liquid or a gas, there is room in it for the parts that are repelled to get away. But if you press a hard solid, the electrons and protons that try to get away, because electrical forces from your finger repel them, are unable to do so, because they are crowded close to others which elbow them back to more or less their original position, like people in a dense crowd. Therefore the more you press the more they repel your finger. The repulsion consists of electrical forces, which set up in the nerves a current whose nature is not very definitely known. This current runs into the brain, and there has effects which, so far as the physiologist is concerned, are almost wholly conjectural. But there is one effect which is not conjectural, and that is the sensation of touch. This effect, owing to physiological inference or perhaps to a reflex, is associated by us with the finger-tip. But the sensation is the same if, by artificial means, the parts of the nerve nearer the brain are suitably stimulated—e.g. if your hand has been amputated and the right nerves are skilfully manipulated. 
Thus our confidence that touch affords evidence of the existence of bodies at the place which we think is being touched is quite misplaced. As a rule we are right, but we can be wrong; there is nothing of the nature of an infallible revelation about the matter. And even in the most favourable case, the perception of touch is something very different from the mad dance of electrons and protons trying to jazz out of each other’s way, which is what physics maintains is really taking place at your finger-tip. Or, at least, it seems very different. But as we shall see, the knowledge we derive from physics is so abstract that we are not warranted in saying that what goes on in the physical world is, or is not, intrinsically very different from the events that we know through our own experiences.