Dr Julian Davies (University of Edinburgh). In last year's lectures, you argued that our algorithmic mechanism cannot be completely rational. Do you think (i) that human beings are not ultimately algorithmic, or (ii) that they are not completely rational? (iii) Can the human brain as a neurological mechanism fail to be algorithmic?
LUCAS. The short answer is ‘Yes, Yes, Yes’. Take the second question first—‘Do you think that human beings are not completely rational?’ It is really an argument from experience. I believe there was a meeting of the Senate here yesterday; if so, the Professors here will vouch for it (if Senates in Edinburgh are anything like University meetings in Oxford) that not all human beings are completely rational. If I had to put it in the form of a formal proof, I would say that there are occasions when everybody else present disagrees with me; therefore either I am wrong—which is absurd—or everybody else is; in which case, by modus tollendo ponens, it must be that everybody else is not completely rational; from which it follows, either way, that not all people are completely rational.
I don't think this is actually the sense in which Dr Davies was asking the question; he is putting this as an alternative to the main point of his question: ‘Do you think human beings are not ultimately algorithmic?’, and again the answer is ‘Yes’, for the reasons I gave last year: there are occasions when we can, as it were, out-do any algorithm; we can find problems which are not solvable algorithmically. Gödel's theorem shows that we could never have an algorithm which would generate just those things which are true, all of them and only those; and yet we find ourselves impelled to love the truest when we see it. This is why it seemed to me, and I still maintain, that it is characteristic of the mind that it is in some sense autonomous and not completely algorithmic. And that raises Dr Davies' third question, which is the main point of what he wants to see discussed: ‘Can the human brain as a neurological mechanism fail to be algorithmic?’ Perhaps we should ask: ‘How can it?’
This question becomes more of a problem in view of what we were hearing yesterday and earlier this term, when we were being invited to see the development of the human body, including the human brain, as governed by a program; and from this it seems to Dr Davies that it must therefore be a mechanism. Now I want to concede that, subject to the qualifications which Waddington made yesterday (the many disturbances due to the environment, together with the possibility of a certain amount of internal noise), the body does develop programmatically—this seems to be the best biological theory that we have—and that therefore the brain does too. But to borrow a distinction which Longuet-Higgins and Kenny are very fond of using to belabour each other, the distinction between hardware and software, it seems to me that all this shows is that the hardware, i.e. the brain, may be of a certain definite type; it doesn't follow that the software has developed as a result of a program. In some cases it may, but there are reasons why it is likely that it won't be entirely programmatic. It seems, for instance, that the software depends very largely on there being loops of neurons, with impulses going round and round and round, and a great deal of feed-back. Not nice, that is to say negative, feed-back of the sort that cyberneticians are happy to deal with, but in some cases quite possibly positive feed-back (for example, it seems a very general feature of people that one of their chief reasons for wanting to do something is that they have already thought of doing it; that is, people find that the mere fact of their having decided to do something is itself a very powerful reinforcement for their persisting in doing it; in Greek Plato calls this characteristic
At this stage, I shall pick up the point I couldn't consider yesterday, which is highly relevant to the algorithms: the St Jerome versus St Augustine dispute. How far do we inherit our mental qualities through the program in the DNA from our parents? And how far do I, like St Jerome, think that each person is completely different? As far as inheritance is concerned, of course I did not want to say, as Kenny was trying to make out, that we don't inherit mental qualities at all. It is quite clear that father and son, mother and daughter, brother and sister, very often have not only a physical similarity, but also an intellectual similarity. Sometimes you can even detect in a man's style the style of his grandfather. There is no doubt that in that sense we should be good Augustinians. But when we are talking about the mind, we are not talking only about mental qualities. The most revealing phrase is: ‘to have a mind’, or ‘to have a mind of one's own’, or ‘to make up one's mind’, and this is something which we don't inherit. The power of making my own decisions is not, as it were, something just to be programmed into me. To put it another way, if we ask: ‘Do I inherit the power of making my own decisions?’, the answer in one sense is: ‘Yes, of course, I inherit from my father and my mother the fact that I am a person, and in that sense again St Augustine is right; I have a mind of my own because I have a body of my own. Yes, this is true.’ But if we ask the question: ‘Do I inherit being myself?’, in the sense of ‘Do I inherit being me?’, then the answer is ‘No’. I just am me, and this is not something which I could conceivably inherit; and there I am, as Kenny wanted me to be, on the side of St Jerome.
CHAIRMAN. Dr Aaron Sloman (on leave from the University of Sussex) has some questions for Professor Longuet-Higgins, and I propose to read now the letter in which he set forth the first of these questions:
Here is my question, arising out of your apparent change of mind on the issue of reductionism. Did you mean anything more than that the belief that the human mind is somehow embodied in the human brain justifies the attempt to use studies of animal behaviour to shed light on human psychology, since animal and human brains are products of the same evolutionary processes? In particular, did you also mean to imply that neurophysiological studies can be used to shed light on psychological problems?
LONGUET-HIGGINS. Well, first about my ‘apparent change of mind on the issue of reductionism’. If I may say so, there is no change of mind here at all; I simply wanted to avoid being associated with the view that there's very little connection between what happens in the mind and what happens in the brain, as Dr Sloman's first question might be taken to imply. I believe that what happens in our minds is profoundly dependent on, and mediated by, what happens in our brains, and that if that were not so, there would be very little point in having brains at all. So I certainly mean—among other things—that we are well justified in attempting to use studies of animal behaviour to shed light on human psychology, and I regard the common evolutionary origin of human and animal brains as a good reason for conducting such comparative studies of animal and human behaviour.
Now for the second question: did I mean to imply also that neurophysiological studies can be used to shed light on psychological problems? My answer is, very definitely, yes. There are many psychological phenomena, especially perceptual phenomena, which can only be understood in relation to the physiological mechanisms of our senses. An example is the appearance of ‘Mach bands’. In a dim light, if you look at a black and a white area separated by a sharp boundary, you see in the white area, just near the edge of the black, a region which looks whiter than the rest of the white area; and this is very directly understandable in terms of the way the retina is wired up to the optic nerve. Again, colour-blindness, which is a psychological fact (I know, because I am red-green colour-blind), can be very closely correlated with the absence of certain visual pigments or their presence in only very small amounts. Or take the musical faculty of perfect pitch. This tends to go off beam when one gets into middle age, but more seriously at one end of the scale than the other; and this has a perfectly simple explanation in terms of the hardening of the cochlea. Cells which previously responded to a particular frequency now respond to a different frequency, and so middle C sounds lower than it used to. So in certain matters at least, physiological considerations can shed a great deal of light on psychological facts. But, of course, the neurophysiologist cannot hope to understand how the brain works until he has equipped himself with a non-physiological account of the tasks which the brain and its various organs are able to perform. Only then can he form mature hypotheses as to how these tasks are carried out by the available hardware.
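By way of illustration, here is a minimal sketch of the retinal explanation, assuming an idealised centre-surround receptive field (a crude difference-of-boxes stand-in for the actual wiring; the kernel sizes and weights are purely illustrative):

```python
import numpy as np

# A step edge: dark (0.2) on the left, light (0.8) on the right.
signal = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])

# Centre-surround receptive field: a narrow excitatory centre minus a
# broad inhibitory surround, standing in for retinal lateral inhibition.
centre = np.ones(3) / 3
surround = np.ones(11) / 11
response = (np.convolve(signal, centre, mode="same")
            - 0.5 * np.convolve(signal, surround, mode="same"))

# Near the boundary the response overshoots on the light side (the band
# that 'looks whiter than the rest of the white area') and undershoots
# on the dark side -- the classic Mach-band pattern.
edge = len(signal) // 2
print("far from the edge (light side):", round(response[edge + 20], 3))  # 0.4
print("near the edge (light side):   ", round(response[edge + 3], 3))   # ~0.455
```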
Aaron Sloman in his letter goes on to say something rather harsh about neurophysiology—he says that until much more elaborate theories about the nature of psychological processes are available, neurophysiological research is bound to be irrelevant to psychology. Probing a computer with screwdrivers would be no way to understand how a computer program managed to talk or understand English, if that is what it did. We certainly aren't going to discover how programs work by poking around with electrodes, or by detaching large chunks of computer. At the end of his letter, he says: ‘I'm so far very disappointed that none of the other speakers has shown any interest in the implications of the thesis that the mind consists essentially of programs. Perhaps they don't understand it. What have biologists to say about the evolution of programs? Do they understand enough about programming and computation to be able to think about the problems?’ Perhaps Wad had better field this one!
WADDINGTON. Of course I am not an expert on computers, and I am not sure that I know much about programs now that they have been turned into a professional expertise; but I have been discussing biology in terms of concepts which I think it is fair to call programs since long before computers emerged from the backrooms of laboratories. As a developmental biologist, one is concerned with controlled sequences of processes, and such questions as whether one step is controlled by the immediately preceding steps or by something else, and whether there are sub-routines which can be switched on in such a way that you can cause a cell to develop into some adult type which it would not have done if left alone. All these questions could very well be phrased in the language of programming, and I would be quite willing to use the programming language as soon as I become convinced that this adds anything to what biologists have been saying already. But I think that having new ideas, such as that of a chreod, which we could call a self-correcting set of programs, is more important than the actual language one uses to express them.
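To put some flesh on ‘a self-correcting set of programs’, here is a minimal sketch, assuming a toy model in which development follows a canalised trajectory and damps out disturbances (the trajectory, correction factor and shock are all invented for the illustration):

```python
def develop(steps=10, correction=0.5, disturbance=None):
    """Toy chreod: after each step the state is pulled back toward a
    target trajectory, so that environmental shocks are damped out."""
    state = 0.0
    for t in range(steps):
        if disturbance and t == disturbance[0]:
            state += disturbance[1]               # an environmental shock
        state += 1.0                              # the programmed step
        state -= correction * (state - (t + 1))  # self-correction (canalisation)
    return state

print(develop())                      # undisturbed: ends exactly on trajectory (10.0)
print(develop(disturbance=(4, 3.0)))  # shocked at step 4, but steered back (~10.05)
```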
As a biologist, one is interested mainly to discover the actual programs carried out by, say, a newt's egg. One wants to find out about them both in terms of the hardware, that is, what materials are being used, and also in terms of the software, by which I mean such questions as whether the subroutines can be brought in in random order, or whether they must come in a definite sequence, or whether you can call them up at particular stages of development. In my experience, the general theory of computation hasn't added much to our understanding of the things a biologist deals with, even if you regard those things as operating like computers. However, I may not know enough about programming and computation, and perhaps it really does have something to tell us; but if so, I wish somebody would tell us what it is.
CHAIRMAN. Now, Dr Sloman has what is not so much a question as a critical comment, again to Professor Longuet-Higgins:
Christopher Longuet-Higgins seems to want to compare the mind with an interpreter or compiler. This seems an extremely restrictive analogy; the mind must be a massive multi-purpose program including problem-solving subprograms and large stores of modifiable data, some of it procedural.
LONGUET-HIGGINS. Well, perhaps I should just explain what an interpreter or a compiler is. An interpreter is a system which will accept symbolic instructions in programmatic form, and actually get them carried out. When you use a computing system, in one sense the whole thing is an interpreter, but it is important that the interpreter should be able to understand the language in which you address it. For this purpose a particularly clever bit of software called a compiler is specially written, and this enables the computing system to make sense, in terms of its own machine code, of the instructions as they arrive in the programming language. A compiler essentially converts what you type in the high-level programming language, such as Fortran or POP-2, into machine instructions which are then implemented. I was certainly not identifying the mind with a compiler in that sense; as I said last year on p. 25 of The Nature of Mind: ‘I want to suggest that the problem of describing the mind becomes very much clearer if we recognise that in speaking of “the mind” we are not speaking of a static or passive entity but of an enormously complex pattern of processes’.
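For readers unfamiliar with the machinery, a minimal sketch in Python (rather than Fortran or POP-2) of the two notions; the toy instruction set is invented for the illustration:

```python
def interpret(program, env):
    """An interpreter: accept symbolic instructions and carry them out
    directly, one at a time."""
    for op, *args in program:
        if op == "set":          # ("set", name, value)
            env[args[0]] = args[1]
        elif op == "add":        # ("add", target, a, b)
            env[args[0]] = env[args[1]] + env[args[2]]
        elif op == "print":      # ("print", name)
            print(env[args[0]])

def compile_program(program):
    """A compiler: translate the same instructions into the machine's
    own code (here, Python source) before anything is carried out."""
    lines = []
    for op, *args in program:
        if op == "set":
            lines.append(f"env[{args[0]!r}] = {args[1]!r}")
        elif op == "add":
            lines.append(f"env[{args[0]!r}] = env[{args[1]!r}] + env[{args[2]!r}]")
        elif op == "print":
            lines.append(f"print(env[{args[0]!r}])")
    return "\n".join(lines)

program = [("set", "x", 2), ("set", "y", 3), ("add", "z", "x", "y"), ("print", "z")]
interpret(program, {})                       # carried out directly: prints 5
exec(compile_program(program), {"env": {}})  # translated first, then run: prints 5
```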
I'm not singling out any one of those processes and saying: ‘That's the mind’, though some of the processes in the complex system of programs which we call the mind will be more crucial than others; in other words, there are certain programs which you can do without and still be yourself. If I were to go blind, then the programs which mediate my vision would be out of action; but the most important ones, which I could not do without, are those which really make the essential me. So I would wish to include, in any account of my mind, all the mental processes of which I am capable, and in which I engage, such as remembering, imagining, reasoning, using language, playing chess. So I would agree with Aaron Sloman that one mustn't be too restricted in one's use of the term ‘mind’, and if one accepts the idea that the human mind is essentially all the programs which a human brain can carry out, then we have the right to say that the brain is by far the most impressive computing system that has ever existed, or probably ever will be invented.
CHAIRMAN. Now we have another question from Dr Sloman, this time for Dr Kenny:
In your last lecture, did you not fail to see that a program is itself a new kind of manikin? In this sense: unlike previous instruments, that is, the products of previous technologies, a program can make, modify and control other programs, including itself. For example, a program can call itself recursively, modifying itself with each call. It can also construct and use symbols for its own current purposes which the original programmer knows nothing about. That is, they are not the programmer's symbols.
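By way of illustration of the two capacities the question mentions, a minimal sketch in Python (the names are invented for the example): a procedure that calls itself recursively on modified input, and a program that mints fresh symbols for its own purposes, symbols the original programmer never wrote down:

```python
import itertools

# (1) A program calling itself recursively, with modified input each call.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# (2) A program constructing and using symbols for its own current
# purposes -- names that appear nowhere in the programmer's source text.
_counter = itertools.count()

def gensym(prefix="g"):
    """Mint a fresh symbol, guaranteed distinct from all earlier ones."""
    return f"{prefix}{next(_counter)}"

bindings = {gensym(): factorial(k) for k in range(5)}
print(bindings)   # e.g. {'g0': 1, 'g1': 1, 'g2': 2, 'g3': 6, 'g4': 24}
```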
KENNY. Here I must begin by pleading guilty to having misunderstood Longuet-Higgins. I argued yesterday that he was undecided whether to compare the mind to a program or to a programmer. I argued this on the grounds that while in his own experimental work it is the program which corresponds to the mind of the language-speaker, in his paper ‘The Seat of the Soul’, and in his lecture last week, he had compared the mind to the user of a computer, to a programmer. He has now convinced me that I quite misunderstood the passage in ‘The Seat of the Soul’ which I quoted in support of the accusation that he was smuggling in a homunculus or manikin. He said there that one of the things we would want to know about the brain was the master program which sees to it that the user's program is properly translated into machine code and implemented, and he then went on to talk of the soul as a controlling program. Now, I took it that this definition, though of course it explicitly equates the soul with the program, had, by speaking of a user, smuggled in an illegitimate reference to what could only be a homunculus or Cartesian spirit. But I now see that the role which in the analogy is played by the user is supposed to be played in the analogate not by the soul but by such things as the eyes and the ears. Consequently, this passage doesn't involve a homunculus, and I apologise for saying that it did.
Likewise, when Christopher spoke of himself as reprogramming himself to cope with decimal currency in the lecture last week, this wasn't meant as a transaction between a homunculus and a program, but rather as a transaction between the master program with which Christopher identifies himself and the subordinate program which contained instructions for coping with currency.
Christopher indeed claimed that the great merit of his approach is that for the first time it enables us definitely to kick the homunculus out of doors when we are trying to explain the mind. I'm not yet convinced that it does so, simply because it hasn't yet got far enough. The type of work which has actually been done in artificial intelligence hasn't yet dealt with the areas in which the homunculus is traditionally invoked, for instance, the taking of spontaneous decisions and the conversational use of language. Christopher's program can answer in English questions put to it in an appropriate form; it can give yes-no answers, but it doesn't go in for the stimulus-free use of language which Chomsky has so much emphasised as the characteristic of human beings.
Now, I think that this recantation can serve as an answer to the first part of Aaron's question, but with regard to the second part, about symbolism, I'm quite unrepentant. The operations of computers upon the information and instructions which they contain are not, I would maintain, symbolic operations, even though the input to and output from a computer may be symbolic in the fullest sense, as it is, for instance, when one talks to Christopher's program in English in playing the game of ‘waiting for Cuthbert’ which he described to us so vividly last year. When one does use English sentences on the teletype in communicating with Christopher's program, then of course one is using symbols which have meaning, and symbolic meaning at that; but it is a meaning which we have given them, and moreover it is the meaning which we have given them in our transactions with each other and not in our transactions with the computer. But I suspect that here there lurks a deep philosophical disagreement between Dr Sloman and myself, not on the nature of computers but on the nature of symbolism.
Mr Cavanagh (University of Edinburgh). It is perhaps the case that among the constraints on the form of natural language are the following: (i) A linearity imposed by the physiological characteristics of the acoustic medium, (ii) The existence in the world of the physical dichotomies, motion vs state, and cause vs effect. (iii) The essentially ‘binaristic’ nature of much human thinking. Is this a reasonable assumption? If so, could Dr Kenny suggest what additional constraints might be imposed on the form of the grammars of natural languages by a universal grammar such as that posited by Chomsky?
KENNY. I'm in the fortunate position of being able to give an example of the type of constraints imposed by universal grammar. Last night I was privileged to attend a brilliant presentation by Dr Edward Keenan of the work on noun-phrase accessibility and universal grammar which he has been carrying out with his associates in Cambridge. Let me explain. Keenan's work concerns the formation of relative clauses. In English we can relativise noun phrases standing in very different positions in a sentence; for instance, we can relativise things standing in subject place, as in ‘the woman who is married to John’; or in direct object place, as in ‘the woman that John married’; or in indirect object place, ‘the woman to whom John gave a ring’; after a preposition, as in ‘the woman that John is sitting next to’; after a possessive as in ‘the woman whose sister John married’; or, finally, after a comparative particle, as in ‘the woman that John is taller than’. There are others too.
Now, Keenan was explaining that not all languages allow relativisation in all these places. Some Malay languages, for instance, allow relativisation only in the first case and not in the others. Keenan's work suggests that these possibilities can be arranged in a hierarchical order, in fact, in the order of the previous paragraph. It suggests that we have a hierarchy thus:

Subject > Direct object > Indirect object > Object of preposition > Possessive > Object of comparison
Any language which admits relativisation at any point on this scale, also admits it at any point further back towards the left, but not conversely; so that you can group languages according to where they drop out.
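To make the constraint concrete, here is a minimal sketch (in Python, with invented language data) of the property the hierarchy asserts: the relativisable positions of any language must form an unbroken initial segment of the scale:

```python
HIERARCHY = ["subject", "direct object", "indirect object",
             "prepositional object", "possessive", "comparative"]

def respects_hierarchy(positions):
    """True if the relativisable positions form an unbroken initial
    segment of the accessibility hierarchy."""
    ranks = sorted(HIERARCHY.index(p) for p in positions)
    return ranks == list(range(len(ranks)))

# Invented data, for illustration only.
print(respects_hierarchy({"subject"}))                    # True: drops out early
print(respects_hierarchy({"subject", "direct object"}))   # True
print(respects_hierarchy({"subject", "possessive"}))      # False: a gap in the scale
```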
The hypothesis of such a hierarchy has been confirmed by the study of forty or so languages. Now it seems to me that this hypothesis provides a perfect example of the type of thing that a universal grammar is supposed to contain. The hierarchical principle will be a principle of universal grammar, something obviously highly abstract.
Now I don't think that a constraint such as this hierarchical principle could be adequately explained by the constraints which the questioner listed. Consider the contrast between universal grammar and logic. The relevant part of logic would be the first order predicate calculus with identity. If we all spoke, instead of English, a version of first order predicate calculus which included English nouns and predicates, then we would form all these relativisations by using something called the iota operator, which means roughly ‘the x such that’; you have ‘the x such that x is a woman and x is married to John’, ‘the x such that x is a woman and John married x’, ‘the x such that x is a woman and John gave a ring to x’, ‘the x such that x is a woman and John is sitting next to x’, ‘the x such that x is a woman and John married the sister of x’, and so on.
Now, there are absolutely no constraints on relativisation in the first order predicate calculus. By this I mean that wherever a noun phrase of the appropriate kind can occur at all in a sentence, in any of the positions where these xs are, then it could also be relativised. There is no hierarchical principle either; there is nothing more or less natural about relativising in the most complicated sentences than in the simple ones with which we began. But natural languages, even ones which are as generous about relativisation as English is, don't allow things which are perfectly all right in logic. For instance, in logic there is nothing wrong with this: ‘the x such that x is a woman and grass is green and John married x’, but even English is pretty unhappy about ‘the woman that grass is green and John married’. Now, when we use logic we are of course subject to all the constraints imposed upon us by the linearity of the acoustic medium, by the contrast between concepts of motion and state, and so on, that were mentioned by the questioner. We are influenced by all of these, so we need some further explanation why there are extra constraints that operate in natural languages which don't operate in logic, and that is what Chomsky's postulation of innate universal grammar is meant to supply.
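In symbols (a sketch, writing $j$ for John and ‘$\iota x$’ for ‘the $x$ such that’), the description which logic happily admits is

$$\iota x\,\bigl(\mathrm{Woman}(x) \wedge \mathrm{Green}(\mathit{grass}) \wedge \mathrm{Married}(j, x)\bigr)$$

a perfectly well-formed term, even though its English counterpart, ‘the woman that grass is green and John married’, is not.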
Mr Alex Solomons (University of Edinburgh). The human brain seems to have an unlimited ability to generalise and extend concepts. During this process, the mind is thinking in terms of the then highest level of organisation, that is, the highest system appropriate to some concept. Often an attempt is made to place the concept in a higher and as yet unknown system. The fruit of this activity is the discovery of the higher, more generalised and usually more meaningful concept. This creative act may involve the thinker in momentarily relegating the lower levels of organisation to the semi-conscious, but they are still accessible to him or her. My question is this: does the form of Chomsky's transformational grammar give any insight into this process of generalisation? If the answer to this last question is yes, I would like to extend the query, to generalise my question. It would seem to me that music and the visual arts also have their universal grammars. Is one not conscious of the generalisation and extension of concepts in the fine arts? This being so, what is the possibility of a super universal grammar covering these universal grammars, or is Chomsky's transformational grammar adequate when applied to the realm of the fine arts?
KENNY. The question is: does Chomsky's transformational grammar throw light on the process of generalisation? Well, I think the answer is that it attempts to. Generalisation, in the sense intended by the questioner, is a semantic matter. Grammar, as defined by Chomsky, includes a semantic component as well as a syntactic one, and a number of Chomsky's followers have produced theories about the semantic component of grammar. My impression is that these are not generally held to have been as successful as the syntactic studies of the transformational grammarians.
Again, in answer to the second question, there have been a number of attempts to apply techniques analogous to those of transformational generative grammar to such things as the rules governing musical composition in different musical traditions. I think that attempts have even been made to link this work with work on other species: to link it, say, with the work of those like Thorpe, who made studies attempting to separate an innate from an acquired component in the song of songbirds like the chaffinch. I'm neither familiar with the studies nor competent in the relevant fields, so I cannot say how successful the work has been. Perhaps in a moment Waddington might say something about the latter studies. I should insist that I'm not a linguist, and that when I took Wad to task the other night about Chomsky, it wasn't because I thought I was entitled to an opinion about the truth of the theory of universal grammar, but merely because I thought Wad had in some respects misrepresented the structure of the theory.
WADDINGTON. I don't think that I have very much to say on the subject of bird-song, but the situation seems to be this. Each species of bird has a more or less well-defined type of song. In many species this is completely fixed in its heredity, and you cannot persuade the birds to sing anything else. In quite a number of other species the song is more flexible; by bringing the bird up in suitable ways you can persuade it either not to sing at all, or in some cases to modify its song to some extent, or even to sing in a way characteristic of other species. I do not quite see how this links up with anything particularly universal. Possibly you might say that each species has its own ‘universal’ grammar. But most bird-songs are fairly simple; in many species the song is just a repetition of a single phrase, and I do not see much point in discovering a universal grammar within a single phrase. Of course there are some complicated bird-songs, such as that of the nightingale, which usually does not repeat itself. It would conceivably be interesting to see if you could make any sense of the idea of a universal grammar for the nightingale.
CHAIRMAN. Dr John Beloff (University of Edinburgh) has sent us a question which he addressed to Professor Longuet-Higgins, who has taken the liberty of re-routing it to Professor Waddington. I am presenting it here in somewhat shortened form:
In presenting your evolutionary approach to Mind you warned us against adopting a too intellectualist theory of Mind and asked us to remember that behind every thinker of today lies the ancestral hunter and man of action. While I would go along with that as a broad generalisation there are, I would say, some very striking facts that do not easily fit this evolutionary interpretation. Consider, for example, such human activities as music, mathematics and chess. A high level of aptitude for any of these fields seems to be both fairly specific and, to a large extent, hereditary, as evidenced by the fact that it reveals itself at such an early age. Now, the question I would like to put to you is: what conceivable advantage would the possession of such abilities have conferred on individuals at the time when our species was being established?
One other point. I would like to draw your attention to a passage in Koestler's Ghost in the Machine (pp. 272–3). There he cites several authorities, including Le Gros Clark, for the view that the enlargement of the hominid brain from the mid-Pleistocene era onwards proceeded at a pace ‘exceeding the rate of evolutionary change which has so far been recorded for any anatomical character in lower animals’ and that it represents a case of ‘explosive evolution’. What I should like to ask is: (1) can this development be accounted for in terms of what was strictly required for survival under the circumstances of that period? and (2) does it make sense to suggest that evolution can give rise to pathological developments? Koestler mentions the idea that ‘the human cortex is a sort of tumorous outgrowth that has got so big that its functions are out of control’. Do you think there is anything in this pessimistic idea?
WADDINGTON. Dr Beloff's letter contains quite a number of questions, and I shall answer it under six headings. Firstly, there is the point about the ‘ancestral hunter and man of action’. I would just like to remind you that our ancestry includes mothers as well as fathers. Our inheritance includes contributions from food gatherers, cooks and housekeepers as much as from the chaps who were good with the bow and arrow.
Next is the point about specialised activities, such as music, mathematics and chess. I think there is little evidence that most of these abilities are hereditary, except perhaps in the case of music. It seems to me almost certain that most of these highly specific activities depend on fortunate and relatively rare combinations of genes. They are underlain by complex genotypes, rather than the presence of single hereditary factors. I suspect they must involve something like the phenomenon which I referred to earlier as ‘hitting the jackpot’, achieving a sudden large leap in efficiency just when all members of some complex set-up fit perfectly into place. In this situation, evolutionary processes will have been concerned mainly with the natural selective value of the individual components of the complex system, rather than with the complex as a whole, which would only come together in the right configuration very occasionally.
Then there is the suggestion, raised by Koestler and Le Gros Clark, that the enlargement of the hominid brain is a case of explosive evolution. I should point out that most major transitions in evolution have gone extremely fast, so that they have left very few intermediate stages behind. This was the case, for instance, in the evolution of reptiles into mammals, or into birds. I am not convinced that the evolution of the hominid brain was so much faster than these other examples.
Then Beloff asks: can ‘this development be accounted for in terms of what was strictly required for survival’? In connection with this I think I should again refer to ‘the jackpot effect’. There is really nothing surprising if evolution sometimes produces ‘more’ than is strictly ‘necessary’.
Then there is the question whether evolution can give rise to pathological developments. The answer is that it certainly can produce states which will appear pathological if, for instance, the environmental situation changes so that the criteria of natural selection are altered. It is quite possible for natural selection to favour a developmental process which produces a condition favourable to the animal during the period before reproduction takes place, but which goes on developing into a possibly unfavourable condition in the later stages of life, with which, of course, natural selection is not much concerned. An example of this may have been the enormous horns grown by the Irish Elk, which must have been a great handicap in the later stages of life, and may have led to its eventual extinction when the environment became generally less favourable.
Finally, there is Koestler's idea that the human cortex is a sort of tumorous outgrowth which has got out of control. I don't much like the idea myself. Developmentally the human cortex is a perfectly well behaved organ, with a definite size and morphology, not at all comparable to a continuously enlarging disorderly cancer. I think that the real point that Koestler is making concerns the destructive tendencies in human behaviour, which seem to go along with increase in foresight and intellectual ability, and to have evolved along with the enlargement of the cortex. Personally, I do not look to anatomy to give an explanation of this. I have advanced another argument on this subject, which I have discussed in some detail in a book called The Ethical Animal. I have not time to expand it fully here, but the gist of the idea is that human mental abilities depend primarily on the fact that man is a language-using animal. He has only become so because he has found some method to convert neutral stimuli (e.g. noises) into symbols with meaning (e.g. words). In practice he does so by associating noises with the control of behaviour by external authorities (e.g. parents), in the experience of the very young infant; but during this process, by which the baby is initiated into the world of culturally transmitted information, a ‘hitting the jackpot’ phenomenon unfortunately tends to occur, by which the internalised authority necessary to convert noises into words becomes hypertrophied into what Freud referred to as a ‘super-ego’, and it is this which is responsible for the destructive tendencies which Koestler attributes to an overgrowth of the cortex.
CHAIRMAN. Perhaps there is just time for Professor Waddington to make one or two remarks about a final question which comes from Dr Walter P. Kennedy (University of Edinburgh):
May I raise one point that has not been mentioned so far, though of course it may come up in one of the later lectures. This is Stephen Black's view of the development of mind in his book Mind and Body, 1969. Two brief quotations must suffice to provide a basis: ‘I therefore put forward the definition “by Aristotle out of Schroedinger”: that life is a quality of matter which arises from the informational content inherent in the improbability of form’ (p. 46). ‘I therefore put forward the second hypothesis: that mind is the informational system derived from the improbability of form inherent in the material substances of living things’ (p. 56).
WADDINGTON. I think Stephen Black is right in arguing that both life and mind are more closely connected with what he refers to as ‘information traffic’ than with the material interactions studied by chemistry and physics; though I should prefer to use the phrase ‘instructional traffic’. We could, for instance, have systems worthy of being called alive which were built of quite different chemical materials from those used by living organisms on this earth. However, I think that when Black goes on to the second point quoted by Dr Kennedy, and identifies mind with any ‘informational system derived from the improbability of form’, he is really enlarging the scope of the word too much. I think that if the word ‘mind’ is to remain useful at all, it has to be restricted to something to do with the central nervous system; and of course there is plenty of improbability of form inherent in many material things other than that. I hope I shall have time to say something further about these points in my next lecture, on ‘The Evolution of Mind’.