2: Does Evolution Explain Representation?

For the last three centuries a certain metaphysical picture suggested by Newtonian or Galilean physics has been repeatedly confused with physics itself. (More recently, metaphysical pictures suggested by biology and by computer science have been confused with those subjects themselves, in much the same way.) Philosophers who love that picture do not have very much incentive to point out the confusion—if a philosophical picture is taken to be the picture endorsed by science, then attacks on the picture will seem to be attacks on science, and few philosophers will wish to be seen as enemies of science. As far as our ways of understanding mind and language are concerned, the thrust of that picture was well captured by the claim of La Mettrie that man is a machine.

Darwin's and Wallace's discovery of evolution by natural selection, approximately a hundred years later, seemed to add further evidence for the thesis that mind is to be understood by being reduced to physics and chemistry (we know from Darwin's journals that this is how he himself was inclined to see the matter). Today even materialist philosophers do not think that this follows; it is on computer modeling, rather than on direct physical or chemical explanation, that thinkers of a reductionist bent, like my former self, now pin their hopes. But recently evolutionary theory has again come into play in discussions of the nature of mind, and of the relation of language to reality.

Philosophers who apply the theory of evolution generally do so in a very simple way. The philosopher picks some capacity that human beings have, a capacity which it is in one way or another useful for human beings to have, and argues that it must have been selected for in the evolutionary process. This use of the theory of evolution is one that many evolutionary biologists find extremely questionable.1 The working evolutionary biologist does not assume that every useful capacity of a species is the result of selection. A genetic alteration frequently has many different effects. If any one of those effects contributes markedly to the reproductive success of members of the species having that gene, then that new genetic trait will be selected for, and other side effects, provided they are not so negative as to cancel out the benefits of having the new genetic trait, will be carried along. In this way, it can even happen that a trait which does not contribute to the survival potential or the reproductive success of a species, or even one which it would be better for the species not to have, arises through natural selection without itself being selected for. But it can also happen that the trait which is carried along is actually beneficial to the species, although that is not the reason why the trait became universal in the species. In general, the assumption that every change in a species which is beneficial to the species was specifically selected for is rejected in contemporary evolutionary theory. Evolutionists are extremely cautious about saying which capacities and organs and so on were specifically selected for (were “adaptations”) in the evolutionary history of a species and which ones arose serendipitously. Philosophers, however, are not so cautious.2

Evolution, Language, and the World

My primary concern in this chapter is with philosophical views of the mind, and with the way in which philosophical issues about the mind become entangled with various other issues. In a famous letter to Marcus Herz, Kant described the problem of how anything in the mind can be a “representation” of anything outside the mind as the most difficult riddle in philosophy.3 Since the so-called linguistic turn in philosophy earlier in this century, that question has been replaced by the question “How does language hook onto the world?” but the replacement has not made finding an answer any easier. Recently certain philosophers4 have suggested that the answer is provided by the theory of natural selection, either directly or indirectly. I want to examine this idea partly for its own intrinsic interest, and partly because it provides a natural transition to questions about the language-world relation. I will first state the idea in a very general way.

Cognitive scientists have frequently suggested in recent years that the brain employs “representations” in processing data and guiding action. Even the simplest pattern-recognizing devices in the brain could be described as producing representations. In his books Neural Darwinism and The Remembered Present, Gerald Edelman has described a neural architecture which could enable the brain to construct its own pattern-recognizing devices without “knowing in advance” exactly which patterns it will have to recognize. This architecture will enable a brain to develop a neural assembly which will fire whenever the letter A is presented in the visual field, for example, or alternatively to develop a neural assembly which will fire whenever the Hebrew letter aleph or a Chinese character is presented in the visual field, without having “innate” A-recognizing devices or aleph-recognizing devices or Chinese-character-recognizing devices. If Edelman's model is right, then when an instance of the letter A is presented in my visual field and the appropriate neural assembly fires, that firing could be described as a “representation” of the shape “A”. But the representations that neurobiologists, linguists, and computer scientists hypothesize go far beyond mere pattern recognizers.
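
A toy example may make the idea of a constructed, rather than innate, pattern recognizer concrete. The sketch below is purely illustrative and is not Edelman's architecture: a simple perceptron over 5×5 bitmaps which, after training, “fires” for instances of the letter A. The bitmaps, the learning rule, and every other detail are my own assumptions, chosen only to show a system acquiring a recognizer it did not have in advance.

    # Purely illustrative; not Edelman's model. A perceptron over 5x5 bitmaps
    # is trained to "fire" for the letter A. Nothing A-specific is built in;
    # the recognizer is constructed by experience. All patterns are toy data.

    def bitmap(rows):
        return [1 if ch == 'X' else 0 for row in rows for ch in row]

    LETTER_A = bitmap(["..X..", ".X.X.", "XXXXX", "X...X", "X...X"])
    LETTER_O = bitmap([".XXX.", "X...X", "X...X", "X...X", ".XXX."])
    LETTER_T = bitmap(["XXXXX", "..X..", "..X..", "..X..", "..X.."])

    def train(examples, epochs=25):
        # perceptron learning rule: adjust weights only on mistakes
        w, b = [0.0] * 25, 0.0
        for _ in range(epochs):
            for pixels, is_a in examples:
                fired = sum(wi * p for wi, p in zip(w, pixels)) + b > 0
                error = int(is_a) - int(fired)
                if error:
                    w = [wi + error * p for wi, p in zip(w, pixels)]
                    b += error
        return w, b

    w, b = train([(LETTER_A, True), (LETTER_O, False), (LETTER_T, False)])

    def assembly_fires(pixels):
        # the trained "neural assembly": fires just in case the input
        # crosses the learned threshold
        return sum(wi * p for wi, p in zip(w, pixels)) + b > 0

    print(assembly_fires(LETTER_A))  # True
    print(assembly_fires(LETTER_O))  # False

The same code, trained on aleph bitmaps instead, would yield an aleph recognizer; that is the only point of the sketch.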

If an organism is to display what we call intelligence it is obviously useful, and perhaps necessary (as Edelman and others have suggested), for it to have, or to be able to construct, something that functions as a map of its environment, with aspects that represent the various salient features of that environment, such as food, enemies, and places of refuge. At a higher level, such a map might be elaborated to show not only salient features of the current environment, but also a representation of the creature itself, and perhaps even a representation of the creature's psychological states (“self-consciousness”). In The Remembered Present, Edelman speculates about the sorts of neural architecture that might support a capacity to develop such representational schemata. All this is exciting science, or at least exciting scientific speculation. Whether or not it will pay off is not for me to judge, but of course I hope that it will. My doubt concerns whether neural science (or computer science, insofar as it leads to an increase in our ability to model the brain) can really speak to the philosophical question that I mentioned.

What philosophers want to know is what representation is. They are concerned with discovering the “nature” of representation. To discover that, in addition to the representations we are all acquainted with—thoughts, words, and sentences—there are other things which don't look like thoughts or words, things in the brain which it is useful to analogize to representations, is not to tell us what representation is. If a philosopher asks what the nature of representation is, and one tells him or her that there are tens of millions of representations in the Widener Library, one has not answered the question. And if one tells him or her that there are tens of millions of representations in human brains, one has not answered the question either. Or so it would seem.

Let us take the form of the philosopher's question that I mentioned a few moments ago, “How does language hook onto the world?” Materialist philosophers generally favor one of two answers to this question. One kind of answer, which I shall not discuss here, uses notions from information theory. That answer has, however, run into apparently insuperable technical difficulties.5 The other answer, which is today the favorite one among philosophical materialists, is that in the case of language, reference is a matter of “causal connection”. The problem is to spell out the details, and in the next chapter I will examine one attempt to do this. Even before we look at such an attempt, it is apparent from the very beginning that there are going to be difficulties with the details—whether those difficulties prove insuperable or not. One cannot simply say that the word “cat” refers to cats because the word is causally connected to cats, for the word “cat”, or rather my way of using the word “cat”, is causally connected to many things. It is true that I wouldn't be using the word “cat” as I do if there were no cats; my causal history, or the causal history of others from whom I learned the language, involved interactions with cats; but I also wouldn't be using the word “cat” as I do if many other things were different. My present use of the word “cat” has a great many causes, not just one. The use of the word “cat” is causally connected to cats, but it is also causally connected to the behavior of Anglo-Saxon tribes, for example. Just mentioning “causal connection” does not explain how one thing can be a representation of another thing, as Kant was already aware.

For this reason, philosophers who offer this sort of account do not try to account for all forms of representation—that is too big a project to carry through at one fell swoop—but to account for forms of representation that might be thought basic, that is, for representation of observable objects in our immediate environment, such as trees and people and animals and tables and chairs.

It is natural to suppose that the ability to represent such objects is the result of natural selection. My ability to understand the word “cat”, for example, might involve my connecting that word with a more primitive representation of cats, a representation that is not itself a part of a language. It may be that in my representation of my environment, there is some “data structure” which “stands for” cats. To say that it stands for cats—this is the crucial move—is simply to say that an account of the evolutionary function of that data structure, and of the schematism to which that data structure belongs, will involve saying that the data structure (or the entire schematism) enables us to survive and to reproduce our genes because various parts of that schematism, including the data structure, have the “function” of corresponding to various things and kinds of things in the environment. Having the function of corresponding to things and kinds of things in that way just is representation, or rather, it is what we might call “primitive representation”. Even if this does not yet solve the problem of saying what representation is, the thought goes, progress has been made if the story is right: we have at least a hope of saying what primitive representation is, and philosophers can then work on the task of showing how the more elaborate notion of representation that we actually possess is related to and grows out of primitive representation.

Let me emphasize that, according to the view I am describing, the intentional notion “stands for” can be defined by saying that “A stands for B” (where A is the data structure in the brain and B is the external referent) just means that “A stands to B in a relation (a correspondence, that is, a function from As to Bs) which plays such-and-such a role in an evolutionary explanation.” “Stands for” is not being taken as primitive, if this works; “evolutionary explanation” is.
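
One way to render this definitional schema, in a notation of my own (the text itself offers no formalization), is:

    % A possible rendering of the proposed definition (notation mine).
    % The intentional notion on the left is defined via the right-hand side,
    % in which only "evolutionary explanation" is taken as primitive.
    % (Assumes amsmath for \text.)
    \[
      \mathrm{StandsFor}(A,\,B) \;\equiv_{\mathrm{def}}\;
      \exists R \,\bigl[\, R(A,\,B) \;\wedge\;
        R \text{ plays such-and-such a role in an evolutionary explanation} \,\bigr]
    \]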

The way in which I just explained the idea makes it sound as if the notion of a cat is supposed to be innate. While some thinkers—Jerry Fodor is the best known—would indeed posit that the various data structures that make up the mental schematism we use to represent our environment are innate, others, like Gerald Edelman, would not. If Edelman's story is right, what is innate is not the mental representations themselves, but only the architecture which permits us to form such representations. While Edelman himself is wary of trying to answer the philosophical question about the nature of reference, someone who takes the line I have described could very well accept Edelman's model. What such a philosopher would have to do is insist that the architecture was selected for to perform the function of creating data structures which correspond in a certain way to objects in the environment and to kinds of objects in the environment. And again the claim would be that that correspondence, the correspondence which we have to talk about in explaining how the whole mental schematism came to be as the result of natural selection, is at least the primitive form of reference.6

The idea is that natural selection is, so to speak, teleological: it produces things that have a telos or a “function”, and a structured way of performing that function. We can say what representation is by saying what the structures are that mediate representation, that is, how those structures function to enable the animal to survive and reproduce, and how a correspondence between structures in the head and things outside the head plays a role in that mediation, and then we can eliminate the mystery of teleology by showing how the teleology appears here, as it does, say, in the functioning of the foot or the functioning of the eye, as the result of familiar processes of natural selection.

In what follows, I am going to confine attention to the form of the theory in which the individual representations are themselves innate, but it should be clear how the philosophical arguments should be modified if one takes the other form of the theory.

To begin with, let us consider what it means to say that a trait was selected for. All explanations of selection involve counterfactual conditionals at one point or another. For example, if we say that the speed of the gazelle was selected for, what we will typically mean is something like this: that the gazelle needs to escape from certain predators, for instance from lions, if it is to survive and have offspring; and that the gazelles with a certain genotype, the one responsible for higher speed, lived to have offspring in greater numbers; and finally—we have to add—that they would not have survived in greater numbers if they had not run so fast: the lions would have caught them. This last addition is necessary, because without this addition the possibility is not ruled out that the genotype that is responsible for higher speed was selected for for some reason other than the higher speed that accompanies its presence.

If the gazelles had not run so fast, the lions would have caught them. This sentence, and others like it, “do the work” in powering explanations by natural selection. That an explanation by natural selection involves the use of counterfactuals is not a difficulty from the point of view of the philosophers I am talking about, since they are all committed to using counterfactuals, dispositional statements, and so on in the analysis of reference. There are, indeed, philosophers who regard counterfactuals as being just as problematic as reference itself—not to mention those, like Quine and Goodman, who regard counterfactuals as more problematic than reference itself—but this is an issue I have to defer to later chapters.
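
The logical shape of such a selectionist explanation might be sketched as follows; the notation is mine, with the box-arrow standing for the counterfactual conditional “if it had been the case that ..., it would have been the case that ...”:

    % Sketch (notation mine) of the claim "trait T was selected for".
    % \boxright is the Lewis-style counterfactual conditional (stmaryrd);
    % \text and \underbrace assume amsmath.
    \[
      \mathrm{SelectedFor}(T) \;\text{requires both}\;
      \underbrace{\mathrm{DifferentialReproduction}(T)}_{\text{what actually happened}}
      \;\text{and}\;
      \underbrace{\neg T \,\boxright\, \neg \mathrm{Survival}}_{\text{the counterfactual clause}}
    \]
    % For the gazelle: had the gazelles not run so fast, they would not have
    % escaped the lions. Without the second clause, T might merely have been
    % carried along as a side effect of some other selected trait.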

The sense in which the gazelles’ high speed is there for a purpose, the sense in which it has a “function”, is really rather minimal, which is why Ernst Mayr has proposed that we should speak not of teleology in evolutionary theory, but of teleology-simulation, or, as he puts it, “teleonomy”.7 Escaping lions is the function of the genetic trait in question only in the sense that if the high speed had not enabled the gazelles to escape the lions, then that feature would not have been selected for.

Now, let us suppose that there are innate mental representations in the case of a certain species, say the dog. Let us suppose that one of the innate representations in the dog's brain is the representation of meat. What this will mean should be clear: we are saying that the dog's mental processes involve a “data structure” which was selected for to do certain things; perhaps the data structure responds in a certain way when the dog sees meat, and this somehow triggers the appropriate responses, such as trying to get it, and eating it. Perhaps the data structure also operates in yet another way: when the dog wants meat, the data structure causes the dog to seek meat, or to whine to be fed, or whatever. Whatever the details may be, the point is that there are certain behaviors which involve meat and which involve the data structure, and the architecture which makes it possible for the data structure to mediate the behavior in that way was selected for. Again, any reference to teleology is unnecessary; all it means to say that this architecture was selected for this purpose is that if having a data structure which triggers these behaviors under these conditions had not enabled the dog to get food more often, then the dog would not have survived to reproduce its genes more often than other dogs with different genes. The important point (if some story like this one proves to be correct) is that the explanation of how the data structure came to be universal among dogs involves a certain “correspondence” between the data structure and meat.
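
A schematic sketch of the kind of mediation being described may help; every detail below is invented for illustration, since the story commits itself to no particular mechanism:

    # Schematic illustration only; the text specifies no mechanism. A "data
    # structure" fires on a meat-like percept and mediates the behaviors the
    # story mentions. All features, keys, and thresholds are invented.

    def meat_assembly_fires(percept):
        # the innate "data structure": responds to a characteristic
        # smell-and-appearance profile
        return percept["smell"] > 0.8 and percept["looks_like_meat"]

    def behave(dog_state, percept):
        # the mediation: the same structure triggers seeking or eating
        if meat_assembly_fires(percept):
            return "eat" if percept["within_reach"] else "seek"
        if dog_state["wants_meat"]:
            return "whine_to_be_fed"
        return "rest"

    # On the selectionist story, what makes this architecture "selected for"
    # is the counterfactual: had structures like meat_assembly_fires not
    # helped ancestral dogs get food, the genes building them would not have
    # spread. The "correspondence" to meat enters only via that explanation.

    print(behave({"wants_meat": False},
                 {"smell": 0.9, "looks_like_meat": True, "within_reach": True}))  # eat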

Intentionality and Lower Animals

One difficulty in evaluating the importance of these ideas is that we are all too ready to apply intentional predicates to lower animals in an unreflective way. We all say things like “the dog is trying to reach the meat”, and we often seem to think that such descriptions mean that the dog has the propositional attitude of “thinking that what it sees is meat” just as they normally would if we were talking about a fully competent speaker of a natural language. Forget for the moment all evolutionary questions, and suppose we ask whether the dog really thinks that it is meat that it sees and reaches for, as opposed to stuff with a certain look and taste and edibility and so forth. Suppose we perform the following experiment: We make a “steak” out of textured vegetable protein. Let us suppose that our technology is so good that the TVP “steak” smells like real steak, looks like real steak, tastes like real steak, and so on. If one sees such a steak, one may well think, “I see a piece of meat”. If one eats it, one may be perfectly happy. But if one is told that what one ate was textured vegetable protein, one will revise one's judgment, and decide that one didn't really eat a piece of meat, one ate a piece of TVP. (My oldest child, Erika, started distinguishing between “real” things and “make-believe” things—the beginning of the distinction between appearance and reality—at about the age of two-and-a-half, by the way. I think that the appearance of this distinction between the “real” thing and the “unreal” thing is one of the most exciting developments of a child's language.) Now suppose that we give the synthetic steak to the dog. The dog eats the synthetic steak and is perfectly happy. Did the dog have a false belief? That is, did the dog believe that it saw real meat, just as we believed that we saw real meat, without knowing that the belief was false? Or did the dog's concept of meat include TVP “steaks” to begin with?

The question makes no sense. A speaker of a language can decide that part of his or her concept of meat is that it should come from an animal, for example. A more sophisticated speaker can decide that it is part of the concept of meat that it should have the normal microstructure, whatever that may be. There is probably nothing in the dog's neural architecture which would allow it to think “this piece of meat came from an animal”, and there is certainly nothing which would allow it to think “this piece of meat has a normal microstructure”.

To illustrate the same point in another way: Suppose we interpret the dog's concept, or as I would prefer to say, its “proto-concept”, as referring not to meat but to whatever has a certain appearance and smell and taste. If the “meat” the dog ate on a certain occasion were not really a piece of meat, but a bit of ectoplasm which has been magically endowed with the right smell and taste and texture and appearance, the dog's thought that this is meat would be true of what it ate, on this interpretation, for its thought is not about meat in our sense, but only about the appropriate smell and taste and texture and appearance. Once again, a human being who discovered that what had just been eaten was not meat, and indeed not even a piece of matter, but a piece of ectoplasm, would take back the judgment that he or she had eaten meat. But the dog lacks the conceptual resources to make such a discovery. To deploy the jargon of philosophers of language, assuming dogs have proto-concepts, the dog's proto-concept of meat is “referentially indeterminate” in ways in which human concepts are not. Human concepts are less indeterminate because they enter into complex sentences, and human beings can say whether they believe or disbelieve those various sentences. In the case of the dog, those sentences are missing—sentences like “this meat has a normal molecular structure”, “this meat came from an animal”, “this meat is matter and not ectoplasm”, and all the rest. But even the philosopher of language's way of putting the matter seems to me not radical enough.

The real point is this: human beings are reflective creatures. Human beings are able to think about their own practice, and to criticize it from a number of points of view. If I have a thought and act on it, I can later ask whether my thought was successful or not, whether it achieved its goal, whether it contributed to my well-being, my satisfaction, and so on; but I can also ask whether my thought was true or not, and this is not the same question. I may decide that one of my thoughts was successful in enabling me to maximize my well-being, but was not in fact true. I was deceived, but the deception was a fortunate one.8 No such cognitive performance is possible in the case of the dog. For a dog, the very distinction between having a true belief and having a successful belief simply does not make sense; and that means that the notions of a dog's thought as being true or false, and of its proto-concepts as referring or not referring to something, simply do not make sense. A dog can recognize that something is illusory only in the sense of encountering a disappointment. If something looks and smells like meat, but turns out to be made of rubber, then when the dog tries to chew it, it will experience disappointment. But the idea that even a successful encounter with “meat” may have succeeded although the belief was false is inapplicable in the case of a dog. Evolution didn't “design” dogs’ ideas to be true or false, it designed them to be successful or unsuccessful.

Evolution Again

With this “indeterminacy” of the dog's proto-concepts in mind, let us return to the evolutionary story. I postulated that if a certain data structure in the dog's brain (a proto-concept) didn't usually fire when the dog perceived meat, then dogs with a certain genotype wouldn't have survived to have offspring in greater frequency than did dogs with competing genotypes, as they in fact did. But the whole idea that a unique correspondence between the data structure and meat is involved in this bit of natural selection is an illusion, an artifact of the way we described the situation. We could just as well have said that the data structure was selected for because its action normally signals the presence of something which has a certain smell and taste and appearance and is edible.

To this objection the defender of “evolutionary intentionality” might reply that the indeterminacy that appears if we look only at present-day dogs disappears if we consider evolutionary history. In the evolutionary history of the species, synthetic meat did not exist, for example. So, it might be argued, it would be wrong to regard the dog's proto-concept of meat as including synthetic meat. But it is difficult to see the force of this reply, since canned meat also didn't play any role in the evolutionary history of the dog, yet when a domestic dog sees some meat taken out of a can, the defender of evolutionary intentionality will presumably want to say that the dog thinks that that is meat, and that the dog's thought is true. It is also the case, by the way, that poisoned meat played no role in the selection process, since the dogs that ate poisoned meat did not survive to have offspring. Yet those who would take the dog's proto-concept to refer to meat would presumably say of the dog who sees and eats poisoned meat that it was right in thinking that what it saw was meat (although it didn't know the meat was poisoned), and not that what its proto-concept refers to is “unpoisoned meat”. Yet, on the evolutionary story, why should one not say that the dog's concept of meat (and the human one too?) refers not to meat but to unpoisoned meat? Alternatively, why should one not just as well say that when the dog is given synthetic meat (or even poisoned meat) the dog thinks that that is “meat-stuff” (where the concept of meat-stuff is wide enough to include synthetic meat) and that the dog's thought is true; or why shouldn't one say that the dog's thought is “that's that great stuff with such and such an appearance and such and such a taste”, and that the dog's thought is true? Or, better, why shouldn't one just give up on talk of truth in connection with the thought of lower animals? Perhaps, if one wants to speculate, all that goes on is that certain “mental phenomena” are associated with a feeling that a certain behavior is called for.

Isn't it with dogs as with gazelles? Dogs which tended to eat meat rather than vegetables when both were available produced more offspring (gazelles which ran faster than lions escaped the lions and were thus able to produce more offspring). Just as we aren't tempted to say that gazelles have a proto-concept of running fast, so dogs don't have a proto-concept of meat. Indeed, in the case of the dog, there are a variety of different descriptions of the adaptive behavior: that certain dogs recognize meat better, or that certain dogs recognize food with a certain appearance and taste better, or just that certain dogs recognize stuff with a certain appearance and taste better. The “reference” we get out of this bit of hypothetical natural selection will be just the reference we put in our choice of a description. Evolution won't give you more intentionality than you pack into it.9
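
The indeterminacy can be made vivid with a toy example, every detail of which is hypothetical: one and the same detector is compatible with several incompatible “interpretations”, and nothing in its selection history chooses among them.

    # Toy illustration of the indeterminacy claim; all data are hypothetical.
    # One detector, several candidate "referents" -- and every stimulus in the
    # hypothetical selection history satisfies all of them equally.

    def proto_concept_fires(stimulus):
        return stimulus["smell"] > 0.8 and stimulus["looks_like_meat"]

    # Candidate interpretations of what the structure "refers to":
    interpretations = {
        "meat":               lambda s: s["is_meat"],
        "unpoisoned meat":    lambda s: s["is_meat"] and not s["poisoned"],
        "meat-looking stuff": lambda s: s["smell"] > 0.8 and s["looks_like_meat"],
    }

    ancestral_history = [
        {"smell": 0.9, "looks_like_meat": True,  "is_meat": True,  "poisoned": False},
        {"smell": 0.2, "looks_like_meat": False, "is_meat": False, "poisoned": False},
    ]

    # Within the selection history, every interpretation agrees with the
    # detector, so the evolutionary story cannot single one out:
    for name, refers in interpretations.items():
        agrees = all(proto_concept_fires(s) == refers(s) for s in ancestral_history)
        print(name, agrees)  # every line prints True

    # Only novel cases (TVP "steak", poisoned meat) separate the
    # interpretations, and those played no role in the selection process.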

Reference and Counterfactuals

The most telling argument against the idea that evolution explains intentionality is that the whole reference to evolution plays no real role in the “explanation” just sketched. What seems to give us an account is not the theory of evolution, but the use of counterfactuals and the appeal to considerations of selective reproduction of certain functions. But both of these strategies are available to any philosopher of language, and they require no particular reference to the theory of evolution. For example, a philosopher of language might very well take the following sorts of sentences as basic, at least in an initial investigation of reference: “I see an X”, “I am touching an X”, “I want an X”, and so on. He or she might now say that when X is the name of a kind of observable thing, say a cat or dog, the way these sentences are “connected to the world” is the following: I would not normally assert that I see a cat, or I am touching a cat, or I want a cat unless I were (respectively) seeing a cat, or touching a cat, or wanting a cat. These claims are certainly correct. I wouldn't, in fact, normally assert that I see a cat unless I were seeing a cat, and so forth. Whether pointing to these counterfactuals is providing a sufficient explanation of what it is for the word “cat” to refer to cats is another question. But that question can be discussed whether or not the foregoing evolutionary speculations are true. In fact, as a biologist once remarked to me, people often forget that while biological evolution is Darwinian, cultural evolution is Lamarckian. What he meant, of course, is that in the case of cultural evolution we do see the inheritance of acquired characteristics, and there is no mystery about this. Suppose that, in fact, language is primarily the result of cultural evolution rather than of biological evolution, and that proto-concepts and so on play only a marginal role. The explanation of reference just suggested (using counterfactuals) would be no better and no worse off. If the idea is to give an account of intentionality by using counterfactuals, then we may as well discuss that idea directly, on its own merits, without the long detour through the at-present totally unproved speculations about our evolutionary history.10
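
The shape of the counterfactual account just mentioned can be given without any appeal to evolution at all; in a notation of my own:

    % Sketch (notation mine) of the counterfactual account of reference,
    % stated with no reference to evolutionary history.
    \[
      \mathrm{Normally}\bigl(\,\mathrm{Assert}(\text{``I see a cat''})
        \;\rightarrow\; \mathrm{Sees}(\text{a cat})\,\bigr)
    \]
    % Whether such counterfactual regularities suffice for "cat" to refer to
    % cats is exactly the question the text leaves open.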

Of course, that evolutionary theory does not answer Kant's riddle as to how anything in the mind can be a representation of anything outside the mind does not mean that there is anything wrong with evolutionary theory, just as the fact that physics does not answer the riddle of the nature of free will and that brain science does not explain induction and language learning does not mean that there is anything wrong with physics or brain science. But I shall not belabor this point. I hope that these first two chapters have helped us to recall how different philosophical and scientific questions actually are, without denying that philosophy needs to be informed by the best available scientific knowledge. In the next chapter I shall look at an attempt by a well-known philosopher of cognitive science to solve Kant's problem—a philosopher who certainly appreciates that lower-level sciences will not, in and of themselves, solve Kant's problem (the puzzle of the existence of “intentionality”, as it has come to be called), but who does think that it is possible to give an account, and who has put forward such an account for our consideration.

  • 1.

    See Stephen Jay Gould, The Panda's Thumb (New York: Norton, 1980), for a sharp criticism of this sort of Panglossian evolutionism.

  • 2.

    In my “The Place of Facts in a World of Values,” reprinted in Realism with a Human Face (Cambridge, Mass.: Harvard University Press, 1990), I argue against the view, popular among philosophers, that our ability to discover scientific laws is “explained by evolution”. That view is a subtle form of just the mistake criticized in the text.

  • 3.

    Letter to Herz, 21 Feb. 1772, in Kant: Philosophical Correspondence, 1759–99, ed. and trans. Arnulf Zweig, pp. 70–75.

  • 4.

    Richard Boyd, Jerry Fodor, Ruth Millikan, and Daniel Dennett have suggested the sort of answer which I describe in the text, although each of them has also given other accounts of intentionality which do not depend on evolutionary theory. See Millikan's Language, Thought, and Other Biological Categories (Cambridge, Mass.: MIT Press, 1984) and Dennett's Content and Consciousness (London: Routledge and Kegan Paul, 1969). Fodor's current explanation of intentionality is discussed in Chapter 3.

  • 5.

    See Fred Dretske, Knowledge and the Flow of Information (Cambridge, Mass.: MIT Press, 1981); my criticism of this approach in “Information and the Mental,” in E. LePore, ed., Truth and Interpretation: Perspectives on the Philosophy of Donald Davidson (Oxford: Basil Blackwell, 1986), pp. 262–271; and Jerry Fodor's criticism in A Theory of Content and Other Essays (Cambridge, Mass.: MIT Press, 1990).

  • 6.

    Daniel Dennett attributes this view to Millikan in “Error, Evolution, and Intentionality,” reprinted in The Intentional Stance (Cambridge, Mass.: MIT Press, 1987). (I myself do not find it in Language, Thought, and Other Biological Categories, although it is compatible with what Millikan writes elsewhere, e.g., the reference to “a wise choice of evolutionary design” in her “Naturalist Reflections on Knowledge,” The Pacific Journal of Philosophy 65 (1984): 314–334.)

  • 7.

    See Ernst Mayr, “Teleological and Teleonomic, a New Analysis,” in R. Cohen and M. Wartofsky, eds., Boston Studies in the Philosophy of Science, XIV (Dordrecht: Reidel, 1974).

  • 8.

    When William James claims that the true is what is useful to believe, he doesn't claim that, like dogs, we can't distinguish true from (temporarily) successful beliefs, by the way.

  • 9.

    Indeed, this point is made by Dennett—who regards intentionality as relative to our “intentional stance”—in the course of praising Ruth Millikan, who thinks precisely the opposite! See “Error, Evolution, and Intentionality.”

  • 10.

    This point seems, in fact, to be well understood by Ruth Millikan. Although in “Naturalist Reflections on Knowledge” she does speak of “evolutionary design” as giving us “mechanisms for forming true beliefs and for learning to form new kinds of true beliefs” (p. 323), in Language, Thought, and Other Biological Categories, “evolution” does not even appear as an entry in the index. Instead, her idea is to give an explanation of cultural, not biological, evolution, in the style of a Darwinian explanation. The meanings of a word are its different “functions”, and each one of these functions becomes stabilized and reproduced because of its ability to serve some goal that is such that (because of various cultural processes) functions that do not serve it tend not to be reproduced, while functions that do serve it tend to be reproduced. The definition of “reproduction” (p. 20) involves counterfactuals (“had A been different with respect to its determinate character p within a specifiable range of variation, as a result, B would have differed accordingly”). Unfortunately, the book makes no real attempt to show that these “functions” can be described in language which does not presuppose the very intentional notions which are supposedly being explained. For example, using “capitulation” in the sense of “the act of capitulating” is one of her examples of a function (p. 74). Equally unfortunately, the intentional notions of “specifiability” and of “legitimate explanation” are taken as primitive throughout. Millikan's claim is that the word “cat” refers to cats because a “normal explanation” of how the (undescribed) “function” of that word has been reproduced and stabilized among English speakers would have to refer to a correspondence between that word and cats.
