I’ve always thought this argument of Putnam’s was dead wrong. It is about the most blatant and explicit instance of the Mind Projection Fallacy I know.
The real problem for Putnam is not his theory of chemistry; it is his theory of language. Like so many before and after him, Putnam thinks of meaning as being a kind of correspondence between words and either things or concepts; and in this paper he tries to show that the correspondence is to things rather than concepts. The error is in the assumption that words (and languages) have a sufficiently abstract existence to participate in such correspondences in the first place. (We can of course draw any correspondence we like, but it need not represent any objective fact about the territory.)
This is insufficiently reductionist. Language is nothing more than the human superpower of vibratory telepathy. If you say the word “chair”, this physical action of yours causes a certain pattern of neurons to be stimulated in my brain, which bears a similarity relationship to a pattern of neurons in your brain. For philosophical purposes, there is no fact of the matter about whether the pattern of neurons being stimulated in my brain is “correct” or not; there are only greater and lesser degrees of similarity between the stimulation patterns occurring in my brain when I hear the word and those occurring in yours when you say it.
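To make “greater and lesser degrees of similarity” concrete, here is a minimal sketch, assuming (purely for illustration) that each brain’s stimulation pattern for a word can be caricatured as a vector of activation levels; the numbers are made up, not real neural data:

```python
# Compare two hypothetical "stimulation patterns" by cosine similarity.
# The vectors are made-up toy data, not real neural recordings; similarity
# comes in degrees, and nothing here marks one pattern as the uniquely
# "correct" response to the word "chair".
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

speaker_pattern = [0.9, 0.1, 0.4, 0.0, 0.7]   # hypothetical pattern when saying "chair"
listener_pattern = [0.8, 0.2, 0.5, 0.1, 0.6]  # hypothetical pattern when hearing "chair"

print(cosine_similarity(speaker_pattern, listener_pattern))  # ≈ 0.98: a degree of similarity, not a verdict
```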
The point that Putnam was trying to make, I think, was this: our mental concepts are causally related to things in the real world. (He may also, ironically, have been trying to warn against the Mind Projection Fallacy.) Unfortunately, like so many 20th-century analytic philosophers, he confused matters by introducing language into the picture, evidently due to a mistaken Whorfian belief that language is so fundamental to human thought that any discussion of human concepts must be a discussion about language.
(Incidentally, one of the things that most impressed me about Eliezer’s Sequences was that he seemed to have something close to the correct theory of language, which is exceedingly rare.)
(We can of course draw any correspondence we like, but it need not represent any objective fact about the territory.)
I’m not sure how you can appeal to map-territory talk if you do not allow language to refer to things. All the maps that we can share with one another are made of language. You apparently don’t believe that the word “Chicago” on a literal map refers to the physical city with that name. How then do you understand the map-territory metaphor to work? And, without the conventional “referentialist” understanding of language (including literal and metaphorical maps and territories), how do you even state the problem of the Mind-Projection Fallacy?
If you say the word “chair”, this physical action of yours causes a certain pattern of neurons to be stimulated in my brain, which bears a similarity relationship to a pattern of neurons in your brain.
It is hard for me to make sense of this paragraph when I gather that its writer doesn’t believe that he is referring to any actual neurons when he tells this story about what “neurons” are doing.
For philosophical purposes, there is no fact of the matter about whether the pattern of neurons being stimulated in my brain is “correct” or not; there are only greater and lesser degrees of similarity between the stimulation patterns occurring in my brain when I hear the word and those occurring in yours when you say it.
Suppose that you attempt an arithmetic computation in your head, and you do not communicate this fact with anyone else. Is it at all meaningful to ask whether your arithmetic computation was correct?
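For concreteness, a toy example (with arbitrary numbers of my own) of the kind of private computation the question has in mind:

```python
# Toy example (arbitrary numbers): a computation done entirely "in the head"
# and never communicated. Its correctness is judged against arithmetic itself,
# not against the similarity of anyone's neurons to anyone else's.
privately_concluded = 418    # what I silently decided 17 * 24 comes to
actual = 17 * 24             # 408
print(privately_concluded == actual)  # False: the uncommunicated computation was still wrong
```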
(Incidentally, one of the things that most impressed me about Eliezer’s Sequences was that he seemed to have something close to the correct theory of language, which is exceedingly rare.)
Eliezer cites Putnam’s XYZ argument approvingly in Heat vs. Motion. A quote:
I should note, in fairness to philosophers, that there are philosophers who have said these things. For example, Hilary Putnam, writing on the “Twin Earth” thought experiment:
Once we have discovered that water (in the actual world) is H2O, nothing counts as a possible world in which water isn’t H2O. In particular, if a “logically possible” statement is one that holds in some “logically possible world”, it isn’t logically possible that water isn’t H2O.
On the other hand, we can perfectly well imagine having experiences that would convince us (and that would make it rational to believe that) water isn’t H2O. In that sense, it is conceivable that water isn’t H2O. It is conceivable but it isn’t logically possible! Conceivability is no proof of logical possibility.
Hilary Putnam’s “Twin Earth” thought experiment, where water is not H2O but some strange other substance denoted XYZ, otherwise behaving much like water, and the subsequent philosophical debate, helps to highlight this issue. “Snow” doesn’t have a logical definition known to us—it’s more like an empirically determined pointer to a logical definition. This is true even if you believe that snow is ice crystals is low-temperature tiled water molecules. The water molecules are made of quarks. What if quarks turn out to be made of something else? What is a snowflake, then? You don’t know—but it’s still a snowflake, not a fire hydrant.
ETA: The Heat vs. Motion post has a pretty explicit statement of Putnam’s thesis in Eliezer’s own words:
The words “heat” and “kinetic energy” can be said to “refer to” the same thing, even before we know how heat reduces to motion, in the sense that we don’t know yet what the reference is, but the references are in fact the same. You might imagine an Idealized Omniscient Science Interpreter that would give the same output when we typed in “heat” and “kinetic energy” on the command line.
I talk about the Science Interpreter to emphasize that, to dereference the pointer, you’ve got to step outside cognition. The end result of the dereference is something out there in reality, not in anyone’s mind. So you can say “real referent” or “actual referent”, but you can’t evaluate the words locally, from the inside of your own head.
(Bolding added.) Wouldn’t this be an example of “think[ing] of meaning as being a kind of correspondence between words and either things or concepts”?
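To make the “pointer” metaphor concrete: a minimal sketch, assuming we model the Science Interpreter as a simple lookup from words to referents (the class and names below are illustrative stand-ins, not anything from the post):

```python
# Illustrative sketch (all names are placeholders): two different words
# dereference to the very same referent, so they "refer to the same thing"
# even for speakers who do not yet know that they do.
class Referent:
    """Stands in for the thing out in reality, not for anyone's concept of it."""
    def __init__(self, description):
        self.description = description

molecular_motion = Referent("mean kinetic energy of molecular motion")

# The idealized interpreter maps words onto referents.
science_interpreter = {
    "heat": molecular_motion,
    "kinetic energy": molecular_motion,
}

# Same output for both inputs: the references are in fact the same object.
print(science_interpreter["heat"] is science_interpreter["kinetic energy"])  # True
```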
You’ve probably thought more about this topic than I have, but it seems to me that words can at least be approximated as abstract referential entities, rather than seen only as a means of causing neuron stimulation in others. Using Putnam’s proposed theory of meaning, I can build a robot that would bring me a biological-cat when I say “please bring me a cat”, and bring the twin-Earth me a robot-cat when he says “please bring me a cat”, without having to make the robot simulate a human’s neural response to the acoustic vibration “cat”. That seems enough to put Putnam outside the category of “dead wrong”, as opposed to, perhaps, “claiming too much”?
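A minimal sketch of the sort of robot being described, assuming (as a gloss, not Putnam’s own machinery) that the referent of “cat” is fixed by the speaker’s world and causal history rather than by simulating the speaker’s neurons:

```python
# Illustrative sketch (worlds and referents are Twin-Earth placeholders):
# resolve "cat" against the speaker's world and causal history, not against
# a simulation of the speaker's brain states.
REFERENT_OF_CAT = {
    "Earth": "biological cat",    # the kind this word has always been causally tied to here
    "Twin Earth": "robot cat",    # the kind the twin's word has always tracked
}

def fetch(request, speakers_world):
    """Return the thing the speaker's word 'cat' actually refers to."""
    if "cat" in request:
        return REFERENT_OF_CAT[speakers_world]
    raise ValueError("request not understood")

print(fetch("please bring me a cat", "Earth"))       # biological cat
print(fetch("please bring me a cat", "Twin Earth"))  # robot cat
```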
I may be a bit in over my head here, but I also don’t see a strong distinction between saying “Assume on Twin Earth that water is XYZ” and saying “Omega creates a world where...” Isn’t the point of a thought experiment to run with the hypothetical and trace out its implications? Yes, care must be taken not to over-infer from the result of that to a real system that may not match it, but how is this news? I seem to recall some folks (m’self included) finding that squicky with regard to “Torture vs Dust Specks”—if you stop resisting the question and just do the math the answer is obvious enough, but that doesn’t imply one believes that the scenario maps to a realizable condition.
I may just be confused here, but superficially it looks like a failure to apply the principle of “stop resisting the hypothetical” evenly.
I do worry that thought experiments involving Omega can lead decision theory research down wrong paths (for example by giving people misleading intuitions), and try to make sure the ones I create or pay attention to are not just arbitrary thought experiments but illuminate some aspect of real world decision making problems that we (or an AI) might face. Unfortunately, this relies largely on intuition and it’s hard to say what exactly is a valid or useful thought experiment and what isn’t, except maybe in retrospect.
That’s an interesting solution to the problem of translation (how do I know if I’ve got the meanings of the words right?) you’ve got there: just measure what’s going on in the respective participants’ brains! ;)
There are two reasons why you might not want to work at this level. Firstly, thinking about translation again, if I were translating the language of an alien species, their brain-equivalent would probably be sufficiently different that looking for neurological similarities would be hopeless. Secondly, it’s just easier to work at a higher level of abstraction, and it seems like we’ve got at least part of a system for doing that already: you can see it in action when people actually do talk about meanings etc. Perhaps it’s worth trying to make that work before we pronounce the whole affair worthless?
The real danger of thought experiments, including this one of Putnam’s, is that fundamental assumptions may be wrong.
See also Reductive Reference.