(We can of course draw any correspondence we like, but it need not represent any objective fact about the territory.)
I’m not sure how you can appeal to map-territory talk if you do not allow language to refer to things. All the maps that we can share with one another are made of language. You apparently don’t believe that the word “Chicago” on a literal map refers to the physical city with that name. How then do you understand the map-territory metaphor to work? And, without the conventional “referentialist” understanding of language (including literal and metaphorical maps and territories), how do you even state the problem of the Mind-Projection Fallacy?
If you say the word “chair”, this physical action of yours causes a certain pattern of neurons to be stimulated in my brain, which bears a similarity relationship to a pattern of neurons in your brain.
It is hard for me to make sense of this paragraph when I gather that its writer doesn’t believe that he is referring to any actual neurons when he tells this story about what “neurons” are doing.
For philosophical purposes, there is no fact of the matter about whether the pattern of neurons being stimulated in my brain is “correct” or not; there are only greater and lesser degrees of similarity between the stimulation patterns occurring in my brain when I hear the word and those occurring in yours when you say it.
Suppose that you attempt an arithmetic computation in your head, and you do not communicate this fact to anyone else. Is it at all meaningful to ask whether your arithmetic computation was correct?
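To be concrete about what the fact of the matter would consist in, here is a minimal Python sketch (purely my own illustration, not anything from the exchange above):

```python
# A "private" mental computation: someone concludes in their head that
# 27 * 14 = 368 and never tells anyone. The claim exists only in one mind.
claimed_result = 368

# Whether that computation was correct is settled by arithmetic itself,
# not by whether a second party ever hears about it.
actual_result = 27 * 14  # evaluates to 378

# The correctness of the private computation is an objective fact:
print(claimed_result == actual_result)  # False: the mental arithmetic was wrong
```

No communication, and no similarity between anyone's neural stimulation patterns, enters into the check.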
(Incidentally, one of the things that most impressed me about Eliezer’s Sequences was that he seemed to have something close to the correct theory of language, which is exceedingly rare.)
Eliezer cites Putnam’s XYZ argument approvingly in Heat vs. Motion. A quote:
I should note, in fairness to philosophers, that there are philosophers who have said these things. For example, Hilary Putnam, writing on the “Twin Earth” thought experiment:
Once we have discovered that water (in the actual world) is H2O, nothing counts as a possible world in which water isn’t H2O. In particular, if a “logically possible” statement is one that holds in some “logically possible world”, it isn’t logically possible that water isn’t H2O.
On the other hand, we can perfectly well imagine having experiences that would convince us (and that would make it rational to believe that) water isn’t H2O. In that sense, it is conceivable that water isn’t H2O. It is conceivable but it isn’t logically possible! Conceivability is no proof of logical possibility.
See also Reductive Reference:
Hilary Putnam’s “Twin Earth” thought experiment, where water is not H2O but some strange other substance denoted XYZ, otherwise behaving much like water, and the subsequent philosophical debate, helps to highlight this issue. “Snow” doesn’t have a logical definition known to us—it’s more like an empirically determined pointer to a logical definition. This is true even if you believe that snow is ice crystals is low-temperature tiled water molecules. The water molecules are made of quarks. What if quarks turn out to be made of something else? What is a snowflake, then? You don’t know—but it’s still a snowflake, not a fire hydrant.
ETA: The Heat vs. Motion post has a pretty explicit statement of Putnam’s thesis in Eliezer’s own words:
The words “heat” and “kinetic energy” can be said to “refer to” the same thing, even before we know how heat reduces to motion, in the sense that we don’t know yet what the reference is, but the references are in fact the same. You might imagine an Idealized Omniscient Science Interpreter that would give the same output when we typed in “heat” and “kinetic energy” on the command line.
I talk about the Science Interpreter to emphasize that, to dereference the pointer, you’ve got to step outside cognition. The end result of the dereference is something out there in reality, not in anyone’s mind. So you can say “real referent” or “actual referent”, but you can’t evaluate the words locally, from the inside of your own head.
(Bolding added.) Wouldn’t this be an example of “think[ing] of meaning as being a kind of correspondence between words and either things or concepts”?
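For what it's worth, the pointer/dereference talk can be made concrete with a toy Python sketch. This is purely my own illustration: the Idealized Omniscient Science Interpreter is a thought experiment, not a program, and the mapping below is a stand-in for whatever the territory actually supplies.

```python
# Toy model of the "Idealized Omniscient Science Interpreter":
# words are pointers; the territory supplies the referents.

# The territory: one underlying phenomenon, represented here by a
# single object identity.
mean_molecular_kinetic_energy = object()

# The interpreter's omniscient mapping from words to referents.
# In reality no one possesses this table; that is the point about
# having to "step outside cognition" to dereference.
referent_of = {
    "heat": mean_molecular_kinetic_energy,
    "kinetic energy": mean_molecular_kinetic_energy,
}

# Evaluated "locally", as mere symbols inside a head, the words differ:
print("heat" == "kinetic energy")  # False

# Dereferenced against the territory, they pick out the same thing:
print(referent_of["heat"] is referent_of["kinetic energy"])  # True
```

Evaluated locally, the two words are different symbols; dereferenced, they have the same referent. That sameness is a fact about the mapping's outputs, not about anything inside the speaker's head.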