Putnam perhaps chose poor examples, but his thought-experiment works under any situation where we have limited knowledge.
Instead of Twin Earth, say that I have a jar of clear liquid on my desk. Working off just that information (and the fact that most of the clear liquid humans keep around is water), people start calling the thing on my desk a “Jar of Water.” That is, until someone knocks it over and it starts to eat through my desk: obviously, that wasn’t water.
Putnam doesn’t think that XYZ will look like water in every circumstance: his thought-experiment includes the idea that we can distinguish between XYZ and water with, say, an electron microscope. So obviously there are some properties of XYZ that are not the same as water’s, or else they really would look the same under every possible circumstance.
The mistake some philosophers make is assuming that the “thought-experiment” stuff looks like the “real” stuff in every possible circumstance. If Putnam had said that the difference between H2O and XYZ was purely epiphenomenal or something like that, he’d be obviously wrong. For instance, if we looked at XYZ and it “fooled” us into thinking it was H2O (say, if we broke apart XYZ and got a 2:1 ratio of hydrogen to oxygen and no other parts), then Putnam’s argument wouldn’t hold. (This is where p-zombies fail: it is stipulated that there is no experiment that can tell the difference.)
Putnam’s main point was that we can be mistaken about what a thing is. Moreover, when we have two things (call them A and B) that we think are of the same type, we can not only be mistaken that A and B are of the same type, but A could fit the type while B does not.
If this seems incredibly basic… it is. People make a big deal about it because prior to Putnam (and sometimes afterward) philosophers were saying crazy things like “the meanings in our heads don’t have to refer to anything in the world,” which essentially translates to “I can make a word mean anything I want!”
I agree with this to the extent that we shouldn’t make the mistake of thinking that just because we have a model of something in our head, our model corresponds to the real world. It’s even stickier, because when a model doesn’t fit, we often keep the words around because they can be useful descriptions of the new thing we’ve found. That can create confusion, especially during a period of transition. (Imagine someone saying that “Water cannot be H2O, because it is necessarily an Aristotelian element.”) But thought experiments are very, very useful, since all a “thought experiment” really is, is using the information already in your head and asking, “Given what I already know, what do I think would happen in this circumstance?”
Putnam’s main point was that we can be mistaken about what a thing is. Moreover, when we have two things (call them A and B) that we think are of the same type, we can not only be mistaken that A and B are of the same type, but A could fit the type while B does not.
I think he’s making a slightly different point. His point is that the reference of a term, which determines whether, say, the sentence “Water is H2O” is true or not, depends on the environment in which that term came to be used. And this could be true even for speakers who were otherwise molecule-for-molecule identical. So just looking inside your head doesn’t tell me enough to figure out whether your utterances of “Water is H2O” are true or not: I need to find out what kind of stuff was watery around you when you learnt that term! Which is the surprising bit.
Yeah, this is basically right. Putnam was defending externalism about mental content, the idea that the content of our mental representations isn’t fully determined by intrinsic facts about our brains. The twin earth thought experiment was meant to be an illustration of how two people could be in identical brain states yet be representing different things. In order to fully determine the content of my mental states, you need to take account of my environment and the way in which I’m related to it.
Another crazy thought experiment meant to illustrate semantic externalism: Suppose a flash of lightning strikes a distant swamp and by coincidence leads to the creation of a swampman who is a molecule-for-molecule duplicate of me. By hypothesis, the swampman has the exact same brain states as I do. But does the swampman have the same beliefs as I do? Semantic externalists would say no. I have a belief that my mother is an editor. Swampman cannot have this belief because there is no appropriate causal connection between himself and my mother. Sure, he has the same brain state that instantiates this belief in my head. But what gives the belief in my head its content, what makes it a belief about my mother, is the causal history of this brain state, a causal history swampman doesn’t share.
Putnam was not really arguing against the view that “the meanings in our heads don’t have to refer to anything in the world”. He was arguing against what he called “magic theories of reference”, theories of reference according to which the content of a representation is intrinsic to that representation. For instance, a magic theory of reference would say that swampman does have a belief about my mother, since his brain state is identical to mine. Or if an ant just happens to walk around on a beach in such a manner that it produces a trail we would recognize as a likeness of Winston Churchill, then that is in fact a representation of Churchill, irrespective of the fact that the ant has never heard of Churchill and does not have the cognitive wherewithal to intentionally represent him even if it had heard of him.