Mary certainly experiences something new, but does she learn something new?
That’s the question. If you don’t have an answer, you are basically comparing something unknown to something else unknown.
Maybe for humans. Since we use empathy to project our own experiences onto those of others, humans tend to learn something new when they feel something new.
What’s the relevance of empathy? If you learn what something is like for you, subjectively, then I suppose empathy will tell you, in addition, what it feels like for others. But that is predicated on your having the novel subjective knowledge in the first place; it is not an alternative to it.
If we already had perfect knowledge of the other, it’s not clear that we learn anything new, even when we feel something new.
Mary is posited as having perfect objective knowledge, and only objective knowledge. Whether that encompasses whatever there is to subjective knowledge is the whole question.
Humans also have a distinction between alief and belief that seems to map closely here. Most people believe that stoves are hot and that torture is painful; however, they’d only alieve these things if they had experienced one or the other. So part of experiencing qualia might be moving things to the alief level.
So what would we say about a Mary who has never touched anything hot, who has a full objective understanding of what hotness is and of which kinds of items are hot, and who has trained her instincts to recoil in the correct way from hot objects? It would seem that that kind of Mary (objective knowledge plus correct aliefs) would arguably learn nothing from touching a hot stove.
It also strikes me as interesting that the argument is made only about totally new sensations or qualia. When I’m looking at something red, as I am right now, I’m experiencing the qualia, yet not learning anything new. So any gain of information that Mary has upon seeing red for the first time can only be self-knowledge.
So what would we say about a Mary who has never touched anything hot, who has a full objective understanding of what hotness is and of which kinds of items are hot, and who has trained her instincts to recoil in the correct way from hot objects? It would seem that that kind of Mary (objective knowledge plus correct aliefs) would arguably learn nothing from touching a hot stove.
The argument could go through if it could be argued that aliefs are the only thing anyone gets from novel experiences.
When I’m looking at something red, as I am right now, I’m experiencing the qualia, yet not learning anything new. So any gain of information that Mary has upon seeing red for the first time can only be self-knowledge.
I don’t follow that. You don’t learn anything new the second or third time you are told that Oslo is the capital of Norway. What’s that got to do with self-knowledge?
The argument could go through if it could be argued that aliefs are the only thing anyone gets from novel experiences.
This gets tricky. Suppose Mary has all the aliefs of hot pain, and also has the knowledge and aliefs of the standard pain types. Then it would seem she learns nothing from the experience. She wouldn’t even say “oh, hot pain is kind of a mix of 70% cold pain and 30% sharp pain” (or whatever), because she’d already know that fact from objective observations.
However, this is a case where Mary could use her own past experience to model or imagine experiencing hot pain ahead of time. So it’s not clear what’s really going on here in terms of knowledge.
If you look at it in terms of objectively confirmable criteria, Mary may not seem to learn much, but looking at things that way is somewhat question-begging.
Can you clarify? It seems that there are two clear cases: a) Mary has the aliefs of pain and the necessary background to imagine the experience. She learns nothing by experiencing it herself (“yep, just as expected”). b) Mary has no aliefs and cannot imagine the experience. She learns something.
Then we get into odd situations where she has the aliefs but not the imagination, or vice versa. In those cases she does learn something (maybe?), but they are odd situations for a human to be in.
My impression here is that as we investigate the problem further, the issue will dissolve. We’ll confront questions like “can a sufficiently imaginative human use objective observations to replicate within themselves the subjective state that comes from experiencing something?” and end up with a better understanding of Mary’s Room, new qualia, the role aliefs play in knowledge, and so on, but the case against physicalism will be gone.
If I ever have the time, I’ll try and work through that thoroughly.
The point about learning is not essential; it is just there to dramatise the real point, which is the existence of subjective states.
Promissory note accepted.
Stop being a sophist about terminology. This sequence showed how a purely physical world could produce something; I couldn’t care less whether you choose to call that something “subjective knowledge” or not. The point is that it doesn’t disprove physicalism.
If you want to make an argument, make an argument about the facts. To discuss a category without discussing its purpose(s) is to lie. Or, as Scott put it, “Categories are made for man and not man for the categories.”
The answer to “it’s been explained in the sequences” is usually “read the comments”; in this case, RobbB’s lengthy quote from Chalmers:
“[I]magine that we have created computational intelligence in the form of an autonomous agent that perceives its environment and has the capacity to reflect rationally on what it perceives. What would such a system be like? Would it have any concept of consciousness, or any related notions?
“To see that it might, note that on the most natural design, such a system would surely have some concept of self — for instance, it would have the ability to distinguish itself from the rest of the world, and from other entities resembling it. It also seems reasonable that such a system would be able to access its own cognitive contents much more directly than it could those of others. If it had the capacity to reflect, it would presumably have a certain direct awareness of its own thought contents, and could reason about that fact. Furthermore, such a system would most naturally have direct access to perceptual information, much as our own cognitive system does.
“When we asked the system what perception was like, what would it say? Would it say, “It’s not like anything”? Might it say, “Well, I know there is a red tricycle over there, but I have no idea how I know it. The information just appeared in my database”? Perhaps, but it seems unlikely. A system designed this way would be curiously indirect. It seems much more likely that it would say, “I know there is a red tricycle because I see it there.” When we ask it in turn how it knows that it is seeing the tricycle, the answer would very likely be something along the lines of “I just see it.”
“It would be an odd system that replied, “I know I see it because sensors 78-84 are activated in such-and-such a way.” As Hofstadter (1979) points out, there is no need to give a system such detailed access to its low-level parts. Even Winograd’s program SHRDLU (1972) did not have knowledge about the code it was written in, despite the fact that it could perceive a virtual world, make inferences about the world, and even justify its knowledge to a limited degree. Such extra knowledge would seem to be quite unnecessary, and would only complicate the processes of awareness and inference.
“Instead, it seems likely that such a system would have the same kind of attitude toward its perceptual contents as we do toward ours, with its knowledge of them being direct and unmediated, at least as far as the system is concerned. When we ask how it knows that it sees the red tricycle, an efficiently designed system would say, “I just see it!” When we ask how it knows that the tricycle is red, it would say the same sort of thing that we do: “It just looks red.” If such a system were reflective, it might start wondering about how it is that things look red, and about why it is that red just is a particular way, and blue another. From the system’s point of view it is just a brute fact that red looks one way, and blue another. Of course from our vantage point we know that this is just because red throws the system into one state, and blue throws it into another; but from the machine’s point of view this does not help.
“As it reflected, it might start to wonder about the very fact that it seems to have some access to what it is thinking, and that it has a sense of self. A reflective machine that was designed to have direct access to the contents of its perception and thought might very soon start wondering about the mysteries of consciousness (Hofstadter 1985a gives a rich discussion of this idea): “Why is it that heat feels this way?”; “Why am I me, and not someone else?”; “I know my processes are just electronic circuits, but how does this explain my experience of thought and perception?”
“Of course, the speculation I have engaged in here is not to be taken too seriously, but it helps to bring out the naturalness of the fact that we judge and claim that we are conscious, given a reasonable design. It would be a strange kind of cognitive system that had no idea what we were talking about when we asked what it was like to be it. The fact that we think and talk about consciousness may be a consequence of very natural features of our design, just as it is with these systems. And certainly, in the explanation of why these systems think and talk as they do, we will never need to invoke full-fledged consciousness. Perhaps these systems are really conscious and perhaps they are not, but the explanation works independently of this fact. Any explanation of how these systems function can be given solely in computational terms. In such a case it is obvious that there is no room for a ghost in the machine to play an explanatory role.
“All this is to say (expanding on a claim in Chapter 1) that consciousness is surprising, but claims about consciousness are not. Although consciousness is a feature of the world that we would not predict from the physical facts, the things we say about consciousness are a garden-variety cognitive phenomenon. Somebody who knew enough about cognitive structure would immediately be able to predict the likelihood of utterances such as “I feel conscious, in a way that no physical object could be,” or even Descartes’s “Cogito ergo sum.” In principle, some reductive explanation in terms of internal processes should render claims about consciousness no more deeply surprising than any other aspect of behavior. [...]
“At this point a natural thought has probably occurred to many readers, especially those of a reductionist bent: If one has explained why we say we are conscious, and why we judge that we are conscious, haven’t we explained all that there is to be explained? Why not simply give up on the quest for a theory of consciousness, declaring consciousness itself a chimera? Even better, why not declare one’s theory of why we judge that we are conscious to be a theory of consciousness in its own right? It might well be suggested that a theory of our judgments is all the theory of consciousness that we need. [...]
“This is surely the single most powerful argument for a reductive or eliminative view of consciousness. But it is not enough. [...] Explaining our judgments about consciousness does not come close to removing the mysteries of consciousness. Why? Because consciousness is itself an explanandum. The existence of God was arguably hypothesized largely in order to explain all sorts of evident facts about the world, such as its orderliness and its apparent design. When it turns out that an alternative hypothesis can explain the evidence just as well, then there is no need for the hypothesis of God. There is no separate phenomenon God that we can point to and say: that needs explaining. At best, there is indirect evidence. [...]
“But consciousness is not an explanatory construct, postulated to help explain behavior or events in the world. Rather, it is a brute explanandum, a phenomenon in its own right that is in need of explanation. It therefore does not matter if it turns out that consciousness is not required to do any work in explaining other phenomena. Our evidence for consciousness never lay with these other phenomena in the first place. Even if our judgments about consciousness are reductively explained, all this shows is that our judgments can be explained reductively. The mind-body problem is not that of explaining our judgments about consciousness. If it were, it would be a relatively trivial problem. Rather, the mind-body problem is that of explaining consciousness itself. If the judgments can be explained without explaining consciousness, then that is interesting and perhaps surprising, but it does not remove the mind-body problem.
“To take the line that explaining our judgments about consciousness is enough [...] is most naturally understood as an eliminativist position about consciousness [...]. As such it suffers from all the problems that eliminativism naturally faces. In particular, it denies the evidence of our own experience. This is the sort of thing that can only be done by a philosopher — or by someone else tying themselves in intellectual knots. Our experiences of red do not go away upon making such a denial. It is still like something to be us, and that is still something that needs explanation. [...]
“There is a certain intellectual appeal to the position that explaining phenomenal judgments is enough. It has the feel of a bold stroke that cleanly dissolves all the problems, leaving our confusion lying on the ground in front of us exposed for all to see. Yet it is the kind of “solution” that is satisfying only for about half a minute. When we stop to reflect, we realize that all we have done is to explain certain aspects of behavior. We have explained why we talk in certain ways, and why we are disposed to do so, but we have not remotely come to grips with the central problem, namely conscious experience itself. When thirty seconds are up, we find ourselves looking at a red rose, inhaling its fragrance, and wondering: “Why do I experience it like this?” And we realize that this explanation has nothing to say about the matter. [...]
“This line of argument is perhaps the most interesting that a reductionist or eliminativist can take — if I were a reductionist, I would be this sort of reductionist — but at the end of the day it suffers from the problem that all such positions face: it does not explain what needs to be explained. Tempting as this position is, it ends up failing to take the problem seriously. The puzzle of consciousness cannot be removed by such simple means.”
—David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (1996)
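(As an aside: the agent design Chalmers sketches above is concrete enough to caricature in code. Here is a toy sketch, going only on the quoted description; every name in it is hypothetical. The reflective layer can read the agent’s high-level percepts but not the low-level sensor states that produced them, so “I just see it” is the only honest answer it can give.)

```python
# Toy model of the reflective agent described in the quoted passage.
# All names here are hypothetical illustrations, not anyone's actual system.

class ReflectiveAgent:
    def __init__(self):
        # Low-level state: deliberately private. The reflective layer has
        # no access to it, mirroring Hofstadter's point that a system needs
        # no detailed access to its own low-level parts.
        self._sensor_activations = {"sensor_78": 0.91, "sensor_84": 0.87}
        # High-level perceptual contents: the only thing the reflective
        # layer can read.
        self.percepts = [("tricycle", "red")]

    def answer(self, question):
        if question == "What do you see?":
            obj, color = self.percepts[0]
            return f"I see a {color} {obj}."
        if question == "How do you know you see it?":
            # It cannot cite sensors 78-84; from its own point of view
            # the percept is a brute given.
            return "I just see it."
        if question == "Why does red look the way it does?":
            return "It just looks that way. That's a brute fact, to me."
        return "I don't understand the question."

agent = ReflectiveAgent()
for q in ("What do you see?",
          "How do you know you see it?",
          "Why does red look the way it does?"):
    print(q, "->", agent.answer(q))
```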
Yeah, I thought you might go to Chalmers’ Zombie Universe argument, since the Mary’s Room argument is an utter failure and the linked sequence shows this clearly. But then phrasing your argument as a defense of Mary’s Room would be somewhat dishonest, and linking the paper a waste of everyone’s time; it adds nothing to the argument.
Now we’ve almost reached the actual argument, but this wall of text still has a touch of sophistry to it. Plainly none of us on the other side agree that the Martha’s Room response “denies the evidence of our own experience.” How does it do so? What does it deny? My intuition tells me that Martha experiences the color red and the sense of “ineffable” learning despite being purely physical. Does Chalmers have a response except to say that she doesn’t, according to his own intuition?
Chalmers does not mention zombies in the quoted argument, in view of which your comment would seem to be a smear by association.
Saying that doesn’t make it so, and putting it in bold type doesn’t make it so.
You have that the wrong way around. The copied passage is a response to the sequence. It needs to be answered itself.
You also don’t prove physicalism by assuming physicalism.
Mary’s Room is an attempt to disprove physicalism. If an example such as Martha’s Room shows how physicalism can produce the same results, the argument fails. If, on the other hand, one needs an entirely different argument to show that this doesn’t happen, and this other argument works just as well on its own (as Chalmers apparently thinks), then Mary’s Room adds nothing and you should forthrightly admit this. Anything else would be like trying to save an atheist argument about talking snakes in the Bible by turning it into an argument about cognitive science, the supernatural, and attempts to formalize Occam’s Razor.
The Zombie Universe argument seems like the only extant dualist claim worth considering, because Chalmers at least tries to argue that (contrary to my intuition) a physical agent similar to Martha might not have qualia. But even this argument just seems to end in dueling intuitions. (If you can’t go any further, then we should mistrust our intuitions and trust the abundant evidence that our reality is somehow made of math.)
Possibly one could construct a better argument by starting with an attempt to fix Solomonoff Induction.