The answer to “it’s been explained in the sequences” is usually “read the comments...”, in this case RobbB’s lengthy quote from Chalmers.
“[I]magine that we have created computational intelligence in the form of an autonomous agent that perceives its environment and has the capacity to reflect rationally on what it perceives. What would such a system be like? Would it have any concept of consciousness, or any related notions?
“To see that it might, note that on the most natural design, such a system would surely have some concept of self — for instance, it would have the ability to distinguish itself from the rest of the world, and from other entities resembling it. It also seems reasonable that such a system would be able to access its own cognitive contents much more directly than it could those of others. If it had the capacity to reflect, it would presumably have a certain direct awareness of its own thought contents, and could reason about that fact. Furthermore, such a system would most naturally have direct access to perceptual information, much as our own cognitive system does.
“When we asked the system what perception was like, what would it say? Would it say, “It’s not like anything”? Might it say, “Well, I know there is a red tricycle over there, but I have no idea how I know it. The information just appeared in my database”? Perhaps, but it seems unlikely. A system designed this way would be curiously indirect. It seems much more likely that it would say, “I know there is a red tricycle because I see it there.” When we ask it in turn how it knows that it is seeing the tricycle, the answer would very likely be something along the lines of “I just see it.”
“It would be an odd system that replied, “I know I see it because sensors 78-84 are activated in such-and-such a way.” As Hofstadter (1979) points out, there is no need to give a system such detailed access to its low-level parts. Even Winograd’s program SHRDLU (1972) did not have knowledge about the code it was written in, despite the fact that it could perceive a virtual world, make inferences about the world, and even justify its knowledge to a limited degree. Such extra knowledge would seem to be quite unnecessary, and would only complicate the processes of awareness and inference.
“Instead, it seems likely that such a system would have the same kind of attitude toward its perceptual contents as we do toward ours, with its knowledge of them being direct and unmediated, at least as far as the system is concerned. When we ask how it knows that it sees the red tricycle, an efficiently designed system would say, “I just see it!” When we ask how it knows that the tricycle is red, it would say the same sort of thing that we do: “It just looks red.” If such a system were reflective, it might start wondering about how it is that things look red, and about why it is that red just is a particular way, and blue another. From the system’s point of view it is just a brute fact that red looks one way, and blue another. Of course from our vantage point we know that this is just because red throws the system into one state, and blue throws it into another; but from the machine’s point of view this does not help.
“As it reflected, it might start to wonder about the very fact that it seems to have some access to what it is thinking, and that it has a sense of self. A reflective machine that was designed to have direct access to the contents of its perception and thought might very soon start wondering about the mysteries of consciousness (Hofstadter 1985a gives a rich discussion of this idea): “Why is it that heat feels this way?”; “Why am I me, and not someone else?”; “I know my processes are just electronic circuits, but how does this explain my experience of thought and perception?”
“Of course, the speculation I have engaged in here is not to be taken too seriously, but it helps to bring out the naturalness of the fact that we judge and claim that we are conscious, given a reasonable design. It would be a strange kind of cognitive system that had no idea what we were talking about when we asked what it was like to be it. The fact that we think and talk about consciousness may be a consequence of very natural features of our design, just as it is with these systems. And certainly, in the explanation of why these systems think and talk as they do, we will never need to invoke full-fledged consciousness. Perhaps these systems are really conscious and perhaps they are not, but the explanation works independently of this fact. Any explanation of how these systems function can be given solely in computational terms. In such a case it is obvious that there is no room for a ghost in the machine to play an explanatory role.
“All this is to say (expanding on a claim in Chapter 1) that consciousness is surprising, but claims about consciousness are not. Although consciousness is a feature of the world that we would not predict from the physical facts, the things we say about consciousness are a garden-variety cognitive phenomenon. Somebody who knew enough about cognitive structure would immediately be able to predict the likelihood of utterances such as “I feel conscious, in a way that no physical object could be,” or even Descartes’s “Cogito ergo sum.” In principle, some reductive explanation in terms of internal processes should render claims about consciousness no more deeply surprising than any other aspect of behavior. [...]
“At this point a natural thought has probably occurred to many readers, especially those of a reductionist bent: If one has explained why we say we are conscious, and why we judge that we are conscious, haven’t we explained all that there is to be explained? Why not simply give up on the quest for a theory of consciousness, declaring consciousness itself a chimera? Even better, why not declare one’s theory of why we judge that we are conscious to be a theory of consciousness in its own right? It might well be suggested that a theory of our judgments is all the theory of consciousness that we need. [...]
“This is surely the single most powerful argument for a reductive or eliminative view of consciousness. But it is not enough. [...] Explaining our judgments about consciousness does not come close to removing the mysteries of consciousness. Why? Because consciousness is itself an explanandum. The existence of God was arguably hypothesized largely in order to explain all sorts of evident facts about the world, such as its orderliness and its apparent design. When it turns out that an alternative hypothesis can explain the evidence just as well, then there is no need for the hypothesis of God. There is no separate phenomenon God that we can point to and say: that needs explaining. At best, there is indirect evidence. [...]
“But consciousness is not an explanatory construct, postulated to help explain behavior or events in the world. Rather, it is a brute explanandum, a phenomenon in its own right that is in need of explanation. It therefore does not matter if it turns out that consciousness is not required to do any work in explaining other phenomena. Our evidence for consciousness never lay with these other phenomena in the first place. Even if our judgments about consciousness are reductively explained, all this shows is that our judgments can be explained reductively. The mind-body problem is not that of explaining our judgments about consciousness. If it were, it would be a relatively trivial problem. Rather, the mind-body problem is that of explaining consciousness itself. If the judgments can be explained without explaining consciousness, then that is interesting and perhaps surprising, but it does not remove the mind-body problem.
“To take the line that explaining our judgments about consciousness is enough [...] is most naturally understood as an eliminativist position about consciousness [...]. As such it suffers from all the problems that eliminativism naturally faces. In particular, it denies the evidence of our own experience. This is the sort of thing that can only be done by a philosopher — or by someone else tying themselves in intellectual knots. Our experiences of red do not go away upon making such a denial. It is still like something to be us, and that is still something that needs explanation. [...]
“There is a certain intellectual appeal to the position that explaining phenomenal judgments is enough. It has the feel of a bold stroke that cleanly dissolves all the problems, leaving our confusion lying on the ground in front of us exposed for all to see. Yet it is the kind of “solution” that is satisfying only for about half a minute. When we stop to reflect, we realize that all we have done is to explain certain aspects of behavior. We have explained why we talk in certain ways, and why we are disposed to do so, but we have not remotely come to grips with the central problem, namely conscious experience itself. When thirty seconds are up, we find ourselves looking at a red rose, inhaling its fragrance, and wondering: “Why do I experience it like this?” And we realize that this explanation has nothing to say about the matter. [...]
“This line of argument is perhaps the most interesting that a reductionist or eliminativist can take — if I were a reductionist, I would be this sort of reductionist — but at the end of the day it suffers from the problem that all such positions face: it does not explain what needs to be explained. Tempting as this position is, it ends up failing to take the problem seriously. The puzzle of consciousness cannot be removed by such simple means.”
—David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (1996)
Yeah, I thought you might go to Chalmers’ Zombie Universe argument, since the Mary’s Room argument is an utter failure and the linked sequence shows this clearly. But then phrasing your argument as a defense of Mary’s Room would be somewhat dishonest, and linking the paper a waste of everyone’s time; it adds nothing to the argument.
Now we’ve almost reached the actual argument, but this wall of text still has a touch of sophistry to it. Plainly none of us on the other side agree that the Martha’s Room response “denies the evidence of our own experience.” How does it do so? What does it deny? My intuition tells me that Martha experiences the color red and the sense of “ineffable” learning despite being purely physical. Does Chalmers have a response except to say that she doesn’t, according to his own intuition?
Mary’s Room is an attempt to disprove physicalism. If an example such as Martha’s Room shows how physicalism can produce the same results, the argument fails. If, on the other hand, one needs an entirely different argument to show that doesn’t happen, and this other argument works just as well on its own (as Chalmers apparently thinks), then Mary’s Room adds nothing and you should forthrightly admit this. Anything else would be like trying to save an atheist argument about talking snakes in the Bible by turning it into an argument about cognitive science, the supernatural, and attempts to formalize Occam’s Razor.
The Zombie Universe Argument seems like the only extant dualist claim worth considering, because Chalmers at least tries to argue that (contrary to my intuition) a physical agent similar to Martha might not have qualia. But even this argument just seems to end in dueling intuitions. (If we can’t go any further, then we should mistrust our intuitions and trust the abundant evidence that our reality is somehow made of math.)
Chalmers does not mention zombies in the quoted argument, in view of which your comment would seem to be a smear by association.
Saying that doesn’t make it so, and putting it in bold type doesn’t make it so.
You have that the wrong way around. The copied passage is a response to the sequence. It needs to be answered itself.
You also don’t prove physicalism by assuming physicalism.
Possibly one could construct a better argument by starting with an attempt to fix Solomonoff Induction.