If Mary knows everything physical about color, then there’s nothing for her to be surprised about when she sees red. If your intuitions tell you otherwise, then your intuitions are wrong.
Not really; it just means that our ability to imagine sensory experiences is underpowered. There are limits to what you can imagine and call up in conscious experience, even of things you have experienced. A person could imagine what it would be like to be betrayed by a friend, and yet still not be able to experience the same “qualia” as they would in the actual situation.
So, you can know precisely which neurons should fire to create a sensation of red (or anything else), and yet not be able to make them fire as a result.
Mere knowledge isn’t sufficient to recreate any experience, but that’s just a fact about the structure and limitations of human brains, not evidence of some special status for qualia. (It’s certainly not an argument for non-materialism.)
That more or less corresponds to the way I break it down, and I’d take it a step further by saying that thinking of the problem this way reduces Mary’s room to a definitional conflict. If we classify the experiential feeling of redness under “everything physical about color”—which is quite viable given a reductionist interpretation of the problem—then Mary by definition knows how it feels. This is probably impossible in practice if Mary has a normal human cognitive architecture, but that’s okay, since we’re working in the magical world of thought experiments where anything goes.
If we don’t, on the other hand, then Mary can quite easily lack experiential knowledge of redness without fear of contradiction, by the process you’ve outlined. It’s only an apparent paradox because of an ambiguity in our formulation of experiential knowledge.
If we classify the experiential feeling of redness under “everything physical about color”—which is quite viable given a reductionist interpretation of the problem—then Mary by definition knows how it feels.
That’s not how reduction works. You don’t just declare a problem to consist only of (known) physics, and then declare it solved. You attempt to understand it in terms of known physics, and that attempt either succeeds or fails. Reductionism is not an a priori truth, or a method guaranteed to succeed. And no reduction of qualia has succeeded. Whether that means we need new explanations, new physics, non-reductionism or dualism is an open question.
I’m not sure you understand what I’m trying to say—or, for that matter, what pjeby was trying to say. Notice how I never used the word “qualia”? That’s because I’m trying to avoid becoming entangled in issues surrounding the reduction of qualia; instead, I’m examining the results of Mary’s room given two mutually exclusive possible assumptions—that such a reduction exists or that it doesn’t—and pointing out that the thought experiment generates results consistent with known physics in either case, provided we keep that assumption consistent within it. That doesn’t reduce qualia as traditionally conceived to known physics, but it does demonstrate that Mary’s room doesn’t provide evidence either way.
Not being able to make the neurons fire doesn’t mean you don’t know how it would feel if they did.
I hate this whole scenario for this “this knowledge is a given, but wait, no it is not” kind of thinking.
Whether or not all the physical knowledge is enough to know qualia is the question and as such it should not be answered in the conclusion of a hypothetical story, and then taken as evidence.
Not being able to make the neurons fire doesn’t mean you don’t know how it would feel if they did.
Huh? That sounds confused to me. As I said, I can “know” how it would feel to be betrayed by a friend, without actually experiencing it. And that difference between “knowing” and “experiencing” is what we’re talking about here.
From what you quoted I thought you were arguing that there was something for her to be surprised about.
Of course there’s something for her to be surprised about. The non-materialists are merely wrong to think this means there’s something mysterious or non-physical about that something.
It may be more accurate to say that when she sees a red object, that generates a feeling of surprise, because her visual cortex is doing something it has never done before. Not that there was ever any information missing—but the surprise still happens as a fact about the brain.
We measure information in terms of surprise, so you’re kind of contradicting yourself there.
The entire “thought experiment” hinges on getting you to accept a false premise: that “knowledge” is of a single kind. It then encourages you to follow this premise through to the seeming contradiction that Mary shouldn’t be able to be surprised. It ignores the critical role of knowledge representation, and is thus a paradox of the form, “If the barber shaves everyone who doesn’t shave themselves, does the barber shave him/herself?” The paradox comes from mixing two levels of knowledge, and pretending they’re the same, in precisely the same way that Mary’s Room does.
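Incidentally, “we measure information in terms of surprise” is literally Shannon’s formalization: an event’s information content is the log of its improbability, so an event predicted with certainty carries zero bits. A minimal sketch (the probabilities are made up for illustration):

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information in bits: I(x) = log2(1/p(x)).
    An event predicted with certainty (p = 1) carries zero bits,
    i.e. there is nothing to be surprised about."""
    return math.log2(1.0 / p)

# If Mary's physical knowledge lets her predict the experience with
# certainty, seeing red is a zero-bit event:
print(self_information(1.0))  # 0.0 bits: no surprise
# If she could only narrow it down to one of two possibilities:
print(self_information(0.5))  # 1.0 bit of surprise
```

On this reading, “no information was missing” and “she was surprised” really are in tension, which is exactly the objection being raised here.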
I mean surprise in the sense of the feeling, which doesn’t have to be justified to be felt. Perhaps a better word is “enlightenment”. Seeing red feels like enlightenment because the brain is put into a state it has never been in before, as a result of which Mary gains the ability (through memory) to put her brain into that state at will.
I think the idea that “what it actually feels like” is knowledge beyond “every physical fact on various levels” is just asserting the conclusion.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We’ve never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we’re also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
I think the Chinese room has a similar problem: a human is not a reliable substrate for computation. We instinctively know that a human can choose to ignore the scribbles on paper, so the Chinese-speaking entity never happens.
I think the idea that “what it actually feels like” is knowledge beyond “every physical fact on various levels” is just asserting the conclusion.
Ah, but what conclusion?
I’m saying, it doesn’t matter whether you assume they’re the same or different. Either way, the whole “experiment” is another stupid definitional argument.
However, materialism does not require us to believe that looking at a menu can make you feel full. So, there’s no reason not to accept the experiment’s premise that Mary experiences something new by seeing red. That’s not where the error comes from.
The error is in assuming that a brain ought to be able to translate knowledge of one kind into another, independent of its physical form. If you buy that implicit premise, then you seem to run into a contradiction.
However, since materialism doesn’t require this premise, there’s no reason to assume it. I don’t, so I see no contradiction in the experiment.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We’ve never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we’re also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
If you think that you can be “smart enough” then you are positing a different brain architecture than the ones human beings have.
But let’s assume that Mary isn’t human. She’s a transhuman, or posthuman, or some sort of alien being.
In order for her to know what red actually feels like, she’d need to be able to create the experience—i.e., have a neural architecture that lets her go, “ah, so it’s that neuron that does ‘red’… let me go ahead and trigger that.”
At this point, we’ve reduced the “experiment” to an absurdity, because now Mary has experienced “red”.
Neither with a plain human architecture, nor with a super-advanced alien one, do we get a place where there is some mysterious non-material thing left over.
I think the Chinese room has a similar problem
Not exactly. It’s an intuition pump, drawing on your intuitive sense that the only thing in the room that could “understand” Chinese is the human… and he clearly doesn’t, so there must not be any understanding going on. If you replace the room with a computer, then the same intuition pump needn’t apply.
For that matter, suppose you replace the Chinese room with a brain filled with individual computing units… then the same “experiment” “proves” that brains can’t possibly “understand” anything!
However, materialism does not require us to believe that looking at a menu can make you feel full.
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
In order for her to know what red actually feels like, she’d need to be able to create the experience—i.e., have a neural architecture that lets her go, “ah, so it’s that neuron that does ‘red’… let me go ahead and trigger that.”
That is the conclusion you’re asserting. I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire. She does not say “oh wow”, she says “ha, nailed it”.
If she has enough memory to store a physical simulation of the relevant parts of her brain, and can trigger that simulation’s red neurons, and can understand the chains of causality, then she already knows what red will look like when she does see it.
Now you might say that in that case Mary has already experienced red, just using a different part of her brain, but I think it’s an automatic consequence of knowing all the physical facts.
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
No matter how much information is on the menu, it’s not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire.
In which case, we’re using different definitions of what it means to know what something is like. In mine, knowing what something is “like” is not the same as actually experiencing it—which means there is room to be surprised, no matter how much specificity there is.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it. Otherwise, we could become frightened upon merely imagining that a bear was in the room with us. (IOW, at least some portion of our architecture has to be able to represent “this experience is imaginary”.)
However, none of this matters in the slightest with regard to dissolving Mary’s Room. I’m simply pointing out that it isn’t necessary to assume perfect knowledge in order to dissolve the paradox. It’s just as easily dissolved by assuming imperfect knowledge.
And all the evidence we have suggests that the knowledge is—and possibly must be—imperfect.
But materialism doesn’t require that this knowledge be perfectable, since to a true materialist, knowledge itself is not separable from a representation, and that representation is allowed (and likely) to be imperfect in any evolved biological brain.
No matter how much information is on the menu, it’s not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
Metaphysics is a restaurant where they give you a thirty thousand page menu, and no food. - Robert M. Pirsig
No matter how much information is on the menu, it’s not going to make you feel full.
“Feeling full” and “seeing red” also jumbles up the question. It is not “would she see red”
In which case, we’re using different definitions of what it means to know what something is like. In mine, knowing what something is “like” is not the same as actually experiencing it—which means there is room to be surprised, no matter how much specificity there is.
But isn’t your “knowing what something is like” based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it.
Nor is the question “can she imagine red”.
The question is: Does she get new information upon seeing red? (something to surprise her.) To phrase it slightly differently: if you showed her a green apple, would she be fooled?
This is a matter-of-fact question about a hypothetical agent looking at its own algorithms.
“Feeling full” and “seeing red” also jumbles up the question. It is not “would she see red”
If there’s a difference in the experience, then there’s information about the difference, and surprise is thus possible.
But isn’t your “knowing what something is like” based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
How, exactly? How will this knowledge be represented?
If “red” is truly a material subject—something that exists only in the form of a certain set of neurons firing (or analogous physical processes)—then any knowledge “about” this is necessarily separate from the thing itself. The word “red” is not equal to red, no matter how precisely you define that word.
(Note: my assumption here is that red is a property of brains, not reality. Human color perception is peculiar to humans, in that it allows us to see “colors” that don’t correspond to specific light frequencies. There are other complications to color vision as well.)
Any knowledge of red that doesn’t include the experience of redness itself is missing information, in the sense that the mental state of the experiencer is different.
That’s because in any hypothetical state where I’m thinking “that’s what red is”, my mental state is not “red”, but “that’s what red is”. Thus, there’s a difference in my state, and thus, something to be surprised about.
Trying to say, “yeah, but you can take that into account” is just writing more statements about red on a piece of paper, or adding more dishes to the menu, because the mental state you’re in still contains the label, “this is what I think it would be like”, and lacks the portion of that state containing the actual experience of red.
The information about the difference is included in Mary’s education. That is what was given.
This is how this question comes to resemble POAT. Some people read it as a logic puzzle, and say that Mary’s knowing what it’s like to see red was given in the premise. Others read it as an engineering problem, and think about how human brains actually work.
That treatment of the POAT is flawed. The question that matters is whether there is relative motion between the air and the plane. A horizontally tethered plane in a wind tunnel would rise. The treadmill is just a fancy tether.
What? That’s the best treatment of the question I’ve seen yet, and seems to account for every possible angle. This makes no sense:
A horizontally tethered plane in a wind tunnel would rise.
The plane in the thought experiment is not in a wind tunnel.
The treadmill is just a fancy tether.
Treated realistically, the treadmill should not have any tethering ability, fancy or otherwise. Which interpretation of the problem were you going with?
By the way, you may not agree with my analysis of qualia (and if so, tell me), but I hope that the way this thread derailed is at least some indication of why I think the question needed dissolving after all. As with several other topics, the answer may be obvious to many, but people tend to disagree about which is the obvious answer (or worse, have a difficult time even figuring out whether their answer agrees or disagrees with someone else’s).
Through study, no. I think the knowledge postulated is beyond what we currently have, and must include how the algorithm feels from the inside. (edit: Mary does know through study.)
Whether or not all the physical knowledge is enough to know qualia is the question and as such it should not be answered in the conclusion of a hypothetical story, and then taken as evidence.
That does sound fallacious. Fortunately you don’t need additional evidence.
An even better proposal: You should put the answer in the prologue and then not bother writing a story at all. Because we moved on from that kind of superstition years ago.
So, you can know precisely which neurons should fire to create a sensation of red (or anything else), and yet not be able to make them fire as a result.
Maybe, but Mary nonetheless by hypothesis knows exactly what it would feel like if those neurons fire, since that’s a physical fact about color. Like I said, that’s begging the question in the direction of materialism, but assuming that fact is non-physical is begging the question in the direction of non-materialism.
Like I said, that’s begging the question in the direction of materialism
Not at all. The question is only confused because the paradox confuses “knowing what would happen if neurons fire” and “having those neurons actually fire” as being the same sort of knowledge. In the human cognitive architecture, they aren’t the same thing, but that doesn’t mean there’s any mysterious non-physical “qualia” involved. It’s just that we have different neuronal firings for knowing and experiencing.
If you taboo enough words and expand enough definitions, the qualia question is reduced to “if Mary has mental-state-representing-knowledge-of-red, but does not have mental-state-representing-experience-of-red, then what new thing does she learn upon experiencing red?”
And of course the bloody obvious answer is, the mental state representing the experience of red. The question is idiotic because it basically assumes two fundamentally different things are the same, and then tries to turn the difference between them into something mysterious. It makes no more sense than saying, “if cubes are square, then why is a sphere round? some extra mysterious thing is happening!”
So, it’s not begging the question for materialism, because it doesn’t matter how complete Mary’s state of knowledge about neurons is. The question itself is a simple confusion of definitions, like the classic tree-forest-sound question.
The question itself is a simple confusion of definitions, like the classic tree-forest-sound question.
I think we’ve at least touched upon why this question needs to be dissolved.
Reading the thought experiment as a logic problem, one should accept the conflation of the two putative mental states you’ve identified (calling them both ‘knowing’) and note that by hypothesis Mary ‘knows’ everything physical about color. Thus, the question is resolved entirely by determining whether the quale is non-physical. And so if you accept the premises of the thought experiment, it is not good for resolving disputes over materialism. Dennett, being a materialist, reads the question in this manner and simply agrees that Mary will not be surprised, since materialism is true.
Personally, I’m pretty okay with mental-state-representing-experience-of-red being part of “knowledge”. Even if humans don’t work that way, that’s kind of irrelevant to the discussion (though it might explain why we have confused intuitions about this).
Dennett, being a materialist, reads the question in this manner and simply agrees that Mary will not be surprised, since materialism is true.
Then he is quite simply wrong. Knowledge can never be fully separated from its representation, just as one can never quite untangle a mind from the body it wears. ;-)
This conclusion is a requirement of actual materialism, since if you’re truly materialist, you know that knowledge can’t exist apart from a representation. Our applying the same label to two different representations is our own confusion, not one that exists in reality.
Reading the thought experiment as a logic problem, one should accept the conflation of the two putative mental states you’ve identified (calling them both ‘knowing’)
If you start from a nonsensical premise, you can prove just about anything. In this case, the premise is begging a question: you can only conflate the relevant types of knowledge under discussion, if you already assume that knowledge is independent of physical form… an assumption that any sufficiently advanced materialism should hold false.
This conclusion is a requirement of actual materialism, since if you’re truly materialist, you know that knowledge can’t exist apart from a representation. Our applying the same label to two different representations is our own confusion, not one that exists in reality.
It really doesn’t have to be a confusion though. We apply the label ‘fruit’ to both apples and oranges—that doesn’t mean we’re confused just because apples are different from oranges.
Then he is quite simply wrong. Knowledge can never be fully separated from its representation, just as one can never quite untangle a mind from the body it wears. ;-)
I don’t think either I or Dennett made that claim. You don’t need it for the premise of the thought experiment. You just need to understand that any mental state is going to be represented using some configuration of brain-stuff...
According to the thought experiment, Mary “knows” everything physical about the color red, and that will include any relevant sense of the word “knows”. And so if the only way to “know” what experiencing the color red feels like is to have the neurons fire that actually fire when seeing red, then she’s had those neurons fire. It could be by surgery, or hallucination, or divine intervention—it doesn’t matter, it was given as a premise in the thought experiment that she knows what that’s like.
One way to make such a Mary would be to determine what the configuration of neurons in Mary’s brain would be after experiencing red, then surgically alter her brain to have that configuration. The premise of the thought experiment is that she has this information, and so if that’s the only way she could have gotten it, then that’s what happened.
And so if the only way to “know” what experiencing the color red feels like is to have the neurons fire that actually fire when seeing red, then she’s had those neurons fire.
This is going way beyond what I’d consider to be a reasonable reading of the intent of the thought experiment. If you’re allowed to expand the meaning of the non-specific phrase “knows everything physical” to include an exact analogue of subjective experience, then the original meaning of the thought experiment goes right out the window.
My reading of this entire exchange has thomblake and JamesAndrix repeatedly begging the question in every comment, taking great license with the intent of the thought experiment, while pjeby keeps trying to ground the discussion in reality by pinning down what brain states are being compared. So the exchange as a whole is mildly illuminating, but only because the former are acting as foils for the latter.
You can’t keep arguing this on the verbal/definitional level. The meat is in the bit about brain states.
Call the set of brain states that enable Mary to recall the subjective experience of red, Set R. If seeing red for the first time imparts an ability to recall redness that was not there before, then as far as I’m concerned that’s what’s meant by “surprise”.
We know that seeing something red with her eyes puts her brain into a state that is in Set R. The question is whether there is a body of knowledge, this irritatingly ill-defined concept of “all ‘physical’ knowledge about red”, that places her brain into a state in Set R. It is a useless mental exercise to divorce this from how human brains and eyes actually work. Either a brain can be put into Set R without experiencing red, or it can’t. It seems very unlikely that descriptive knowledge could accomplish this. If you’re just going to toss direct neuronal manipulation in there with descriptive knowledge, then the whole thought experiment becomes a farce.
The question is idiotic because it basically assumes two fundamentally different things are the same, and then tries to turn the difference between them into something mysterious.
On the contrary, it is uncontentious that knowledge-by-descriptions and knowledge-by-acquaintance are both knowledge.
Then she knows things humans in their current form can’t learn except by seeing red. Either she found a way to reprogram herself, or she has seen red, or the problem is ill-posed because it equivocates between what humans can learn at all and what they can learn from reading words in textbooks.
Not really; it just means that our ability to imagine sensory experiences is underpowered.
Why does Mary need to imagine red in order to know what it looks like? If the physical understanding she already has accounts for it, then she should be able to figure it out from that, as per the Dennett response. Like several people in this thread, you are tacitly assuming that there is something special about qualia, such that they need to be imagined or instantiated in order to be known—something that is unique about them, even though they are ultimately physical like everything else.
That isn’t a paradox. It is a simple logical question with the answer yes.
Hm, I guess that should probably be, “if the barber shaves only those who don’t shave themselves.”
“if and only if”-type language has to enter into it.
If the barber shaves all and only those who don’t save themselves...
Cracked me up. I think you might mean “shave” here.
Oh no! The barber of Seville is coming! I’ll hold him off, you save yourself!
But what if I run into the barber of Fleet Street?!
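Joking aside, the difference between the two phrasings can be brute-force checked. This is only a toy sketch (the setup, e.g. that only the barber shaves anyone, is an invented simplification): the “everyone who” version is satisfiable, with the answer yes as noted above, while the “all and only” version admits no consistent assignment, which is the actual paradox.

```python
from itertools import product

people = ["barber", "anna", "bob"]

def consistent(rule):
    """Return every assignment of "whom the barber shaves" under which
    `rule` holds for all people."""
    solutions = []
    for bits in product([False, True], repeat=len(people)):
        shaves = dict(zip(people, bits))  # shaves[x]: does the barber shave x?
        # Toy assumption: only the barber shaves anyone, so "x shaves
        # themselves" can only be true of the barber himself.
        def shaves_self(x):
            return x == "barber" and shaves["barber"]
        if all(rule(shaves, shaves_self, x) for x in people):
            solutions.append(shaves)
    return solutions

# "Shaves everyone who doesn't shave themselves" -- a plain implication:
weak = lambda shaves, shaves_self, x: shaves[x] or shaves_self(x)
# "Shaves all and only those who don't shave themselves" -- the iff:
strong = lambda shaves, shaves_self, x: shaves[x] == (not shaves_self(x))

print(len(consistent(weak)))    # 1: satisfiable; the barber just shaves himself too
print(len(consistent(strong)))  # 0: no assignment works; that's the paradox
```

Which mirrors the point above about Mary’s Room: the contradiction only appears once the “all and only” conflation is smuggled into the premise.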
What is it that she’s surprised about?
The difference between knowing what seeing red is supposed to feel like, and what it actually feels like.
I think the idea that “what it actually feels like” is knowledge beyond “every physical fact on various levels” is just asserting the conclusion.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We’ve never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we’re also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
I think the Chinese room has a similar problem: a human is not a reliable substrate for computation. We instinctively know that a human can choose to ignore the scribbles on paper, so the Chinese-speaking entity never happens.
Ah, but what conclusion?
I’m saying, it doesn’t matter whether you assume they’re the same or different. Either way, the whole “experiment” is another stupid definitional argument.
However, materialism does not require us to believe that looking at a menu can make you feel full. So, there’s no reason not to accept the experiment’s premise that Mary experiences something new by seeing red. That’s not where the error comes from.
The error is in assuming that a brain ought to be able to translate knowledge of one kind into another, independent of its physical form. If you buy that implicit premise, then you seem to run into a contradiction.
However, since materialism doesn’t require this premise, there’s no reason to assume it. I don’t, so I see no contradiction in the experiment.
If you think that you can be “smart enough” then you are positing a different brain architecture than the ones human beings have.
But let’s assume that Mary isn’t human. She’s a transhuman, or posthuman, or some sort of alien being.
In order for her to know what red actually feels like, she’d need to be able to create the experience—i.e., have a neural architecture that lets her go, “ah, so it’s that neuron that does ‘red’… let me go ahead and trigger that.”
At this point, we’ve reduced the “experiment” to an absurdity, because now Mary has experienced “red”.
Neither with a plain human architecture, nor with a super-advanced alien one, do we get a place where there is some mysterious non-material thing left over.
Not exactly. It’s an intuition pump, drawing on your intuitive sense that the only thing in the room that could “understand” Chinese is the human… and he clearly doesn’t, so there must not be any understanding going on. If you replace the room with a computer, then the same intuition pump needn’t apply.
For that matter, suppose you replace the Chinese room with a brain filled with individual computing units… then the same “experiment” “proves” that brains can’t possibly “understand” anything!
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
That is the conclusion you’re asserting. I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire. She does not say “oh wow”; she says “ha, nailed it.”
If she has enough memory to store a physical simulation of the relevant parts of her brain, and can trigger that simulation’s red neurons, and can understand the chains of causality, then she already knows what red will look like when she does see it.
Now you might say that in that case Mary has already experienced red, just using a different part of her brain, but I think it’s an automatic consequence of knowing all the physical facts.
No matter how much information is on the menu, it’s not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
In which case, we’re using different definitions of what it means to know what something is like. In mine, knowing what something is “like” is not the same as actually experiencing it—which means there is room to be surprised, no matter how much specificity there is.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it. Otherwise, we could become frightened upon merely imagining that a bear was in the room with us. (IOW, at least some portion of our architecture has to be able to represent “this experience is imaginary”.)
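The “this experience is imaginary” tag described above can be caricatured in a deliberately toy sketch. Everything here is invented for illustration (the `Experience` record, the flag, the `fear_response` check); it is not a claim about how brains actually implement this, only a picture of why the architecture needs *some* marker distinguishing imagined experiences from real ones:

```python
# Toy illustration, invented for this comment: an experience record
# carries a flag marking whether it is imagined or actual, so merely
# imagining a bear need not trigger the fear response that actually
# seeing one would.

from dataclasses import dataclass

@dataclass
class Experience:
    content: str
    imaginary: bool  # the "this experience is imaginary" tag

def fear_response(exp: Experience) -> bool:
    # Only non-imaginary bear experiences frighten us.
    return exp.content == "bear_in_room" and not exp.imaginary

print(fear_response(Experience("bear_in_room", imaginary=True)))   # False
print(fear_response(Experience("bear_in_room", imaginary=False)))  # True
```

Without that flag, imagining and experiencing would be indistinguishable states, which is precisely the slight difference the comment above appeals to.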
However, none of this matters in the slightest with regard to dissolving Mary’s Room. I’m simply pointing out that it isn’t necessary to assume perfect knowledge in order to dissolve the paradox. It’s just as easily dissolved by assuming imperfect knowledge.
And all the evidence we have suggests that the knowledge is—and possibly must be—imperfect.
But materialism doesn’t require that this knowledge be perfectable, since to a true materialist, knowledge itself is not separable from a representation, and that representation is allowed (and likely) to be imperfect in any evolved biological brain.
Metaphysics is a restaurant where they give you a thirty thousand page menu, and no food. - Robert M. Pirsig
“Feeling full” and “seeing red” also jumble up the question. It is not “would she see red”.
But isn’t your “knowing what something is like” based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
Nor is the question “can she imagine red”.
The question is: Does she get new information upon seeing red? (something to surprise her.) To phrase it slightly differently: if you showed her a green apple, would she be fooled?
This is a matter-of-fact question about a hypothetical agent looking at its own algorithms.
If there’s a difference in the experience, then there’s information about the difference, and surprise is thus possible.
How, exactly? How will this knowledge be represented?
If “red” is truly a material subject—something that exists only in the form of a certain set of neurons firing (or analogous physical processes)—then any knowledge “about” this is necessarily separate from the thing itself. The word “red” is not equal to red, no matter how precisely you define that word.
(Note: my assumption here is that red is a property of brains, not reality. Human color perception is peculiar to humans, in that it allows us to see “colors” that don’t correspond to specific light frequencies. There are other complications to color vision as well.)
Any knowledge of red that doesn’t include the experience of redness itself is missing information, in the sense that the mental state of the experiencer is different.
That’s because in any hypothetical state where I’m thinking “that’s what red is”, my mental state is not “red”, but “that’s what red is”. Thus, there’s a difference in my state, and thus, something to be surprised about.
Trying to say, “yeah, but you can take that into account” is just writing more statements about red on a piece of paper, or adding more dishes to the menu, because the mental state you’re in still contains the label, “this is what I think it would be like”, and lacks the portion of that state containing the actual experience of red.
The information about the difference is included in Mary’s education. That is what was given.
Are you surprised all the time? If the change in Mary’s mental state is what Mary expected it to be, then there is no surprise.
How do you know?
Isn’t a mind that knows every fact about a process itself an analogous physical process?
This is how this question comes to resemble POAT. Some people read it as a logic puzzle, and say that Mary’s knowing what it’s like to see red was given in the premise. Others read it as an engineering problem, and think about how human brains actually work.
That treatment of the POAT is flawed. The question that matters is whether there is relative motion between the air and the plane. A horizontally tethered plane in a wind tunnel would rise. The treadmill is just a fancy tether.
What? That’s the best treatment of the question I’ve seen yet, and seems to account for every possible angle. This makes no sense:
The plane in the thought experiment is not in a wind tunnel.
Treated realistically, the treadmill should not have any tethering ability, fancy or otherwise. Which interpretation of the problem were you going with?
A plane can move air over its own airfoils. Or why not make it a truck on a treadmill?
By the way, you may not agree with my analysis of qualia (and if so, tell me), but I hope that the way this thread derailed is at least some indication of why I think the question needed dissolving after all. As with several other topics, the answer may be obvious to many, but people tend to disagree about which is the obvious answer (or worse, have a difficult time even figuring out whether their answer agrees or disagrees with someone else’s).
I definitely welcome the series, though I have not finished it yet, and will need more time to digest it in any case.
It’s at least evidence about the way our minds model other minds, and as such it might be helpful to understand where that intuition comes from.
OK. Do you know that? Does Mary?
Well, through seeing red, yes ;-)
Through study, no. I think the knowledge postulated is beyond what we currently have, and must include how the algorithm feels from the inside. (edit: Mary does know through study.)
That does sound fallacious. Fortunately you don’t need additional evidence.
An even better proposal: You should put the answer in the prologue and then not bother writing a story at all. Because we moved on from that kind of superstition years ago.
Maybe, but Mary nonetheless by hypothesis knows exactly what it would feel like if those neurons fire, since that’s a physical fact about color. Like I said, that’s begging the question in the direction of materialism, but assuming that fact is non-physical is begging the question in the direction of non-materialism.
Not at all. The question is only confused because the paradox confuses “knowing what would happen if neurons fire” and “having those neurons actually fire” as being the same sort of knowledge. In the human cognitive architecture, they aren’t the same thing, but that doesn’t mean there’s any mysterious non-physical “qualia” involved. It’s just that we have different neuronal firings for knowing and experiencing.
If you taboo enough words and expand enough definitions, the qualia question is reduced to “if Mary has mental-state-representing-knowledge-of-red, but does not have mental-state-representing-experience-of-red, then what new thing does she learn upon experiencing red?”
And of course the bloody obvious answer is, the mental state representing the experience of red. The question is idiotic because it basically assumes two fundamentally different things are the same, and then tries to turn the difference between them into something mysterious. It makes no more sense than saying, “if cubes are square, then why is a sphere round? some extra mysterious thing is happening!”
So, it’s not begging the question for materialism, because it doesn’t matter how complete Mary’s state of knowledge about neurons is. The question itself is a simple confusion of definitions, like the classic tree-forest-sound question.
I think we’ve at least touched upon why this question needs to be dissolved.
Reading the thought experiment as a logic problem, one should accept the conflation of the two putative mental states you’ve identified (calling them both ‘knowing’) and note that by hypothesis Mary ‘knows’ everything physical about color. Thus, the question is resolved entirely by determining whether the quale is non-physical. And so if you accept the premises of the thought experiment, it is not good for resolving disputes over materialism. Dennett, being a materialist, reads the question in this manner and simply agrees that Mary will not be surprised, since materialism is true.
Personally, I’m pretty okay with mental-state-representing-experience-of-red being part of “knowledge”. Even if humans don’t work that way, that’s kind of irrelevant to the discussion (though it might explain why we have confused intuitions about this).
Then he is quite simply wrong. Knowledge can never be fully separated from its representation, just as one can never quite untangle a mind from the body it wears. ;-)
This conclusion is a requirement of actual materialism, since if you’re truly materialist, you know that knowledge can’t exist apart from a representation. Our applying the same label to two different representations is our own confusion, not one that exists in reality.
If you start from a nonsensical premise, you can prove just about anything. In this case, the premise is begging a question: you can only conflate the relevant types of knowledge under discussion, if you already assume that knowledge is independent of physical form… an assumption that any sufficiently advanced materialism should hold false.
It really doesn’t have to be a confusion though. We apply the label ‘fruit’ to both apples and oranges—that doesn’t mean we’re confused just because apples are different from oranges.
I don’t think either I or Dennett made that claim. You don’t need it for the premise of the thought experiment. You just need to understand that any mental state is going to be represented using some configuration of brain-stuff...
According to the thought experiment, Mary “knows” everything physical about the color red, and that will include any relevant sense of the word “knows”. And so if the only way to “know” what experiencing the color red feels like is to have the neurons fire that actually fire when seeing red, then she’s had those neurons fire. It could be by surgery, or hallucination, or divine intervention—it doesn’t matter, it was given as a premise in the thought experiment that she knows what that’s like.
One way to make such a Mary would be to determine what the configuration of neurons in Mary’s brain would be after experiencing red, then surgically alter her brain to have that configuration. The premise of the thought experiment is that she has this information, and so if that’s the only way she could have gotten it, then that’s what happened.
This is going way beyond what I’d consider to be a reasonable reading of the intent of the thought experiment. If you’re allowed to expand the meaning of the non-specific phrase “knows everything physical” to include an exact analogue of subjective experience, then the original meaning of the thought experiment goes right out the window.
My reading of this entire exchange has thomblake and JamesAndrix repeatedly begging the question in every comment, taking great license with the intent of the thought experiment, while pjeby keeps trying to ground the discussion in reality by pinning down what brain states are being compared. So the exchange as a whole is mildly illuminating, but only because the former are acting as foils for the latter.
You can’t keep arguing this on the verbal/definitional level. The meat is in the bit about brain states.
Call the set of brain states that enable Mary to recall the subjective experience of red, Set R. If seeing red for the first time imparts an ability to recall redness that was not there before, then as far as I’m concerned that’s what’s meant by “surprise”.
We know that seeing something red with her eyes puts her brain into a state that is in Set R. The question is whether there is a body of knowledge, this irritatingly ill-defined concept of “all ‘physical’ knowledge about red”, that places her brain into a state in Set R. It is a useless mental exercise to divorce this from how human brains and eyes actually work. Either a brain can be put into Set R without experiencing red, or it can’t. It seems very unlikely that descriptive knowledge could accomplish this. If you’re just going to toss direct neuronal manipulation in there with descriptive knowledge, then the whole thought experiment becomes a farce.
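The Set R framing above can be made concrete with a deliberately toy model. Everything here is invented for illustration: “brain states” are just strings, and the functions stand in for processes we obviously can’t implement. The sketch only encodes the claim being argued, namely that on the “imperfect knowledge” view, descriptive knowledge never lands the brain inside Set R:

```python
# Toy model, purely illustrative: "brain states" are strings, and Set R
# is the set of states from which the subjective experience of red can
# be recalled. Nothing here is a claim about real neuroscience.

R = {"saw_red", "recalls_red"}  # Set R: states that enable recall of redness

def read_descriptions(state):
    # Descriptive knowledge changes the state, but on the "imperfect
    # knowledge" view it yields a state *about* red, not a state in Set R.
    return "knows_about_red"

def see_red(state):
    # Actually seeing red puts the brain into a state in Set R.
    return "saw_red"

state = "naive"
state = read_descriptions(state)
print(state in R)  # descriptive knowledge alone: still outside Set R
state = see_red(state)
print(state in R)  # direct experience lands inside Set R
```

If, on the other hand, one tosses direct neuronal manipulation in with descriptive knowledge, `read_descriptions` would simply return a state in R by fiat, which is exactly the move that turns the thought experiment into a farce.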
On the contrary, it is uncontentious that knowledge-by-descriptions and knowledge-by-acquaintance are both knowledge.
Then she knows things humans in their current form can’t learn except by seeing red. Either she found a way to reprogram herself, or she has seen red, or the problem is ill-posed because it equivocates between what humans can learn at all and what they can learn from reading words in textbooks.
Why does Mary need to imagine red in order to know what it looks like? If the physical understanding she already has accounts for it, then she should be able to figure it out from that, as per the Dennett response. Like several people in this thread, you are tacitly assuming that there is something special about qualia, such that they need to be imagined or instantiated in order to be known—something that is unique about them, even though they are ultimately physical like everything else.