I think the idea that “what it actually feels like” is knowledge beyond “every physical fact on various levels” is just asserting the conclusion.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We’ve never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we’re also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
I think the Chinese Room has a similar problem: a human is not a reliable substrate for computation. We instinctively know that a human can choose to ignore the scribbles on paper, so the Chinese-speaking entity never happens.
I think the idea that “what it actually feels like” is knowledge beyond “every physical fact on various levels” is just asserting the conclusion.
Ah, but what conclusion?
I’m saying, it doesn’t matter whether you assume they’re the same or different. Either way, the whole “experiment” is another stupid definitional argument.
However, materialism does not require us to believe that looking at a menu can make you feel full. So, there’s no reason not to accept the experiment’s premise that Mary experiences something new by seeing red. That’s not where the error comes from.
The error is in assuming that a brain ought to be able to translate knowledge of one kind into another, independent of its physical form. If you buy that implicit premise, then you seem to run into a contradiction.
However, since materialism doesn’t require this premise, there’s no reason to assume it. I don’t, so I see no contradiction in the experiment.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We’ve never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we’re also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
If you think that you can be “smart enough”, then you are positing a different brain architecture from the one human beings have.
But let’s assume that Mary isn’t human. She’s a transhuman, or posthuman, or some sort of alien being.
In order for her to know what red actually feels like, she’d need to be able to create the experience—i.e., have a neural architecture that lets her go, “ah, so it’s that neuron that does ‘red’… let me go ahead and trigger that.”
At this point, we’ve reduced the “experiment” to an absurdity, because now Mary has experienced “red”.
Neither with a plain human architecture, nor with a super-advanced alien one, do we get a place where there is some mysterious non-material thing left over.
I think the Chinese Room has a similar problem
Not exactly. It’s an intuition pump, drawing on your intuitive sense that the only thing in the room that could “understand” Chinese is the human… and he clearly doesn’t, so there must not be any understanding going on. If you replace the room with a computer, then the same intuition pump needn’t apply.
For that matter, suppose you replace the Chinese Room with a brain filled with individual computing units… then the same “experiment” “proves” that brains can’t possibly “understand” anything!
However, materialism does not require us to believe that looking at a menu can make you feel full.
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
In order for her to know what red actually feels like, she’d need to be able to create the experience—i.e., have a neural architecture that lets her go, “ah, so it’s that neuron that does ‘red’… let me go ahead and trigger that.”
That is the conclusion you’re asserting. I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire. She does not say “oh wow”; she says “ha, nailed it.”
If she has enough memory to store a physical simulation of the relevant parts of her brain, and can trigger that simulation’s red neurons, and can understand the chains of causality, then she already knows what red will look like when she does see it.
Now you might say that in that case Mary has already experienced red, just using a different part of her brain, but I think it’s an automatic consequence of knowing all the physical facts.
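To make the “ha, nailed it” claim concrete, here is a minimal sketch of the argument as code. Everything in it is hypothetical (the function name, the toy three-channel response); the one structural assumption is the commenter’s own premise, that Mary’s simulation computes the very same mapping her cortex does:

```python
# Toy model of the "she already knows" claim. A brain is reduced to a
# deterministic mapping from stimulus to a pattern of activations.

def visual_cortex(stimulus_wavelength_nm: float) -> tuple:
    """Stand-in for the physical process of seeing a color."""
    # Crude three-channel response, loosely cone-like.
    r = max(0.0, 1.0 - abs(stimulus_wavelength_nm - 600) / 100)
    g = max(0.0, 1.0 - abs(stimulus_wavelength_nm - 550) / 100)
    b = max(0.0, 1.0 - abs(stimulus_wavelength_nm - 450) / 100)
    return (round(r, 3), round(g, 3), round(b, 3))

# Mary's education: a complete physical simulation of that process.
# By the premise of the thought experiment, it is the *same* mapping.
marys_simulation = visual_cortex

prediction = marys_simulation(610.0)  # she runs this inside the room
actual = visual_cortex(610.0)         # she steps outside and sees a red tomato

print("oh wow" if prediction != actual else "ha, nailed it")
# -> ha, nailed it
```

Note that the sketch simply builds in the point under dispute: that the mapping Mary learned is exactly the one her own cortex implements. The objection below is precisely that, in a human architecture, those can never be the same function.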
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
No matter how much information is on the menu, it’s not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire.
In which case, we’re using different definitions of what it means to know what something is like. In mine, knowing what something is “like” is not the same as actually experiencing it—which means there is room to be surprised, no matter how much specificity there is.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it. Otherwise, we would become frightened upon merely imagining that a bear was in the room with us. (IOW, at least some portion of our architecture has to be able to represent “this experience is imaginary”.)
However, none of this matters in the slightest with regard to dissolving Mary’s Room. I’m simply pointing out that it isn’t necessary to assume perfect knowledge in order to dissolve the paradox. It’s just as easily dissolved by assuming imperfect knowledge.
And all the evidence we have suggests that the knowledge is—and possibly must be—imperfect.
But materialism doesn’t require that this knowledge be perfectible, since to a true materialist, knowledge itself is not separable from a representation, and that representation is allowed (and likely) to be imperfect in any evolved biological brain.
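The bear example a few paragraphs up amounts to a claim about state encodings: any brain that can imagine without hallucinating must mark the difference somewhere. A minimal sketch, with hypothetical field names:

```python
# Two percepts with identical content but different status flags.
actual_bear = {"percept": "bear", "imaginary": False}   # run away
imagined_bear = {"percept": "bear", "imaginary": True}  # stay calm

# If the flag exists anywhere in the architecture, the two brain
# states cannot be bit-identical, however vivid the imagining.
assert actual_bear != imagined_bear
```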
No matter how much information is on the menu, it’s not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
Metaphysics is a restaurant where they give you a thirty thousand page menu, and no food. - Robert M. Pirsig
No matter how much information is on the menu, it’s not going to make you feel full.
“Feeling full” and “seeing red” also jumble up the question. It is not “would she see red”.
In which case, we’re using different definitions of what it means to know what something is like. In mine, knowing what something is “like” is not the same as actually experiencing it—which means there is room to be surprised, no matter how much specificity there is.
But isn’t your “knowing what something is like” based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it.
Nor is the question “can she imagine red”.
The question is: Does she get new information upon seeing red? (something to surprise her.) To phrase it slightly differently: if you showed her a green apple, would she be fooled?
This is a matter-of-fact question about a hypothetical agent looking at its own algorithms.
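The “new information” framing can be read literally, in Shannon’s sense. A minimal sketch; the probabilities here are assumed for illustration, not given by the thought experiment:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information gained on observing an outcome assigned probability p."""
    return math.log2(1.0 / p)

# If Mary's physics lets her assign probability 1 to the exact
# experience she is about to have, seeing red teaches her nothing:
print(surprisal_bits(1.0))   # 0.0 bits -- "ha, nailed it"

# If her knowledge leaves four equally likely candidate experiences,
# the first look outside the room carries real news:
print(surprisal_bits(0.25))  # 2.0 bits -- "oh wow"
```

On this reading, the green-apple test just checks that her predictions discriminate: if the firing pattern she predicted for “red” doesn’t match what the green apple produces, she isn’t fooled.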
“Feeling full” and “seeing red” also jumble up the question. It is not “would she see red”.
If there’s a difference in the experience, then there’s information about the difference, and surprise is thus possible.
But isn’t your “knowing what something is like” based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
How, exactly? How will this knowledge be represented?
If “red” is truly a material subject—something that exists only in the form of a certain set of neurons firing (or analogous physical processes)—then any knowledge “about” this is necessarily separate from the thing itself. The word “red” is not equal to red, no matter how precisely you define that word.
(Note: my assumption here is that red is a property of brains, not reality. Human color perception is peculiar to humans, in that it allows us to see “colors” that don’t correspond to specific light frequencies; magenta, for example, corresponds to no single wavelength. There are other complications to color vision as well.)
Any knowledge of red that doesn’t include the experience of redness itself is missing information, in the sense that the mental state of the experiencer is different.
That’s because in any hypothetical state where I’m thinking “that’s what red is”, my mental state is not “red”, but “that’s what red is”. Thus, there’s a difference in my state, and thus, something to be surprised about.
Trying to say, “yeah, but you can take that into account” is just writing more statements about red on a piece of paper, or adding more dishes to the menu, because the mental state you’re in still contains the label, “this is what I think it would be like”, and lacks the portion of that state containing the actual experience of red.
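The distinction being drawn here is easy to state as a triviality about data structures. A minimal sketch, with hypothetical labels; nothing in it depends on neuroscience:

```python
# A state that *is* red, versus a state that is *about* red.
red_state = {"experience": "red"}

knowing_about_red = {
    "belief": "this is what red would be like",
    "model": {"experience": "red"},  # the map can be perfectly accurate...
}

print(knowing_about_red["model"] == red_state)  # True: nothing missing in the map
print(knowing_about_red == red_state)           # False: the map is not the territory
```

Both sides of this thread can agree with both printed lines; the dispute is whether that final `False` means Mary “learns something” at the moment her own state finally becomes `red_state`.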
The information about the difference is included in Mary’s education. That is what was given. Are you surprised all the time? If the change in Mary’s mental state is what Mary expected it to be, then there is no surprise.

How do you know? Isn’t a mind that knows every fact about a process itself an analogous physical process?
This is how this question comes to resemble the plane-on-a-treadmill problem (POAT). Some people read it as a logic puzzle, and say that Mary’s knowing what it’s like to see red was given in the premise. Others read it as an engineering problem, and think about how human brains actually work.
That treatment of the POAT is flawed. The question that matters is whether there is relative motion between the air and the plane. A horizontally tethered plane in a wind tunnel would rise. The treadmill is just a fancy tether.
What? That’s the best treatment of the question I’ve seen yet, and seems to account for every possible angle. This makes no sense:
A horizontally tethered plane in a wind tunnel would rise.
The plane in the thought experiment is not in a wind tunnel.
The treadmill is just a fancy tether.
Treated realistically, the treadmill should not have any tethering ability, fancy or otherwise. Which interpretation of the problem were you going with?
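For what it’s worth, the “no tethering ability” reading survives a back-of-the-envelope force balance. A minimal sketch; every number here is assumed, not given in the problem:

```python
# Rough numbers for a small single-engine aircraft (all assumed):
thrust_n = 2500.0           # the propeller pushes against the air, not the belt
weight_n = 1000.0 * 9.81    # ~1000 kg aircraft
rolling_coefficient = 0.02  # free-spinning wheels on the moving belt

# The only rearward force the belt can transmit goes through the wheels,
# roughly proportional to weight and nearly independent of belt speed:
belt_drag_n = rolling_coefficient * weight_n  # about 196 N

print(thrust_n > belt_drag_n)  # True: the belt cannot hold the plane back
```

Spinning the belt faster only spins the wheels faster; it adds nowhere near enough drag to act as a tether, which is why the realistic reading has the plane accelerate and take off.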
A plane can move air over its own airfoils. Or why not make it a truck on a treadmill?

By the way, you may not agree with my analysis of qualia (and if so, tell me), but I hope that the way this thread derailed is at least some indication of why I think the question needed dissolving after all. As with several other topics, the answer may be obvious to many, but people tend to disagree about which is the obvious answer (or worse, have a difficult time even figuring out whether their answer agrees or disagrees with someone else’s).

I definitely welcome the series, though I have not finished it yet, and will need more time to digest it in any case.