> Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
No matter how much information is on the menu, it’s not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
> I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire.
In which case, we’re using different definitions of what it means to know what something is like. In mine, knowing what something is “like” is not the same as actually experiencing it—which means there is room to be surprised, no matter how much specificity there is.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it. Otherwise, we could become frightened upon merely imagining that a bear was in the room with us. (IOW, at least some portion of our architecture has to be able to represent “this experience is imaginary”.)
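That architectural point can be sketched as a toy model (every name here is invented for illustration, not a claim about actual neural wiring): if each experience record carries a "this is imaginary" tag, a downstream response can key off that tag, so the imagined and actual states are necessarily distinct.

```python
from dataclasses import dataclass

# Toy model: an "experience" as the architecture represents it,
# tagged with whether it is real or merely imagined.
@dataclass(frozen=True)
class Experience:
    content: str
    is_imaginary: bool

def fear_response(exp: Experience) -> bool:
    # The full reaction is suppressed when the "imaginary" tag is set.
    return exp.content == "bear in the room" and not exp.is_imaginary

real = Experience("bear in the room", is_imaginary=False)
imagined = Experience("bear in the room", is_imaginary=True)

print(fear_response(real))      # True: an actual bear triggers fear
print(fear_response(imagined))  # False: merely imagining one does not
```

The two records differ by at least the tag, which is the "however slight" difference the argument needs.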
However, none of this matters in the slightest with regard to dissolving Mary’s Room. I’m simply pointing out that it isn’t necessary to assume perfect knowledge in order to dissolve the paradox. It’s just as easily dissolved by assuming imperfect knowledge.
And all the evidence we have suggests that the knowledge is—and possibly must be—imperfect.
But materialism doesn’t require that this knowledge be perfectible, since to a true materialist, knowledge itself is not separable from a representation, and that representation is allowed (and likely) to be imperfect in any evolved biological brain.
> Metaphysics is a restaurant where they give you a thirty thousand page menu, and no food. - Robert M. Pirsig
> No matter how much information is on the menu, it’s not going to make you feel full.
“Feeling full” and “seeing red” also jumble up the question. It is not “would she see red”.
> In which case, we’re using different definitions of what it means to know what something is like. In mine, knowing what something is “like” is not the same as actually experiencing it—which means there is room to be surprised, no matter how much specificity there is.
But isn’t your “knowing what something is like” based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
> This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it.
Nor is the question “can she imagine red”.
The question is: Does she get new information upon seeing red? (something to surprise her). To phrase it slightly differently: if you showed her a green apple, would she be fooled?
This is a matter-of-fact question about a hypothetical agent looking at its own algorithms.
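The "agent looking at its own algorithms" framing can be made concrete with a toy sketch (all names invented): give the agent a model of its own sensory code, define surprise as a mismatch between the predicted and actual sensory state, and both positions in this thread fall out depending on whether that self-model is perfect.

```python
# The agent's actual sensory algorithm (a stand-in for "seeing red").
def sense(stimulus: str) -> str:
    return f"neural-state({stimulus})"

class Agent:
    def __init__(self, self_model):
        # self_model: the agent's (possibly imperfect) copy of `sense`
        self.self_model = self_model

    def surprised_by(self, stimulus: str) -> bool:
        # Surprise = the actual state differs from the predicted one.
        predicted = self.self_model(stimulus)
        actual = sense(stimulus)
        return predicted != actual

perfect = Agent(self_model=sense)
imperfect = Agent(self_model=lambda s: f"guess({s})")

print(perfect.surprised_by("red"))    # False: nothing left to learn
print(imperfect.surprised_by("red"))  # True: the experience differs
```

With a perfect self-model there is no surprise (the premise-as-given reading); with an imperfect representation there is (the engineering reading).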
> “Feeling full” and “seeing red” also jumble up the question. It is not “would she see red”.
If there’s a difference in the experience, then there’s information about the difference, and surprise is thus possible.
> But isn’t your “knowing what something is like” based on your experience of NOT having a complete map of your sensory system? My whole point is that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
How, exactly? How will this knowledge be represented?
If “red” is truly a material subject—something that exists only in the form of a certain set of neurons firing (or analogous physical processes)—then any knowledge “about” this is necessarily separate from the thing itself. The word “red” is not equal to red, no matter how precisely you define that word.
(Note: my assumption here is that red is a property of brains, not reality. Human color perception is peculiar to humans, in that it allows us to see “colors” that don’t correspond to specific light frequencies. There are other complications to color vision as well.)
Any knowledge of red that doesn’t include the experience of redness itself is missing information, in the sense that the mental state of the experiencer is different.
That’s because in any hypothetical state where I’m thinking “that’s what red is”, my mental state is not “red”, but “that’s what red is”. Thus, there’s a difference in my state, and thus, something to be surprised about.
Trying to say, “yeah, but you can take that into account” is just writing more statements about red on a piece of paper, or adding more dishes to the menu, because the mental state you’re in still contains the label, “this is what I think it would be like”, and lacks the portion of that state containing the actual experience of red.
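The structure of that argument (not a fact about brains—just the argument restated) can be sketched by treating a mental state as the set of "features" currently active: a state *about* red carries a descriptive label, the state *of* seeing red carries the percept, and adding more descriptions never closes the gap.

```python
# Toy sketch (representation invented for illustration).
description = {"label: this is what red is like", "fact: ~700nm light"}
experience = {"percept: red"}

# Piling on more facts -- more dishes on the menu -- still leaves the
# descriptive label in place and the percept itself absent:
richer_description = description | {f"fact #{i}" for i in range(1000)}

print(experience <= richer_description)  # False: the percept is missing
print(richer_description == experience)  # False: the two states differ
```

However rich the description set grows, it remains a different state from the experience set, which is exactly the room for surprise being claimed.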
The information about the difference is included in Mary’s education. That is what was given.
Are you surprised all the time? If the change in Mary’s mental state is what Mary expected it to be, then there is no surprise.
How do you know?
Isn’t a mind that knows every fact about a process itself an analogous physical process?
This is how this question comes to resemble the plane-on-a-treadmill question (POAT). Some people read it as a logic puzzle, and say that Mary’s knowing what it’s like to see red was given in the premise. Others read it as an engineering problem, and think about how human brains actually work.
That treatment of the POAT is flawed. The question that matters is whether there is relative motion between the air and the plane. A horizontally tethered plane in a wind tunnel would rise. The treadmill is just a fancy tether.
What? That’s the best treatment of the question I’ve seen yet, and seems to account for every possible angle. This makes no sense:

> A horizontally tethered plane in a wind tunnel would rise.

The plane in the thought experiment is not in a wind tunnel.

> The treadmill is just a fancy tether.

Treated realistically, the treadmill should not have any tethering ability, fancy or otherwise. Which interpretation of the problem were you going with?
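The "no tethering ability" reading comes down to a few lines of arithmetic (idealized numbers, frictionless wheel bearings assumed): the propeller pushes against the air, not the belt, so belt speed never appears in the equation for airspeed.

```python
# Idealized plane-on-a-treadmill sketch (all numbers invented).
MASS = 1000.0    # kg
THRUST = 5000.0  # N, from the propeller acting on the air

def airspeed_after(t: float, belt_speed: float) -> float:
    # v = (F/m) * t; free-spinning wheels transmit no horizontal
    # force, so belt_speed only spins the wheels and goes unused.
    return (THRUST / MASS) * t

print(airspeed_after(10.0, belt_speed=0.0))    # 50.0 m/s
print(airspeed_after(10.0, belt_speed=100.0))  # 50.0 m/s, identical
```

Under these assumptions the treadmill cannot hold the plane in place, which is why it is not equivalent to a tether.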
A plane can move air over its own airfoils. Or why not make it a truck on a treadmill?
By the way, you may not agree with my analysis of qualia (and if so, tell me), but I hope that the way this thread derailed is at least some indication of why I think the question needed dissolving after all. As with several other topics, the answer may be obvious to many, but people tend to disagree about which is the obvious answer (or worse, have a difficult time even figuring out whether their answer agrees or disagrees with someone else’s).
I definitely welcome the series, though I have not finished it yet, and will need more time to digest it in any case.