You… disagree? Do you mean your own intuition is different, or do you mean you have some special insight into my psychology that tells you that I’m misunderstanding or misrepresenting my own intuitions?
I mean my intuition is different.
I don’t feel that mental states are simple! Yet the Mary hunch persists. You seem to be hopping back and forth between the explanations ‘qualia seem irreducible because we don’t know enough about them yet’ and ‘qualia seem irreducible because we don’t realize how complicated they are’.
Alright, I’ll try to stop hopping and nail down what I’m saying:
I think the most likely reason that qualia seem irreducible is because of some kind of software problem in the brain that makes it extremely difficult, if not impossible, for us to translate the sort of “experiential knowledge” found in the unconscious “black box” parts of the brain into the sort of verbal, propositional knowledge that we can communicate to other people by language. The high complexity of our minds probably compounds the difficulty even further.
I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain “experiential knowledge” just by reading the verbal statements.
In addition to making qualia seem irreducible, this phenomenon explains other things, such as the fact that many activities are easier to learn by doing than by reading or hearing about them.
I’ve never actually read any Dennett, except for short summaries of some of his criticisms written by other people. One person who has influenced me a lot is Thomas Sowell, who frequently argues that the most important knowledge is implicit and extremely difficult, if not impossible, to articulate in verbal form. He makes this argument in the context of economics, but when I started reading about the ineffability of qualia I immediately thought, “This probably has a similar explanation.”
I think this problem goes both ways. So even if we could get some kind of AI to translate the knowledge into verbal statements for us, it would be impossible, or very difficult, for anything resembling a normal human to gain “experiential knowledge” just by reading the verbal statements.
Mary isn’t a normal human. The point of the story is to explore the limits of explanation. That being the case, Mary is granted unlimited intelligence, so that whatever limits she encounters are limits of explanation, and not her own limits.
I think the most likely reason that qualia seem irreducible is because of some kind of software problem in the brain that makes it extremely difficult, if not impossible, for us to translate the sort of “experiential knowledge” found in the unconscious “black box” parts of the brain into the sort of verbal, propositional knowledge that we can communicate to other people by language. The high complexity of our minds probably compounds the difficulty even further.
Whatever is stopping Mary from understanding qualia, if you grant that she does not, is not their difficulty relative to her abilities, as explained above. We might not be able to understand our qualia because we are too stupid, but Mary does not have that problem.
If you’re asserting that Mary does not have the software problem that makes it impossible to derive “experiential knowledge” from verbal data, then the answer to the puzzle is “Yes, Mary does know what red looks like, and won’t be at all surprised. BTW, the reason our intuition tells us the opposite is that our normal simulate-other-humans procedures aren’t capable of imagining that kind of architecture.”
Otherwise, simply postulating that she has unlimited intelligence is a bit of a red herring. All that means is that she has a lot of verbal processing power; it doesn’t mean all bugs in her mental architecture are fixed. To follow the kernel object analogy: I can run a program on any speed of CPU, but it will never be able to get a handle to a kernel redness object if it doesn’t have access to the OS API. The “intelligence” of the program isn’t a factor (this is how we’re able to run high-speed JavaScript in browsers without every JS program being a severe security risk).
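The analogy can be made concrete with a toy sketch (the names `Kernel`, `open_handle`, and `sandboxed_program` are my own illustrative inventions, not any real OS API): no amount of extra compute lets sandboxed code reach an object it was never handed a capability for.

```python
class Kernel:
    """Holds a private 'redness' object; only code granted the
    open_handle capability can ever obtain a reference to it."""
    def __init__(self):
        self._redness = object()  # the private kernel object

    def open_handle(self):
        # Privileged API: callers with access to this method get a handle.
        return self._redness

def sandboxed_program(api):
    """A 'program' that only sees the API surface it was given.
    Extra CPU time (the busywork below) doesn't widen that surface."""
    _ = sum(n * n for n in range(10_000))  # arbitrary amount of 'intelligence'
    return "open_handle" in api            # can only use what it was handed

kernel = Kernel()

# Sandboxed code given an empty API can compute forever but has no
# path to the redness object.
print(sandboxed_program(api={}))   # False

# Privileged code was given the capability, so it gets a real handle.
handle = kernel.open_handle()
print(handle is kernel._redness)   # True
```

The design point mirrors the argument: what separates the two callers is not processing power but which references they were granted, which is why raising the sandboxed program's "intelligence" changes nothing.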
If this is the case then, as I said before, my intuition that she would not understand qualia disappears.
For any value of abnormal? She is only quantitatively superior: she does not have brain-rewiring abilities.