This is somewhat circular. There isn’t anyone who knows everything about the visual system. Thus, we’re hypothesizing that knowing everything about the visual system is insufficient to understand what red looks like… in order to prove that knowing everything about the visual system is insufficient to understand what red looks like.
Even given this, the obvious solution seems to be that “What red looks like” is a fact about Mary’s brain. She needn’t have seen red light to see red; properly stimulating some neurons would result in the same effect. That the experience is itself a data point that cannot be explained through other means seems obvious. One could not experience a taste by reading about it.
Maybe the best analogy is to data translation. You can have a DVD. You could memorize (let’s pretend) every zero and every one in that DVD. But if you don’t have a DVD player, you can never watch it. The human brain does not appear to be able to translate zeroes and ones into a visual experience. Similarly, you can’t know what sex feels like for the opposite sex; you simply don’t have the equipment.
DVD players do not require magic to work; why should the brain?
A better analogy would be: you have a DVD and a complete set of schematics for a DVD player, and the ability to understand both, but still can’t figure out what the DVD would look like when viewed.
I think your analogy betrays you: an AI wouldn’t need to have an actual DVD player to turn the ones and zeroes into an experience of the film, it would just need to know the right algorithm.
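To make that concrete, here’s a toy sketch in Python. The one-byte-per-row “format” is invented for the example and is nothing like real DVD encoding; the point is only that the “player” can be a pure function, and anything able to run that function gets the picture.

```python
# Toy illustration: "decoding" is an algorithm, not a piece of hardware.
# The bitstream format is made up for this example: each byte encodes
# one row of an 8-pixel-wide black-and-white image.

def decode(bitstream: bytes) -> list[str]:
    """Turn raw bytes into rows of pixels ('#' = on, '.' = off)."""
    return [
        "".join("#" if (byte >> (7 - i)) & 1 else "." for i in range(8))
        for byte in bitstream
    ]

# As bare numbers, the disc's contents are meaningless...
disc = bytes([0b00111100, 0b01000010, 0b01000010, 0b00111100])

# ...but anything that knows the algorithm can recover the image.
for row in decode(disc):
    print(row)
```

Whether running the decoder in your head counts as knowing what the image looks like is, of course, exactly what’s in dispute here.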
Let’s be clear here: you’re advocating an epistemically non-reductionist position, which should seem at least a little weird. If brains are made of atoms, why should the hanging question of what an experience feels like be unanswerable from knowledge of the brain’s structure?
Let’s be clear here—I’m advocating no such thing. My position is firmly reductionist. Also, we’re talking about Mary, not an AI. That counterexample is completely immaterial and is basically shifting the goalposts, at least as I understand it.
Any experience is, basically, a firing of neurons. It’s not something that “emerges” from the firing of neurons; it is the firing of neurons, followed by the firing of other neurons that record the experience in one’s memory. What it feels like to be a bat is a fact about a bat brain. You neither have a bat brain nor have the capacity to simulate one; therefore, you cannot know what it feels like to be a bat. Mary has never had her red-seeing neurons fired; therefore, she does not know what red looks like.
If Mary were an advanced AI, she could reason as follows: “I understand the physics of red light. And I fully understand my visual apparatus. And I know that red would stimulate my visual sensors by activating neurons 2,839,834,843 and 12,345. But I’m an AI, so I can just fire those neurons on my own. Aha! That’s what red looks like!” Mary obviously has no such capacity. Even if she knows everything about the visual system and the physics of red light, even if she knows precisely which neurons control seeing red, she cannot fire them manually. Neither can she modify her memory neurons to reflect an experience she has not had. Knowing what red looks like is a fact about Mary’s brain, and she cannot make her brain work that way without actually seeing red or having an electrode stimulate specific neurons. She’s only human.
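Restating that asymmetry as an interface, purely as a hypothetical sketch (the class names, the wavelength range, and the "red_neuron" label are all invented for illustration): both agents can perceive, but only the AI has write access to its own activations.

```python
# Hypothetical sketch of the asymmetry claimed above: both agents have
# state that external red light can change, but only the AI can reach
# in and change it directly. All names and numbers are invented.

class Human:
    def __init__(self):
        self._activations = set()  # internal state; no direct handle on it

    def perceive(self, wavelength_nm: float):
        """Only an external stimulus can fire the red-seeing neurons."""
        if 620 <= wavelength_nm <= 750:  # roughly the red band
            self._activations.add("red_neuron")

class AI(Human):
    def fire(self, neuron: str):
        """An AI can stimulate its own substrate directly."""
        self._activations.add(neuron)

mary = Human()
# Short of seeing red light (or rigging an electrode), mary has no way
# to get "red_neuron" into her activations.

ai_mary = AI()
ai_mary.fire("red_neuron")  # "Aha! That's what red looks like!"
```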
Of course, she could rig some apparatus to her brain that would fire them for her. If we give her that option, it follows that knowing enough about red would in fact allow her to understand what red looks like without ever seeing it.
Doesn’t it follow that Mary, since she knows everything about color, must have both the electrodes and the desire (and ability) to perform brain surgery on herself? There is a truly fabulous story, rkunyngvba ol grq puvnat, in which the protagonist does this, but since it only happens halfway through, I don’t want to spoil it.
Once again, Mary knows everything knowable by description only. Whether that amounts to everything simpliciter is the puzzle.
But you still haven’t explained why she would need to fire her own neurons. She doesn’t need to photosynthesise to understand photosynthesis.