I think your analogy betrays you: an AI wouldn’t need to have an actual DVD player to turn the ones and zeroes into an experience of the film, it would just need to know the right algorithm.
Let’s be clear here: you’re advocating an epistemically non-reductionist position, which should seem at least a little weird: if brains are made of atoms, why should the hanging questions of what an experience feels like be unanswerable from knowledge of the brain structure?
Let’s be clear here—I’m advocating no such thing. My position is firmly reductionist. Also, we’re talking about Mary, not an AI. That counterexample is completely immaterial and is basically shifting the goalposts, at least as I understand it.
Any experience is, basically, a firing of neurons. It’s not something that “emerges” from the firing of neurons; it is the firing of neurons, followed by the firing of other neurons that record the experience in one’s memory. What it feels like to be a bat is a fact about a bat brain. You neither have a bat brain nor have the capacity to simulate one; therefore, you cannot know what it feels like to be a bat. Mary has never had her red-seeing neurons fired; therefore, she does not know what red looks like.
If Mary were an advanced AI, she could reason as follows: “I understand the physics of red light. And I fully understand my visual apparatus. And I know that red would stimulate my visual sensors by activating neurons 2,839,834,843 and 12,345. But I’m an AI, so I can just fire those neurons on my own. Aha! That’s what red looks like!” Mary obviously has no such capacity. Even if she knows everything about the visual system and the physics of red light, even if she knows precisely which neurons control seeing red, she cannot fire them manually. Neither can she modify her memory neurons to reflect an experience she has not had. Knowing what red looks like is a fact about Mary’s brain, and she cannot make her brain work that way without actually seeing red or having an electrode stimulate specific neurons. She’s only human.
Of course, she could rig some apparatus to her brain that would fire them for her. If we give her that option, it follows that knowing enough about red would in fact allow her to understand what red looks like without ever seeing it.
Doesn’t it follow that Mary, since she knows everything about color, must have both the electrodes and the desire and ability to perform brain surgery on herself? There is a truly fabulous story, rkunyngvba ol grq puvnat, in which the protagonist does this, but since it only happens halfway through, I don’t want to spoil it.
Once again, Mary knows everything knowable by description only. Whether that amounts to everything simpliciter is the puzzle.
But you still haven’t explained why she would need to fire her own neurons. She doesn’t need to photosynthesise to understand photosynthesis.