The AI analogue would be: if the AI has the capacity to wirehead itself, it can make itself enter the color perception subroutines. Whether something new is learned depends on the rest of the brain architecture. In the case of humans, I would say it is clear that whenever something new is experienced, the human learns what that experience feels like. I reckon that for some people with strong visualization abilities (in a broad sense), it is possible to know what an experience feels like without experiencing it first-hand, by synthesizing a new experience from previously known ones. But in most cases there is a difference between imagining a sensation and experiencing it.
In the case of the AI, there are two possibilities. Either no information is passed between the color perception subroutine and the main processing unit, in which case the AI may have a new experience but not learn anything new; or some representation of the experience of being in the subroutine is saved to memory, in which case something new is learned.
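To make the two cases concrete, here is a minimal Python sketch of that distinction; all the names in it (Agent, ColorPerceptionSubroutine, share_with_memory) are hypothetical, invented purely to illustrate an isolated subroutine versus one whose output is written to memory.

```python
class ColorPerceptionSubroutine:
    """Toy stand-in for the experiential subroutine."""
    def run(self):
        # Internal state produced while "perceiving" the color.
        return {"qualia_state": "red-activation-pattern"}

class Agent:
    def __init__(self, share_with_memory: bool):
        self.share_with_memory = share_with_memory
        self.memory = []  # the main processing unit's store
        self.subroutine = ColorPerceptionSubroutine()

    def wirehead(self):
        state = self.subroutine.run()  # the experience happens either way
        if self.share_with_memory:
            # Case 2: a representation is saved -- something new is learned.
            self.memory.append(state)
        # Case 1: otherwise the state is discarded -- nothing new is learned.

agent_isolated = Agent(share_with_memory=False)
agent_isolated.wirehead()
print(agent_isolated.memory)   # [] -- a new experience, but nothing learned

agent_connected = Agent(share_with_memory=True)
agent_connected.wirehead()
print(agent_connected.memory)  # [{'qualia_state': ...}] -- something new learned
```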
The stronger someone's imaginative ability is, the more their imagining an experience actually is having it, in terms of brain states... and the less it is a counterexample to anything relevant.
If the knowledge the AI gets from the colour routine is unproblematically encoded in a string of bits, why can't it just look at the string of bits? For that matter, why can't Mary just look at the neural spike trains of someone seeing red?
why can't Mary just look at the neural spike trains of someone seeing red?
Why can’t we just eat a picture of a plate of spaghetti instead of actual spaghetti? Because a representation of some thing is not the thing itself. Am I missing something?
Yes: it is about a kind of knowledge. The banal truth here is that knowing about a thing doesn't turn you into it.
The significant and contentious claim is that there are certain kinds of knowledge that can only be accessed by instantiating a brain state. The existence of such subjective knowledge leads to a further argument against physicalism.