There is a qualitative redness to red. I get that intuition.
I think “Mary’s Room is uninteresting” is wrong; it’s uninteresting in the case of robot scientists, but interesting in the case of humans, in part because of what it reveals about human cognitive architecture.
In the human case, I would see Mary seeing a red apple as gaining expressive vocabulary rather than information. She can then describe future things as “like what I saw when I saw that first red apple”. But at the moment of first seeing the apple, the redness quale is essentially an arbitrary gensym.
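To make the gensym point concrete, here is a toy Python sketch (the names `Perceiver` and `quale_of` are mine, purely illustrative, not anyone’s actual model): the token minted on first exposure carries no facts about red in itself; it only becomes useful as a label for comparing later experiences.

```python
import itertools

class Perceiver:
    """Toy model: a quale as a freshly minted, arbitrary token (a gensym)."""

    _counter = itertools.count()

    def __init__(self):
        # Before first exposure, no token exists for a stimulus category.
        self.quale_of = {}

    def perceive(self, stimulus):
        # On first exposure, mint a fresh symbol with no intrinsic content;
        # on later exposures, return the same symbol.
        if stimulus not in self.quale_of:
            self.quale_of[stimulus] = f"quale-{next(self._counter)}"
        return self.quale_of[stimulus]

mary = Perceiver()
first = mary.perceive("red")   # e.g. "quale-0": arbitrary, carries no facts about red
later = mary.perceive("red")   # same token: "like what I saw that first time"
assert first == later
```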
I suppose I might end up agreeing with the illusionist view on some aspects of color perception, then, in that I predict color qualia might feel like new information when they actually aren’t. Thanks for explaining.
I am curious: do you disagree with the claim that (human) Mary gains implicit information, in that (despite already knowing many facts about redness) her (human) visual system couldn’t have predicted the incoming visual data from the apple before she saw it, but can afterwards?
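One way I would cash out “implicit information” (my framing, with invented names like `VisualSystem`): treat the visual system as a pattern store that can only anticipate signals it has actually encountered, no matter how many propositional facts Mary knows.

```python
class VisualSystem:
    """Toy predictor: anticipates only signals it has experienced before."""

    def __init__(self, known_facts):
        self.known_facts = known_facts   # book learning; unused by prediction
        self.experienced = set()         # stored perceptual patterns

    def can_predict(self, signal):
        # Facts about wavelengths don't help here; only prior exposure does.
        return signal in self.experienced

    def experience(self, signal):
        self.experienced.add(signal)

mary = VisualSystem(known_facts={"red is roughly 620-750 nm"})
assert not mary.can_predict("red-apple-signal")  # before seeing the apple
mary.experience("red-apple-signal")
assert mary.can_predict("red-apple-signal")      # afterwards: implicit gain
```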
That does seem right, actually.
Now that I think about it, because of this cognitive-architecture issue, she actually does gain new information. If she sees a red apple in the future, she can know that it’s red (because it produces the same quale as the first red apple), whereas she might have been confused about the color if she hadn’t seen that first apple.
I think I got confused because, while she does learn something upon seeing the first red apple, it isn’t the naive “red wavelengths are the red quale”; it’s more like “the neurons that detect red wavelengths got wired up and associated with the abstract concept of red wavelengths.” That is still, effectively, new information to Mary-the-cognitive-system, given the limitations of human mental architecture.
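A minimal sketch of that “wiring” story, under my own toy assumptions (every name below is hypothetical): Mary already holds the abstract concept and all its facts; what seeing the apple adds is the link from a live perceptual channel to that concept.

```python
# Mary's book knowledge: the abstract concept of red, fully specified.
concepts = {"red": {"wavelength_nm": (620, 750), "is_primary": True}}

# Before the apple, no perceptual channel is wired to any concept.
percept_to_concept = {}

def identify(channel):
    """Which concept, if any, does a live perceptual signal map to?"""
    return percept_to_concept.get(channel, "unidentified color")

assert identify("L-cone-dominant-signal") == "unidentified color"  # in the room
percept_to_concept["L-cone-dominant-signal"] = "red"               # the apple wires the link
assert identify("L-cone-dominant-signal") == "red"                 # the new information is the link
```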