Robots take in observations. They make theories that explain their observations. Different robots will make different observations and communicate them to each other. Thus, they will talk about observations.
After making enough observations they make theories of physics. (They had to talk about observations before they made low-level physics theories, though; after all, they came to theorize about physics through their observations). They also make bridge laws explaining how their observations are related to physics. But, they have uncertainty about these bridge laws for a significant time period.
The robots theorize that humans are similar to them, based on the fact that humans have a functionally similar cognitive architecture; thus, they theorize that humans have observations as well. (The bridge laws they posit are symmetric that way, rather than being silicon-chauvinist)
I think you are using the word “observation” to refer to consciousness. If this is true, then I do not deny that humans take in observations and process them.
However, I think the issue is that you have simply re-defined consciousness into something which would be unrecognizable to the philosopher. To that extent, I don’t say you are wrong, but I will allege that you have not done enough to respond to the consciousness-realist’s intuition that consciousness is different from physical properties. Let me explain:
If qualia are just observations, then it seems obvious that Mary is not missing any information in her room, since she can perfectly well understand and model the process by which people receive color observations.
Likewise, if qualia are merely observations, then the Zombie argument amounts to saying that p-Zombies are beings which can’t observe anything. This seems patently absurd to me, and doesn’t seem like it’s what Chalmers meant at all when he came up with the thought experiment.
Likewise, if we were to ask, “Is a bat conscious?” then the answer would be a vacuous “yes” under your view, since bats have echolocators which take in observations and process information.
On this view, even my computer is conscious, since it has a camera on it. For this reason, I suggest we are talking about two different things.
Mary’s room seems uninteresting, in that robot-Mary can predict pretty well what bit-pattern she’s going to get upon seeing color. (To the extent that the human case is different, it’s because of cognitive architecture constraints)
Regarding the zombie argument: The robots have uncertainty over the bridge laws. Under this uncertainty, they may believe it is possible that humans don’t have experiences, due to the bridge laws only identifying silicon brains as conscious. Then humans would be zombies. (They may have other theories saying this is pretty unlikely / logically incoherent / etc)
Basically, the robots have a primitive entity “my observations” that they explain using their theories. They have to reconcile this with the eventual conclusion they reach that their observations are those of a physically instantiated mind like other minds, and they have degrees of freedom in which things they consider “observations” of the same type as “my observations” (things that could have been observed).
As a qualia denier, I sometimes feel like I side more with the Chalmers side of the argument, which at least admits that there’s a strong intuition for consciousness. It’s not that I think that the realist side is right, but it’s that I see the naive physicalists making statements that seem to completely misinterpret the realist’s argument.
I don’t mean to single you out in particular. However, you state that Mary’s room seems uninteresting because Mary is able to predict the “bit pattern” of color qualia. This seems to me to completely miss the point. When you look at the sky and see blue, is it immediately apprehensible as a simple bit pattern? Or does it at least seem to have qualitative properties too?
I’m not sure how to import my argument onto your brain without you at least seeing this intuition, which is something I considered obvious for many years.
There is a qualitative redness to red. I get that intuition.
I think “Mary’s room is uninteresting” is wrong; it’s uninteresting in the case of robot scientists, but interesting in the case of humans, in part because of what it reveals about human cognitive architecture.
I think in the human case, I would see Mary seeing a red apple as gaining in expressive vocabulary rather than information. She can then describe future things as “like what I saw when I saw that first red apple”. But, in the case of first seeing the apple, the redness quale is essentially an arbitrary gensym.
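The gensym analogy can be made concrete with a toy sketch (my own illustration, not anything proposed in the thread itself): the internal label Mary's system allocates on first exposure is arbitrary, and what matters is only that later inputs can be matched against it.

```python
# Toy model of "the redness quale is an arbitrary gensym".
# All names here (Perceiver, see, gensym) are illustrative, not from the thread.
import itertools

_counter = itertools.count()

def gensym(prefix="quale"):
    """Return a fresh, arbitrary identifier, like Lisp's GENSYM."""
    return f"{prefix}-{next(_counter)}"

class Perceiver:
    def __init__(self):
        # stimulus signature -> arbitrary internal label
        self.associations = {}

    def see(self, stimulus):
        # On first exposure, allocate a fresh arbitrary symbol; on
        # re-exposure, return the *same* symbol. The symbol carries no
        # information about the stimulus itself; it only supports the
        # judgment "like what I saw when I saw that first red apple".
        if stimulus not in self.associations:
            self.associations[stimulus] = gensym()
        return self.associations[stimulus]

mary = Perceiver()
first = mary.see("red-apple-wavelengths")
second = mary.see("red-apple-wavelengths")
assert first == second                            # recognizable as "the same again"
assert first != mary.see("blue-sky-wavelengths")  # a distinct quale
```

The point of the sketch is that nothing about the label itself is informative; whichever symbol got allocated first would have served equally well, which is what "arbitrary gensym" is meant to convey.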
I suppose I might end up agreeing with the illusionist view on some aspects of color perception, then, in that I predict color quales might feel like new information when they actually aren’t. Thanks for explaining.
I predict color quales might feel like new information when they actually aren’t.
I am curious whether you disagree with the claim that (human) Mary gains implicit information: despite already knowing many facts about redness, her (human) optic system wouldn’t have been able to predict the incoming visual data from the apple before seeing it, but afterwards can.
That does seem right, actually.
Now that I think about it, due to this cognitive architecture issue, she actually does gain new information. If she sees a red apple in the future, she can know that it’s red (because it produces the same quale as the first red apple), whereas she might be confused about the color if she hadn’t seen the first apple.
I think I got confused because, while she does learn something upon seeing the first red apple, it isn’t the naive “red wavelengths are red-quale”, it’s more like “the neurons that detect red wavelengths got wired and associated with the abstract concept of red wavelengths.” Which is still, effectively, new information to Mary-the-cognitive-system, given limitations in human mental architecture.