One more thing: if the sensor values are taken as absolute truth and the motor commands are simply adjusted to match them, that still wouldn’t suffice. But if you include a camera as well as the proprioceptors, plus appropriate programming to reconcile the two information sources into a picture of an underlying reality, and make explicit comparisons back to each sensory domain, then you’ve got it.
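To make that concrete, here is a minimal sketch of the reconciliation loop (the sensors, noise levels, and fusion weights are all hypothetical, not a claim about any particular robot): two noisy sensory channels report on one underlying quantity, the agent fuses them into a model of the reality behind both, and then compares that model back against each raw channel.

```python
import random

def read_proprioceptor(true_angle):
    """Noisy joint-angle reading (appearance #1, hypothetical noise level)."""
    return true_angle + random.gauss(0, 0.05)

def read_camera(true_angle):
    """Noisy visually estimated angle (appearance #2, hypothetical noise level)."""
    return true_angle + random.gauss(0, 0.02)

def reconcile(proprio, vision):
    """Fuse the two appearances into one estimate of the underlying reality,
    weighting the less noisy channel more heavily (weights assumed)."""
    return 0.2 * proprio + 0.8 * vision

def appearance_vs_reality(true_angle):
    proprio = read_proprioceptor(true_angle)
    vision = read_camera(true_angle)
    model = reconcile(proprio, vision)
    # The explicit comparison back to each sensory domain: the residuals
    # are the agent's handle on "how things appear" vs. "how they are".
    return {"model": model,
            "proprio_residual": proprio - model,
            "vision_residual": vision - model}

print(appearance_vs_reality(true_angle=1.0))
```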
Note that if two agents (robotic or human) agree on what external reality is like, but have no access to each other’s percepts, the whole realm of subjective experience will seem quite mysterious. Each can doubt that the other’s visual experience is like its own, for example (although obviously certain structural isomorphisms must obtain). Etc. Whereas, if an agent has no access to its own subjective states independent of its picture of reality, it will see no such problem. Agreement on external reality satisfies its curiosity entirely. This is why I brought the issue up. I apologize for not explaining that earlier; it’s probably hard to see what I’m getting at without knowing why I think it’s relevant.
I’ve seen a system that I’m pretty sure fulfills your criteria: it uses a set of cameras at carefully defined positions and reconciles their images to estimate the exact location of an object with a very specific colour and appearance. That would be the “phenomenal consciousness” you describe; but I would not call that system any more or less conscious than any other computer.
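For concreteness, a minimal sketch of that kind of multi-camera reconciliation (the three-camera setup and all names are hypothetical, not the actual system): each camera at a known position reports a bearing toward the object, and the least-squares intersection of the sight lines is the system’s “reality” behind the individual appearances.

```python
import numpy as np

def triangulate(positions, directions):
    """positions: (n, 3) camera locations; directions: (n, 3) unit bearing
    vectors. Returns the point minimizing the summed squared distance to
    every sight line (normal equations of the least-squares problem)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(positions, directions):
        proj = np.eye(3) - np.outer(d, d)  # projects off the ray direction
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Toy check with a known target and noiseless bearings.
cams = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
target = np.array([1.0, 1.0, 3.0])
dirs = np.array([(target - c) / np.linalg.norm(target - c) for c in cams])
print(triangulate(cams, dirs))  # ~ [1. 1. 3.]
```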
Ah—surely that sort of puzzlement requires something more than just an appearance-reality distinction. It requires an appearance-reality distinction and the ability to select its own thoughts. While the specific system I described above has an appearance-reality distinction, I have yet to see any sign that it is capable of choosing what to think about.
Ah, thank you. That makes it a lot clearer.
That (thought selection) seems like a good angle. I just wanted to throw out a necessary condition for phenomenal consciousness, not a sufficient one.