For a number of modes of information-gathering, nature has equipped us with internal access to our own states (subjective colors, sounds, etc.) as well as the external world-properties themselves. That’s something today’s computers (outside of an AI lab maybe) don’t do.
Surely any computer that controls an automated process must do this?
Consider, for example, a robotic arm used to manufacture a car. The software knows that if the arm moves like so, then it will be holding the door in the right place to be attached; and it knows this before it actually moves the arm. So it must have internal knowledge of its own state, and of possible future states.
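Roughly, something like the following toy sketch, perhaps. The two-link arm geometry, link lengths, and target are all invented purely for illustration; the point is just that the controller can work out where the gripper would end up before any motor moves.

```python
import math

# Invented two-joint planar arm, link lengths in metres (illustrative only).
L1, L2 = 0.8, 0.6

def predicted_gripper_position(shoulder, elbow):
    """Forward kinematics: where the gripper WOULD be for the given joint
    angles (in radians), computed before any motor is actually driven."""
    x = L1 * math.cos(shoulder) + L2 * math.cos(shoulder + elbow)
    y = L1 * math.sin(shoulder) + L2 * math.sin(shoulder + elbow)
    return (x, y)

target = (1.0, 0.5)        # where the door needs to be held (made-up numbers)
candidate = (0.4, 0.9)     # candidate joint angles, in radians

# Evaluate the move purely internally, before moving anything:
x, y = predicted_gripper_position(*candidate)
error = math.hypot(x - target[0], y - target[1])
print(f"predicted gripper position: ({x:.3f}, {y:.3f}), off target by {error:.3f} m")
```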
Isn’t that exactly what you describe here?
I was focusing on perceptual channels, so your motor-channel example would be analogous, but not the same. If the robot uses proprioception to locate the arm, and if it makes an appearance/reality distinction on the proprioceptive information, then you have a true example.
Hmmm.
Assume for the moment that the robot has a sensor of some type on each joint that can tell it the angle at which that joint is currently held; that would be a robotic form of proprioception.
And if it considers hypothetical future states of the arm, as it must in order to move the arm safely, then it must work out what proprioceptive information it expects to get from the arm, and compare this to the reality (the actual changes in sensor values) during the movement.
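Something like this sketch, say. The interpolation scheme, the numbers, and the tolerance are all made up for illustration; the point is only the expect-then-compare loop.

```python
# Sketch: the controller predicts the joint-sensor readings it expects during a
# move, then compares them with what the sensors actually report.

def expected_readings(start_angles, goal_angles, steps):
    """Joint angles the robot *expects* its sensors to report at each step
    (here, just a straight-line interpolation from start to goal)."""
    return [
        [s + (g - s) * t / steps for s, g in zip(start_angles, goal_angles)]
        for t in range(1, steps + 1)
    ]

def step_matches(expected, actual, tolerance=0.05):
    """Does predicted proprioception agree with the real sensor values?"""
    return all(abs(e - a) <= tolerance for e, a in zip(expected, actual))

start, goal = [0.0, 0.0], [0.6, 1.2]
plan = expected_readings(start, goal, steps=3)

# Pretend these values came back from the joint sensors during the move:
actual_per_step = [[0.21, 0.40], [0.40, 0.80], [0.59, 1.21]]

for step, (exp, act) in enumerate(zip(plan, actual_per_step)):
    status = "ok" if step_matches(exp, act) else "MISMATCH, stop the arm"
    print(f"step {step}: expected {exp}, read {act} -> {status}")
```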
I think that’s an example of what you’re talking about...
One more thing: if the sensor values are taken as absolute truth and the motor commands are adjusted to meet those criteria, that still wouldn't suffice. But if you include a camera as well as the proprioceptors, plus appropriate programming to reconcile the two information sources into a picture of an underlying reality and to make explicit comparisons back to each sensory domain, then you've got it.
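As a rough illustration of what I mean (the fusion rule, the variances, and the numbers are all invented; a real system would be far more elaborate): the reconciled estimate is a third thing, distinct from either sensor report, and each report can then be checked back against it.

```python
# Sketch: fuse a proprioceptive position estimate with a camera-based one into
# a single "best guess about reality", then compare each sense back to it.

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance weighted average of two noisy estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)

# Gripper height (m) as inferred from the joint sensors vs. from the camera:
from_proprioception, var_proprio = 0.520, 0.0004
from_camera, var_camera = 0.498, 0.0001

reality_estimate = fuse(from_proprioception, var_proprio, from_camera, var_camera)

# Explicit comparison of each "appearance" back to the reconstructed reality:
for name, reading in [("proprioception", from_proprioception),
                      ("camera", from_camera)]:
    residual = reading - reality_estimate
    print(f"{name}: reported {reading:.3f}, "
          f"reality estimate {reality_estimate:.3f}, residual {residual:+.3f}")
```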
Note that if two agents (robotic or human) agree on what external reality is like, but have no access to each other’s percepts, the whole realm of subjective experience will seem quite mysterious. Each can doubt that the other’s visual experience is like its own, for example (although obviously certain structural isomorphisms must obtain). Etc. Whereas, if an agent has no access to its own subjective states independent of its picture of reality, it will see no such problem. Agreement on external reality satisfies its curiosity entirely. This is why I brought the issue up. I apologize for not explaining that earlier; it’s probably hard to see what I’m getting at without knowing why I think it’s relevant.
Ah, thank you. That makes it a lot clearer.
I’ve seen a system that I’m pretty sure fulfills your criteria: it uses multiple cameras at carefully defined positions and reconciles the pictures from those cameras to try to figure out the exact location of an object with a very specific colour and appearance. That would be the “phenomenal consciousness” that you describe; but I would not call that system any more or less conscious than any other computer.
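In toy form, that kind of system does something like the following. A real system uses full camera calibration and image processing; here two cameras at known positions just report a bearing angle to the object, and all the numbers are invented:

```python
import math

# Toy multi-camera localisation: two cameras at known positions each report a
# bearing angle to the coloured object; intersecting the two rays gives one
# reconciled estimate of where the object "really" is.

def intersect_bearings(cam1, angle1, cam2, angle2):
    """Intersect two rays (camera position plus bearing, in radians) in the plane."""
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((cam2[0] - cam1[0]) * d2[1] - (cam2[1] - cam1[1]) * d2[0]) / denom
    return (cam1[0] + t * d1[0], cam1[1] + t * d1[1])

cam_a, cam_b = (0.0, 0.0), (2.0, 0.0)
bearing_a, bearing_b = math.radians(45.0), math.radians(135.0)

object_position = intersect_bearings(cam_a, bearing_a, cam_b, bearing_b)
print(f"reconciled object position: {object_position}")

# Each camera's own view can then be checked back against the fused estimate:
for cam, bearing in [(cam_a, bearing_a), (cam_b, bearing_b)]:
    predicted = math.atan2(object_position[1] - cam[1], object_position[0] - cam[0])
    print(f"camera at {cam}: reported bearing {math.degrees(bearing):.1f} deg, "
          f"predicted from estimate {math.degrees(predicted):.1f} deg")
```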
Ah, surely that requires something more than just an appearance-reality distinction. That requires an appearance-reality distinction and the ability to select its own thoughts. While the specific system I described above has an appearance-reality distinction, I have yet to see any sign that it is capable of choosing what to think about.
That (thought selection) seems like a good angle. I just wanted to throw out a necessary condition for phenomenal consciousness, not a sufficient one.