Buh? We can write systems with negative reinforcement. Say, a robot that performs various movements, then releases a ball, detects how far from a target the ball landed, and executes that movement sequence less often if it landed too far. I think “the robot avoids missing the target” is a fair description, and “the robot feels pain when it misses the target” is a completely bogus one. Do you disagree?
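For concreteness, here’s roughly the system I have in mind, as a toy Python sketch (the sequence names, weights, target, and thresholds are all invented for illustration):

```python
# A toy sketch of the robot described above; sequence names, the target,
# the miss threshold, and the decay factor are all made up.
import random

TARGET = 10.0   # where we want the ball to land
TOO_FAR = 2.0   # any landing further than this counts as a miss

# Candidate movement sequences and the weights used to pick between them.
sequences = {"short_throw": 1.0, "medium_throw": 1.0, "long_throw": 1.0}

def landing_distance(sequence: str) -> float:
    """Stand-in for the physical throw plus the distance sensor."""
    typical = {"short_throw": 6.0, "medium_throw": 9.5, "long_throw": 14.0}
    return abs(typical[sequence] + random.uniform(-1.0, 1.0) - TARGET)

for _ in range(100):
    # Pick a sequence in proportion to its current weight.
    names = list(sequences)
    seq = random.choices(names, weights=[sequences[n] for n in names])[0]
    if landing_distance(seq) > TOO_FAR:
        # The "negative reinforcement": execute this sequence less often.
        sequences[seq] *= 0.9

print(sequences)  # sequences that keep missing end up with tiny weights
```

That weight decay is the entire mechanism; there is nothing else inside the sketch for “pain” to refer to.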
First off, I believe that consciousness is not discrete. That is, I can be more conscious than a dog. Given that consciousness isn’t necessarily zero or one, it seems unlikely to ever be exactly zero. As such, all systems have consciousness. A simple robot has a simple mind, with little consciousness. You can cause it pain, but it won’t be much pain.
Perhaps the robot simply isn’t truly aware of anything. In that case, it’s not aware that it should avoid missing the target, and it feels no pain. I just don’t see why adding an extra level would make it aware. If I gave it two sets of RAM, with the second repeating whatever the first holds, it would be “aware” of everything in its memory only in the sense that it copies it into another piece of memory, and that’s not going to make the robot aware.
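A toy version of that “second set of RAM” (purely illustrative):

```python
# Toy version of the "second set of RAM": memory_b just repeats memory_a.
memory_a = {"distance_to_target": 3.2, "last_sequence": "long_throw"}
memory_b = dict(memory_a)  # the extra level: a copy of whatever the first holds

# Every value is now represented twice, but nothing the robot does changes.
assert memory_b == memory_a
```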
A simple robot has a simple mind, with little consciousness. You can cause it pain, but it won’t be much pain.
The designer wrote the code for the robot and assembled the parts, proving she understands how the robot works. You and I can both inspect the robot. Can you explain the process that creates consciousness (and in particular pain), whatever those two things are, from the robot’s source code, sensors, and actuators?
Perhaps the robot simply isn’t truly aware of anything. In that case, it’s not aware that it should avoid missing the target, and it feels no pain.
It has sensors that can measure the distance between the ball and the target. If I wish to, I can make it display an “I missed :(” message when the distance is too great. It then changes its actions in such a way that, in a stable enough environment, this distance is reduced on average. What extra work is “aware it should avoid missing the target” doing?
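In code, that message is one extra line bolted onto the same kind of weight update as in my earlier sketch (the threshold and decay factor are again made-up numbers):

```python
# The ":(" display is one extra line on top of the same update rule.
def on_throw_result(distance: float, weights: dict, sequence: str,
                    too_far: float = 2.0) -> None:
    if distance > too_far:
        print("I missed :(")      # the optional message
        weights[sequence] *= 0.9  # the actual behavioural change
```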