A simple robot has a simple mind, with little consciousness. You can cause it pain, but it won’t be much pain.
The designer wrote the code for the robot and assembled the parts, proving she understands how the robot works. You and I can both inspect the robot. Can you explain the process that creates consciousness (and in particular pain), whatever those two things are, from the robot’s source code, sensors, and actuators?
Perhaps the robot simply isn’t truly aware of anything. In that case, it’s not aware that it should avoid missing the target, and it feels no pain.
It has sensors that can measure the distance between ball and target. If I wish to, I can make it display an “I missed :(” message when the distance is too great. It then changes its actions in such a way that, in a stable enough environment, this distance is on average reduced. What extra work is “aware it should avoid missing the target” doing?
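The loop just described can be sketched in a few lines. This is a minimal illustration, not any real robot's code: the names (`throw`, `TARGET`, `THRESHOLD`, the learning rate) are hypothetical, and the "sensor" and "actuator" are stand-ins for whatever hardware the robot actually has.

```python
import random

THRESHOLD = 0.5   # how far off counts as a "miss" (illustrative value)
TARGET = 10.0     # where the ball should land (illustrative value)

def throw(power):
    """Hypothetical actuator: the ball lands at a position that
    depends on throw power, plus environmental noise."""
    return power + random.gauss(0.0, 0.2)

def run(trials=100):
    power = 5.0           # initial guess at the right throw strength
    learning_rate = 0.3
    for _ in range(trials):
        landing = throw(power)
        error = TARGET - landing       # sensor: signed distance to target
        if abs(error) > THRESHOLD:
            print("I missed :(")       # the optional display message
        power += learning_rate * error # adjust actions to shrink the distance
    return power

final_power = run()
```

In a stable enough environment (here, stationary `TARGET` and bounded noise), the correction step drives the average distance down over time, which is all the behavior the passage attributes to the robot.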