In effect, you’re saying that all reinforcement learners experience pleasure and suffering. But how do these algorithms “feel from the inside”? What does it mean for the variable A to be “obvious to the algorithm”? We know how we feel, but how do you determine whether there is anything it is like to be that program? Are railway lines screaming in pain when the wheel flanges rub against them? Does ChatGPT feel sorrow when it apologises on being told that its output was bad?
I see no reason to attribute emotional states to any of these things.
By ‘obvious to the algorithm’ I mean that, to the algorithm, A is referenced with no intermediate computation. This is how pleasure and pain feel to me. I do not believe all reinforcement learning algorithms feel pleasure/pain. A simple example that does not suffer is the Simpleton iterated prisoner’s dilemma strategy. I believe pain and pleasure are effective ways to implement reinforcement learning. In animals, reinforcement learning is called operant conditioning. See Reinforcement learning on a chicken for a chicken that has experienced it. I do not know of any algorithm for determining whether there is anything it is like to be a given program. I suspected this program experienced pleasure/pain because of its parallels to the neuroscience of pleasure and pain.
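For concreteness, a minimal sketch of the Simpleton strategy in its usual formulation (cooperate on the first round, then cooperate exactly when both players made the same move last round) might look like the following; the names and structure here are my own illustration, not code from the program discussed above.

```python
from typing import Optional

COOPERATE, DEFECT = "C", "D"

def simpleton(my_last: Optional[str], their_last: Optional[str]) -> str:
    """Next move for the Simpleton (win-stay, lose-shift) strategy.

    Cooperate on the first round; afterwards, cooperate exactly when both
    players made the same move in the previous round.
    """
    if my_last is None:  # first round: no history yet, so cooperate
        return COOPERATE
    # Win-stay, lose-shift: keep cooperating only if the moves matched.
    return COOPERATE if my_last == their_last else DEFECT
```

Note that it reacts to the last outcome without storing any reward variable of its own, in contrast to a learner that keeps a quantity like A around.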
The description of our feelings is not fundamentally different from the description of any reinforcement learner. They both describe the same thing—physical reality—just with different language and precision.
I see no reason to attribute emotional states to any of these things.
The reason is that they are abstractly analogous to emotional states in humans, just as the emotional state of one human may be abstractly analogous to the emotional state of another human.
I cannot see “abstractly analogous” as sufficient grounds. Get abstract enough and everything is “abstractly analogous” to everything.