There isn’t a very meaningful distinction between animals and machines. What does or doesn’t count as a “simple stimulus response”? Or learning?
Okay, more details: if an animal’s behavior changes when it’s repeatedly injured, it can learn, and learning is goal-oriented. But if it always does the same thing in the same situation, then whatever that action is, it doesn’t correspond to a desire. (A toy sketch of the distinction is below.)
And the reason this matters for animals is that I assume suffering, whatever it is, evolved quite long ago. After all, avoiding injury is a big part of the point of having a brain that can learn.
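Here’s a minimal, purely hypothetical sketch of that distinction; the class names and the three-injury threshold are illustrative assumptions, not a claim about how any real animal or robot works:

```python
class ReflexAgent:
    """Fixed stimulus-response: always does the same thing in the same
    situation, so no behavior change and (on the view above) nothing
    that corresponds to a desire."""

    def act(self, stimulus):
        return "withdraw" if stimulus == "heat" else "continue"


class LearningAgent:
    """Behavior changes with repeated injury: after enough bad outcomes
    in a situation, it starts avoiding that situation."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.injuries = {}  # situation -> number of injuries observed

    def observe_injury(self, situation):
        self.injuries[situation] = self.injuries.get(situation, 0) + 1

    def act(self, situation):
        if self.injuries.get(situation, 0) >= self.threshold:
            return "avoid"
        return "continue"
```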
I’ve programmed a robot to behave in the way you describe, treating bright lights as painful stimuli. Was testing it immoral?
That’s why I said it’s hairier with machines.
Um, actual pain or just disutility?
That would depend pretty heavily on how you define pain. It’s a good question; my first instinct was to say they’re the same thing, but it’s not quite that simple. Pain in animals is really just an inaccurate signal of perceived disutility. The robot’s code contained a function that “punished” states in which its photoreceptor was highly stimulated, and the robot changed its behavior in response (something like the sketch below), but I’m not sure whether that’s equivalent to animal pain, or where exactly the line is.
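For concreteness, here is the sort of thing I mean, reconstructed as a toy Q-learner; the names, the brightness threshold, and the learning constants are my assumptions for illustration, not the robot’s actual code:

```python
import random

BRIGHTNESS_THRESHOLD = 0.8  # assumed cutoff for "highly stimulated"
ACTIONS = ["forward", "turn_left", "turn_right"]

def punishment(photoreceptor_value):
    # Negative reward for states where the photoreceptor is highly
    # stimulated; this plays the role of the "punished" function above.
    return -1.0 if photoreceptor_value > BRIGHTNESS_THRESHOLD else 0.0

q = {}  # (state, action) -> estimated value

def choose_action(state, epsilon=0.1):
    # Mostly exploit current estimates, occasionally explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(state, action, photoreceptor_value, next_state,
           alpha=0.5, gamma=0.9):
    # Standard Q-learning update: value estimates drift away from
    # punished states, so behavior changes in response.
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    target = punishment(photoreceptor_value) + gamma * best_next
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (target - old)
```

Any reinforcement-learning setup with a negative reward term would do here; Q-learning is just the simplest concrete example.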
Pain has been the topic of a top-level post. I think my own comment on that thread is relevant here.
Ahh, I hadn’t seen that before. Thanks for the link.
So, did my robot experience suffering then? Or is there some broader category of negative stimulus that includes both suffering and the punishment of states in which certain variables are above certain thresholds? I think it’s pretty clear that the robot didn’t experience pain, but I’m still confused.