Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates; at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity.
Now this is not necessarily a binary distinction: if shrimp have intrinsic moral value, it does not follow that they must have an equal value to humans or other ‘higher’ animals. As I see it, there are two possibilities: either we come to a point where moral value drops to zero, or else the value of ever-simpler entities merely approaches zero without ever reaching it; e.g. a C. elegans roundworm with its 300 neurons might have a ‘hedonic coefficient’ of 3x10^-9. I personally favor the former; the latter just seems absurd to me, but I am open to arguments or any comments/criticisms.
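The comment doesn’t say where a figure like 3x10^-9 would come from; one assumption that happens to reproduce it is to scale moral weight linearly by neuron count relative to a human brain (roughly 8.6x10^10 neurons). A minimal sketch of that calculation, purely illustrative:

```python
# Hypothetical derivation of the 'hedonic coefficient' above: scale linearly
# by neuron count relative to a human brain. The linear-scaling rule is an
# assumption on my part, not something stated in the comment.
C_ELEGANS_NEURONS = 3.0e2   # ~300 neurons, as in the comment
HUMAN_NEURONS = 8.6e10      # ~86 billion neurons

hedonic_coefficient = C_ELEGANS_NEURONS / HUMAN_NEURONS
print(f"{hedonic_coefficient:.1e}")  # ~3.5e-09, i.e. on the order of 3x10^-9
```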
Less absurd than the view that some organism is infinitely more valuable than a sibling that differs only in lacking a single mutation (in the case of the first organism of a particular species to have evolved “high” enough to have minimal moral value)?
Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to assess, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that’s just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the universe from suffering (yes, they are sparse, but with time our superintelligence will find more and more of them, in theory ultimately infinitely many).
The instrumental value of most humans is enormously higher than the intrinsic value of the same persons—given that they do sufficiently good things.
My answer: if it shows signs of not wanting something to happen, such as avoiding a situation, it’s best not to have it happen. Of course, simple stimulus response doesn’t count, but if an animal can learn, it shouldn’t be tortured for fun.
This only applies to animals, though. I’m not sure about machines.
There isn’t a very meaningful distinction between animals and machines. What does or doesn’t count as a “simple stimulus response”? Or learning?
Okay, more details: if an animal’s behavior changes when it’s repeatedly injured, it can learn. And learning is goal-oriented. But if it always does the same thing in the same situation, whatever that action is, it doesn’t correspond to a desire.
And the reason this matters for animals is that, whatever suffering actually is, I’d guess it evolved quite long ago. After all, avoiding injury is a big part of the point of having a brain that can learn.
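To make that distinction concrete, here is a toy sketch of my own (the agent names, locations, and thresholds are hypothetical, not anything from this thread): a fixed reflex produces the same action in the same situation every time, while a learner’s behaviour in the same situation changes after repeated injury.

```python
class ReflexAgent:
    """Fixed stimulus-response: always the same action in the same situation."""
    def act(self, noxious_stimulus: bool) -> str:
        return "withdraw" if noxious_stimulus else "forage"


class LearningAgent:
    """Builds up avoidance of a location each time it is injured there."""
    def __init__(self):
        self.avoidance = {}  # location -> learned avoidance strength

    def act(self, location: str) -> str:
        # The same situation can produce different actions as experience accrues.
        return "avoid" if self.avoidance.get(location, 0.0) > 0.5 else "approach"

    def observe_injury(self, location: str) -> None:
        self.avoidance[location] = self.avoidance.get(location, 0.0) + 0.2


learner = LearningAgent()
for trial in range(5):
    action = learner.act("hot_plate")
    if action == "approach":
        learner.observe_injury("hot_plate")  # gets hurt and updates
    print(trial, action)  # "approach" on early trials, "avoid" from trial 3 onward
```

On the criterion above, only something like the second agent would even be a candidate for moral concern.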
I’ve programmed a robot to behave in the way you describe, treating bright lights as painful stimuli. Was testing it immoral?
That’s why I said it’s hairier with machines.
Um, actual pain or just disutility?
That would depend pretty heavily on how you define pain. This is a good question; my first instinct was to say that they’re the same thing, but it’s not quite that simple. Pain in animals is really just an inaccurate signal of perceived disutility. The robot’s code contained a function that “punished” states in which its photoreceptor was highly stimulated, and the robot made changes to its behavior in response, but I’m really not sure if that’s equivalent to animal pain, or where exactly that line is.
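For what it’s worth, here is a rough sketch of the kind of setup described, under my own assumptions (the sensor model, reward values, and learning rule are mine, not the actual robot’s code): states in which the photoreceptor reading exceeds a threshold are “punished”, and the controller changes its behaviour in response.

```python
import random

PUNISH_THRESHOLD = 0.8
# Assumed sensor model: how strongly each action stimulates the photoreceptor.
BRIGHTNESS = {"toward_light": 0.9, "away_from_light": 0.2}


def punishment(photoreceptor_reading: float) -> float:
    """Negative reinforcement for states where the photoreceptor is highly stimulated."""
    return -1.0 if photoreceptor_reading > PUNISH_THRESHOLD else 0.0


values = {action: 0.0 for action in BRIGHTNESS}  # learned action values
ALPHA, EPSILON = 0.5, 0.1  # learning rate, exploration rate

for step in range(200):
    # Mostly take the best-valued action so far, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = punishment(BRIGHTNESS[action])
    values[action] += ALPHA * (reward - values[action])

print(values)  # "toward_light" ends up near -1.0, so the robot has learned to avoid bright light
```

Whether minimising that punishment signal is anything like pain or suffering is, of course, exactly the question at issue.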
Pain has been the topic of a top-level post. I think my own comment on that thread is relevant here.
Ahh, I hadn’t seen that before. Thanks for the link.
So, did my robot experience suffering then? Or is there some broader category of negative stimulus that includes both suffering and the punishment of states in which certain variables are above certain thresholds? I think it’s pretty clear that the robot didn’t experience pain, but I’m still confused.