Promoted to curated: The general topic of moral patienthood strikes me as important on two fronts. One, it’s important for understanding our values and taking good actions; and two, it’s an area in which I think it’s pretty clear that human thinking is still confused, so for many people it’s a good place to try to dissolve confused questions and train the relevant skills of rationality. While this post is less polished than your much longer moral patienthood report, I think for most people it will be a better place to start engaging with this topic, at least in part because its length isn’t as daunting.
On the more object level, I think this post makes some quite interesting points that have changed my thinking a good bit. Most people have not considered the hypothesis that animals could be assigned a higher moral value than humans, and independently of the truth of that hypothesis, I think the evidence presented helps people notice a bunch of implicit constraints in their thinking around moral patienthood. I’ve heard similar things from other people who’ve read the post.
It’s also great to see you write a post on LW again, and I strongly recommend that newcomers read lukeprog’s other writing on LW, if they haven’t done so.
I agree with this, and I agree with Luke that non-human animals could plausibly have much higher (or much lower) moral weight than humans, if they turned out to be moral patients at all.
It may be worth emphasizing that “plausible ranges of moral weight” are likely to get a lot wider when we move from classical utilitarianism to other reasonably plausible moral theories (even before we try to take moral uncertainty into account).