Hi Roger, first, the paper is addressed to those who already do believe that all sentient beings deserve moral consideration and that their suffering is morally undesirable. I do not argue for these points in the paper, since they are already universally accepted in the moral philosophy literature.
This is why, for instance, I write the following: “sentience in the sense understood above as the capacity of having positively or negatively valenced phenomenally conscious experiences is widely regarded and accepted as a sufficient condition for moral patienthood (Clarke, S., Zohny, H. & Savulescu, J., 2021)”.
Furthermore, it is just empirically not the case that people cannot be convinced “only by ethics and logic”: for instance, many people who read Peter Singer’s Animal Liberation changed their views in light of the arguments he provides in the first chapter and came to believe that non-human animals deserve equal moral consideration of their interests. Changing one’s ethical views when presented with ethical arguments is standard practice among moral philosophers when researching and reading moral philosophy. Of course, there is the is/ought gap, but this does not entail that one cannot convince someone that the most coherent version of their most fundamental ethical intuitions does not, in fact, lead where they believe it leads, but instead to a different conclusion. This happens all the time among moral philosophers: one presents an argument in favour of a view, and in many instances other philosophers are convinced by that argument and change their view.
In this paper, I was not trying to argue that non-human animals deserve moral consideration or that s-risks are bad; as I said, I have assumed this. What I try to argue is that if this is true, then in some decision-making situations we would have strong pro-tanto moral reasons to implement SCEV. In fact, I do not even argue conclusively that we should try to implement SCEV.
the paper is addressed to those who already do believe that all sentient beings deserve moral consideration and that their suffering is morally undesirable.
I think you should state these assumptions more clearly at the beginning of the paper, since you appear to be assuming what you are claiming to prove. You are also making incorrect assumptions about your audience, especially when posting it to Less Wrong. The idea that Coherent Extrapolated Volition, Utilitarianism, or “Human Values” applies only to humans, or perhaps only to sapient beings, is quite widespread on Less Wrong.
I do not argue for these points in the paper, since they are already universally accepted in the moral philosophy literature
I’m not deeply familiar with the most recent few decades of the moral philosophy literature, so I won’t attempt to argue this in a recent context, if that is what you in fact mean by “the moral philosophy literature” (though I have to say that I do find any claim of the form “absolutely everyone who matters agrees with me” inherently suspicious). However, Philosophy is not a field that has made such rapid recent advances that one can simply ignore all but the last few decades, and for the moral philosophy literature of the early 20th century and the preceding few millennia (which includes basically every philosopher named in a typical introductory guide to Moral Philosophy), this claim is just blatantly false, even to someone from outside the academic specialty. For example, I am quite certain that Nietzsche, Hobbes, Thomas Aquinas and Plato would all have variously taken issue with the proposition that humans and ants deserve equal moral consideration if ants can be shown to experience pain (though the Jains would not). Or perhaps you would care to cite quotes from each of them clearly supporting your position? Indeed, for much of the last two millennia, Christian moral philosophy made it entirely clear that, in its view, animals do not have souls and thus do not deserve the same moral consideration as humans, and that humans hold a unique role in God’s plan, as the only creature made in His image and imbued with a soul. So claiming that your position is “already universally accepted in the moral philosophy literature” while simply ignoring O(90%) of that literature appears specious to me. Perhaps you should also briefly outline in your paper which portions of or schools from the moral philosophy literature in fact agree with your unstated underlying assumption?
What I mean by “moral philosophy literature” is the contemporary moral philosophy literature; I should have been more specific, my bad. And in contemporary philosophy it is universally accepted (though of course there might exist one philosopher or another who disagrees) that sentience, in the sense understood above as the capacity of having positively or negatively valenced phenomenally conscious experiences, is sufficient for moral patienthood. If this is the case, then it is enough to cite a published work or works in which this is evident. This is why I cite Clarke, S., Zohny, H. & Savulescu, J., 2021. You can see in this recently edited book on moral status that this claim is assumed throughout, and in the book you can find the sources for its justification.
OK, note to self: if we manage to create a superintelligence and give it access to the contemporary moral philosophy literature, it will euthanize us all and feed us to ants. Good to know!
I do not think this follows: the “consensus” is that sentience is sufficient for moral status. It is not clearly the case that giving some moral consideration to non-human sentient beings would lead to the scenario you describe. Though see: https://www.tandfonline.com/doi/full/10.1080/21550085.2023.2200724
“Some”, or an unspecified “pro-tanto” amount of moral consideration: I agree in principle. “Equal”, or even “anywhere within a few orders of magnitude of equal”, and we go extinct. An individual ant needs roughly 1/10,000,000 of the resources an individual human does, so if you don’t give humans around ~10,000,000 times the moral value, we end up extinct in favor of more ants. For even tinier creatures, the ratios are even larger. Explaining why moral weight ought to scale linearly with body weight over many orders of magnitude is a challenging moral position to argue for, but any position that doesn’t closely approximate that leads to wildly perverse incentives and the “repugnant conclusion”. The most plausible-sounding moral argument I’ve come up with is that moral weight should be assigned somewhat comparably per species at a planetary level, and then shared out (equally?) among the individual members of each species, so that smaller, more numerous species end up with a smaller share per individual. However, given my attitude to ethical-system design, I view these sorts of arguments as post-facto justifications for the political-discussion layer, and am happy to do what works; and between species of very different sizes, the only thing that works is for moral weight to scale roughly linearly with adult body weight (or, more accurately, resource needs).
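To make the arithmetic concrete, here is a minimal Python sketch. The numbers are purely illustrative assumptions (the ~10,000,000× human/ant resource ratio quoted above plus ballpark population figures), and the helper function is hypothetical; it just computes the “moral value per unit of resources” that an optimizer with a fixed resource budget would see under each weighting scheme:

```python
# Rough illustrative numbers, not measurements: per-individual resource needs
# (arbitrary units, using the ~10,000,000x human/ant ratio quoted above) and
# ballpark planetary populations.
resource_needs = {"human": 1e7, "ant": 1.0}
populations = {"human": 8e9, "ant": 2e16}

def value_per_resource(weights):
    """Moral value an optimizer gains per unit of resources spent on each species."""
    return {sp: weights[sp] / resource_needs[sp] for sp in weights}

# Scheme 1: equal per-individual moral weight.
# Ants yield ~10,000,000x more moral value per unit of resources, so a naive
# optimizer converts human-supporting resources into more ants.
print(value_per_resource({"human": 1.0, "ant": 1.0}))

# Scheme 2: weight proportional to resource needs (roughly, adult body weight).
# Value per unit of resources is identical across species, removing the incentive.
print(value_per_resource(dict(resource_needs)))

# Scheme 3: an equal per-species share, split equally among that species'
# individuals, so per-individual weight shrinks with population size.
print(value_per_resource({sp: 1.0 / n for sp, n in populations.items()}))
```

With these particular numbers, the equal-weights scheme favors ants by about seven orders of magnitude per unit of resources, the resource-proportional scheme is exactly neutral, and the per-species-share scheme lands within a small factor of the proportional one.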
I enjoyed Jeff Sebo’s paper, thank you for the reference, and I mostly agree with his analysis, if not his moral intuitions. But I really wish he had occasionally put in some approximate numbers to show just how many orders of magnitude the ratios between the “large” and “small” things he often discusses can span: those words conjure up things within an order of magnitude of each other, not many orders of magnitude apart. Words like “vast” and “minute” might have been more appropriate, even before he got on to discussing microbes. But I loved Pascal’s Bugging.
Overall, thank you for the inspiration: due to your paper and this conversation, I’m now working on another post for my AI, Alignment and Ethics sequence, where I’ll dig in more depth into this exact question of the feasibility or otherwise of granting moral worth to sentient animals, from my non-moral-absolutist, ethical-system-design viewpoint. This one’s a really hard design problem that requires a lot of inelegant hacks. My urgent advice would be to steer clear of it, at least unless you have very capable ASI assistance and excellent nanotech and genetic engineering, plus some kind of backup plan in case you made a mistake and persuaded your ASIs to render humanity extinct. Something like an even more capable ASI running the previous moral system, ready to step in under prespecified circumstances, comes to mind; but then how do you get it to not step in due to ethical disagreement?
I am glad to hear you enjoyed the paper and that our conversation has inspired you to work more on this issue! As I mentioned, I now find the worries you lay out in the first paragraph significantly more pressing; thank you for pointing them out!