I do not think this follows; the “consensus” is that sentience is sufficient for moral status. It is not clearly the case that giving some moral consideration to non-human sentient beings would lead to the scenario you describe. Though see: https://www.tandfonline.com/doi/full/10.1080/21550085.2023.2200724
“Some”, in the sense of a pro-tanto, unspecified amount of moral consideration: that I agree with in principle. “Equal”, or even “anywhere within a few orders of magnitude of equal”, and we go extinct. An ant needs roughly 1/10,000,000th of the resources per individual that a human does, so unless you give a human around ~10,000,000 times the moral value of an ant, we end up extinct in favor of more ants. For even tinier creatures, the ratios are larger still. Explaining why moral weight ought to scale linearly with body weight across many orders of magnitude is a challenging moral position to argue for, but any position that doesn’t closely approximate it leads to wildly perverse incentives and the “repugnant conclusion”.

The most plausible-sounding moral argument I’ve come up with is that moral weight should be assigned somewhat comparably per species at a planetary level, and then shared out (equally?) among the individual members of each species, so that smaller, more numerous species end up with a smaller share per individual. However, given my ethical-system-design approach, I view these sorts of arguments as post-facto justifications for political discussion, and am happy to do whatever works. Between species of very different sizes, the only thing that works is for moral weight to scale roughly linearly with adult body weight (or, more accurately, with resource needs).
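To make the scale of that ratio concrete, here is a minimal toy calculation (my own sketch, not anything from the paper): it assumes a total-value maximizer spends a fixed resource budget on whichever species yields the most moral value per unit of resource consumed. The ~10,000,000× cost ratio is the figure above; the moral-weight numbers are placeholders.

```python
# Toy model: a total-value maximizer funds whichever species produces the most
# moral value per unit of resource it consumes.
# The ~10,000,000x resource ratio is from the text above; the moral-weight
# numbers are illustrative assumptions, nothing more.

HUMAN_COST = 10_000_000  # resources needed to sustain one human (arbitrary units)
ANT_COST = 1             # resources needed to sustain one ant (~10^7 times less)

def value_per_resource_unit(moral_weight: float, cost: float) -> float:
    """Moral value gained per unit of resource spent sustaining one more individual."""
    return moral_weight / cost

# Equal per-individual moral weight: a resource unit buys ~10^7 times more value
# spent on ants than on humans, so the optimizer funds only ants.
print(value_per_resource_unit(1, ANT_COST) / value_per_resource_unit(1, HUMAN_COST))
# -> 10000000.0

# Only when the human weight is scaled up by roughly the same ~10^7 factor
# (i.e. roughly linearly with resource needs) does a human break even with an ant.
print(value_per_resource_unit(10_000_000, HUMAN_COST))  # -> 1.0, on par with one ant
```

In this toy model, any human premium smaller than the cost ratio still leaves ants the strictly better buy per resource unit, which is the optimization pressure toward human extinction described above.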
I enjoyed Jeff Sebo’s paper, thank you for the reference, and I mostly agree with his analysis, if not his moral intuitions — but I really wish he had put in some approximate numbers on occasion to show just how many orders of magnitude the ratios between the “large” and “small” things he often discusses can be. Those words conjure up things within an order of magnitude of each other, not many orders of magnitude apart. Words like “vast” and “minute” might have been more appropriate, even before he got to discussing microbes. But I loved Pascal’s Bugging.
Overall, thank you for the inspiration: due to your paper and this conversation, I’m now working on another post for my AI, Alignment and Ethics sequence, where I’ll dig in more depth into this exact question of the feasibility, or otherwise, of granting moral worth to sentient animals, from my non-moral-absolutist ethical-system-design viewpoint. This one is a really hard design problem that requires a lot of inelegant hacks. My urgent advice would be to steer clear of it, at least unless you have very capable ASI assistance and excellent nanotech and genetic engineering, plus some kind of backup plan in case you made a mistake and persuaded your ASIs to render humanity extinct. Something like an even more capable ASI running the previous moral system, ready to step in under prespecified circumstances, comes to mind, but then how do you get it to not step in due to ethical disagreement?
I am glad to hear you enjoyed the paper and that our conversation has inspired you to work more on this issue! As I mentioned, I now find the worries you lay out in the first paragraph significantly more pressing; thank you for pointing them out!