I think there’s another issue here. Human moral intuitions evolved to work well between humans, in a primate troop or village of 50-100 individuals, or perhaps a few such groups in alliance. Extending these to O(100) million humans in a country, or even 8 billion humans on a planet, has worked surprisingly well for us. But once you start to include other sentient creatures, as I show above, a lot of things break down if you try to follow human moral intuitions. That isn’t very surprising, since those intuitions are now well outside the distribution they evolved in. And once you don’t have human moral intuitions guiding and constraining your ethical system design, the design decisions start to get a lot more arbitrary. For any outcome you want, it’s generally pretty easy to come up with an ethical system that makes that outcome the optimum (if nothing else, take as your utility minus the L2 norm, under some metric, of the difference between the state of the world and the outcome you want). The challenge is to design something that behaves better than that: something that actually gives sensible-looking preference orderings, has the right stability properties under perturbations, and works sensibly across a range of conditions.
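To make the degenerate construction in the parenthetical concrete, here is a minimal sketch (in Python, using a made-up two-dimensional "world state" and target purely for illustration): define "utility" as minus the L2 distance to whatever outcome you want, and that outcome is trivially the global optimum.

```python
import numpy as np

def make_utility(target):
    """Return a 'utility function' whose unique maximum is `target`:
    utility(state) = -||state - target||_2."""
    target = np.asarray(target, dtype=float)
    def utility(state):
        return -np.linalg.norm(np.asarray(state, dtype=float) - target)
    return utility

# Hypothetical 2-D "world state" coordinates, just for illustration.
desired_outcome = [1.0, 0.0]
u = make_utility(desired_outcome)

print(u([1.0, 0.0]))  # 0.0  -- the desired outcome maximizes "utility"
print(u([0.0, 0.0]))  # -1.0 -- every other state scores strictly worse
```

Of course, nothing about this construction gives you sensible preference orderings away from the target, stability under perturbation, or reasonable behaviour across conditions, which is exactly the point.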