It is rather critical just which set of agents you plug into a CEV algorithm!
I take this (very real) possibility as strongly indicating that CEV-like approaches are insufficiently meta, and that we should expend serious effort on (getting closer to) solving moral philosophy, if at all possible. (Or alternatively, as Wei Dai likes to point out, on solving metaphilosophy.)
Sure.
Put slightly differently: if I have some set of ethical standards S against which I'm prepared to compare the results R of a CEV-like algorithm, with the intention of discarding R wherever R conflicts with S, it follows that I consider whatever source I got S from to be a more reliable source of ethical judgments than CEV. If so, that strongly suggests that if I want reliable ethical judgments, what I ought to be doing is exploring the source of S.
Conversely, if I believe a CEV-like algorithm is a more reliable source of ethical judgments than anything else I have available, then I ought to be willing to discard S where it conflicts with R.