Perhaps I misunderstood. There are definitely possible scenarios in which metaethics could matter to a paperclip maximizer. It's just that answering "what metaethics would the best paperclip maximizer have?" isn't any easier than answering "what is the ideal metaethics?". Varying an agent's goal structure doesn't change the question.
That said, if you think humans are just like paperclip maximizers except that they're trying to maximize something else, then you're already 8/10ths of the way to moral anti-realism. (Come! Take those last two steps, the water is fine!)
Of course, it's also the case that metaethics probably matters more to humans than to paperclip maximizers. In particular, metaethics matters for humans because of individual moral uncertainty, moral change at both the group and individual level, differences between individual moralities, and the overall complexity of our values. There are probably analogous issues for paperclip maximizers (how should they resolve uncertainty over what counts as a paperclip, or deal with agents that are ignorant of the ultimate value of paperclips?), and thinking about them pumps my anti-realist intuitions.