Oh, right, but it’s still all preferences. I can have a preference to fulfill others’ preferences, and I can have preferences for other things, too. Is that what you’re saying?
Possibly, but you’ve said that opaquely enough that I can imagine you intending a meaning I’d disagree with. For example, you refer to “other preferences”, whereas there is only one morality (preference) in the context of any given decision problem (agent), and the way you care about other agents doesn’t necessarily reference their “preference” in the same sense in which we are talking about our agent’s preference.
It seems to me that the method of reflective equilibrium plays at least a partial role in Eliezer’s meta-ethical thought, but that’s another thing I’m not clear on.
This is reflected in the ideas that morality is an abstract computation (something you won’t see a final answer to), and that morality needs to be found at a sufficiently meta level, so that the particular baggage of contemporary beliefs doesn’t distort the picture. You don’t want to revise your beliefs about morality yourself, because you might do it in a merely human way instead of in the right way.