“Preference” is used interchangeably with “morality” in much of this discussion, but here Adam referred to an aspect of preference/morality where you care about what other people care about, and said that you care about that, but about other things as well.
Oh, right, but it’s still all preferences. I can have a preference to fulfill others’ preferences, and I can have preferences for other things, too. Is that what you’re saying?
It seems to me that the method of reflective equilibrium plays a partial role in Eliezer’s meta-ethical thought, but that’s another thing I’m not clear on. The meta-ethics sequence is something like 300 pages long and very dense, and I can’t keep it all in my head at the same time. I have serious reservations about reflective equilibrium (à la Brandt, Stich, and others). Do you have any thoughts on the role of reflective equilibrium in Eliezer’s meta-ethics?
Possibly, but you’ve said that opaquely enough that I can imagine you intending a meaning I’d disagree with. For example, you refer to “other preferences”, while there is only one morality (preference) in the context of any given decision problem (agent), and the way you care about other agents doesn’t necessarily reference their “preference” in the same sense we are talking about our agent’s preference.
This is reflected in the idea of morality as an abstract computation (something you won’t see a final answer to), and in the need to find morality at a sufficiently meta level, so that the particular baggage of contemporary beliefs doesn’t distort the picture. You don’t want to revise your beliefs about morality yourself, because you might do it in a human way rather than in the right way.