The motivation behind CEV also includes the idea that we might be wrong about what we care about. Instead, you give your FAI an algorithm (sketched loosely below) for:
1. Locating people
2. Working out what they care about
3. Working out what they would care about if they knew more, thought faster, and so on
4. Combining these preferences
I’m not sure what distinction you’re trying to draw between values and preferences (perhaps a moral vs non-moral one?), but I don’t think it’s relevant to CEV as currently envisioned.
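Purely as an illustration of how those four steps compose, here is a minimal Python sketch. None of this is an actual proposal or implementation; every name (Person, infer_preferences, extrapolate, aggregate) is a made-up placeholder, and the hard parts (steps 2 and 3 especially) are stubbed out.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    observed_behavior: list[str]  # evidence about what they currently value

def locate_people(world: list[Person]) -> list[Person]:
    """Step 1: identify the agents whose volition is to be extrapolated."""
    return world  # placeholder: assume the world is handed to us as a list of people

def infer_preferences(person: Person) -> dict[str, float]:
    """Step 2: work out what a person currently cares about (a weighting over outcomes)."""
    return {behavior: 1.0 for behavior in person.observed_behavior}

def extrapolate(preferences: dict[str, float]) -> dict[str, float]:
    """Step 3: estimate what they would care about if they knew more, etc.
    (stubbed out; this is the genuinely hard, unsolved part)."""
    return preferences

def aggregate(all_preferences: list[dict[str, float]]) -> dict[str, float]:
    """Step 4: combine the extrapolated preferences into a single target."""
    combined: dict[str, float] = {}
    for prefs in all_preferences:
        for outcome, weight in prefs.items():
            combined[outcome] = combined.get(outcome, 0.0) + weight
    return combined

def coherent_extrapolated_volition(world: list[Person]) -> dict[str, float]:
    people = locate_people(world)
    return aggregate([extrapolate(infer_preferences(p)) for p in people])
```

The point of the sketch is only the pipeline shape: the FAI is given a procedure for finding, extrapolating, and combining preferences, rather than a fixed list of things we currently believe we value.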
Ha, yes, I often do that.