How do we know that our own preferences are worth trusting? Surely you believe in possible preference systems that are defective (I’m reminded of another post involving giant cheesecakes). But how do we know that ours isn’t one of them? It seems plausible to me that evolution would optimize for preferences that aren’t morally optimal, because its utility function is inclusive fitness.
This requires us to ask what metric we would use outside our own preferences. That's not an easy question, but I think it's one we have to face. Otherwise, we'll end up making giant cheesecakes.