Sure. For example, if I want other people’s volition to be implemented, that is sufficient to justify altruism. (Not necessary, but sufficient.)
But that doesn’t justify directing an AI to look at other people’s volition directly to determine its target… as has been said elsewhere, I can simply direct an AI to look at my volition, and the extrapolation process will naturally (if CEV works at all) take other people’s volition into account.