But why run this risk? The genuine moral motivation of typical humans seems to be weak. That might even be true of the people working for human and non-human altruistic causes and movements. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?
So why not just go for utilitarianism? By definition, that’s the safest option for everyone to whom things can matter/be valuable.
I still don’t see what could justify coherently extrapolating “our” volition only. The only non-arbitrary “we” is the community of all minds/consciousnesses.
What if what they really want, deep down, is a sense of importance or social interaction or whatnot?
This sounds a bit like religious people saying, “But what if it turns out that there is no morality? That would be bad!” What part of you thinks that this would be bad? That part is exactly what CEV extrapolates. CEV takes the deepest and most important values we have and figures out what to do next. You couldn’t, in principle, care about anything else.
If human values endorsed self-modification, then CEV would recognise this. CEV aims to do what we most want, and that is what we call ‘right’.
The only non-arbitrary “we” is the community of all minds/consciousnesses.
This is what you value, what you chose. Don’t lose sight of invisible frameworks. If we’re including all decision procedures, then why not computers too? The intuition of ‘fairness’ and ‘equality’ is itself a human one, not the hamster’s.
Yes. We want utilitarianism; you want CEV. It’s not clear where to go from there.
FWIW, hamsters probably exhibit a sense of fairness too. At least rats do.