But not similar enough, I’d argue. For example, I value not farming nonhuman animals and making sure significant resources go toward addressing world poverty. Not many other people do. Hopefully CEV will iron that out so this minority wins over the majority, but I don’t quite know how.
(Comment disclaimer: yes, I am woefully unfamiliar with the CEV literature and unqualified to critique it. But hey, this is a comment in a discussion thread. I do plan to research CEV more before actually deciding whether to disagree with it.)
Okay.
Either we would decide to farm animals if we all knew more, thought faster, and understood ourselves better, or we wouldn’t. For people to be so fundamentally different that there would be disagreement, they would need massively complex divergent adaptations or mutations, which are vastly improbable. Even if someone sits down and thinks long and hard about an ethical dilemma, they can very easily be wrong. To say that an AI could not coherently extrapolate our volition is to say that we are so fundamentally unalike that we would not choose to work for a common good even if we had the choice.
But why run this risk? The genuine moral motivation of typical humans seems weak. That might even be true of the people working for altruistic causes and movements, whether on behalf of humans or nonhumans. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?
So why not just go for utilitarianism? By definition, that’s the safest option for everyone to whom things can matter/be valuable.
I still don’t see what could justify coherently extrapolating “our” volition only. The only non-arbitrary “we” is the community of all minds/consciousnesses.
What if what they really want, deep down, is a sense of importance or social interaction or whatnot?
This sounds a bit like religious people saying “But what if it turns out that there is no morality? That would be bad!” Whatever part of you thinks that would be bad is exactly what CEV extrapolates. CEV takes the deepest and most important values we have and figures out what to do next. In principle, you couldn’t care about anything else.
If our values included a desire to self-modify, CEV would recognise that. CEV aims to do what we most want, and this is what we call ‘right’.
The only non-arbitrary “we” is the community of all minds/consciousnesses.
That inclusiveness is itself something you value, something you chose. Don’t lose sight of invisible frameworks. If we’re including all decision procedures, then why not computers too? The intuitions of ‘fairness’ and ‘equality’ at work here are human intuitions, not the hamster’s.
Yes. We want utilitarianism. You want CEV. It’s not clear where to go from there.
FWIW, hamsters probably have some sense of fairness too. At least rats do.