Let’s assume that there exists some sort of objective right, whatever that actually means.
Why assume that? It’s a nontrivial assumption. If we assume that there is one correct notion of “objective right”, then any two sufficiently intelligent entities will agree about what it is (even if you and I don’t know what it is), they’ll both want to do it, and therefore they won’t be in conflict with each other. Expecting such an absence of conflict is purely wishful thinking, as far as I can tell.
If humans desire to be right, isn’t it the sort of human value that a friendly AI would seek to protect and cultivate?
Humans desire to do whatever made their ancestors reproduce successfully in the ancestral environment. I see no reason to expect that to resemble anything like obeying some objective morality. However, humans do have a motive to tell nice-sounding stories about themselves, regardless of whether those stories are true, so I’m not at all surprised to encounter lots of humans who claim to desire to obey some objective morality.