Far more extreme, I would think, to say that zero out of 6.5 billion humans are stable psychopaths.
Heck, what about babies? What do they want, and would they be complicated enough to want anything different if they knew more and thought faster?
There are infinitely many self-consistent belief systems and infinitely many internally consistent optimization processes; while I believe mine to be the best I’ve found, I remain aware that if I held any of the others I would believe exactly the same thing.
You would not believe exactly the same thing. If you held one of the others, you would believe that your new system was frooter than any of the others, where “frooter” is not at all the same thing as “better”. And you would be correct.
If a person’s morality is not defined as what they believe about morals, I don’t know how it can be considered to meaningfully entail any propositions at all. A General AI should be able to convince it of just about anything, right?
If you make matters that complicated to begin with, i.e., if we’re no longer discussing metaethics for human usage, then you should construe entailment / extrapolation / unfolding in more robust ways than “anything a superintelligence can convince you of”. CEV, for example, describes one such form of entailment.
As for what a person’s morality is, surely you extrapolate it at least a little beyond their instantaneous beliefs. Would you agree that many people would morally disapprove of being shot by you, even if the actual thought has never crossed their mind and they don’t know you exist?