My reply to Tarleton from Doubting Thomas and Pious Pete:
Eliezer, just for clarification, would you say that you’re “right” and God is “wrong” for thinking genocide is “good”, or just that you and God have different goal systems and neither of you could convert the other by rational argument? (Should this go on the consolidated morality thread instead?)
Hard to give answers about God. If I were dealing with a very powerful AI that I had tried to make Friendly, I would assess a dominating probability that the AI and I had ended up in different moral frames of reference; a weak probability that the AI was “right” and I was “wrong” (within the same moral frame of reference, but the AI could convince me by rational/non-truth-destroyable argument from my own premises); and a tiny probability that the AI was “wrong” and I was “right”.
I distinguish between a “moral frame of reference” and a “goal system” because it seems to me that the human ability to argue about morality does, in cognitive fact, invoke more than consequentialist arguments about a constant utility function. By arguing with a fellow human, I can change their (or my) value assignments over final outcomes. A “moral frame of reference” indicates a class of minds that can be moved by the same type of moral arguments (including consequentialist arguments as a special case), rather than a shared constant utility function (which is only one way to end up in a shared reference frame on a particular moral issue).
This is how I would cash out Gordon Worley’s observation that “Morality is objective within a given frame of reference.”