Might I suggest you take a look at the metaethics sequence? This position is explained very well.
Well no, not really. The meta-ethics sequence takes a cognitivist position: there is some cognitive algorithm called ethics, which actual people implement imperfectly but which you could somehow generalize to obtain a “perfect” reification.
That’s not moral realism (“morality is a part of the universe itself, external to human beings”), that’s objective moral-cognitivism (“morality is a measurable part of us but has no other grounding in external reality”).
Can you rule out a form of objective moral-cognitivism that applies to any sufficiently rational and intelligent being?
Unless morality consists of game theory, I can rule out any objective moral cognitivism that applies to any intelligent and/or rational being.
Why shouldn’t it consist of optimal rules for achieving certain goals?
Well, if you knew what the goals were and could prove that such goals appeal to all intelligent, rational beings, including but not limited to humans, UFAI, Great Cthulhu, and business corporations...
I don’t need to do that. We are used to the idea that some people don’t find morality appealing, and we have mechanisms such as social disapproval and prisons to get the recalcitrant to play along.
That depends: what are you talking about? I seem to recall you defined the term as something that Eliezer might agree with. If you’ve risen to the level of clear disagreement, I haven’t seen it.
A good refinement of the question is how you think AI could go wrong (that being Eliezer’s field) if we reject whatever you’re asking about.
You would have the exact failure mode you are already envisaging: clippies and so on. OMC is a way AI would not go wrong. MIRI needs to argue that it is infeasible or unlikely in order to show that uFAI is likely.
Which position? The metaethics sequence isn't clearly realist, or clearly anything else.