You seem to think that uFAI would be delusional. No.
No, I think that a Friendly AI would correctly believe that maximizing paperclips is what a human would want in the limit of maximal knowledge and reflective coherence. No “delusion” whatsoever.
Huh again?
What confuses you?
I believe he’s making the (joking) point that since we do not and cannot know what a human would want in the limit of maximal knowledge and reflective coherence (hence CEV), it is not impossible that what we’d want actually IS maximum paperclips.