Robin says “You might think the world should be grateful to be placed under the control of such a superior love, but many of them will not see it that way; they will see your attempt to create an AI to take over the world as an act of war against them.”
Robin, do you see that CEV was created (AFAICT) to address that very possibility? That if too many people feel this too strongly, the AI self-destructs or some such.
I like that someone challenged you to create your own inoffensive FAI/CEV; I hope you’ll respond to that. Perhaps you believe that there simply isn’t any possible fully global wish, however subtle or benign, that wouldn’t also be tantamount to a declaration of war...?
V.G., Eliezer was asking a hypothetical question, intended to get at one’s larger intentions while sidestepping lost purposes, etc. As Chris mentioned, substitute wielding a merely outrageous amount of money instead, if that makes it any easier for you.
You know, personally I think this strategy of hypothetical questioning for developing the largest, deepest meta-ethical insights could be the most important thing a person can do. And it seems necessary to the task of intelligently optimizing anything we’d call moral. I hope Eliezer will post something on this (I suspect he will), though some of his material may touch on it already.