If I encounter a being approximately equivalent to God (almost all-knowing, benevolent, etc.) and it tells me to do something, why the hell should I refuse? If Omega told you something was the best choice according to your preferences—presumably as part of a larger game—why wouldn't you try to achieve it?
My best guess is that Mr. Jillette is confused regarding morality.
You're saying that if a Friendly superintelligence told you something was the right thing to do—however you define right—then you would trust your own judgement over theirs?
Acting the other way around would be trusting my judgement that the AI is friendly.
Yes. Yes it would. Do you consider it so inconceivable that it might be the best course of action to kill one child that it outweighs any possible evidence of Friendliness?
In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.
And so, logically, could God. Apparently FAIs don’t arbitrarily reprogram people. Who knew?
Why?
Because most people who are convinced by their pet moral principle to kill kids are utterly wrong.