You’re saying that if a Friendly superintelligence told you something was the right thing to do, however you define right, then you would trust your own judgement over theirs?
Acting the other way around would be trusting my judgement that the AI is friendly.
Yes. Yes it would. Do you consider it so inconceivable that it might be the best course of action to kill one child that it outweighs any possible evidence of Friendliness?
In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or do whatever.
And so, logically, could God. Apparently FAIs don’t arbitrarily reprogram people. Who knew?