Acting the other way around would mean trusting my own judgement that the AI is friendly.
Yes. Yes it would. Do you consider it so inconceivable that it might be the best course of action to kill one child that it outweighs any possible evidence of Friendliness?
In any case, I would expect a superintelligence, friendly or not, to be able to convince me to kill my child, or to do whatever else it wanted.
And so, logically, could God. Apparently FAIs don’t arbitrarily reprogram people. Who knew?