If it actually worked, I wouldn’t question it afterward. I try not to argue with superintelligences on occasions when they turn out to be right.
In advance, I have to say that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain, though.
Also, a world where the (Friendly) AI is that certain about what that noisy brain will do under a particular threat, yet can't find any nicer way to get the same result, is a bit of a stretch.
What risk? The AI is lying about the torture :-) Maybe I’m too much of a deontologist, but I wouldn’t call such a creature friendly, even if it’s technically Friendly.