I should say up front, though, that the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain.
Also, a world where the (Friendly) AI is that certain about what that noisy brain will do after a particular threat but can’t find any nice way to do it is a bit of a stretch.