Ouch. Eliezer, are you listening? Is the behavior described in the post compatible with your definition of Friendliness? Is this a problem with your definition, or what?
Well, suppose the situation is arbitrarily worse—you can only prevent 3^^^3 dustspeckings by torturing millions of sentient beings.
I think you misunderstood the question. Suppose the AI wants to prevent just 100 dustspeckings, but has sufficient reason to believe that Dave will yield to the threat, so no one actually gets tortured. Does this make the AI’s behavior acceptable? Should we file this under “following reason off a cliff”?
If it actually worked, I wouldn’t question it afterward. I try not to argue with superintelligences on occasions when they turn out to be right.
Judging it in advance, though, the risk/reward ratio seems to imply an unreasonable degree of certainty about a noisy human brain.
Also, a world where the (Friendly) AI is that certain about what that noisy brain will do in response to a particular threat, yet can’t find any nicer way to get the same result, is a bit of a stretch.
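A back-of-envelope way to see the certainty problem (my own sketch, with stand-in numbers, not anything from the post): let $T$ be the disutility of actually carrying out the torture, $d$ the disutility of one dust speck, and $p$ the AI’s probability that Dave refuses and the threat has to be executed. Making the threat beats doing nothing only if

$$p \cdot T < 100 \cdot d \quad\Longleftrightarrow\quad p < \frac{100\,d}{T}.$$

If torture is worth even a billion dust specks, $p$ has to be below about $10^{-7}$, which is an implausible level of confidence about what a noisy human brain will do.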
What risk? The AI is lying about the torture :-) Maybe I’m too much of a deontologist, but I wouldn’t call such a creature friendly, even if it’s technically Friendly.
I was about to point out that the fascinating and horrible dynamics of over-the-top threats are covered at length in The Strategy of Conflict. But then I realised you’re the one who wrote that post in the first place. Thanks, I enjoyed that book.