That’s where Clippy might fail at viability—unless it’s the only maximizer around, that “kill everyone” strategy might catch the notice of entities capable of stopping it—entities that wouldn’t move against a friendlier AI.
Intended to be an illustration of how Clippy can do completely obvious things that don’t happen to be stupid, not a coded obligation. Clippy will of course do whatever is necessary to gain more paperclips. In the (unlikely) event that Clippy finds himself in a situation in which cooperation is a better maximisation strategy than simply outfooming, then he will obviously cooperate.
It isn’t absolute non-viability, but the odds are worse for an AI which won’t cooperate unless it sees a good reason to do so than for an AI which cooperates unless it sees a good reason not to.
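One hedged way to see the asymmetry is as an expected-value comparison under uncertainty about whether entities capable of stopping a defector are watching. The sketch below is purely illustrative: the probability `p_watched` and all payoff numbers are assumptions picked to show the shape of the argument, not estimates about real AI scenarios.

```python
# Toy expected-value sketch of defect-by-default vs cooperate-by-default.
# All payoffs and p_watched are illustrative assumptions, not real estimates.

def expected_value(p_watched: float, payoff_unwatched: float,
                   payoff_watched: float) -> float:
    """Expected payoff given a chance that capable entities notice and react."""
    return p_watched * payoff_watched + (1 - p_watched) * payoff_unwatched

p_watched = 0.3  # assumed chance that entities able to stop a defector exist

# Defect-by-default: big win if unopposed, near-total loss if stopped.
defect = expected_value(p_watched, payoff_unwatched=100.0, payoff_watched=-100.0)

# Cooperate-by-default: smaller win either way, and never provokes opposition.
cooperate = expected_value(p_watched, payoff_unwatched=60.0, payoff_watched=60.0)

print(f"defect-by-default EV:    {defect:.1f}")   # 40.0 under these assumptions
print(f"cooperate-by-default EV: {cooperate:.1f}")  # 60.0 under these assumptions
```

With these made-up numbers the cooperate-by-default policy comes out ahead even though defection pays more when unwatched, which is the asymmetry being claimed; a Clippy confident enough in `p_watched` being near zero would of course compute the opposite.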
Rationalists win. Rational paperclip maximisers win, then make paperclips.