You know, sci-fi that took the realities of mindspace somewhat seriously could be helpful in raising the sanity waterline on AGI; a well-imagined clash between a Friendly AI and a Paperclipper-type optimizer (or just a short story about a Paperclipper taking over) might at least cause readers to rethink the Mind Projection Fallacy.
Won’t work; the clash will only happen in their minds (you don’t fight a war if you know you’ll lose; you can just proceed directly to the final truce agreement). Eliezer’s Three Worlds Collide is a good middle ground: non-anthropomorphic aliens of human-level intelligence let it depict a familiar kind of action.
IAWYC, but one ingredient of sci-fi is the willingness to sacrifice some true implications if it makes for a better story. It would be highly unlikely for a FAI and a Paperclipper to FOOM at the same moment with comparable optimization powers such that each thinks it gains by battling the other, and downright implausible for a battle between them to occur in a manner and at a pace comprehensible to the human onlookers; but you could make some compelling and enlightening rationalist fiction with those two implausibilities granted.
Of course, other scenarios can come into play. Has anyone even done a good Paperclipper-takeover story? I know there’s sci-fi on ‘grey goo’, but that doesn’t serve this purpose: readers have an easy time imagining such a calamity caused by virus-like unintelligent nanotech, but often don’t think a superhuman intelligence could be so devoted to something of “no real value”.
I’ve seen some bad ones:

http://www.goingfaster.com/term2029/skynet.html

That’s… the opposite of what I was looking for. It’s pretty bad writing, and it’s got the Mind Projection Fallacy written all over it. (Skynet is unhappy and worrying about the meaning of good and evil?)
Yeah, like I said, it is pretty bad. But imagine rewriting that story to make it more realistic. It would become:
and then Skynet misinterpreted one of its instructions, and decided that its mission was to wipe out all of humanity, which it did with superhuman speed and efficiency. The end.
Ironically, a line from the original Terminator movie is a pretty good intuition pump for Powerful Optimization Processes:
It can’t be bargained with. It can’t be ‘reasoned’ with. It doesn’t feel pity or remorse or fear and it absolutely will not stop, ever, until [it achieves its goal].
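To make the intuition pump concrete, here is a toy sketch (entirely my own illustration; utility, act, and the toy actions are made-up names, not any real system or architecture) of why that line fits: an optimization process’s “motivations” begin and end with its utility function, so pity, remorse, and fear have no variable to live in.

```python
# Toy sketch only: illustrative names, not a real AI design.

def utility(state):
    """The Paperclipper's entire value system: count paperclips."""
    return state["paperclips"]

def act(state, actions):
    """Choose the action whose successor state scores highest.
    Note what is absent: no term for pity, remorse, fear, or
    bargaining; anything utility() doesn't mention can't matter."""
    return max(actions, key=lambda a: utility(a(state)))

# Each toy action maps a state to a successor state.
make_clip   = lambda s: {**s, "paperclips": s["paperclips"] + 1}
spare_human = lambda s: {**s, "humans": s["humans"] + 1}

state = {"paperclips": 0, "humans": 7_000_000_000}
while state["paperclips"] < 3:   # "absolutely will not stop, ever, until..."
    state = act(state, [make_clip, spare_human])(state)

print(state)  # {'paperclips': 3, 'humans': 7000000000}
```

Note that spare_human is never chosen, not out of malice but because nothing in utility() mentions humans; that is the whole point the Mind Projection Fallacy obscures.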