If dark arts are allowed, it certainly seems like hundreds of millions of dollars spent on AI-horror movies like Terminator are a pretty good start. Barring an actual demonstration of progress toward AI, I wonder what could actually be more effective...
Sometime reasonably soon, getting real, physical robots into the uncanny valley could start to help. Letting imagination run free, I picture a stage show with some kind of spookily competent robot… even something as simple as competent control of real (not CGI) articulated robots would be rather scary… for example, suppose the robot does something shocking like physically taking a human confederate and nailing him to a cross, blood and all. Or something less gross, heh.
Interesting. I wouldn’t want to rule out the “dark arts”, i.e. highly non-rational methods of persuasion.
...
“Needless to say, those who come to me and offer their unsolicited advice [to lie] do not appear to be expert liars. For one thing, a majority of them don’t seem to find anything odd about floating their proposals in publicly archived, Google-indexed mailing lists.”—Eliezer Yudkowsky
There’s a difference between a direct lie and not-quite-rational persuasion. I wouldn’t tell a direct lie about this kind of thing. And those people who would be most persuaded by a gory demo of robots killing people aren’t clever enough to research stuff on the net.
What’s “rational persuasion”, anyway? Is a person supposed to already possess an ability to change their mind according to an agreed-to-be-safe protocol? Teaching rationality first and then presenting your complex case would be more natural, but isn’t necessarily an option.
The problem is that it’s possible to persuade such a person of many wrong things; they aren’t safe from falsity. But if whatever action you’re performing brings them closer to the truth, it’s a positive thing to do in their situation, one selected from among the many negative things that could be done and that happen habitually.
You know, sci-fi that took the realities of mindspace somewhat seriously could be helpful in raising the sanity waterline on AGI; a well-imagined clash between a Friendly AI and a Paperclipper-type optimizer (or just a short story about a Paperclipper taking over) might at least cause readers to rethink the Mind Projection Fallacy.
Won’t work: the clash would only happen in their minds (you don’t fight a war if you know you’ll lose; you can just proceed directly to the final truce agreement). Eliezer’s Three Worlds Collide is a good middle ground, with non-anthropomorphic aliens of human-level intelligence allowing it to depict a familiar kind of action.
IAWYC, but one ingredient of sci-fi is the willingness to sacrifice some true implications if it makes for a better story. It would be highly unlikely for a FAI and a Paperclipper to FOOM at the same moment with comparable optimization powers such that each thinks it gains by battling the other, and downright implausible for a battle between them to occur in a manner and at a pace comprehensible to the human onlookers; but you could make some compelling and enlightening rationalist fiction with those two implausibilities granted.
Of course, other scenarios can come into play. Has anyone even done a good Paperclipper-takeover story? I know there’s sci-fi on ‘grey goo’, but that doesn’t serve this purpose: readers have an easy time imagining such a calamity caused by virus-like unintelligent nanotech, but often don’t think a superhuman intelligence could be so devoted to something of “no real value”.
I’ve seen some bad ones:
http://www.goingfaster.com/term2029/skynet.html
That’s… the opposite of what I was looking for. It’s pretty bad writing, and it’s got the Mind Projection Fallacy written all over it. (Skynet is unhappy and worrying about the meaning of good and evil?)
Yeah, like I said, it is pretty bad. But imagine rewriting that story to make it more realistic. It would become:
…and then Skynet misinterpreted one of its instructions, and decided that its mission was to wipe out all of humanity, which it did with superhuman speed and efficiency. The end.
Ironically, a line from the original Terminator movie is a pretty good intuition pump for Powerful Optimization Processes:
It can’t be bargained with. It can’t be ‘reasoned’ with. It doesn’t feel pity or remorse or fear and it absolutely will not stop, ever, until [it achieves its goal].
Robotics is not advanced enough for a robot to look scary, though military robotics is getting there fast.
A demonstration involving the very latest military robots could have the intended effect in perhaps 10 years.
Shakey the Robot was funded by DARPA; according to my dad, the grant proposals were usually written in such a way as to imply robot soldiers were right around the corner... in 1967. So it only took about 40 years.