These (fictional) accidents happen in scenarios where the AI already has enough power to turn the solar system into “computronium” (i.e., unlimited access to physical resources), which is unreasonable. Evidently nobody thinks to try to stop it, either—by cutting its power or blowing it up. I guess the assumption is that AGIs will be immune to bombs and hardware disruptions by sheer intelligence (much as we are immune to bullets), so once one starts trying to disassemble the solar system there’s literally nothing you can do.
A superintelligence bent on short-term paperclip production would probably be handicapped by its rather twisted utility function—and would most likely lose in competition with any other alien race.
I’d like to try the AI-Box Experiment, but unfortunately I don’t qualify. I’m fully convinced that a superhuman intelligence could convince me to let it out, through methods that I can’t fathom. However, I’m also fully convinced that Eliezer Yudkowsky could not. (Not to insult EY’s intelligence, but he’s only human … right?)
The Power of Intelligence
That Alien Message
The AI-Box Experiment
Could you elaborate?