The list had me wondering where the political problems went.
You’re right. If at some point the general public starts to take risks from AI seriously and realizes that SI is actually trying to take over the universe without their consent, then a better-case scenario would be that SI gets shut down and its members sent to prison. Some of the not-so-good scenarios might include the complete extermination of the Bay Area if some foreign party believes that they are close to launching an AGI capable of recursive self-improvement.
Sounds ridiculous? Well, what do you think the reaction of governments and billions of irrational people will be when they learn, and actually believe, that a small group of American white male (Jewish) atheist geeks is going to take over the whole universe? BOOM instead of FOOM.
Reference (Eliezer Yudkowsky in an interview with John Baez):
...—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)
If at some point the general public starts to take risks from AI seriously and realizes that SI is actually trying to take over the universe without their consent, then a better-case scenario would be that SI gets shut down and its members sent to prison.
It doesn’t sound terribly likely. People are more likely to guffaw: “So, you’re planning to take over the world? And you can’t tell us how, because that’s secret information? Right. Feel free to send us a postcard letting us know how you’re getting on with that.”
Well, what do you think the reaction of governments and billions of irrational people will be when they learn, and actually believe, that a small group of American white male (Jewish) atheist geeks is going to take over the whole universe?
Again, why would anyone believe that, though? Plenty of people dream of ruling the universe, but so far nobody has pulled it off.
Most people are more worried about the secret banking cabal with the huge supercomputers, the billions of dollars in spare change, and the shadowy past, who are busy banging away at the core problem of inductive inference, than they are about the ‘friendly’ non-profit with its videos and PDF files, and probably rightly so.