“The Intelligence Explosion Thesis says that an AI can potentially grow in capability on a timescale that seems fast relative to human experience due to recursive self-improvement. This in turn implies that strategies which rely on humans reacting to and restraining or punishing AIs are unlikely to be successful in the long run, and that what the first strongly self-improving AI prefers can end up mostly determining the final outcomes for Earth-originating intelligent life.” —Eliezer Yudkowsky, IEM.
I.e., Eliezer thinks it’ll take less time than it takes you to hit Ctrl-C. (Granted, it takes Eliezer a whole paragraph to say what the essay captures in a phrase, but I digress.)
Eliezer’s position is somewhat more nuanced than that. He admits the possibility of a FOOM timescale on the order of seconds, but a timescale on the order of weeks/months/years is also in line with the IE thesis.
FOOM on the order of seconds can be strongly argued against (Eli does a fair job of it himself, but likes to leave everything open so he can cite himself later no matter what happens), and if it’s weeks/months/years, then hit Ctrl-C. Seriously. If your computer is trying to take over the world and is likely to succeed in the next few weeks, then kill -9 the thing. I realize that at that point you’ve likely got other AIs to worry about, but at least you’re in a position to understand yours well enough to have some hope of making it friendly and re-activating it before Skynet goes live. (I know Eli has counters to this, and counter-counters, and counter-counter-counters, but so do I; I just don’t assume you’ll be interested in hearing them. The main point here is really that the original statement wasn’t so much hyperbole as refreshingly concise, whether or not you agree with it.)
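For readers less steeped in Unix lore, the distinction behind the rhetoric is real: Ctrl-C delivers SIGINT, which a process can catch or ignore, while kill -9 delivers SIGKILL, which the kernel enforces and the process cannot trap. A minimal Python sketch of the escalation, with a sleep process standing in for the hypothetical misbehaving AI:

```python
import signal
import subprocess
import time

# Stand-in target process; substitute whatever you actually mean to stop.
proc = subprocess.Popen(["sleep", "3600"])

# Ctrl-C equivalent: SIGINT can be handled or ignored by the process.
proc.send_signal(signal.SIGINT)
time.sleep(1)

# Still running? Escalate to SIGKILL ("kill -9"), which cannot be caught,
# blocked, or ignored.
if proc.poll() is None:
    proc.send_signal(signal.SIGKILL)

proc.wait()
```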