It should apply to AIs if you think that there will be multiple AIs at roughly the same capability level. A common assumption here is that as soon as there is a single general AI, it will quickly improve to the point where it is so far beyond everything else that their capabilities won't matter. Frankly, I find this assumption highly questionable and very optimistic about potential fooming rates, among other problems, but if one accepts the idea it makes some sense. The analogy might be to a hypothetical situation where the US not only has the strongest military but also has monopolies on cheap fusion power and an immortality pill, and has a bunch of superheroes on its side. The distinction between the US controlling everything and the US having direct military control might quickly become irrelevant.
Edit: Thinking about the rate-of-fooming issue, I'd be really interested if a fast-foom proponent would be willing to put together a top-level post outlining why fooming will happen so quickly.
Eliezer and Robin had a lengthy debate on this perhaps a year ago; I don't remember whether it's on OB or LW. Robin argues against a fast foom, using economic arguments.
The people who design the first AI could build a large number of AIs in different locations and turn them on at the same time. This plan would have a high probability of leading to disaster; but so do all the other plans that I’ve heard.
http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate
Reading now. Looks very interesting.