Eliezer and Robin had a lengthy debate on this perhaps a year ago. I don’t remember if it’s on OB or LW. Robin argues there will be no foom, on economic grounds.
The people who design the first AI could build a large number of AIs in different locations and turn them on at the same time. This plan would have a high probability of leading to disaster; but so would all the other plans I’ve heard.
http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate
Reading now. Looks very interesting.