The question is not whether Bostrom urges caution (which Goertzel and many others also urge), but whether Bostrom agrees that the Scary Idea is true: that is, whether projects like Ben’s and similar efforts will probably end the human race if developed without a pre-existing FAI theory, and whether developing FAI theory first is the only (or most promising) way to avoid an extremely high risk of wiping out humanity.

He wrote Ethical Issues in Advanced Artificial Intelligence, which does caution against non-friendly AGI:

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

The question is not whether Bostrom urges caution (which Goertzel and many others also urge), but whether Bostrom agrees that the Scary Idea is true: that is, whether projects like Ben’s and similar efforts will probably end the human race if developed without a pre-existing FAI theory, and whether developing FAI theory first is the only (or most promising) way to avoid an extremely high risk of wiping out humanity.
Right, forgot about that.