Ben's post states,

Finally, I note that most of the other knowledgeable futurist scientists and philosophers, who have come into close contact with SIAI's perspective, also don't accept the Scary Idea. Examples include Robin Hanson, Nick Bostrom and Ray Kurzweil.
Is there a reference for Bostrom’s position on AGI-without-FAI risk? Is Goertzel correct here?
He wrote Ethical Issues in Advanced Artificial Intelligence, which does caution against non-friendly AGI:

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.
The question is not whether Bostrom urges caution (which Goertzel and many others also urge), but whether Bostrom agrees that the Scary Idea is true—that is, whether projects like Ben’s and others will probably end the human race if developed without a pre-existing FAI theory, and whether the only (or most promising) way to not incur extremely high risk of wiping out humanity is to develop FAI theory first.
Ben’s post states,
Is there a reference for Bostrom’s position on AGI-without-FAI risk? Is Goertzel correct here?
He wrote Ethical Issues in Advanced Artificial Intelligence, which does caution against non-friendly AGI:
The question is not whether Bostrom urges caution (which Goertzel and many others also urge), but whether Bostrom agrees that the Scary Idea is true—that is, whether projects like Ben’s and others will probably end the human race if developed without a pre-existing FAI theory, and whether the only (or most promising) way to not incur extremely high risk of wiping out humanity is to develop FAI theory first.
Right, forgot about that.