I’m curious about the extent to which LW/OB/SIAI-style concerns about unfriendly AI are of interest in the academic AI community. This book provides one datapoint, so:
It mentions the singularity and the SIAI (p. 646-648):
Some people have pointed out that HLAI [human-level AI] necessarily implies superhuman-level intelligence [...]
In 2004, The Singularity Institute for Artificial Intelligence (SIAI) was
formed “to confront this urgent challenge, both the opportunity and the risk.”
Its Director of Research, Ben Goertzel, is also chair of an organization called
the “Artificial General Intelligence Research Institute” [...]
[Note that the passage never spells out what “the risk” actually is. It also gives the impression that Ben Goertzel is in charge of SIAI—if I remember right he never was, and he has now left entirely.]
I think there’s no mention of friendliness, even under other names (did anyone find one?). It’s not that such questions would be considered off-topic:
Besides the criticisms of AI based on what people claim it cannot do, there are
also criticisms based on what people claim it should not do. Some of the
“should-not” people mention the inappropriateness of machines attempting to
perform tasks that are inherently human-centric, such as teaching, counseling,
and rendering judicial opinions. Others, such as the Computer Professionals
for Social Responsibility mentioned previously, don’t want to see AI
technology (or any other technology for that matter) used in warfare or for
surveillance or for tasks that require experience-based human judgment. In
addition, there are those who, like the Luddites of 19th century Britain, are
concerned about machines replacing humans and thereby causing
unemployment and economic dislocation. Finally, there are those who worry
that AI and other computer technology would dehumanize people, reduce the
need for person-to-person contact, and change what it means to be human.
(p. 393)