Why should we believe any prediction that an AGI is likely to be created soon, given the history of these predictions in the past?
What progress has been made at solving the very hard problems of AGI, such as representing general knowledge, or understanding natural language?
Is it possible that humans are incapable of constructing an AGI by reason of our own limited intelligence?
Is it possible for an AGI to be created and yet an intelligence explosion to not happen? [Norvig’s talk at the Singularity Summit posits that this is possible]
(Note that I don’t fully endorse the skepticism of these questions, but they’re questions that reasonable people might ask).