Eliezer said: “I encounter people who are quite willing to entertain the notion of dumber-than-human Artificial Intelligence, or even mildly smarter-than-human Artificial Intelligence. Introduce the notion of strongly superhuman Artificial Intelligence, and they’ll suddenly decide it’s ‘pseudoscience’.”
It may be that the notion of strongly superhuman AI runs into preconceptions (possibly of religious origin) that people aren’t willing to give up. But I wonder if the ‘Singularians’ aren’t suffering from a bias of their own. Our current understanding of science and intelligence is compatible with many non-Singularity outcomes:
(a) ‘human-level’ intelligence is, for various physical reasons, an approximate upper bound on intelligence
(b) Scaling past ‘human-level’ intelligence is possible but difficult, because the returns are extremely poor (e.g., logarithmic rather than exponential growth past a certain point; a toy illustration follows this list)
(c) Scaling past ‘human-level’ intelligence is possible and not especially difficult, but runs into an inherent ‘glass ceiling’ far below the ‘incomprehensibility’ of the resulting intelligence
and so on
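To make the contrast in (b) concrete, here is a minimal sketch; the symbols and functional forms are purely illustrative assumptions, not something established above. Let $C(R)$ be the capability reached with resources $R$ beyond some threshold $R_0$ at roughly human level $C_h$:

$$
C_{\text{log}}(R) = C_h + k \log\!\left(\frac{R}{R_0}\right)
\qquad \text{vs.} \qquad
C_{\text{exp}}(R) = C_h \, e^{\lambda (R - R_0)}
$$

In the first case each doubling of resources buys only a fixed increment of capability, so progress effectively stalls; in the second, each added increment of resources multiplies capability, which is the intuition behind a true Singularity.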
Many of these scenarios seem as interesting to me as a true Singularity outcome, but my perception is that they aren’t being given equal time. The Singularity is certainly more ‘vivid,’ but is it more likely?