Ah, that’s the one I’m thinking of. He didn’t comment on a Singularity, but he did predict human-level AI by 2000. Some later people did, but I didn’t save any citations at the time and a quick Google search didn’t find any, which is one of the reasons I’m not writing a post on failed Singularity predictions.
Another reason, hopefully, is that there would always have been a wide range of predictions, and there’s a lot of room for proving points by being selective about which ones to highlight. Even if you looked at all the predictions, there are selection effects: the ones that were repeated, or even stated in the first place, tend to be the more extreme ones.