It may be an obvious point on which to be biased, but how many of such people then go on to work out birthdates and prediction dates or to look for someone else’s work on those lines like Maes-Garreau?
A lot of folk at SIAI have looked at, and looked for, age correlations.
And found?
1) Among those sampled, the young do not seem to systematically predict a later Singularity.
2) People do update their estimates based on incremental data (as they should), so we distinguish between estimated dates and estimated time-from-present (see the sketch after this list).
2a) A lot of people burned by the 1980s AI bubble shifted both of those into the future.
3) A lot of AI folk with experience from that bubble have a strong taboo against making predictions for fear of harming the field by raising expectations. This skews the log of public predictions.
4) Younger people working on AGI (like Shane Legg, or Google’s Moshe Looks) are a self-selected group and tend to think that it is relatively close (within decades, and within their careers).
5) Random smart folk, not working on AI (physicists, philosophers, economists), of varied ages, tend to put broad distributions on AGI development with central tendencies in the mid-21st century.
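To make the distinction in point 2 concrete, here is a minimal sketch (toy data, not the SIAI analysis itself) of converting each predicted calendar date into a time-from-present horizon, so that a 1985 prediction of 2010 and a 2005 prediction of 2040 can be compared on the same scale:

```python
predictions = [
    # (predictor, year the prediction was made, predicted year of AGI)
    ("A", 1985, 2010),
    ("A", 2005, 2040),  # same person, revised after new data
    ("B", 2010, 2055),
]

for who, made, predicted in predictions:
    horizon = predicted - made  # estimated time-from-present
    print(f"{who}: {predicted} (a {horizon}-year horizon, as of {made})")
```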
Is there any chance of the actual data or writeups being released? It’s been almost 3 years now.
Lukeprog has a big spreadsheet. I don’t know his plans for it.
Hm… I wonder if that’s the big spreadsheet ksotala has been working on for a while?
Yes. An improved version of the spreadsheet, which serves as the data set for Stuart’s recent writeup, will probably be released when the Stuart+Kaj paper is published, or perhaps earlier.
Evidence for, apparently.
Yes, but shouldn’t we use the earliest predictions by a person? Even a heavily biased person may produce reasonable estimates given enough data. The first few estimates are likely to be based mostly on intuition, or in other words, bias.
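As a hedged sketch of that filtering (invented toy records, not the actual spreadsheet), reducing a prediction log to each person’s earliest entry looks like:

```python
predictions = [
    # (predictor, year the prediction was made, predicted year of AGI)
    ("A", 1985, 2010),
    ("A", 2005, 2040),
    ("B", 2010, 2055),
]

earliest = {}
for who, made, predicted in predictions:
    # keep only the entry with the smallest year-made per predictor
    if who not in earliest or made < earliest[who][0]:
        earliest[who] = (made, predicted)

print(earliest)  # {'A': (1985, 2010), 'B': (2010, 2055)}
```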
But which way? There may be a publication bias toward ‘true believers’, but there may also be a bias toward unobjectionably far-away estimates like Minsky’s 5 to 500 years. (One wonders what odds Minsky genuinely assigns to the first AI being created in 2500 AD.)
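That rhetorical question invites a quick calculation. As an illustration only (this is not Minsky’s actual credence, and the ~2000 baseline date is an assumption), here is how much probability the far end of a 5-to-500-year range carries under two naive readings of the interval:

```python
import math

# Read "5 to 500 years" as a distribution over arrival times.
lo, hi = 5.0, 500.0
window = (hi - 50.0, hi)  # the last 50 years of the range, i.e. around 2500 AD

p_uniform = (window[1] - window[0]) / (hi - lo)
p_loguniform = math.log(window[1] / window[0]) / math.log(hi / lo)

print(f"uniform:     {p_uniform:.3f}")    # ~0.101
print(f"log-uniform: {p_loguniform:.3f}") # ~0.023
```

On either reading the 2500 AD tail gets only a few percent of the mass, which is what makes the upper bound so unobjectionable.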
Reasonable. Optimism is an incentive to work, and self-deception is probably relevant.
Evidence for, isn’t it? Especially if they assign even weak credence to significant life-extension breakthroughs, ~2050 is within their conceivable lifespan (since they know humans currently don’t live past ~120, they’d have to be older than ~80 today to be sure of not reaching 2050).
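A back-of-the-envelope check of that parenthetical, assuming the comment was written around 2012 and a ~120-year maximum lifespan (both assumptions, not stated in the thread):

```python
current_year = 2012   # assumed date of the comment
max_lifespan = 120    # rough upper bound on current human lifespan
prediction_year = 2050

years_until = prediction_year - current_year       # 38
min_age_to_miss_it = max_lifespan - years_until    # 82

print(f"Anyone younger than ~{min_age_to_miss_it} today could, "
      f"in principle, live to see {prediction_year}.")
```

So the “older than ~80” cutoff checks out: almost everyone making these predictions could plausibly live to see a 2050 date, which is exactly the condition the Maes-Garreau bias needs.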