Katja Grace’s 2015 survey of NIPS and ICML researchers provided an aggregate forecast giving a 50% chance of HLMI occurring by 2060 and a 10% chance of it occurring by 2024.
2015 feels decades ago though. That’s before GPT-1!
(Today, seven years after the survey was conducted, you might want to update against the researchers who predicted HLMI by 2024.)
I would expect a survey done today to have more researchers predicting 2024. Certainly I’d expect a median before 2060! My layman impression is that things have turned out to be easier for big language models than expected, not harder.
This was heavily upvoted at the time of posting, including by me. It turns out to be mostly wrong. AI Impacts just released a survey of 4271 NeurIPS and ICML researchers conducted in 2021 and found that the median year for expected HLMI is 2059, down only two years from 2061 since 2016. Looks like the last five years of evidence hasn’t swayed the field much. My inside view says they’re wrong, but the opinions of the field and our inability to anticipate them are both important.
The surveys urgently need to be updated.
https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/