Turns out that this dataset shows little to no correlation between a researcher’s years of experience in the field and their HLMI timelines. Here’s the trendline: the weak correlation that does exist is positive, with older researchers having slightly longer timelines, the opposite of what you’d expect if everyone predicted AGI to arrive just as they retire.
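For concreteness, here is roughly the check being described: a minimal Python sketch, assuming the responses live in a hypothetical `survey_responses.csv` with hypothetical column names `years_in_field` and `hlmi_years`; the actual dataset’s layout will differ.

```python
# Minimal sketch of the experience-vs-timelines check described above.
# Assumes a hypothetical CSV "survey_responses.csv" with columns
# "years_in_field" and "hlmi_years" (individual HLMI forecasts in years);
# the real survey data is laid out differently.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv").dropna(
    subset=["years_in_field", "hlmi_years"]
)
x = df["years_in_field"].to_numpy(dtype=float)
y = df["hlmi_years"].to_numpy(dtype=float)

# Pearson correlation: per the comment above, near zero, with a slight
# positive sign (more experience -> slightly longer timelines).
r, p = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")

# Least-squares trendline, like the one in the plot being discussed.
slope, intercept = np.polyfit(x, y, 1)
print(f"trendline: hlmi_years = {slope:.2f} * years_in_field + {intercept:.1f}")
```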
My read of this survey is that most ML researchers haven’t updated significantly on the last five years of progress. I don’t think they’re particularly informed about forecasting, and I’d be more inclined to trust the inside-view arguments, but it’s still relevant information. It’s also worth noting that the median number of years until a 10% probability of HLMI is only 10, which suggests they consider HLMI at least plausible on fairly short timelines.
Yes, to be clear, I don’t buy the M-G (Maes-Garreau) law either, on the basis of earlier surveys showing it was just cherry-picking a few points, motivated by dunking on forecasts. But it is still widely informally believed, so I point this out to annoy such people: ‘you can have your M-G law, but you will also have to accept the implication (which you don’t want) that timelines dropped almost an entire decade in this survey & that the past few years have not been business-as-usual or “expected” or “predicted”’.
For people in epistemic positions similar to ours, I think surveys like this are not very useful for updating on timelines, p(doom), etc., but are very useful for updating on what ML researchers think about those things, which is important for different reasons.
(I do not represent AI Impacts, etc.)
I wonder if the fact that there are ~10 respondents who have worked in AI for 7 years, but only one who has for 8 years, is because of Superintelligence, which came out in 2014: anyone it drew into the field would plausibly have started working in AI in 2015.