Overcorrecting AGI Timeline Forecasts in the Current AI Boom
Acknowledgements: Thanks to Lowe Lundin and Misha Yagudin for their feedback.
Disclaimer: I am by no means an expert in forecasting or the AI Safety field.
Lately, in conversations I have been having and in the general atmosphere around AI, I have noticed a tendency for many people to get saturated with the news cycle, which (in my view) leads them to overcorrect their AGI timelines and make them too short.
The primary reason for this could be that cut-throat competition generates dense periods of announcements, prompting a specific public relations strategy among tech companies. For instance, when Company A releases a product or service, Company B feels compelled to “respond” to prove that they are at least in the running, preserve their reputation, and keep their internal operations running smoothly. This is essentially a compulsive need of companies to make it seem like something is cooking.
I feel people are not weighing how the competitive dynamics of the AI field could play out: high economic investments (and perhaps even higher projected returns) drive organisational incentive structures, and the resulting hypercompetitive PR shroud produces dense packets of announcements.
Moving forward, our forecasting models must account for periods of intense activity like the past year of announcements. Currently, we are experiencing something of a peak Gartner hype cycle moment with AI, and it seems as though we are on the brink of a groundbreaking discovery, such as AGI, which is generating buzz across the industry.
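To make this concrete, here is a minimal, purely illustrative sketch of what "accounting for dense announcement periods" could look like: instead of treating every announcement in a hyped month as independent evidence, a forecaster could shrink the effective number of observations as the burst becomes more correlated. The function, the likelihood ratio of 1.2, and the correlation of 0.8 are made-up parameters chosen for illustration, not anything drawn from the forecasting literature.

```python
import math

def update_log_odds(prior_log_odds: float,
                    per_announcement_lr: float,
                    n_announcements: int,
                    correlation: float) -> float:
    """Toy Bayesian update that discounts announcements arriving in a
    correlated burst instead of counting each one as independent evidence.

    per_announcement_lr: likelihood ratio a single, genuinely independent
        announcement would carry for "short AGI timelines" (e.g. 1.2).
    correlation: 0.0 = the announcements are independent results;
        1.0 = they are all driven by the same PR dynamic and jointly
        count as a single observation.
    """
    # The effective number of independent observations shrinks as the
    # burst becomes more correlated.
    effective_n = 1 + (n_announcements - 1) * (1 - correlation)
    return prior_log_odds + effective_n * math.log(per_announcement_lr)

# Ten announcements in one hyped month, mostly PR-driven (correlation = 0.8),
# move the needle only about as much as ~2.8 independent results would.
prior = math.log(0.2 / 0.8)  # prior of 20% on the proposition being forecast
posterior = update_log_odds(prior, per_announcement_lr=1.2,
                            n_announcements=10, correlation=0.8)
print(f"posterior probability: {1 / (1 + math.exp(-posterior)):.2f}")
```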
To be fair, we might well be on the brink, and I don’t mean to discount this completely. Some of the increased attention paid to AI could have positive implications, such as more research funding for safety work and some of the best talent being drawn to the alignment cause area. However, it is also plausible that some of the attention is unwarranted, and it may lead to unsafe research due to the need to “respond.”
One way to counter overcorrection is to put a moratorium on adjusting your forecasts and revisit them after the noise in the channel has subsided. This gives your brain time to absorb the sudden shockwaves across the news cycle and should lead to much more rational decision-making.
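As an illustration of the moratorium idea, here is a sketch of one way it could be mechanised: record the urge to update, but only commit the change once no new shock has arrived for a fixed cooldown window. The class, the 30-day window, and the probabilities are hypothetical choices made for the sake of the example.

```python
from datetime import datetime, timedelta
from typing import Optional

class ForecastWithMoratorium:
    """Toy forecast tracker: proposed updates are recorded but only
    committed after a quiet period, modelling the moratorium above."""

    def __init__(self, probability: float, cooldown_days: int = 30):
        self.probability = probability
        self.cooldown = timedelta(days=cooldown_days)
        self.pending: Optional[float] = None
        self.pending_since: Optional[datetime] = None

    def propose_update(self, new_probability: float, now: datetime) -> None:
        # Record the urge to update, but restart the clock on every new
        # shock: nothing is committed until the news cycle has gone quiet.
        self.pending, self.pending_since = new_probability, now

    def review(self, now: datetime) -> float:
        # Commit the pending update only once the cooldown has elapsed.
        if self.pending is not None and now - self.pending_since >= self.cooldown:
            self.probability, self.pending, self.pending_since = self.pending, None, None
        return self.probability

forecast = ForecastWithMoratorium(probability=0.2, cooldown_days=30)
forecast.propose_update(0.6, now=datetime(2023, 3, 15))  # peak-hype impulse
print(forecast.review(now=datetime(2023, 3, 20)))        # still 0.2
print(forecast.review(now=datetime(2023, 4, 20)))        # 0.6, if you still endorse it
```

The design choice here is that every new shock restarts the clock, so the forecast only moves once the channel has actually quieted down.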
The world is a complex place with many variables, but a bit of bird’s-eye skepticism can prove valuable.
My point of view is different. About 4 months ago, I was worried that this spring and summer of 2023 would potentially be FOOM-time. My experience has been one of gratefully feeling like, ‘oh good, GPT-4 wasn’t as scary competent as I was worried it might be. We’re safe for another year or two. Time to calm down and get back to work on alignment at a regular pace.’
Also, I’m one of those weird people who started taking this all seriously about 20 years ago, and I’ve been planning my life around a scary, tricky transition time somewhere around 2025–2030. And in the past few years, I got better at ML, read more papers, and researched enough to form my own inside view on timelines, then realized I didn’t think we had until 2030 before AGI. I don’t think we’ll be doomed right after inventing it, like some do, but I do think it’s going to change our world in scary ways, and that if we don’t deal with it well within a few years, it’ll get out of control and then we’ll be doomed.
GPT-4 was a (slight) update towards “oh hey, maybe we’re not as close as I thought”, but I haven’t updated significantly in years; GPT-3 happened when I expected, GPT-4 happened when I expected, alien gods later this year, extinction or utopia by 2025. I have been somewhat surprised by the exact path things have taken, but compute is an incredibly reliable predictor of how things are going to go.