Overcorrecting in AGI Timeline Forecasts in the Current AI Boom
Acknowledgements: Thanks to Lowe Lundin and Misha Yagudin for their feedback.
Disclaimer: I am by no means an expert in forecasting or the AI Safety field.
Lately, in conversations I have been having and in the general atmosphere around AI, I have noticed a tendency for people to become saturated with the news cycle, which I think is leading many of them to overcorrect their AGI timelines and make them too short.
The primary reason could be that cut-throat competition prompts a particular public relations strategy among tech companies, generating dense periods of announcements. For instance, when Company A releases a product or service, Company B feels compelled to “respond” to prove that it is at least in the running, preserve its reputation, and keep its internal operations running smoothly. This amounts to a compulsive need for companies to make it seem like something is always cooking.
I feel people are not accounting for how the competitive dynamics of the AI field play out: high economic investments (and perhaps even higher projected economic returns) drive organisational incentive structures, and the resulting hypercompetitive PR shroud produces dense packets of announcements.
Moving forward, our forecasting models must account for such periods of intense activity, like the past year of announcements. We are currently at something of a peak Gartner hype cycle moment with AI, and the sense that we are on the brink of discovering something groundbreaking, such as AGI, is generating a buzz across the industry.
To be fair, we might well be, and I don’t mean to discount this possibility completely. Some of the increased attention paid to AI could have positive implications, such as more funding for safety research and attracting some of the best talent to the alignment cause area. However, it is also plausible that some of the attention is unwarranted, and that it may lead to unsafe research driven by the need to “respond.”
One way to counter overcorrection is to put a moratorium on adjusting your forecasts and revisit them after the noise in the channel has subsided. This gives your brain time to absorb the sudden shockwaves across the news cycle and should lead to much more rational decision-making.
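To make this concrete, here is a minimal sketch of how one might operationalise such a moratorium. This is purely illustrative and not a prescribed method: the 90-day window, function names, and example dates are arbitrary assumptions.

```python
from datetime import date, timedelta

# Illustrative sketch only: one way to enforce a "moratorium" on forecast
# updates during announcement-dense periods. The 90-day window is an
# assumed cooling-off period, not a recommendation.
MORATORIUM = timedelta(days=90)

def updated_forecast(current_median_year: float,
                     proposed_median_year: float,
                     last_hype_spike: date,
                     today: date) -> float:
    """Return the AGI-timeline median (as a year) to actually record.

    Within the moratorium window after a dense packet of announcements,
    keep the existing forecast; otherwise accept the proposed update.
    """
    if today - last_hype_spike < MORATORIUM:
        return current_median_year  # defer the update until the noise subsides
    return proposed_median_year

# Hypothetical example: a flurry of releases on 2023-03-14 tempts a forecaster
# to pull their median from 2045 to 2032; the rule defers that change for 90 days.
print(updated_forecast(2045, 2032, date(2023, 3, 14), date(2023, 4, 1)))  # 2045
print(updated_forecast(2045, 2032, date(2023, 3, 14), date(2023, 7, 1)))  # 2032
```

The point of the rule is not the particular window length, but forcing a gap between the stimulus (the announcement wave) and the forecast revision.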
The world is a complex place with many variables, but a bit of bird’s-eye skepticism can prove valuable.