I no longer stand by this post, but will preserve it here for historical reasons.
TL;DR: Research and discourse on AGI timelines aren’t as helpful as they may at first appear, and a lot of the low-hanging fruit (i.e. motivating AGI-this-century as a serious possibility) has already been plucked.
David Collingridge famously posed a dilemma for technology governance: in short, interventions tend to come either too early (when you lack sufficient information about a technology's impacts) or too late (when the technology is entrenched and harder to change). Collingridge's proposed solution was essentially to take an iterative approach to governance, with reversible policy interventions. But, people in favor of more work on timelines might ask, why not just frontload information gathering as much as possible, and/or take precautionary measures, so that we can have the best of both worlds?
Again, as noted above, I think there is some merit to this perspective, but it can easily be overstated. In particular, in the context of AI development and deployment, there is only so much value in knowing in advance that certain capabilities are coming at a certain time (at least, assuming there are reasonable upper bounds on how good our forecasts can be, on which more below).
For example, even when my colleagues and I believed with a high degree of confidence that language understanding/generation and image generation capabilities would improve a lot between 2020 and 2022, as a result of efforts we were aware of at our org and others, that knowledge didn't help us prepare all that much. There was still a need for various stakeholders to be "in the room" at various points along the way, to analyze particular systems' capabilities and risks (some of which were not, IMO, possible to anticipate), to coordinate across organizations, to raise awareness of these issues among people who hadn't paid attention to those earlier bullish forecasts/projections (e.g. from scaling laws), etc. Only some of this could or would have gone more smoothly if there had been more and better forecasting of various NLP and image generation benchmarks over the past few years.
I don't see any reason why AGI would be radically different in this respect. We should frontload some of the information gathering via foresight, for sure, but there will still be tons of contingent details that won't be possible to anticipate, as well as many cases where knowing that things are coming won't help much, because having an impact requires actually "being there" (in both space and time).
EDIT: I no longer endorse Ajeya’s report, and instead defer to this report:
https://www.lesswrong.com/posts/3nMpdmt8LrzxQnkGp/ai-timelines-via-cumulative-optimization-power-less-long