Planned summary for the Alignment Newsletter:

This post outlines and argues for three reasons to expect long AI timelines, which the author believes are not taken into account in current forecasting efforts:
1. **Technological deployment lag:** Most technologies take decades between when they’re first developed and when they become widely impactful.
2. **Overestimating the generality of AI technology:** Just as people in the 1950s and 1960s overestimated the impact of solving chess, it seems likely that people today are overestimating the impact of recent progress, and how far it can scale in the future.
3. **Regulation will slow things down,** as with [nuclear energy](https://rootsofprogress.org/devanney-on-the-nuclear-flop), for example.
Planned opinion:

You might argue that the first and third points don’t matter, since what we care about is when AGI is _developed_, as opposed to when it becomes widely deployed. However, it seems that we continue to have the opportunity to intervene until the technology becomes widely impactful, so the time of widespread impact is the relevant quantity for decision-making. You could have some specific argument, like “the AI goes FOOM and very quickly achieves all of its goals”, that implies development time is the right thing to forecast, but none of these arguments seem all that obvious.