Planned summary:
This post argues that AI researchers and AI organizations have an incentive to predict that AGI will come soon, since that leads to more funding, and so we should expect timeline estimates to be systematically too short. Besides the conceptual argument, we can also see this in the field’s response to critics: both historically and now, criticism is often met with counterarguments based on “style” rather than engaging with the technical meat of the criticism.
Planned opinion:
I agree with the conceptual argument, and I think it does hold in practice, quite strongly. I don’t really agree that the field’s response to critics implies that they are biased towards short timelines—see these comments. Nonetheless, I’m going to do exactly what this post critiques, and say that I put significant probability on short timelines, but not explain my reasons (because they’re complicated and I don’t think I can convey them, and certainly can’t convey them in a small number of words).
Is there any group of people who reliably don’t do this? Is there any indication that AI researchers do this more often than others?
¯\_(ツ)_/¯
Note that even if AI researchers do this similarly to other groups of people, that doesn’t change the conclusion that there are distortions that push towards shorter timelines.