Idk what we mean by “AGI”, so I’m predicting when transformative AI will be developed instead. This is still a pretty fuzzy target: at what point do we say it’s “transformative”? Does it have to be fully deployed, with the huge economic impact already visible? Or is it just the point at which model training is complete? I’m erring more on the side of “when the model training is complete”, though there may be lots of models contributing to TAI, in which case it’s not clear which particular model we mean. Nonetheless, this feels a lot more concrete and specific than AGI.
Methodology: use a quantitative model, and then slightly change the prediction to account for important unmodeled factors. I expect to write about this model in a future newsletter.
Some updates:
This should really be thought of as “when we see the transformative economic impact”; I don’t like the “when model training is complete” framing, for basically the reason mentioned above: there may be lots of models.
I’ve updated towards shorter timelines; my median is roughly 2045, with a distribution shape similar to the one above.
One argument for shorter timelines than bio anchors gives is that bio anchors doesn’t take into account how non-transformative AI would accelerate AI progress.
Another relevant argument: the huge difference between training-time compute and inference-time compute suggests that we’ll find ways to get use out of lots of inferences with dumb models rather than a few inferences with smart models; this means we don’t need models as smart as the human brain, which lessens the compute needed at training time.
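To make the shape of that argument concrete, here is a back-of-envelope sketch. The FLOP figures below are illustrative assumptions chosen only to show the scale of the gap, not estimates from this post or from the bio anchors report:

```python
# Back-of-envelope sketch of the training-vs-inference compute gap.
# Both numbers are illustrative assumptions, not real estimates.

TRAIN_FLOP = 1e25   # assumed compute to train one large model
INFER_FLOP = 1e12   # assumed compute for a single forward pass

# For the price of one training run, how many inferences could we buy?
inferences_per_training_run = TRAIN_FLOP / INFER_FLOP
print(f"{inferences_per_training_run:.0e} inferences per training run")
```

If many cheap inferences from a weaker model can substitute for one inference from a stronger model, then the model we train doesn’t need to match the human brain, which is what lowers the required training compute.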
I also feel more strongly about short horizon models probably being sufficient (whereas previously I mostly had a mixture between short and medium horizon models).
Conversely, reflecting on regulation and robustness made me think I was underweighting those concerns, which lengthened my timelines.
Interestingly, I apparently had a median around 2040 back in 2019, so my median is still later than it was before I read the bio anchors report.
My snapshot: https://elicit.ought.org/builder/xPoVZh7Xq