Some updates:
- This should really be thought of as “when we see the transformative economic impact”; I don’t like the “when model training is complete” framing (for basically the reason mentioned above: there may be lots of models).
- I’ve updated towards shorter timelines; my median is roughly 2045, with a distribution shaped similarly to the one above.
- One argument for timelines shorter than those in bio anchors: “bio anchors doesn’t take into account how non-transformative AI would accelerate AI progress” (e.g. by speeding up AI research itself).
- Another relevant argument: “the huge difference between training-time compute and inference-time compute suggests that we’ll find ways to get use out of many inferences from dumb models rather than a few inferences from smart models; this means we don’t need models as smart as the human brain, reducing the compute needed at training time” (see the back-of-the-envelope sketch after this list).
- I also feel more strongly that short-horizon models will probably be sufficient (whereas previously I put significant weight on a mixture of short- and medium-horizon models).
- Conversely, reflecting on regulation and robustness made me think I had been underweighting those concerns, which lengthened my timelines.
- Interestingly, I apparently had a median around 2040 back in 2019, so my median is still later than it was before I read the bio anchors report.
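
To make the training-vs-inference argument above concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the common approximations of roughly 6ND FLOPs to train a model and 2N FLOPs per generated token at inference (N = parameters, D = training tokens); the specific model sizes and token counts are purely illustrative placeholders, not numbers from bio anchors.

```python
# Back-of-the-envelope sketch of the training vs. inference compute gap.
# Assumes the common approximations: training ≈ 6 * N * D FLOPs and
# inference ≈ 2 * N FLOPs per token (N = parameters, D = training tokens).
# All concrete numbers are illustrative, not claims from the post.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total compute to train a model."""
    return 6 * n_params * n_tokens

def inference_flops_per_token(n_params: float) -> float:
    """Approximate compute to generate one token at inference time."""
    return 2 * n_params

# Hypothetical "smart" model and a 10x-smaller "dumb" model.
smart_params = 1e12
dumb_params = 1e11
train_tokens = 1e13

ratio = training_flops(smart_params, train_tokens) / inference_flops_per_token(smart_params)
print(f"Training the smart model costs as much as ~{ratio:.0e} of its own inferences.")

# The compute saved by training the smaller model instead buys a huge number
# of extra inferences from it -- the "lots of inferences with dumb models" option.
saved = training_flops(smart_params, train_tokens) - training_flops(dumb_params, train_tokens)
extra_tokens = saved / inference_flops_per_token(dumb_params)
print(f"Skipping the smart model's training buys ~{extra_tokens:.0e} extra dumb-model tokens.")
```

The point is just the orders of magnitude: under these approximations the training-to-inference-per-token ratio is about 3D, i.e. it is set by the size of the training set, so any technique that substitutes many cheap inferences for extra model scale can substantially cut the training compute required.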