@elifland what do you think is the strongest argument for long(er) timelines? Do you think it’s essentially just “it takes a long time for researchers to learn how to cross the gaps”?
Or do you think there’s an entirely different frame (an ontology that just looks very different from the one presented in the “benchmarks + gaps” argument)?
A few possible categories of situations in which we might have long timelines, off the top of my head:
1. Benchmarks + gaps is still best: the overall gap is somewhat larger and there’s a slowdown in the compute doubling time after 2028, but trend extrapolations still tell us something about gap trends. This is how I would most naturally think about how timelines stretching through maybe the 2030s would come about, and potentially beyond if neither of the next two holds.
Others are best (more than one of these can be true):
2. The current benchmarks and evaluations are so far away from AGI that trends on them don’t tell us anything (including about how fast the gaps might be crossed). In this case one might want to identify the 1-2 most important gaps and reason about when we will cross them using gears-level reasoning or trend extrapolation/forecasting on “real-world” data (e.g. revenue?) rather than trend extrapolation on benchmarks. Example candidate “gaps” I often hear cited in these sorts of cases are the lack of feedback loops and the “long tail of tasks” / reliability.
3. A paradigm shift in AGI training is needed, and benchmark trends don’t tell us much about when we will achieve it (this is basically Steven’s sibling comment). In this case the best analysis might involve looking at the base rate of paradigm shifts per unit of research effort, and/or looking at specific possible shifts (a rough sketch of that kind of base-rate calculation is below).
^ This taxonomy is not comprehensive, just things I came up with quickly; I might well be missing something important.
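For the paradigm-shift frame in (3), here’s a minimal sketch of what a base-rate calculation could look like, treating the arrival of the needed shift as a Poisson process whose rate scales with research effort. All of the numbers (the historical rate, the effort multiplier) are made-up placeholders for illustration, not claims:

```python
import numpy as np

# Assumed placeholders, not estimates from the discussion above:
base_rate = 1 / 20          # ~one relevant paradigm shift per 20 "effort-years" historically
effort_multiplier = 3.0     # current research effort relative to the historical baseline
years = np.arange(1, 31)    # look out 30 years

# P(at least one shift within t years), under a constant-rate Poisson assumption
p_shift_by_year = 1 - np.exp(-base_rate * effort_multiplier * years)

for t in (5, 10, 20, 30):
    print(f"P(shift within {t:>2} years) ~ {p_shift_by_year[t - 1]:.2f}")
```

The real work here would obviously be in arguing for the historical rate and the effort scaling, not in the arithmetic.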
To give a cop-out answer to your question: if I were making a long-timelines argument, I’d argue that all 3 of those are forecasting approaches worth giving weight to, then aggregate them. If I had to choose just one, I’d probably still go with (1), though.
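A toy version of that “give weight to each, then aggregate” step would treat the three frames as components of a mixture over “years until AGI”; the component distributions and weights below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical "years until AGI" distributions from each frame (all placeholders)
benchmarks_gaps = rng.lognormal(mean=np.log(8),  sigma=0.6, size=n)   # (1) benchmarks + gaps
real_world_gaps = rng.lognormal(mean=np.log(15), sigma=0.8, size=n)   # (2) gears-level / real-world trends
paradigm_shift  = rng.lognormal(mean=np.log(25), sigma=0.9, size=n)   # (3) paradigm-shift base rates

weights = np.array([0.5, 0.3, 0.2])  # assumed credence in each frame

# Aggregate as a mixture: for each sample, pick which frame it comes from
which = rng.choice(3, size=n, p=weights)
samples = np.select([which == 0, which == 1, which == 2],
                    [benchmarks_gaps, real_world_gaps, paradigm_shift])

print("median years to AGI:", round(float(np.median(samples)), 1))
print("P(AGI within 10 years):", round(float((samples <= 10).mean()), 2))
```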
edit: oh, there’s also the “defer to AI experts” argument. I mostly try not to think about deference-based arguments because thinking on the object level is more productive, though if I were really trying to make an all-things-considered timelines distribution there’s some chance I would adjust toward longer timelines due to deference arguments (but also some chance I’d adjust toward shorter ones, given that lots of people who have thought deeply about AGI / are close to the action have short timelines).
There are also “the base rate of super crazy things happening is low”-style arguments, which I don’t give much weight to.