This is my “slow scenario”. Not sure whether it’s clear that I meant the things I said here to lean pessimistic – I struggled with whether to clutter each scenario with a lot of “might” and “if things go quickly / slowly” and so forth.
In any case, you are absolutely correct that I am handwaving here, independent of whether I am attempting to wave in the general direction of my median prediction or something else. The same is true in other places, for instance when I argue that even in what I am dubbing a “fast scenario” AGI (as defined here) is at least four years away. Perhaps I should have added additional qualifiers in the handful of places where I mention specific calendar timelines.
What I am primarily hoping to contribute is a focus on specific(ish) qualitative changes that (I argue) will need to emerge in AI capabilities along the path to AGI. A lot of the discourse seems to treat capabilities as a scalar, one-dimensional variable, with the implication that we can project timelines by measuring the rate of increase in that variable. At this point I don’t think that’s the best framing, or at least not the only useful framing.
One hope I have is that others can step in and help construct better-grounded estimates of the things I’m gesturing at, such as how many “breakthroughs” (a term I have notably not attempted to define) would be needed to reach AGI and how many we might expect per year. But I’d be satisfied if my only contribution were that people started talking a bit less about benchmark scores and a bit more about the indicators I list toward the end of the post – or, even better, some improved set of indicators.
That makes sense—I should have mentioned, I like your post overall & agree with the thesis that we should be thinking about what short vs. long timelines worlds will look like and then thinking about what the early indicators will be, instead of simply looking at benchmark scores. & I like your slow vs. fast scenarios, I guess I just think the fast one is more likely. :)