Suppose that we get AGI tomorrow because of a fast take-off. If so, timelines will be extremely short.
If we instead suppose that take-off will be gradual, then it seems impossible for timelines to be that short.
So in this scenario—this choice of AGI difficulty—conditioning on gradual take-off doesn’t seem to imply shorter timelines.
Those were two different scenarios with two different amounts of AGI difficulty! In the first scenario, we have enough knowledge to build AGI today; in the second we don’t have enough knowledge to build AGI today (and that is part of why the take-off will be gradual).
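To make that concrete with a purely illustrative bit of shorthand (none of this notation is from the post): write D for AGI difficulty, G for take-off speed, and T for when we get AGI. The claim that gradual take-off implies shorter timelines is naturally read as a comparison at a fixed difficulty, whereas the two suppositions above change the difficulty along with the take-off speed:

\[
E[\,T \mid G = \text{gradual},\ D = d\,] \;<\; E[\,T \mid G = \text{fast},\ D = d\,]
\qquad \text{(same difficulty } d \text{ in both cases)}
\]
\[
(G = \text{fast},\ D = \text{easy}) \quad \text{vs.} \quad (G = \text{gradual},\ D = \text{hard})
\qquad \text{(the two suppositions above: } D \text{ changes too)}
\]

On this toy framing, the "AGI tomorrow" timeline in the first supposition comes from setting D = easy, not from the fast take-off as such.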
My argument involved scenarios with fast take-off and short timelines. There is a clarificatory part of the post that discusses the converse case of a gradual take-off and long timelines:
Is it inconsistent, then, to think both that take-off will be gradual and timelines will be long? No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.
Maybe a related clarification could be made about the fast take-off/short time-line combination.
However, this claim also confuses me a bit:
No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. This belief implies both a gradual take-off and long timelines.
The main claim in the post is that gradual take-off implies shorter time-lines. But here the author seems to say that according to the view “that marginal improvements in AI capabilities are hard”, gradual take-off and longer timelines correlate. And the author seems to suggest that that’s a plausible view (though empirically it may be false). I’m not quite sure how to interpret this combination of claims.
Maybe a related clarification could be made about the fast take-off/short time-line combination.
Right. I guess the view here is that “The threshold level of capabilities needed for explosive growth is very low.” That would imply that we hit explosive growth before AIs are useful enough to be integrated into the economy, i.e. a sudden take-off.
The main claim in the post is that gradual take-off implies shorter time-lines. But here the author seems to say that according to the view “that marginal improvements in AI capabilities are hard”, gradual take-off and longer timelines correlate. And the author seems to suggest that that’s a plausible view (though empirically it may be false). I’m not quite sure how to interpret this combination of claims.
If “marginal improvements in AI capabilities are hard”, then we must have a gradual take-off, and timelines are probably “long” by the community’s standards. In such a world, you simply can’t have a sudden take-off, so a gradual take-off still happens on shorter timelines than a sudden take-off (i.e. sooner than never).
I realise I have used two different meanings of “long timelines”: 1) “long” by people’s standards; 2) “longer” than in the counterfactual take-off scenario. Sorry for the confusion!
Thanks.
I agree with Rohin’s comment above.