I used to think that slower takeoff implied shorter timelines, because slow takeoff means that pre-AGI AI is more economically valuable, which means that the economy advances faster, which means that we get AGI sooner. But there’s a countervailing consideration, which is that in slow takeoff worlds you can make arguments like ‘it’s unlikely that we’re close to AGI, because AI can’t do X yet’, where X might be ‘make a trillion dollars a year’ or ‘be as competent as a bee’. I now overall think that arguments for fast takeoff should update you towards shorter timelines.
So slow takeoffs cause shorter timelines, but are evidence for longer timelines.
This graph is a version of this argument: if we notice that current capabilities are at the level of the green line, then if we think we’re on the fast takeoff curve, we’ll conclude that we’re much closer to AGI than we would if we thought we were on the slow takeoff curve.
For the “slow takeoffs mean shorter timelines” argument, see here: https://sideways-view.com/2018/02/24/takeoff-speeds/

This point feels really obvious now that I’ve written it down, and I suspect it’s obvious to many AI safety people, including the people whose writings I’m referencing here. Thanks to various people for helpful comments.
I think that this is why belief in slow takeoffs is correlated with belief in long timelines among the people I know who think a lot about AI safety.
I wrote a whole post on modelling specific continuous or discontinuous scenarios. In the course of trying to build a very simple differential equation model of continuous takeoff, by modifying the models given by Bostrom and Yudkowsky for fast takeoff, the result that fast takeoff means later timelines jumps out naturally.
Varying d between 0 (no recursive self-improvement, RSI) and infinity (a discontinuity) while holding everything else constant looks like this:
If we compare the trajectories, we see two effects: the more continuous the progress is (lower d), the earlier growth accelerates above the exponential trend line (except in the no-RSI case, where growth is always just exponential), and the smoother the transition to the new growth mode is. For d = 0.5, AGI was reached at t = 1.5, but for discontinuous progress this was not until after t = 2. As Paul Christiano says, slow takeoff seems to mean that AI has a larger impact on the world, sooner.
But that model relies on pre-setting a fixed threshold for AGI, given by the parameter I_AGI, in advance. This, along with the starting intelligence of the system, fixes how far away AGI is.
For values of d between 0 and infinity we have varying steepnesses of continuous progress. I_AGI is the intelligence level we identify with AGI: in the discontinuous case it is where the jump occurs, and in the continuous case it is the centre of the logistic curve. Here I_AGI = 4.
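The linked post’s exact equations aren’t quoted in this thread, so the following is only a minimal sketch of the kind of model being described, assuming the RSI feedback enters as a logistic function of intelligence with steepness d centred on I_AGI. All the numbers (c, the RSI boost, I_0 = 1, I_AGI = 4) are made up for illustration, so the times won’t match the figure, but it reproduces the qualitative effect: lower d crosses the AGI threshold earlier.

```python
import math

def simulate(d, I0=1.0, I_AGI=4.0, c=1.0, boost=3.0, dt=1e-3, t_max=5.0):
    """Euler-integrate dI/dt = c * I * (1 + boost * f(I)), where
    f(I) = 1 / (1 + exp(-d * (I - I_AGI))) is a logistic RSI term centred
    on I_AGI with steepness d. At d = 0, f is constant, so there is no
    intelligence-dependent feedback and growth stays exponential; as
    d -> infinity, f approaches a step and the growth rate jumps
    discontinuously at I_AGI. Returns the time at which I first reaches
    I_AGI, or None if that never happens before t_max."""
    I, t = I0, 0.0
    while t < t_max:
        f = 1.0 / (1.0 + math.exp(-d * (I - I_AGI)))
        I += dt * c * I * (1.0 + boost * f)
        t += dt
        if I >= I_AGI:
            return t
    return None

for d in [0.5, 2.0, 50.0]:
    print(f"d = {d:5.1f}: threshold I_AGI reached at t = {simulate(d):.2f}")
```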
You could (I might get round to doing this) model the effect you’re talking about by allowing I_AGI to vary with the level of discontinuity. So every model would start with the same initial intelligence I_0, but I_AGI would be correlated with the level of discontinuity, with a larger discontinuity implying a smaller I_AGI. That way, you would reproduce the epistemic difference that comes from expecting a stronger discontinuity: the current intelligence of AI systems is implied to be closer to what we’d expect to need for explosive growth on discontinuous takeoff scenarios than on continuous ones.
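As a rough sketch of that suggestion (not an actual implementation), one hypothetical way to couple the two parameters is to make the assumed I_AGI a decreasing function of d while keeping I_0 fixed. The particular mapping and numbers below are invented purely for illustration, and it reuses simulate() from the sketch above.

```python
def I_AGI_for(d, high=4.0, low=2.0, k=1.0):
    """Purely illustrative mapping from the discontinuity parameter d to an
    assumed AGI threshold: equal to `high` at d = 0 (fully continuous) and
    sliding towards `low` as d -> infinity (strong discontinuity)."""
    w = d / (d + k)  # 0 at d = 0, approaches 1 as d grows
    return (1.0 - w) * high + w * low

# Reuses simulate() from the sketch above; same caveats about made-up numbers.
for d in [0.5, 2.0, 50.0]:
    threshold = I_AGI_for(d)
    t = simulate(d, I_AGI=threshold)
    print(f"d = {d:5.1f}: assumed I_AGI = {threshold:.2f}, reached at t = {t:.2f}")
```

How much the ordering of takeoff times changes (or even reverses) depends entirely on how strongly the assumed I_AGI shrinks with d.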
We know the current level of capability and the current rate of progress, but we don’t know I_AGI, and holding all else constant, slow takeoff implies that I_AGI is a significantly higher number (again, I_AGI is measured relative to the starting intelligence of the system).
This is because my model was trying to model different physical situations, different ways AGI could be, not different epistemic situations, so I was thinking in terms of I_AGI being some fixed, objective value that we just don’t happen to know.
I’m uncertain whether there’s a rigorous way of quantifying how far this epistemic update goes against the physical fact that continuous takeoff implies an earlier acceleration above the exponential trend. If you’re right, it completely cancels that effect out and makes timelines on discontinuous takeoff earlier overall; I think you’re right about this. It would be easy enough to write something that cancels it out exactly, so that takeoff in the different scenarios happens at the same time, but that’s not what you have in mind.
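Just to make the ‘cancel it out exactly’ option concrete (which, as noted, is not what the parent comment is proposing), a hypothetical calibration could bisect on I_AGI for each d until every scenario crosses the threshold at the same target time. This again builds on the illustrative simulate() sketch above, not on the post’s actual model.

```python
def calibrate_I_AGI(d, target_t, lo=1.05, hi=10.0, iters=60):
    """Bisect on I_AGI so that simulate(d, I_AGI=...) crosses the threshold
    at (approximately) target_t. Crossing time increases monotonically with
    I_AGI, so plain bisection is enough. Reuses simulate() from the sketch
    above; all numbers are illustrative."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        t = simulate(d, I_AGI=mid, t_max=2.0 * target_t)
        if t is None or t > target_t:
            hi = mid  # threshold too high: crossed too late (or never)
        else:
            lo = mid  # threshold too low: crossed too early
    return 0.5 * (lo + hi)

for d in [0.5, 2.0, 50.0]:
    print(f"d = {d:5.1f}: I_AGI calibrated to {calibrate_I_AGI(d, target_t=1.5):.2f}")
```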