Strong-upvoted for thoughtful and careful engagement!
My own two cents on this issue: I basically accept Davidson’s model as our current-best-guess so to speak, though I acknowledge that things could be slower or faster for various reasons including the reasons you give.
I think it’s important to emphasize (a) that Davidson’s model is mostly about pre-AGI takeoff (20% automation to 100%) rather than post-AGI takeoff (100% to superintelligence), but that it strongly suggests the latter will be very fast (relative to what most people naively expect): on the order of weeks, probably, and very likely less than a year. To see this, play around with takeoffspeeds.com and look at the slope of the green line after AGI is achieved. It’s hard not to have it cross several OOMs in a single year before it starts to asymptote; i.e., in a single year we get several OOMs of software/algorithm improvement beyond AGI. There is no definition of superintelligence in the model, but I use that as a proxy; a toy calculation after point (b) below makes the arithmetic concrete. (Oh, and now that I think about it more, I’d guess that Davidson’s model significantly underestimates the speed of post-AGI takeoff, because it might just treat anything above AGI as merely 100% automation, whereas actually there are different degrees of 100% automation corresponding to different levels of intelligence quality; 100% automation by ASI will yield significantly more research oomph than 100% automation by AGI. But I’d need to reread the model to decide whether this is true or not. You’ve read it recently; what do you think?)
And (b) Davidson’s model says that while there is significant uncertainty over how fast takeoff will be if it happens in the ’30s or beyond, if it happens in the ’20s—i.e. if AGI is achieved in the ’20s—then it’s pretty much gotta be pretty fast. Again this can be seen by playing around with the widget on takeoffspeeds.com.
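To put rough numbers on the “several OOMs in a single year” claim in (a), here’s a toy calculation (the doubling times are illustrative placeholders of mine, not outputs of Tom’s model): a constant software doubling time maps directly onto OOMs of effective improvement per year.

```python
import math

def ooms_per_year(doubling_time_months: float) -> float:
    """OOMs of software/algorithmic improvement per year, assuming progress
    compounds at a constant doubling time (a toy assumption, not the model)."""
    doublings_per_year = 12 / doubling_time_months
    return doublings_per_year * math.log10(2)

# Hypothetical post-AGI software doubling times:
print(ooms_per_year(3.0))  # ~1.2 OOMs/year
print(ooms_per_year(1.0))  # ~3.6 OOMs/year
```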
...
Other cents from me:
--I work at OpenAI and I see how the sausage gets made. Already things like Copilot and ChatGPT are (barely, but noticeably) accelerating AI R&D. I can see a clear path to automating more and more parts of the research process, and my estimate is that going 10x faster is something like a lower bound on what would happen if we had AGI (e.g. if AutoGPT worked well enough that we could basically use it as a virtual engineer + scientist), and my central estimate would be “it’s probably about 10x when we first reach AGI, but then it quickly becomes 100x, 1000x, etc. as qualitative improvements kick in.” There’s a related issue of how much ‘room to grow’ there is, i.e. how much low-hanging fruit there is to pick that would improve our algorithms, supposing we started from something like “It’s AutoGPT but good, as good as an OAI employee.” My answer is “Several OOMs at least.” So my nose-to-the-ground impression is if anything more bullish/fast-takeoff-y than Davidson’s model predicts.
--I do agree with the maxim “everything is always harder and always takes longer than you think, even when you take this into account”. In fact I’ve been mildly surprised by this in the recent past (things took longer than I expected, even though I was trying to take it into account). This gives me some hope. I have more to say on the subject but I’ve rambled for long enough...
...
Point by point replies:
I expect that during the transition period, where humans and AIs are each making significant contributions to AI R&D, there will be significant lags in taking full advantage of AI (project management and individual work habits will continually need to be adjusted), with resulting inefficiencies. Davidson touches on these ideas, but AFAICT does not include them in the model; for instance, “Assumes no lag in reallocating human talent when tasks have been automated.”
Agreed. This is part of the reason why I think Davidson’s model overestimates the speed at which AI will influence the economy in general, and the chip industry in particular. I think the AI industry will be accelerated far before the chip industry or the general economy, however; we’ll probably get a “software singularity.” And unfortunately that’s ‘good enough’ from an AI-risk perspective, because to a first approximation what matters is how smart the AIs are and whether they are aligned, not how many robot bodies they control.
It may be that when AI has automated X% of human inputs into AI R&D, the remaining inputs are the most sophisticated part of the job, and can only be done by senior researchers, meaning that most of the people being freed up are not immediately able to be redirected to non-automated tasks. It might even be the case that, by abandoning the lower-level work, we (humans) would lose our grounding in the nuts and bolts of the field, and the quality of the higher-level work we are still doing might gradually decline.
Agreed. I would be interested to see a revised model in which humans whose jobs are automated basically don’t get reallocated at all. I don’t think the bottom-line conclusions of the model would change much, but I could be wrong—if takeoff is significantly slower, that would be an update for me.
I think of AI progress as being driven by a mix of cognitive input, training data, training FLOPs, and inference FLOPs. Davidson models the impact of cognitive input and inference FLOPs, but I didn’t see training data or training FLOPs taken into account. (“Doesn’t model data/environment inputs to AI development.”) My expectation is that as RSI drives an increase in cognitive input, training data and training FLOPs will be a drag on progress. (Training FLOPs will be increasing, but not as quickly as cognitive inputs.)
Training FLOPs is literally the most important and prominent variable in the model: it’s the “AGI training requirements” variable. I agree that possible data bottlenecks are ignored; if it turns out that data is the bottleneck, timelines to AGI will be longer (and possibly takeoff slower? Depends on how the data problem eventually gets solved; takeoff could be faster in some scenarios...). Personally I don’t think the data bottleneck will slow us down much, but I could be wrong.
I specifically expect progress to become more difficult as we approach human-level AGI, as human-generated training data will become less useful at that point. We will also be outrunning our existence proof for intelligence; I expect superhuman intelligence to be feasible, but we don’t know for certain that extreme superhuman performance is reasonably achievable, and so we should allow for some probability that progress beyond human performance will be significantly more difficult.
I think I disagree here. I mean, I technically agree that since most of our data is human-generated, there’s going to be some headwind at getting to superhuman performance. But I think this headwind will be pretty mild, and also, to get to AGI we just need to get to human-level performance not superhuman. Also I’m pretty darn confident that extreme superhuman performance is reasonably achievable; I think there is basically no justification for thinking otherwise. (It’s not like evolution explored making even smarter humans and found that they didn’t get smarter beyond a certain point and we’re at that point now. Also, scaling laws. Also, the upper limits of human performance might as well be superintelligence for practical purposes—something that is exactly as good as the best human at X, for all X, except that it also thinks at 100x speed and there are a million copies of it that share memories all working together in a sort of virtual civilization… I’d go out on a limb and say that’s ASI.)
As we approach human-level AGI, we may encounter other complications: coordination problems and transition delays as the economy begins to evolve rapidly, increased security overhead as AI becomes increasingly strategic for both corporations and nations (and as risks hopefully are taken more seriously), etc.
Yeah I agree here. I think the main bottleneck to takeoff speed will be humans deliberately going less fast than they could for various reasons, partly just stupid red tape and overcaution, and partly the correct realization that going fast is dangerous. Tom’s model basically doesn’t model this. I think of Tom’s model as something like “how fast could we go if we weren’t trying to slow down at all and in fact were racing hard against each other.” In real life I dearly hope the powerful CEOs and politicians in the world will be more sane than that.
Thanks for the thoughtful and detailed comments! I’ll respond to a few points, otherwise in general I’m just nodding in agreement.
I think it’s important to emphasize (a) that Davidson’s model is mostly about pre-AGI takeoff (20% automation to 100%) rather than post-AGI takeoff (100% to superintelligence), but that it strongly suggests the latter will be very fast (relative to what most people naively expect): on the order of weeks, probably, and very likely less than a year.
And it’s a good model, so we need to take this seriously. My only quibble would be to raise again the possibility (only a possibility!) that progress becomes more difficult around the point where we reach AGI, because that is the point where we’d be outgrowing human training data. I haven’t tried to play with the model and see whether that would significantly affect the post-AGI takeoff timeline.
(Oh, and now that I think about it more, I’d guess that Davidson’s model significantly underestimates the speed of post-AGI takeoff, because it might just treat anything above AGI as merely 100% automation, whereas actually there are different degrees of 100% automation corresponding to different levels of intelligence quality; 100% automation by ASI will yield significantly more research oomph than 100% automation by AGI. But I’d need to reread the model to decide whether this is true or not. You’ve read it recently; what do you think?)
I want to say that he models this by equating the contribution of one ASI to more than one AGI, i.e. treating additional intelligence as equivalent to a speed boost. But I could be mis-remembering, and I certainly don’t remember how he translates intelligence into speed. If it’s just that each post-AGI factor of two in algorithm / silicon improvements is modeled as yielding twice as many AGIs per dollar, then I’d agree that might be an underestimate (because one IQ 300 AI might be worth a very large number of IQ 150 AIs, or whatever).
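To spell out the distinction I have in mind, here’s a toy comparison (purely my own construction, not Tom’s actual equations): treating each post-AGI algorithmic gain purely as proportionally more AGI-copies, versus also giving smarter-than-AGI systems a quality premium.

```python
def output_speed_only(algo_multiplier: float) -> float:
    """Each post-AGI algorithmic gain just buys proportionally more AGI-equivalents."""
    return algo_multiplier

def output_with_quality_premium(algo_multiplier: float, premium: float = 1.5) -> float:
    """Hypothetical alternative: smarter individual systems contribute
    superlinearly. The exponent is an arbitrary illustration, not an estimate."""
    return algo_multiplier ** premium

for m in (2, 8, 32):
    print(m, output_speed_only(m), round(output_with_quality_premium(m), 1))
# 2 2 2.8
# 8 8 22.6
# 32 32 181.0
```

If the model does the former, the gap between the two columns is roughly the size of the underestimate we’re discussing.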
And (b) Davidson’s model says that while there is significant uncertainty over how fast takeoff will be if it happens in the ’30s or beyond, if it happens in the ’20s—i.e. if AGI is achieved in the ’20s—then it’s pretty much gotta be pretty fast. Again this can be seen by playing around with the widget on takeoffspeeds.com.
Yeah, even without consulting any models, I would expect that any scenario where we achieve AGI in the 20s is a very scary scenario for many reasons.
--I work at OpenAI and I see how the sausage gets made. Already things like Copilot and ChatGPT are (barely, but noticeably) accelerating AI R&D. I can see a clear path to automating more and more parts of the research process, and my estimate is that going 10x faster is something like a lower bound on what would happen if we had AGI (e.g. if AutoGPT worked well enough that we could basically use it as a virtual engineer + scientist), and my central estimate would be “it’s probably about 10x when we first reach AGI, but then it quickly becomes 100x, 1000x, etc. as qualitative improvements kick in.” There’s a related issue of how much ‘room to grow’ there is, i.e. how much low-hanging fruit there is to pick that would improve our algorithms, supposing we started from something like “It’s AutoGPT but good, as good as an OAI employee.” My answer is “Several OOMs at least.” So my nose-to-the-ground impression is if anything more bullish/fast-takeoff-y than Davidson’s model predicts.
What is your feeling regarding the importance of other inputs, i.e. training data and compute?
> I think of AI progress as being driven by a mix of cognitive input, training data, training FLOPs, and inference FLOPs. Davidson models the impact of cognitive input and inference FLOPs, but I didn’t see training data or training FLOPs taken into account. (“Doesn’t model data/environment inputs to AI development.”) My expectation is that as RSI drives an increase in cognitive input, training data and training FLOPs will be a drag on progress. (Training FLOPs will be increasing, but not as quickly as cognitive inputs.)
Training FLOPs is literally the most important and prominent variable in the model: it’s the “AGI training requirements” variable. I agree that possible data bottlenecks are ignored; if it turns out that data is the bottleneck, timelines to AGI will be longer (and possibly takeoff slower? Depends on how the data problem eventually gets solved; takeoff could be faster in some scenarios...). Personally I don’t think the data bottleneck will slow us down much, but I could be wrong.
Ugh! This was a big miss on my part; thank you for calling it out. I skimmed too rapidly through the introduction. I saw references to biological anchors and I think I assumed that meant the model was starting from an estimate of FLOPs performed by the brain (i.e. during “inference”) and projecting when the combination of more-efficient algorithms and larger FLOPs budgets (due to more $$$ plus better hardware) would cross that threshold. But on re-read, of course you are correct and the model does focus on training FLOPs.
Sounds like we are basically on the same page!
Re: your question:
Compute is a very important input, important enough that it makes sense IMO to use it as the currency by which we measure the other inputs (this is basically what Bio Anchors + Tom’s model do).
There is a question of whether we’ll be bottlenecked on it in a way that throttles takeoff; it may not matter if you have AGI, if the only way to get AGI+ is to wait for another even bigger training run to complete.
I think in some sense we will indeed be bottlenecked by compute during takeoff… but that nevertheless we’ll be going something like 10x–1000x faster than we currently go, because labor can substitute for compute to some extent (not so much if it’s going at 1x speed, but very much if it’s going at 10x, 100x speed) and we’ll have a LOT of sped-up labor. Like, I do a little exercise where I think about what my coworkers are doing and I imagine what if they had access to AGI that was exactly as good as they are at everything, only 100x faster. I feel like they’d make progress on their current research agendas about 10x as fast. Could be a bit less, could be a lot more. Especially once we start getting qualitative intelligence improvements over typical OAI researchers, it could be a LOT more, because in scientific research there seem to be HUGE returns to quality: the smartest geniuses seem to accomplish more in a year than 90th-percentile scientists accomplish in their lifetime.
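To make the labor-for-compute substitution point a bit more concrete, here’s a toy sketch (the functional form and all the parameters are illustrative; IIRC Tom’s report combines cognitive labor and compute with a CES-style production function, but these numbers are mine, not his):

```python
def research_speedup(labor_mult: float, compute_mult: float,
                     rho: float = -1.0, labor_share: float = 0.5) -> float:
    """Toy CES combination of cognitive-labor and compute multipliers.
    rho < 0 means the inputs substitute poorly, so the scarcer input
    (here compute) caps the overall speedup; rho closer to 1 means labor
    substitutes for compute more easily. All parameters are made up."""
    return (labor_share * labor_mult ** rho
            + (1 - labor_share) * compute_mult ** rho) ** (1 / rho)

# 100x more (or faster) cognitive labor, but only 2x the compute:
print(research_speedup(100.0, 2.0))           # ~3.9x overall
print(research_speedup(100.0, 2.0, rho=0.5))  # ~32.6x if substitution is easy
```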
Training data also might be a bottleneck. However, I think that by the time we are about to hit AGI and/or have just hit AGI, it won’t be. Smart humans are able to generate their own training data, so to speak; the entire field of mathematics is a bunch of people talking to each other, iteratively adding proofs to the blockchain (so to speak), and learning from each other’s proofs. That’s just an example, I think, of how around AGI we should basically have a self-sustaining civilization of AGIs talking to each other, evaluating each other’s outputs, and learning from them. And this is just one of several ways in which the training data bottleneck could be overcome. Another is better algorithms that are more data-efficient. The human brain seems to be more data-efficient than modern LLMs, for example. Maybe we can figure out how it manages that.