Who thinks that? I don’t think that. Ajeya doesn’t think that.
I’m going to defend that addendum weakly. I think it’s implicit in a lot of models that assume intelligence will grow to superhumanity by, say, the 2040s, like Scott Alexander’s, or Kurzweil’s after 2030, or your model after 2029. And I suspect that Ajeya does in fact think that AI progress will continue to be like the past, except that she thinks it will be even faster.
If she believes that AI progress will slow down in a decade, then I’ll probably edit or remove that statement.
I literally heard her saying a few weeks ago something to the effect of “it’ll be such a relief when we get through these next few OOMs of progress. Everything is happening so fast now because we are scaling up through so many OOMs so quickly in various metrics. But after a few more years the pace will slow down and we’ll get back to a much slower rate of progress in AI capabilities.”
Her bio anchors model also incorporates some of these effects IIRC.
My model after 2029--what are you referring to? I currently think that probably we’ll have superintelligence by 2029. I definitely agree that if I’m wrong about that and AGI is a lot harder to build than I think, progress in AI will be slowing down significantly around 2030 relative to today’s pace.
Is that realistic? When I plug estimates that I find reasonable into the Epoch interactive model, I find that scaling shouldn’t slow down significantly until about 2030. And at that point we might be entering a regime where the economy is growing quickly enough to support further rapid scaling, if TAI is attainable at sufficiently low FLOP levels. So, actually, our current regime of rapid scaling might not slow down until we approach the limits of the solar system, which is likely over 10 OOMs above our current level.
The reason for this relatively dramatic prediction is that we still have a lot of slack left. The current largest training run is GPT-4, which apparently cost OpenAI only about $50 million. That’s roughly 4-5 OOMs below the maximum amount I’d expect the current world economy to be willing to spend on a single training run before running into fundamental constraints. Moreover, hardware progress and specialization might add another OOM on top of that over the next 6 years.
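To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The $50 million figure and the OOM estimates are just the ballpark numbers from above, not precise data:

```python
# Ballpark figures from the argument above (rough assumptions, not precise data)
gpt4_cost_usd = 50e6       # reported ~$50M for the current largest training run (GPT-4)
spend_headroom_ooms = 4.5  # ~4-5 OOMs more that the world economy might be willing to spend
hardware_ooms = 1.0        # ~1 OOM from hardware progress and specialization over ~6 years

# Implied ceiling on spending for a single training run
max_run_spend_usd = gpt4_cost_usd * 10 ** spend_headroom_ooms
print(f"Implied single-run spending ceiling: ~${max_run_spend_usd:,.0f}")  # ~$1.6 trillion

# Total effective-compute headroom before fundamental constraints bite
print(f"Headroom above GPT-4: ~{spend_headroom_ooms + hardware_ooms} OOMs")  # ~5-6 OOMs
```

On those assumptions, spending plus hardware gains leave roughly 5-6 OOMs of effective-compute headroom above GPT-4 before the constraints above start to bind.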
Oh, I agree, the scaling will not slow down. But that’s because I think TAI/AGI/etc. isn’t that far off in terms of OOMs of various inputs. If I thought it was much farther off, say at 1e36 FLOP, I’d think that before AI R&D or the economy began to accelerate, we’d run out of steam, scaling would slow significantly, and we’d hit another AI winter.
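For a sense of scale, if you treat ~2e25 FLOP as a rough ballpark for a GPT-4-class training run (my assumption here, not an official figure), the gap to a hypothetical 1e36 FLOP requirement is:

```python
import math

current_run_flop = 2e25        # assumed ballpark for a GPT-4-class training run
hypothetical_tai_flop = 1e36   # the "farther off" threshold used above

gap_ooms = math.log10(hypothetical_tai_flop / current_run_flop)
print(f"Gap: ~{gap_ooms:.1f} OOMs of training compute")  # ~10.7 OOMs
```

That is, roughly 10-11 OOMs of scaling would still be needed in that scenario.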
Ultimately, that’s why I decided to cut the section: It was probably false, and it didn’t even matter for my thesis statement on AI safety/alignment.
I’ll grant that Ajeya was misrepresented in this post, and I’ll probably either edit or remove the section.
This isn’t a crux for why I believe AI will be safe, but I think my potential disagreement is that once you reach the human-level compute and memory regime, I do expect it to become more difficult to scale further upwards.
I definitely assign some credence to you being right, so I’ll probably edit or remove that section.