But after a few more years the pace will slow down and we’ll get back to a much slower rate of progress in AI capabilities.”
Is that realistic? When I plug some estimates that I find reasonable into the Epoch interactive model, I find that scaling shouldn’t slow down significantly until about 2030. And at that point we might be getting into a regime where the economy should be growing quickly enough to support further rapid scaling, if TAI is attainable at lower FLOP levels. So, actually, our current regime of rapid scaling might not slow down until we approach the limits of the solar system, which is likely over 10 OOMs above our current level.
The reason for this relatively dramatic prediction seems to be that we have a lot of slack left. The current largest training run is GPT-4, which apparently only cost OpenAI about $50 million. That’s roughly 4-5 OOMs away from the maximum amount I’d expect our current world economy to be willing to spend on a single training run before running into fundamental constraints. Moreover, hardware progress and specialization might add another 1 OOM to that in the next 6 years.
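The OOM arithmetic here is easy to sanity-check. As a rough sketch (the $1 trillion spending ceiling and the 1-OOM hardware gain are illustrative assumptions, not figures from the text beyond what's stated above):

```python
import math

# Assumptions (labeled, not authoritative):
#   - GPT-4 training cost ~$50M, the figure cited above
#   - max plausible single-run spend ~$1 trillion (roughly 1% of
#     gross world product), a hypothetical ceiling for illustration
gpt4_cost = 50e6
max_spend = 1e12

# Orders of magnitude of pure spending headroom
spend_ooms = math.log10(max_spend / gpt4_cost)

# Assumed additional OOM from hardware progress/specialization over ~6 years
hardware_ooms = 1.0

print(f"Spending headroom: {spend_ooms:.1f} OOMs")        # ~4.3 OOMs
print(f"Total effective-compute headroom: {spend_ooms + hardware_ooms:.1f} OOMs")
```

With a $1T ceiling this lands at ~4.3 OOMs of spending headroom, consistent with the "4-5 OOMs" estimate; a $10T ceiling would give ~5.3.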
Oh I agree, the scaling will not slow down. But that’s because I think TAI/AGI/etc. isn’t that far off in terms of OOMs of various inputs. If I thought it was farther off, say at 1e36 FLOP, I’d think that before AI R&D or the economy began to accelerate, we’d run out of steam, scaling would slow significantly, and we’d hit another AI winter.
Ultimately, that’s why I decided to cut the section: it was probably false, and it didn’t even matter for my thesis statement on AI safety/alignment.