Oh I agree, the scaling will not slow down. But that's because I think TAI/AGI/etc. isn't that far off in terms of OOMs of various inputs. If I thought it were farther off, say 1e36 OOMs, I'd expect that before AI R&D or the economy began to accelerate, we'd run out of steam, scaling would slow significantly, and we'd hit another AI winter.
Ultimately, that's why I decided to cut the section: it was probably false, and it didn't even matter for my thesis about AI safety/alignment.