I’m much more uncertain about it holding to GPT-5 (let alone AGI) for various reasons
As someone who shares the intuition that scaling laws break down “eventually, but probably not immediately” (loosely speaking), can I ask you why you think that?
A mix of factors: hitting a ceiling on the data available to train on; increased scaling not giving returns obvious enough, through an economic lens, to stay heavily incentivized for long (whether for regulatory reasons or because the model is being pushed toward tasks it’s only tangentially good at), which is more of a practical note than a theoretical one; and a general allowance for wide confidence intervals over periods longer than a year or two. To be clear, I don’t think it’s much more probable than not that these would break scaling laws. I can think of plausible-sounding ways all of these end up not being problems. But I don’t have high credence in those predictions, which is why I’m much more uncertain about them.