if you assign an extremely low credence to that scenario, then whatever
I don’t assign low credence to the scenario where LLMs don’t scale to AGI (and my point doesn’t depend on this). I assign low credence to the scenario where it’s knowable today that LLMs very likely won’t scale to AGI — that is, that there is some thing I could study now that should change my mind on this. This is more of a crux for me than the question as a whole: studying that thing would be actionable if I knew what it was.
whether or not LLMs will scale to AGI
This wording mostly answers one of my questions: I’m now guessing that you would say LLMs were (in hindsight) “the right kind of algorithm” if the scenario I described comes to pass, which wasn’t clear to me from the post.
Yeah, when I say things like “I expect LLMs to plateau before TAI”, I tend not to say it with the supremely high confidence and swagger that you’d hear from e.g. Yann LeCun, François Chollet, Gary Marcus, Dileep George, etc. I’d be more likely to say “I expect LLMs to plateau before TAI … but, well, who knows, I guess. Shrug.” (The last paragraph of this comment is me bringing up a scenario with a vaguely similar flavor to the thing you’re pointing at.)