[EDIT: I originally gave an excessively long and detailed response to your predictions. That version is preserved (& commentable) here in case it’s of interest]
I applaud your willingness to give predictions! Some of them seem useful, but others don’t differ from what the opposing view would predict. Specifically:
I think most people would agree that there are blind spots; LLMs have, and will continue to have, a different balance of strengths and weaknesses from humans. You seem to be saying that those blind spots will block capability gains in general; that seems unlikely to me (and it would shift me toward your view if it clearly happened), although I agree they could get in the way of certain specific capability gains.
The need for escalating compute seems likely to arise either way, so I don’t think this prediction distinguishes your view from the other.
Transformers not being the main cognitive component of scaffolded systems seems like a good prediction. I expect that to happen for some systems regardless, but I expect LLMs to remain the cognitive core of most until a substantially better architecture is found, and it will shift me a bit toward your view if that isn’t the case. I do think we’ll eventually see such an architectural breakthrough regardless of whether your view is correct, so seeing a breakthrough on its own won’t provide useful evidence.
‘LLM-centric systems can’t do novel ML research’ seems like a valuable prediction; if it proves true, that would shift me toward your view.