It’s possible that GPT-3 is roughly where the maximally naive simple text LM begins to hit the constant wall, but I don’t regard this as important; as I emphasize at every turn, there are many distinct ways to improve it greatly using purely known methods, never mind future research approaches. The question is not whether there is some way GPT-4 might fail, but whether there is any way in which it might succeed.