> it would be the best possible model of this type, at the task of language modeling on data sampled from the same distribution as MassiveText
Transformers are Turing-complete, so “model of this type” is not much of a constraint. On the other hand, I guess it’s theoretically possible that some weight matrices are inaccessible to current training algorithms no matter how much compute and data we have. It also seems possible that the scaling law doesn’t go on forever, but phase-transitions somewhere (maybe very far out) to a new trend that goes below the “irreducible” term.
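To make that last point concrete, here is a minimal sketch assuming the usual power-law-plus-constant form L(N) = E + A/N^α, which is the shape behind talk of an “irreducible” term. The constants below are only illustrative (roughly the parameter-term fit reported by Hoffmann et al. 2022), not anything specific to this post. Under this functional form the loss approaches E from above and can never cross it:

```python
# A minimal sketch of the power-law-plus-constant scaling form,
# L(N) = E + A / N**alpha. Constants are illustrative (roughly the
# Hoffmann et al. 2022 fit for the parameter term), not authoritative.
E = 1.69      # "irreducible" loss: the asymptote as N -> infinity
A = 406.4     # scale coefficient
ALPHA = 0.34  # power-law exponent

def predicted_loss(n_params: float) -> float:
    """Predicted language-modeling loss at n_params parameters."""
    return E + A / n_params**ALPHA

for n in (1e9, 1e11, 1e13, 1e15):
    print(f"N = {n:.0e}  ->  L = {predicted_loss(n):.4f}")

# L(N) decreases monotonically toward E and never crosses it, so any
# trend that dips below the "irreducible" term would have to break out
# of this functional family -- the phase-transition scenario above.
```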