With current LLMs, the algorithm is fairly small and the information is all in the training set.
This would seem to make foom unlikely, as the AI can’t easily get hold of more training data.
Using the existing data more efficiently might be possible, of course.