It seems to me that your model may not sufficiently account for technical debt. https://neurobiology.substack.com/p/technical-debt-probably-the-main-roadblack-in-applying-machine-learning-to-medicine
It seems to me that this is the main thing that will limit how consistently foundation models can beat newly trained specialized models.
Anecdotally, I know several people who don’t like to use ChatGPT because its training data cuts off in 2021. This seems like a form of technical debt.
I guess it depends on how easily adaptable foundation models are.