This is good thinking. Breaking out of your framework: training runs are routinely checkpointed to disk (in case of a crash) and can be resumed, even across algorithmic improvements in the learning method. So some training runs will effectively be maintained through upgrades. I'd say training runs are short mostly because we haven't converged on the best model architectures and because of publication incentives. IMO, benefiting from previous trainings of an evolving architecture will feature in published work over the next decade. A rough sketch of what I mean by the checkpoint/resume pattern is below.
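To make that concrete, here's a minimal sketch of the pattern, assuming PyTorch and a generic model/optimizer (the function names and file path are just for illustration, not anyone's actual pipeline):

```python
import torch

def save_checkpoint(model, optimizer, step, path="checkpoint.pt"):
    # Persist everything needed to resume: weights, optimizer state, progress.
    torch.save(
        {
            "model_state": model.state_dict(),
            "optimizer_state": optimizer.state_dict(),
            "step": step,
        },
        path,
    )

def resume_checkpoint(model, optimizer, path="checkpoint.pt"):
    ckpt = torch.load(path, map_location="cpu")
    # strict=False lets a revised architecture pick up whichever weights it
    # still shares with the old one, which is one way an evolving model can
    # inherit a previous training run.
    model.load_state_dict(ckpt["model_state"], strict=False)
    # Optimizer state only transfers cleanly when the parameter set is
    # unchanged; otherwise start the optimizer fresh.
    try:
        optimizer.load_state_dict(ckpt["optimizer_state"])
    except (ValueError, KeyError):
        pass
    return ckpt["step"]
```

The point is just that the saved state outlives the particular code that produced it, so upgrades to the training code don't have to throw the run away.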
Was going to make nearly the same comment, so I'll just add to yours: an existing training run can benefit from hardware/software upgrades nearly as much as a new training run. Big changes to hardware and software are slow relative to these timescales. (Nvidia releases new GPU architectures on a two-year cadence, but they are mostly incremental.)
New training runs benefit most from major architectural changes, and especially from training/data/curriculum changes.