Thanks! I definitely agree that the proper modeling technique would involve introducing uncertainty on algorithmic progress, and that this uncertainty would be pretty wide; this is one of the few most important directions for future research (the others being better understanding effective horizon length and better narrowing down model size).
In terms of uncertainty in model size, I personally find it somewhat easier to think about what the final spread should be in the training FLOP requirements distribution, since there’s a fair amount of arbitrariness in how the uncertainty is apportioned between model size and scaling behavior. There’s also semantic uncertainty about what it means to “condition on the hypothesis that X is the best anchor.” If we’re living in the world of “brain FLOP/s anchor + normal scaling behavior,” then assigning a lot of weight to really small model sizes would wind up “in the territory” of the Lifetime Anchor hypothesis, and assigning a lot of weight to really large model sizes would wind up “in the territory” of the Evolution Anchor hypothesis, or even beyond it.
I was roughly aiming for ±5 OOM of uncertainty in training FLOP requirements on top of the anchor distribution, and then apportioned that uncertainty between model size and scaling behavior based on which one seemed more uncertain.
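To illustrate what I mean by apportioning (this is just a minimal sketch, not the report's actual model): if you treat the training FLOP requirement as an anchor times a model-size factor times a scaling-behavior factor, and assume the two factors are independent and roughly lognormal, then their widths in log10 (OOM) space add in quadrature, so a target overall spread constrains how much width you can assign to each. The specific numbers below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Illustrative sketch (not the report's actual model): training FLOP
# requirement = anchor * model-size factor * scaling-behavior factor,
# with each factor's uncertainty lognormal in log10 (OOM) space.
# If the two factors are independent, their OOM standard deviations
# add in quadrature, which is one way to think about apportioning a
# target overall spread between model size and scaling behavior.

rng = np.random.default_rng(0)
n = 200_000

model_size_sd_oom = 2.0   # hypothetical width assigned to model size
scaling_sd_oom = 1.5      # hypothetical width assigned to scaling behavior

model_size_oom = rng.normal(0.0, model_size_sd_oom, n)  # log10 offsets
scaling_oom = rng.normal(0.0, scaling_sd_oom, n)

total_oom = model_size_oom + scaling_oom  # log10 of the combined factor
print(np.std(total_oom))                  # ~2.5 = sqrt(2.0**2 + 1.5**2)
```

Under that (simplified) independence assumption, deciding that model size is the more uncertain of the two just means giving it the larger share of the fixed total OOM width.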