I think the story for the discontinuity is basically “around 2018 industry labs realized that language models would be the next big thing” (based on Attention is all you need, GPT-2, and/or BERT), and then they switched their largest experiments to be on language (as opposed to the previous contender, games).
Similarly for games, if you take DQN to be the event causing people to realize “large games models will be the next big thing”, it does kinda look like there’s a discontinuity there (though there are way fewer points, so it’s harder to tell; I’m also inclined to ignore things like CURL, which came out of an academic lab with a limited compute budget).
This story doesn’t hold up for vision though (taking AlexNet as the event); I’m not sure why that is. One theory is that vision is tied to a fixed dataset (ImageNet), which effectively caps how big your neural nets can usefully get.
You might also think that model size underwent a discontinuity around 2018, independent of which domain it’s in—I think that’s because the biggest experiments moved from vision (2012-15) to games (2015-19) to language (2019-now), with the compute trend staying continuous. However, in games the model-size-to-compute ratio is way lower (since it involves RL, while vision and language involve SL). For example, AlphaZero had fewer parameters than AlexNet, despite taking almost 5 orders of magnitude more compute. So you see max model size stalling a bit in 2015-19, and then bursting upwards around 2019.
Aside: I hadn’t realized AlphaZero took 5 orders of magnitude more compute per parameter than AlexNet—the horizon length concept would have predicted ~2 orders (since a full Go game is a couple hundred moves). I wonder what gets the extra 3 orders. Probably at least part of it comes from the difference between using a differentiable vs. non-differentiable objective function.
The difference in compute between AlexNet and AlphaZero comes from what’s being counted: for AlexNet you are only counting the flops during training, while for AlphaZero you are counting both the training and the self-play data generation (which does 800 forward passes per move × ~200 moves to generate each game).
If you were to compare supervised training numbers for both (e.g. training on human chess or Go games) then you’d get much closer.
That’s fair. I was thinking of that as part of “compute needed during training”, but you could also split it up into “compute needed for gradient updates” and “compute needed to create data of sufficient quality”, and then say that the stable thing is the “compute needed for gradient updates”.
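To make that split concrete, here’s a back-of-the-envelope sketch in Python using the round numbers from this thread (800 simulations per move, ~200 moves per game). The backward-pass multiplier and the assumption that each position is trained on roughly once are mine, not figures from the paper.

```python
# Back-of-the-envelope split of AlphaZero-style compute into
# "gradient updates" vs. "self-play data generation".
# Constants are the round numbers from this thread plus labelled assumptions.

FORWARD_FLOPS = 1.0         # arbitrary unit: cost of one network forward pass
BACKWARD_MULTIPLIER = 2.0   # assumption: a backward pass costs ~2x a forward pass
SIMS_PER_MOVE = 800         # MCTS simulations per move (from this thread)
MOVES_PER_GAME = 200        # rough game length (from this thread)
SAMPLES_PER_POSITION = 1.0  # assumption: each stored position is trained on ~once

# Data generation: every move of self-play runs a full MCTS search.
datagen_flops_per_game = SIMS_PER_MOVE * MOVES_PER_GAME * FORWARD_FLOPS

# Gradient updates: each stored position gets ~one forward + backward pass.
training_flops_per_game = (
    MOVES_PER_GAME * SAMPLES_PER_POSITION
    * FORWARD_FLOPS * (1 + BACKWARD_MULTIPLIER)
)

print(f"data generation / gradient updates ≈ "
      f"{datagen_flops_per_game / training_flops_per_game:.0f}x")  # ≈ 267x
```

Under these assumptions, self-play data generation dominates the gradient-update compute by a couple of orders of magnitude, which is the sense in which “compute per parameter” blows up for AlphaZero while “compute for gradient updates per parameter” stays much closer to the supervised case.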
Aside: I hadn’t realized AlphaZero took 5 orders of magnitude more compute per parameter than AlexNet—the horizon length concept would have predicted ~2 orders (since a full Go game is a couple hundred moves). I wonder what gets the extra 3 orders. Probably at least part of it comes from the difference between using a differentiable vs. non-differentiable objective function.
I think that in a forward pass, AlexNet uses about 10-15 flops per parameter (assuming 4 bytes per parameter and using this table), because it puts most of its parameters in the small convolutions and FC layers. But I think AlphaZero has most of its parameters in convolutions applied across the 19x19 board, where each parameter is used at every board position: about 722 flops per parameter (19 x 19 positions x 2 flops per multiply-add). If that’s right, it accounts for a factor of ~50; combined with game length, that’s 4 orders of magnitude explained.
I’m not sure what’s up with the last order of magnitude. I think that’s a normal amount of noise / variation across different tasks, though I would have expected AlexNet to be somewhat overtrained given the context. I also think the comparison is kind of complicated because of MCTS and distillation (e.g. AlphaZero uses much more than 1 forward pass per turn, and you can potentially learn from much shorter effective horizons when imitating the distilled targets).
I also looked into the number of training points very briefly: Googling suggests AlexNet used 90 epochs on ImageNet’s 1.3 million training images (~117 million image presentations), while AlphaZero played 44 million games for chess (I didn’t quickly find a number for Go), so the number of image presentations was roughly similar to the number of games.
So I think the remaining order of magnitude is probably coming from the tree-search part of MCTS (which means there are far more than 200 forward passes per game during self-play).
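Putting those pieces in one place, here’s a rough accounting sketch. The per-parameter flop counts and game length are the round numbers quoted above, so treat the output as an order-of-magnitude accounting rather than paper-exact values.

```python
import math

# Rough accounting of why AlphaZero's training compute per parameter comes out
# ~5 orders of magnitude above AlexNet's, per the decomposition in this thread.
# All inputs are the round numbers quoted above (assumptions, not paper-exact).

alexnet_fwd_flops_per_param = 14     # ~10-15 flops per parameter per forward pass
alphazero_fwd_flops_per_param = 722  # 19 * 19 board positions * 2 flops each
conv_reuse = alphazero_fwd_flops_per_param / alexnet_fwd_flops_per_param

moves_per_game = 200                 # forward passes per game, ignoring MCTS

explained = conv_reuse * moves_per_game
print(f"conv weight reuse: ~{conv_reuse:.0f}x  ({math.log10(conv_reuse):.1f} OOM)")
print(f"game length:       {moves_per_game}x   ({math.log10(moves_per_game):.1f} OOM)")
print(f"together:          ~{explained:.0f}x (~{math.log10(explained):.1f} of the ~5 OOM)")

# The remaining ~1 OOM is what the comment above attributes to MCTS: self-play
# runs ~800 simulations per move, so there are far more forward passes per game
# than the ~200 moves alone, while the training-point counts (~117M image
# presentations vs. ~44M games) roughly cancel.
```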
One reason it might not fit as well for vision is that vision models have much more weight-tying / weight-reuse in their convolutional filters. If the underlying variable that mattered were compute, then image-processing neural networks would show up more prominently in compute (rather than in parameters).
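As a toy illustration of the weight-reuse point (the feature-map sizes here are just illustrative, not taken from any particular model):

```python
# Toy illustration of weight reuse: a convolutional weight does one multiply-add
# at every output position it slides over, while a fully connected weight is
# used exactly once per forward pass. (Assumes stride 1 and "same" padding, the
# same approximation used for the 19 x 19 x 2 figure above.)

def conv_flops_per_param(feature_map_h, feature_map_w):
    return feature_map_h * feature_map_w * 2  # 2 flops (multiply + add) per position

def fc_flops_per_param():
    return 2  # one multiply + add per weight

print(conv_flops_per_param(19, 19))  # 722 -- AlphaZero-style 19x19 board
print(conv_flops_per_param(13, 13))  # 338 -- a small AlexNet-ish feature map
print(fc_flops_per_param())          # 2
```

So two networks with similar parameter counts can differ by a couple of orders of magnitude in compute per forward pass, depending on how conv-heavy they are, which is why vision models would rank higher on a compute axis than on a parameter axis.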
Could it be inefficient scaling? Most work that doesn’t explicitly use scaling laws for planning seems to overshoot on compute per parameter, i.e. it uses models that are too small. Anyone want to try applying Jones 2021 to see if AlphaZero was scaled wrong?
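I don’t have the Jones 2021 fit handy, but the check would look something like the sketch below: plug a compute-optimal rule of the form N_opt ≈ k·C^α into AlphaZero’s budget and compare against its actual parameter count. The exponent, constant, and the compute/parameter figures are all placeholders; the real values would have to come from Jones 2021 and an external compute estimate.

```python
def optimal_params(compute_flops, alpha, k):
    """Hypothetical compute-optimal parameter count, N_opt = k * C**alpha.
    alpha and k are placeholders to be read off a scaling-law fit
    (e.g. the board-game fits in Jones 2021)."""
    return k * compute_flops ** alpha

# Placeholder usage -- every number here is a stand-in, not a real estimate.
C_alphazero = 1e23   # assumed total training flops (placeholder)
N_alphazero = 5e7    # assumed parameter count (placeholder)

# alpha = 0.5 is roughly the Chinchilla value for language models; the
# board-game exponent would need to be substituted in.
N_opt = optimal_params(C_alphazero, alpha=0.5, k=1.0)
print(f"actual / compute-optimal size: {N_alphazero / N_opt:.1e}")
# A ratio well below 1 under the real fit would mean AlphaZero was undersized
# for its compute budget, i.e. scaled inefficiently in the sense asked above.
```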
Ben Adlam (via Maloney et al 2022) makes an interesting point: if you plot parameters vs training data, it’s a nearly perfect 1:1 ratio historically. (He doesn’t seem to have published anything formally on this.)
We have conveniently just updated our database if anyone wants to investigate this further!
https://epochai.org/data/notable-ai-models
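Here’s a minimal sketch of that investigation, assuming the database has been exported to a local CSV; the filename and column names below are guesses and would need to be matched to the actual export.

```python
import numpy as np
import pandas as pd

# Fit the historical relationship between parameter count and training-set size
# on a log-log scale. A fitted slope near 1 would correspond to the nearly 1:1
# params-to-data ratio mentioned above.

df = pd.read_csv("notable_ai_models.csv")        # hypothetical local export
params = df["Parameters"]                        # assumed column name
data = df["Training dataset size (datapoints)"]  # assumed column name

mask = params.notna() & data.notna() & (params > 0) & (data > 0)
slope, intercept = np.polyfit(np.log10(params[mask]), np.log10(data[mask]), 1)
print(f"log10(data) ≈ {slope:.2f} * log10(params) + {intercept:.2f}")
```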