The sample-efficient DQN is covered on pg2, as the tuned Rainbow. More or less, you just train the DQN much more frequently, and much more heavily, on its experience replay buffer, with no other modifications; this makes it roughly 10x more sample-efficient than the usual hundreds of millions of frames quoted for best results. (But again, because of diminishing returns, the value of training on fresh data, and how lightweight ALE is to run, this costs a lot more wallclock/total compute, which is usually what ALE researchers try to economize, and is why DQN/Rainbow isn’t as sample-efficient as possible by default.) That gives you about 25% of human performance at 200k frames, according to Figure 1 of the EfficientZero paper.
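To make the "train more on the replay buffer" point concrete, here is a toy sketch of the only knob being turned: the number of gradient updates per environment frame. Everything here (the `train` function, the dummy buffer contents, the update count standing in for an SGD step) is my own illustration, not code from any paper:

```python
import random

def train(total_frames, updates_per_frame, warmup=1_000):
    """Toy loop: the 'data-efficient' DQN/Rainbow change is just doing
    more gradient updates per environment frame; nothing else differs."""
    buffer, updates = [], 0
    for frame in range(total_frames):
        buffer.append(frame)                   # stand-in for one (s, a, r, s') transition
        if len(buffer) < warmup:
            continue                           # wait until the buffer has some data
        for _ in range(updates_per_frame):
            batch = random.sample(buffer, 32)  # a real agent would run an SGD step on this
            updates += 1
    return updates

standard = train(10_000, updates_per_frame=1)   # classic-style: few updates per frame
efficient = train(10_000, updates_per_frame=8)  # 'sample-efficient': hammer the buffer
```

Same data, ~8x the compute spent per frame, which is exactly the wallclock/total-compute trade-off described above.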
MuZero’s sample-efficient mode is similar, and is covered in the MuZero paper as ‘MuZero-Reanalyze’. In the original MuZero paper, Reanalyze kicks butt like MuZero and achieves mean scores far above the human benchmark [2100%], but they only benchmark the ALE at 200m frames, so this is ‘sample-efficient’ only compared to the 20 billion frames that the regular MuZero uses to achieve roughly twice the score [5000%]. EfficientZero starts with MuZero-Reanalyze as its baseline and says that ‘MuZero’ in their paper always refers to MuZero-Reanalyze, so I assume their Figure 1’s ‘MuZero’ is Reanalyze, in which case it reaches ~50% human at 200k frames: twice as well as DQN, but only a quarter as well as their EfficientZero variant, which reaches almost 200% at 200k. I don’t see a number for baseline MuZero at merely 200k frames anywhere. (Figure 2d doesn’t seem to be it, since its x-axis is ‘training steps’, which are minibatches of many episodes, I think? And it looks too high.)
So the 200k frame ranking would go something like: DQN [lol%] < MuZero? [meh?%] < Rainbow DQN [25%] < MuZero-Reanalyze [50%] < MuZero-Reanalyze-EfficientZero [190%].
how many frames the more efficient MuZero would require to train to EfficientZero-equivalent performance?

They don’t run their MuZero-Reanalyze to equivalence (although this wouldn’t be a bad thing to do in general for all these attempts at greater sample-efficiency). Borrowing my guess from the other comment that they all still look to be in the linear-ish regime of the usual learning log-curve, I would hazard that MuZero-Reanalyze would need 190/50 = 3.8x as many frames, i.e. ~760,000 frames, to match EfficientZero at 200k.
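That guess is just a linear extrapolation in frames; as a sketch, assuming score really does grow roughly linearly with frames in this regime:

```python
muzero_reanalyze = 0.50   # ~human-normalized score at 200k frames (per Figure 1)
efficientzero    = 1.90
frames           = 200_000

ratio = efficientzero / muzero_reanalyze   # 3.8x score gap at equal frames
frames_needed = round(frames * ratio)      # ~760,000 frames to close it, if linear
```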
Scaling curves along the lines of Jones, so you could see whether the exponents differ or it’s just a constant offset (the former would mean EfficientZero is a lot better than it looks in the long run), would of course be research very up my alley, and the compute requirements for some scaling-curve sweeps don’t seem too onerous...
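For what such a check would look like: if score ≈ a·frames^b, a linear fit in log-log space recovers the exponent b, and comparing fitted b’s across methods distinguishes "constant offset" from "different exponent". A purely illustrative sketch on synthetic curves (all the numbers here are made up, not from any paper):

```python
import numpy as np

frames = np.array([50e3, 100e3, 200e3, 400e3, 800e3])

def fit_exponent(scores):
    # log(score) = log(a) + b*log(frames): the slope of a log-log fit is b
    b, _log_a = np.polyfit(np.log(frames), np.log(scores), 1)
    return b

# Hypothetical curves: same exponent, different constant => a pure offset,
# so the gap between methods neither grows nor shrinks with more frames.
muzero_like    = 2.0 * frames ** 0.4
efficient_like = 9.0 * frames ** 0.4
```

If instead the fitted exponents differed, the better-scaling method would pull away as frames increase, which is the "a lot better than it looks in the long run" case.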
Speaking of sample-efficient MuZero, DM just posted “Procedural Generalization by Planning with Self-Supervised World Models”, Anand et al 2021 on a different set of environments than ALE, which also uses self-supervision to boost MuZero-Reanalyze sample-efficiency dramatically (in addition to another one of my pet interests, demonstrating implicit meta-learning through the blessings of scale): https://arxiv.org/pdf/2111.01587.pdf#page=5
Eyeballing the graph at 200k env frames, the self-supervised variant is >200x more sample-efficient than PPO (which has no sample-efficient variant the way DQN does, because it’s on-policy and can’t reuse data), and ~5x more than the baseline MuZero (and a ‘model-free’ MuZero variant/ablation I’m unfamiliar with). So reasonably parallel.
Performance is mostly limited here by the fact that there are 500 levels for each game (i.e., level overfitting is the problem) so it’s not that meaningful to look at sample efficiency wrt environment interactions. The results would look a lot different on the full distribution of levels. I agree with your statement directionally though.
We do actually train/evaluate on the full distribution (see Figure 5, rightmost panel). The MuZero+SSL versions (especially reconstruction) continue to be much more sample-efficient even on the full distribution, and MuZero itself seems to be quite a bit more sample-efficient than PPO/PPG.
I’m still not sure how to reconcile your results with the fact that participants in the Procgen contest ended up winning with modifications of our PPO/PPG baselines rather than Q-learning or other value-based algorithms, whereas your paper suggests that Q-learning performs much better. The contest used 8M timesteps + 200 levels. I assume your “QL” baseline is pretty similar to widespread DQN implementations.
https://arxiv.org/pdf/2103.15332.pdf
https://www.aicrowd.com/challenges/neurips-2020-procgen-competition/leaderboards?challenge_leaderboard_extra_id=470&challenge_round_id=662
Are there implementation level changes that dramatically improve performance of your QL implementation?
(Currently on vacation and I read your paper briefly while traveling, but I may very well have missed something.)
The Q-Learning baseline is a model-free control of MuZero: it shares MuZero’s implementation details (network architecture, replay ratio, training details, etc.) while removing the model-based components (details in Sec. A.2). Some key differences you’d find vs. a typical Q-learning implementation:
Larger network architectures: 10 block ResNet compared to a few conv layers in typical implementations.
Higher sample reuse: When using a reanalyse ratio of 0.95, both MuZero and Q-Learning use each replay buffer sample an average of 20 times. The target network is updated every 100 training steps.
Batch size of 1024 and some smaller details like using categorical reward and value predictions similar to MuZero.
We also have a small model-based component which predicts the reward at the next time step, letting us decompose Q(s,a) into reward and value predictions, just like MuZero.
I would guess larger networks + higher sample reuse have the biggest effect size compared to standard Q-learning implementations.
The ProcGen competition might also have used the easy difficulty mode, whereas our paper uses the hard difficulty mode.
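Two of the numbers above can be made concrete. A reanalyse ratio of 0.95 means only 5% of training samples are fresh, hence 1/0.05 = 20 average uses per sample; and the decomposition is Q(s,a) = r̂(s,a) + γ·V̂(s′), with categorical predictions converted to scalars by taking an expectation over a fixed support, MuZero-style. A sketch with my own variable names and support range (not the paper’s code):

```python
import numpy as np

# Average sample reuse implied by the reanalyse ratio:
reanalyse_ratio = 0.95
avg_reuse = 1 / (1 - reanalyse_ratio)   # each buffer sample used ~20 times on average

# Categorical-to-scalar conversion over a fixed discrete support:
support = np.linspace(-10, 10, 21)      # 21 bins from -10 to 10

def to_scalar(logits):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()           # softmax over the support bins
    return float(probs @ support)  # prediction = expected support value

def q_value(reward_logits, value_logits, gamma=0.997):
    # Q(s,a) decomposed into predicted reward plus discounted value
    return to_scalar(reward_logits) + gamma * to_scalar(value_logits)
```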
Thanks, this is very insightful. BTW, I think your paper is excellent!
Thanks, glad you liked it; I really like the recent RL directions from OpenAI too! It would be interesting to see model-based RL used for the “RL as fine-tuning” paradigm: making large pre-trained models more aligned/goal-directed efficiently by simply searching over a reward function learned from humans.
Would you say Learning to Summarize is an example of this? https://arxiv.org/abs/2009.01325
It’s model-based RL because you’re optimizing against a model of the human (i.e., the reward model). And there are some results at the end on test-time search.
Or do you have something else in mind?
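In its simplest test-time form, "searching over a learned reward function" is just best-of-n sampling: draw n candidates from the pre-trained model and keep the one the reward model scores highest, with no gradient updates involved. A toy sketch (the "reward model" here is a stand-in function, not an actual learned model):

```python
import random

def best_of_n(sample_candidate, reward_model, n=16):
    """Test-time search against a reward model: generate n candidates,
    return the one the reward model likes best (no fine-tuning)."""
    candidates = [sample_candidate() for _ in range(n)]
    return max(candidates, key=reward_model)

# Toy stand-ins: candidates are numbers, and the 'reward model'
# prefers values near 7 (pretend that's what humans rated highly).
random.seed(0)
pick = best_of_n(lambda: random.uniform(0, 10),
                 reward_model=lambda x: -abs(x - 7))
```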
There’s no PPO/PPG curve there; I’d be curious to see that comparison (though I agree that QL/MuZero will probably be more sample-efficient).
I was eyeballing Figure 2 in the PPG paper and comparing it to our results on the full distribution (Table A.3).
PPO: ~0.25
PPG: ~0.52
MuZero: 0.68
MuZero+Reconstruction: 0.93