Yes, that just gives you an idea of how far sample-efficiency has come since then. Both DQN and MuZero have sample-efficient configurations which do much better (at the expense of wallclock time / compute, which is usually what gets optimized for). The real interest here is that it’s hit human-level sample-efficiency—and that’s with modest compute (roughly a GPU-day), and without transfer learning, extremely good priors learned from a lifetime of playing other games and tasks, or self-supervised learning on giant offline datasets of logged experiences, I’d note. Hence, concerning.
(Not for the first time, nor will it be the last, do I find myself wondering what the human brain does with those 5-10 additional orders of magnitude more compute that some people claim must be necessary for human-level intelligence.)
I would say the major catch here is that the sample-efficient work usually excludes the unsolved exploration games like Montezuma’s Revenge, so this doesn’t reach human-level in MR. However, I continue to say that exploration seems tractable with a MuZero model if you extract uncertainty/disagreement from the model-based rollouts and use that for the environment interactions, so I don’t think MR & Pitfall should be incredibly hard to solve.
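To make the ‘extract uncertainty/disagreement from model-based rollouts’ idea concrete, here is a minimal sketch of one way it could look, assuming an ensemble of small learned dynamics/value models whose disagreement is added as an exploration bonus during action selection. The classes, shapes, and bonus_scale are illustrative assumptions, not anything from the MuZero/EfficientZero code:

```python
# Hypothetical sketch: ensemble disagreement over imagined rollouts as an
# exploration bonus. Everything here (TinyModel, the shapes, bonus_scale)
# is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

class TinyModel:
    """Stand-in for one learned dynamics+value model in an ensemble."""
    def __init__(self, state_dim, n_actions):
        self.W = rng.normal(size=(n_actions, state_dim, state_dim)) * 0.1
        self.v = rng.normal(size=state_dim)

    def step(self, state, action):
        next_state = np.tanh(self.W[action] @ state)   # predicted next latent state
        return next_state, float(self.v @ next_state)  # and its predicted value

def rollout_value(model, state, actions):
    """Accumulate predicted value along a short imagined rollout."""
    total = 0.0
    for a in actions:
        state, v = model.step(state, a)
        total += v
    return total

def choose_action(ensemble, state, n_actions, horizon=5, bonus_scale=1.0):
    """Score each first action by mean imagined value plus ensemble disagreement."""
    scores = []
    for a in range(n_actions):
        # Same continuation for every ensemble member, first action fixed to `a`.
        plan = [a] + list(rng.integers(0, n_actions, size=horizon - 1))
        values = [rollout_value(m, state, plan) for m in ensemble]
        disagreement = float(np.std(values))  # crude epistemic-uncertainty proxy
        scores.append(float(np.mean(values)) + bonus_scale * disagreement)
    return int(np.argmax(scores))

state_dim, n_actions = 8, 4
ensemble = [TinyModel(state_dim, n_actions) for _ in range(5)]
print("chosen action:", choose_action(ensemble, rng.normal(size=state_dim), n_actions))
```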
Say more? In particular, do you happen to know what the sample-efficiency advantage for EfficientZero was over the sample-efficient version of MuZero—e.g., how many frames the more efficient MuZero would require to train to EfficientZero-equivalent performance? This seems to me like the appropriate figure of merit for how much EfficientZero improved over MuZero (and to potentially refute the current applicability of what I interpret as the OpenPhil view about how AGI development should look at some indefinite point later when there are no big wins left in AGI).
The sample-efficient DQN is covered on pg. 2, as the tuned Rainbow. More or less, you just train the DQN a lot more frequently, and a lot more, on its experience replay buffer, with no other modifications; this makes it something like 10x more sample-efficient than the usual hundreds of millions of frames quoted for best results. (But again, because of diminishing returns, the value of training on fresh data, and how lightweight ALE is to run, this costs you a lot more wallclock/total compute, which is what ALE researchers usually try to economize, and is why DQN/Rainbow isn’t as sample-efficient as possible by default.) That gives you about 25% of human performance at 200k frames, according to Figure 1 in the EfficientZero paper.
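To make the ‘train a lot more on the replay buffer’ point concrete, the knob being turned is essentially the replay ratio, i.e. the number of gradient updates per environment step. A toy sketch of that loop structure, with a stub environment and agent standing in for the real thing (none of this is the paper’s code):

```python
# Toy sketch of the replay-ratio knob behind "sample-efficient" DQN/Rainbow.
# The environment and agent below are trivial stand-ins, not the paper's code;
# the point is only the updates_per_env_step loop structure.
import random
from collections import deque

class StubEnv:
    def reset(self): return 0.0
    def step(self, action): return random.random(), random.random(), random.random() < 0.01

class StubAgent:
    def act(self, obs): return random.randrange(4)
    def update(self, batch): pass  # a real agent would do a TD/Q-learning gradient step here

def train(env, agent, total_env_steps=10_000, updates_per_env_step=8, batch_size=32):
    """Classic DQN does roughly 0.25 gradient updates per environment step (one update
    every 4 steps); the sample-efficient configuration simply raises
    updates_per_env_step, trading extra compute/wallclock for fewer environment frames."""
    replay = deque(maxlen=100_000)
    obs = env.reset()
    for _ in range(total_env_steps):
        action = agent.act(obs)
        next_obs, reward, done = env.step(action)
        replay.append((obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs
        if len(replay) >= batch_size:
            for _ in range(updates_per_env_step):
                agent.update(random.sample(replay, batch_size))

train(StubEnv(), StubAgent())
```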
MuZero’s sample-efficient mode is similar, and is covered in the MuZero paper as ‘MuZero-Reanalyze’. In the original MuZero paper, Reanalyze kicks butt like MuZero and achieves mean scores way above the human benchmark [2100%], but they only benchmark the ALE at 200m frames, so this is ‘sample-efficient’ only compared to the 20 billion frames that regular MuZero uses to achieve roughly twice the score [5000%]. The EfficientZero authors start with MuZero-Reanalyze as their baseline and say that ‘MuZero’ in their paper always refers to MuZero-Reanalyze, so I assume their Figure 1’s ‘MuZero’ is Reanalyze, in which case it reaches ~50% human at 200k frames: twice as well as the tuned Rainbow DQN, but only a quarter as well as their EfficientZero variant, which reaches almost 200% at 200k. I don’t see a number for baseline MuZero at merely 200k frames anywhere. (Figure 2d doesn’t seem to be it, since it’s ‘training steps’, which are minibatches of many episodes, I think? And it looks too high.)
So the 200k frame ranking would go something like: DQN [lol%] < MuZero? [meh?%] < Rainbow DQN [25%] < MuZero-Reanalyze [50%] < MuZero-Reanalyze-EfficientZero [190%].
They don’t run their MuZero-Reanalyze to equivalence (although that wouldn’t be a bad thing to do in general for all these attempts at greater sample-efficiency). Borrowing my guess from the other comment that they all still look to be in the linear-ish regime of the usual learning log-curve, I would hazard a guess that MuZero-Reanalyze would need 190/50 = 3.8x the data, i.e. roughly 760,000 frames, to match EfficientZero at 200k.
Scaling curves along the lines of Jones, so you could see whether the exponents differ or it’s just a constant offset (the former would mean EfficientZero is a lot better than it looks in the long run), would of course be research very much up my alley, and the compute requirements for some scaling-curve sweeps don’t seem too onerous...
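As a concrete version of the exponent-vs-offset question: fit something like score ≈ a * frames^b to each method’s learning curve and compare the fitted exponents (different exponents mean the gap grows with scale; equal exponents with different prefactors is just a constant offset). A toy sketch of that fit; the scores below are fabricated placeholders, not real EfficientZero/MuZero numbers, and only the procedure is the point:

```python
# Toy sketch of a Jones-style scaling-curve comparison. The "scores" are
# fabricated placeholders, NOT real EfficientZero/MuZero results.
import numpy as np

frames = np.array([25e3, 50e3, 100e3, 200e3])
curves = {
    "MuZero-Reanalyze (fake data)": np.array([0.08, 0.15, 0.28, 0.50]),
    "EfficientZero (fake data)":    np.array([0.30, 0.55, 1.05, 1.90]),
}

for name, score in curves.items():
    # Fit log(score) = b * log(frames) + log(a), i.e. score ≈ a * frames^b.
    b, log_a = np.polyfit(np.log(frames), np.log(score), 1)
    print(f"{name}: exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.2e}")

# If the fitted exponents differ, the methods diverge as frames increase;
# if only the prefactors differ, the improvement is "just" a constant factor.
```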
Speaking of sample-efficient MuZero, DM just posted “Procedural Generalization by Planning with Self-Supervised World Models”, Anand et al 2021 on a different set of environments than ALE, which also uses self-supervision to boost MuZero-Reanalyze sample-efficiency dramatically (in addition to another one of my pet interests, demonstrating implicit meta-learning through the blessings of scale): https://arxiv.org/pdf/2111.01587.pdf#page=5
Eyeballing the graph at 200k env frames, the self-supervised variant is >200x more sample-efficient than PPO (which has no sample-efficient variant the way DQN does because it’s on-policy and can’t reuse data), and 5x the baseline MuZero (and a ‘model-free’ MuZero variant/ablation I’m unfamiliar with). So reasonably parallel.
Performance is mostly limited here by the fact that there are 500 levels for each game (i.e., level overfitting is the problem) so it’s not that meaningful to look at sample efficiency wrt environment interactions. The results would look a lot different on the full distribution of levels. I agree with your statement directionally though.
We do actually train/evaluate on the full distribution (see Figure 5, rightmost). The MuZero+SSL versions (especially reconstruction) continue to be a lot more sample-efficient even on the full distribution, and MuZero itself seems to be quite a bit more sample-efficient than PPO/PPG.
I’m still not sure how to reconcile your results with the fact that the participants in the procgen contest ended up winning with modifications of our PPO/PPG baselines, rather than Q-learning and other value-based algorithms, whereas your paper suggests that Q-learning performs much better. The contest used 8M timesteps + 200 levels. I assume that your “QL” baseline is pretty similar to widespread DQN implementations.
https://arxiv.org/pdf/2103.15332.pdf
https://www.aicrowd.com/challenges/neurips-2020-procgen-competition/leaderboards?challenge_leaderboard_extra_id=470&challenge_round_id=662
Are there implementation level changes that dramatically improve performance of your QL implementation?
(Currently on vacation and I read your paper briefly while traveling, but I may very well have missed something.)
The Q-Learning baseline is a model-free control of MuZero: it shares MuZero’s implementation details (network architecture, replay ratio, training details, etc.) while removing the model-based components (details in sec. A.2). Some key differences you’d find vs. a typical Q-learning implementation (roughly summarized in the config sketch below):
Larger network architectures: 10 block ResNet compared to a few conv layers in typical implementations.
Higher sample reuse: When using a reanalyse ratio of 0.95, both MuZero and Q-Learning use each replay buffer sample an average of 20 times. The target network is updated every 100 training steps.
Batch size of 1024 and some smaller details like using categorical reward and value predictions similar to MuZero.
We also have a small model-based component which predicts the reward at the next time step, which lets us decompose Q(s,a) into reward and value predictions just like MuZero.
I would guess larger networks + higher sample reuse have the biggest effect size compared to standard Q-learning implementations.
The ProcGen competition also might have used the easy difficulty mode, as opposed to the hard difficulty mode used in our paper.
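Purely as an illustration of the differences listed above, here is a rough, hypothetical config object; the actual implementation is not structured like this, and any value not mentioned in the list (e.g. the discount) is a placeholder:

```python
# Illustrative only: a config collecting the differences listed above.
# Values come from the bullet points; everything else is a placeholder.
from dataclasses import dataclass

@dataclass
class QLBaselineConfig:
    # Architecture: much larger than the few conv layers in typical DQNs.
    resnet_blocks: int = 10
    # Replay/reanalyse: a ratio of 0.95 works out to each replay-buffer
    # sample being reused ~20 times on average.
    reanalyse_ratio: float = 0.95
    avg_sample_reuse: int = 20
    target_network_update_every: int = 100  # training steps
    batch_size: int = 1024
    # MuZero-style categorical (distributional) reward and value heads.
    categorical_value_and_reward: bool = True
    # Small model-based piece: predict the next-step reward so that
    # Q(s, a) = r_hat(s, a) + gamma * V_hat(s'), as in MuZero.
    predict_next_reward: bool = True
    discount: float = 0.997  # placeholder; the paper's value may differ

print(QLBaselineConfig())
```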
Thanks, this is very insightful. BTW, I think your paper is excellent!
Thanks, glad you liked it; I really like the recent RL directions from OpenAI too! It would be interesting to see model-based RL used for the “RL as fine-tuning” paradigm: making large pre-trained models more aligned/goal-directed efficiently by simply searching against a reward function learned from humans.
Would you say Learning to Summarize is an example of this? https://arxiv.org/abs/2009.01325 Or do you have something else in mind?
It’s model-based RL because you’re optimizing against the model of the human (i.e., the reward model). And there are some results at the end on test-time search.
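To illustrate the ‘optimizing against the model of the human’ framing, here is a minimal best-of-n test-time-search sketch against a learned reward model; the sampler and reward model are stand-ins, not the actual Learning to Summarize components:

```python
# Minimal sketch of test-time search against a reward model (best-of-n).
# sample_from_lm and reward_model are stand-ins, not the Learning to
# Summarize components.
import random

random.seed(0)

def sample_from_lm(prompt, n):
    """Stand-in for sampling n candidate outputs from a pretrained LM."""
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def reward_model(prompt, candidate):
    """Stand-in for the learned model of human preferences."""
    return random.random()  # a real RM would score candidate quality for this prompt

def best_of_n(prompt, n=16):
    """Treat the reward model as the 'model of the human' and search against it:
    generate n candidates and return the one the reward model scores highest."""
    candidates = sample_from_lm(prompt, n)
    return max(candidates, key=lambda c: reward_model(prompt, c))

print(best_of_n("Summarize: the quick brown fox ..."))
```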
There’s no PPO/PPG curve there—I’d be curious to see that comparison. (though I agree that QL/MuZero will probably be more sample efficient.)
I was eyeballing Figure 2 in the PPG paper and comparing it to our results on the full distribution (Table A.3).
PPO: ~0.25
PPG: ~0.52
MuZero: 0.68
MuZero+Reconstruction: 0.93
What would stump a (naive) exploration-based AI? Imagine a game like this: the player starts on the left side of a featureless room. If they go to the right side of the room, they win. In the middle of the room is a terminal. If the player interacts with the terminal, they are kicked into an embedded copy of the original Doom.
An exploration-based agent would probably discern that Doom is way more interesting than the featureless room, whereas a human would probably put it aside at some point to “finish” exploring the starter room first. I think this demands a sort of mixed breadth-depth exploration?
The famous problem here is the “noisy TV problem”. If your AI is driven to go towards regions of uncertainty, then it will be completely captivated by a TV on the wall showing random images; no need for a copy of Doom, any random gibberish that the AI can’t predict will work.
OpenAI claims to have already solved the noisy TV problem via Random Network Distillation, although I’m still skeptical of it. I think it’s a clever hack that only solves a specific, relatively superficial subclass of the problem.
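For reference, the core RND idea is small enough to sketch: a predictor network is trained to match a fixed, randomly initialized target network, and the prediction error on an observation serves as the novelty bonus; because the target is a deterministic function of the observation, the bonus measures how unfamiliar the observation is to the predictor rather than how unpredictable the environment’s transitions are. A minimal numpy sketch with toy dimensions (not OpenAI’s implementation):

```python
# Minimal sketch of the Random Network Distillation bonus (toy sizes, numpy
# only; not OpenAI's code). A fixed random "target" network maps observations
# to embeddings; a "predictor" is trained to match it, and the squared error
# on a new observation is the exploration bonus.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, emb_dim, lr = 16, 8, 0.02

W_target = rng.normal(size=(emb_dim, obs_dim))        # fixed, never trained
W_pred = rng.normal(size=(emb_dim, obs_dim)) * 0.01   # trained online

def rnd_bonus_and_update(obs):
    """Return the intrinsic bonus for obs and take one SGD step on the predictor."""
    global W_pred
    err = W_pred @ obs - W_target @ obs
    bonus = float(np.mean(err ** 2))
    # One SGD step on 0.5 * ||W_pred @ obs - W_target @ obs||^2 w.r.t. W_pred.
    W_pred = W_pred - lr * np.outer(err, obs)
    return bonus

# A repeatedly visited observation stops being novel; a fresh one is novel.
familiar = rng.normal(size=obs_dim)
for _ in range(300):
    bonus_familiar = rnd_bonus_and_update(familiar)
print("bonus for a heavily visited obs:", round(bonus_familiar, 6))
print("bonus for a brand-new obs:", round(rnd_bonus_and_update(rng.normal(size=obs_dim)), 6))
```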
Well, one may develop an AI that handles the noisy TV by learning that it can’t predict the noisy TV. The idea here was to give it a space that is filled with novelty reward but doesn’t lead to any performance payoff.
Even defining what a ‘featureless room’ is in full generality is difficult. After all, the literal pixel array will be different at most timesteps (and even if ALE games are discrete enough for that not to be true, there are plenty of environments with continuous state variables that never repeat exactly). That more or less describes the opening room of Montezuma’s Revenge: you have to go in a long loop around the room, timing a jump over a monster that will kill you, before you get near the key, which gives you the first reward after hundreds of timesteps. Go-Explore can solve MR and doesn’t suffer from the noisy TV problem because it does in fact do basically breadth+depth exploration (iterative widening), but it also relies on a human-written hack for deciding which states/nodes are novel or different from each other and potentially worth using as a starting point for exploration.
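For context, the ‘human-written hack’ is (roughly) Go-Explore’s cell representation: frames are aggressively downscaled and quantized, and two states count as the same ‘cell’ if they produce the same tiny image. A rough sketch of that idea; the resolution and number of intensity levels in the actual paper are tuned per game, so the numbers here are purely illustrative:

```python
# Rough sketch of a Go-Explore-style cell heuristic: downscale + quantize the
# frame and use the result as a hashable cell key. The cell shape and number
# of intensity levels are illustrative, not the paper's tuned values.
import numpy as np

def cell_key(frame, cell_shape=(11, 8), levels=8):
    """Map a grayscale frame (H, W) with values in [0, 255] to a coarse, hashable cell id."""
    h, w = frame.shape
    ch, cw = cell_shape
    frame = frame[: h - h % ch, : w - w % cw]          # crop to a multiple of the cell shape
    blocks = frame.reshape(ch, frame.shape[0] // ch, cw, frame.shape[1] // cw)
    small = blocks.mean(axis=(1, 3))                   # crude block-average downscaling
    quantized = np.floor(small / 256.0 * levels).astype(np.int8)  # few intensity levels
    return quantized.tobytes()

rng = np.random.default_rng(0)
frame_a = np.full((210, 160), 40.0)
frame_a[50:120, 30:90] = 200.0                            # a bright "object"
frame_b = frame_a + rng.normal(0, 1, size=frame_a.shape)  # tiny pixel noise
frame_c = np.roll(frame_a, 60, axis=1)                    # the object moved far away
print(cell_key(frame_a) == cell_key(frame_b))             # True: same cell despite noise
print(cell_key(frame_a) == cell_key(frame_c))             # False: a genuinely different state
```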
You could certainly engineer an adversarial learning environment to stump an exploration-based AI, but you could just as well engineer an adversarial learning environment to stump a human. Neither is “naive” for it in any useful sense, unless you can show that that adversarial environment has some actual practical relevance.
That’s true, but … I feel in most cases, it’s a good idea to run mixed strategies. I think that by naivety I mean the notion that any single strategy will handle all cases—even if there are strategies where this is true, it’s wrong for almost all of them.
Humans can be stumped, but we’re fairly good at dynamic strategy selection, which tends to protect us from being reliably exploited.
Have you ever played Far Cry 4? At the beginning of that game, there is a scene where the main villain of the storyline tells you to sit still while he goes downstairs to deal with some rebels. A normal human player would do the expected thing, which is to curiously explore what’s going on downstairs, which then leads to the unfolding of the main story and thus the actual gameplay. But if you actually follow the villain’s instruction and sit still for 12 minutes, it leads straight to the ending of the game.
This is a situation analogous to your scenario, except it’s one where humans reliably fail. Now, you could argue that a human player’s goal is to actually play and enjoy the game, so it’s perfectly reasonable to explore and forgo a quick ending. But I bet that even if you incentivized a novice player with a million dollars to finish the game in under 2 hours, he would not think of exploiting this Easter egg.
More importantly, he would have learned absolutely nothing from this experience about how to act rationally (except maybe to stop believing that anyone would genuinely offer a million dollars out of the blue). The point is, it’s not just possible to rig the game against an agent so that it fails, it’s trivially easy when you have complete control of the environment. But it’s also irrelevant, because that’s not how reality works in general. And I do mean reality, not some fictional story or adversarial setup where things happen because the author says they happen.
Haha I didn’t realize you already replied so quickly, seems like we had similar thoughts.