But remember, to train “a Gato,” we have to first train all the RL policies that generate its training data. So we have access to all of them too.
No, you don’t have to, nor do you have guaranteed access, nor would you necessarily want to use them rather than Gato if you did. As Daniel points out, this is obviously untrue of all of the datasets Gato is simply doing self-supervised learning on (how did we ‘train the RL policy’ for photographs?). It is also untrue of the RL data itself, because Gato is trained off-policy and offline, so the experts behind that data could be:

- humans;
- the output of non-RL algorithms which are infeasible to run at scale, like large search processes (eg chess endgame tables), or brittle, non-generalizable, hand-engineered expert algorithms;
- RL policies you don’t have direct access to, because they’ve bitrotted or their owners won’t let you, or which no longer exist at all because the agents were deleted but their data remains;
- RL policies from an oracle setting, where you can’t run the original policy in the real-world context that matters (eg in robotics sim2real, you train the expert with oracle access to the simulation’s ground truth to get a good source of demonstrations, but at the end you need a policy which doesn’t use that oracle, so it can run on a real robot), or more broadly, any meta-learning setting where you have data from RL policies on some problems in a family and want to induce a general solver;
- filtered high-reward episodes from large numbers of attempts by brute-force, dumb (even random) agents, where you trivially have ‘access to all of them’ but that access is useless;
- or…

Those RL policies may also not be better than a Gato or DT to begin with: imitation learning can exceed the observed experts, and the ‘RL policies’ here might be, say, random baselines which merely have good coverage of the state-space. Plus, nothing at all stops a Decision Transformer from doing its own exploration (planning was already demonstrated by DT/Trajectory Transformer, and there has been follow-up work like Online Decision Transformer).
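To make the ‘filtered high-reward episodes from dumb agents’ and ‘imitation can exceed the experts’ points concrete, here is a minimal sketch of return-conditioned behavior cloning, the core Decision Transformer trick, with the transformer swapped out for a tabular lookup and run on a toy chain environment invented purely for illustration (none of this is Gato or DT code):

```python
# Return-conditioned behavior cloning on a toy chain environment:
# train only on *random-agent* episodes, then at test time condition on
# the best return seen in the data, and beat the data-generating agents.
import numpy as np

rng = np.random.default_rng(0)
N, T = 8, 10  # chain of N states; episodes last T steps

def step(s, a):
    """a=0 moves left, a=1 moves right; reward 1 only while at the far end."""
    s = min(max(s + (1 if a == 1 else -1), 0), N - 1)
    return s, float(s == N - 1)

# 1. Collect data from a purely random policy: no experts anywhere.
episodes = []
for _ in range(20_000):
    s, traj = 0, []
    for _ in range(T):
        a = int(rng.integers(2))
        s2, r = step(s, a)
        traj.append((s, a, r))
        s = s2
    episodes.append(traj)

# 2. Supervised learning of p(action | state, return-to-go) -- here just
#    tabular counts: the sequence-modeling trick with the transformer
#    replaced by a lookup table.
counts = np.zeros((N, T + 1, 2))
for traj in episodes:
    rtg = 0.0
    for s, a, r in reversed(traj):  # accumulate return-to-go backwards
        rtg += r
        counts[s, int(rtg), a] += 1

def act(s, g):
    """Most likely action given state s and desired return-to-go g."""
    c = counts[s, min(int(g), T)]
    return int(c.argmax()) if c.sum() else int(rng.integers(2))

# 3. Roll out while conditioning on the *best* return in the dataset.
def rollout(target):
    s, total = 0, 0.0
    for _ in range(T):
        a = act(s, target)
        s, r = step(s, a)
        total += r
        target = max(target - r, 0.0)
    return total

returns = [sum(r for _, _, r in tr) for tr in episodes]
best = max(returns)
print(f"random-policy average return: {np.mean(returns):.3f}")
print(f"cloned policy @ target={best}: {rollout(best):.1f}")
```

The cloned policy hits the maximum return even though every episode it learned from was generated at random, because conditioning on a high return-to-go filters the data down to the lucky high-reward trajectories. Replacing the lookup table with a transformer over (return, state, action) token sequences gives the actual Decision Transformer; how far you can condition *above* the best return in the dataset is a separate, murkier question.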