I agree that if a DT is trained on trajectories which sometimes contain suboptimal actions, and the effect of this mistake is small compared to the inherent randomness of the return, then the learned policy will also take these suboptimal actions, though somewhat less frequently. (By inherent randomness I mean, in this case, the fact that Player 2 pushes the button randomly and independently of Player 1's actions.)
But where do these trajectories come from? If you take them from another RL algorithm, then the optimization is already there. And if you start from random trajectories, it seems to me that one iteration of DT will not be enough: because of this randomness, the learned policy will also be quite random. And by the time the transformer starts producing reasonable actions, it will probably have fixed the mistake as well.
Another way to see it is to consider DT as doing behaviour cloning on a certain fraction of the best trajectories (they call it %BC in the paper and compare DT against it on many benchmarks). This ignores generalization, of course, but that shouldn't matter for this case. Now, what happens if you take a lot of random trajectories in an environment with a nondeterministic reward and select those which happened to get high rewards? They will contain some mix of good and lucky moves. To remove the influence of luck, you run each of them again and filter by the new rewards (i.e., train a DT on trajectories generated by the previous DT), and so on until the remaining trajectories all get a high average reward. I think that in your example, at this point the n-th iteration of DT will have learned to disconnect the button with high probability on the first move, and that this probability was gradually rising during the previous iterations (because each time, the agents who disconnect the button were slightly more likely to perform well, and so they were slightly overrepresented in the population of winners). A toy version of this selection loop is sketched below.
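To make the selection pressure concrete, here is a minimal Python sketch. The environment is my guess at the structure of the example: Player 1 either disconnects the button or not, Player 2 then pushes it with some fixed probability, and the return also contains noise unrelated to Player 1's action. Each generation keeps the best fraction of trajectories by return and clones the empirical action frequency of the survivors (the crudest possible stand-in for a DT); all the numbers are arbitrary assumptions.

```python
import random

P_PUSH = 0.5          # probability that Player 2 pushes the button (assumed)
NOISE = 2.0           # scale of reward noise unrelated to Player 1's action
KEEP_FRACTION = 0.2   # fraction of best trajectories kept each generation (like %BC)
POPULATION = 10_000   # trajectories per generation

def rollout(p_disconnect):
    """One episode: Player 1 disconnects with probability p_disconnect,
    Player 2 pushes the button independently, and the return is noisy."""
    disconnect = random.random() < p_disconnect
    pushed = random.random() < P_PUSH
    # Disconnecting only matters in the worlds where Player 2 actually pushes.
    reward = 0.0 if (pushed and not disconnect) else 1.0
    reward += random.gauss(0.0, NOISE)   # pure luck, independent of the action
    return disconnect, reward

def iterate_filtered_bc(p_disconnect, generations=20):
    """Roll out the current policy, keep the luckiest trajectories,
    clone the action frequency of the survivors, and repeat."""
    for g in range(generations):
        trajs = [rollout(p_disconnect) for _ in range(POPULATION)]
        trajs.sort(key=lambda t: t[1], reverse=True)
        survivors = trajs[: int(KEEP_FRACTION * POPULATION)]
        p_disconnect = sum(d for d, _ in survivors) / len(survivors)
        print(f"gen {g:2d}: P(disconnect) = {p_disconnect:.3f}")
    return p_disconnect

iterate_filtered_bc(p_disconnect=0.5)   # start from a uniformly random policy
```

If I've set this up right, P(disconnect) rises a little each generation rather than jumping to 1 in one step, because the action's effect on the expected return is small relative to the noise and the survivors are only mildly enriched with disconnectors.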
I agree with @James Camacho, at least intuitively, that stickiness in the space of observations is equivalent to switchiness in the space of their prefix XORs and vice versa. However, I tried to replicate this and didn't observe the mentioned effect, so maybe one of us has a bug in the simulation; a minimal version of such a check is sketched below.
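For concreteness, the kind of check I have in mind is roughly the following sketch (the particular sticky process, the p_stay value, and the switch-rate statistic are my own simplifications, so the discrepancy may well come from a mismatch with the original setup):

```python
import random

def sticky_sequence(n, p_stay=0.8):
    """Binary sequence that repeats its previous value with probability p_stay."""
    seq = [random.randint(0, 1)]
    for _ in range(n - 1):
        seq.append(seq[-1] if random.random() < p_stay else 1 - seq[-1])
    return seq

def prefix_xor(seq):
    """y[i] = x[0] ^ x[1] ^ ... ^ x[i]."""
    out, acc = [], 0
    for x in seq:
        acc ^= x
        out.append(acc)
    return out

def switch_rate(seq):
    """Fraction of adjacent pairs that differ."""
    return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

random.seed(0)
x = sticky_sequence(100_000, p_stay=0.8)
y = prefix_xor(x)
print("switch rate of x:          ", round(switch_rate(x), 3))  # expected ~1 - p_stay
print("switch rate of prefix XORs:", round(switch_rate(y), 3))
```

Note that the prefix XORs switch at step i exactly when x[i] = 1, so their switch rate is just the density of ones in x; whether that counts as "switchy" depends on the exact process and statistic used, which is one place where our simulations could be diverging.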