Either that, or the idea of mindreading agents is flawed.
We shouldn't conclude that, though, since mindreading agents of varying accuracy already exist in real life.
If we tighten our standard to "games where the mindreading agent is only allowed to predict actions you'd choose in the game, and the game is played with you already knowing about the mindreading agent", then many decision theories that differ in other situations might all respond to "pick B or I'll kick you in the dick" by picking B.
To entertain that possibility, suppose you're X% confident that your best "fool the predictor into thinking I'll one-box, then two-box" plan will work, and Y% confident that your "actually one-box, in a way the predictor can predict" plan will work. If X ≥ Y you've got no incentive to actually one-box, only to try to pretend you will; but above some threshold of belief that the predictor will beat your deception, it makes sense to actually be honest.
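To make that threshold concrete, here's a minimal sketch (not part of the original argument) assuming the standard Newcomb payoffs, which aren't stated above: $1,000,000 in the opaque box if one-boxing is predicted, $1,000 in the transparent box. The crossover is where Y · $1,000,000 overtakes X · $1,001,000 + (1 − X) · $1,000.

```python
# Hedged illustration: expected payoffs of deception vs. honest one-boxing,
# assuming the usual Newcomb payoffs ($1,000,000 opaque, $1,000 transparent).

def ev_deceive(x):
    """Expected value of 'pretend to one-box, then two-box',
    where x is the chance the predictor is fooled."""
    # Fooled: both boxes pay out. Not fooled: opaque box is empty,
    # but the transparent $1,000 is kept either way.
    return x * (1_000_000 + 1_000) + (1 - x) * 1_000

def ev_honest(y):
    """Expected value of genuinely one-boxing,
    where y is the chance the predictor correctly reads the honesty."""
    # Read correctly: opaque box pays. Misread: opaque box is empty
    # and the transparent box was left behind.
    return y * 1_000_000

if __name__ == "__main__":
    # Equal confidence in both plans: deception still wins slightly,
    # because it keeps the $1,000 consolation prize on failure.
    print(ev_deceive(0.5), ev_honest(0.5))   # 501000.0 500000.0

    # If honest one-boxing is much more likely to be read correctly
    # than deception is to succeed, honesty comes out ahead.
    print(ev_deceive(0.3), ev_honest(0.6))   # 301000.0 600000.0
```

The exact numbers only matter for where the threshold sits; the qualitative point is that once the predictor is good enough relative to your deception skill, the honest plan has the higher expected payoff.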