So the chooser in this case is a fully deterministic system, not a real, live human brain with some chance of random firings screwing up Omega’s prediction?
This is more of a way of pointing out a special case that shares relevant considerations with a TDT-like approach to decision theory (in this extreme identical-simulation case it’s just Hofstadter’s “superrationality”).
If we start from this case and gradually make the prediction model and the player less and less similar to each other (perhaps by making the model less detailed), at which point do the considerations that make you one-box in this edge case break down? Clearly, if you change the prediction model just a little bit, the correct answer shouldn’t immediately flip, but CDT is no longer applicable out-of-the-box (arguably, even if you “control” two identical copies, it’s also not directly applicable). Hence the need for a generalization that admits imperfect acausal “control” over sufficiently similar decision-makers (and sufficiently accurate predictions), in the same sense in which you “control” your identical copies.
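A minimal toy sketch of the extreme identical-simulation case (not from the original comment; the payoff numbers and function names are just illustrative): Omega’s “prediction model” is literally the same deterministic decision procedure the player runs, so whatever policy you pick is also what Omega predicts, and one-boxing comes out ahead.

```python
def payoff(policy):
    """Newcomb payoff when Omega predicts by running the player's own policy."""
    prediction = policy()   # Omega simulates the identical decision procedure
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = policy()       # the player then runs the same procedure "for real"
    return box_b if choice == "one-box" else box_b + 1_000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))   # 1000000
print(payoff(two_boxer))   # 1000
```

The interesting question above is what happens as `policy` inside `payoff` is replaced by a coarser, less faithful model of the player, so that prediction and choice can come apart.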