At the point of decision, T2, you want box B to have the million dollars. But Omega’s decision was made at T1. If you want to affect T1 from T2, it seems to me like you’d need backwards causality.
Omega’s decision at T2 (I don’t understand why you distinguish between T1 and T2; T1 seems irrelevant) is based on its prediction of your decision algorithm in Newcomb problems, including what it predicts you’ll do at T3. It presents you with two boxes, and if it expects you to two-box at T3, then box B is empty. What is timing supposed to change about this?
Omega is a nigh-perfect predictor: “Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.”
So if you follow the kind of decision algorithm that would make you two-box, box B will be empty.
How do concepts like backwards causality make any difference here?
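To make the payoff structure concrete, here is a minimal sketch in Python (assuming the usual $1,000 / $1,000,000 amounts, and modeling the nigh-perfect predictor as simply running the agent's policy before filling the boxes; the names one_boxer, two_boxer, and play_newcomb are mine, not from the problem statement). The predictor's fill decision happens strictly before the agent's choice, yet the two-boxing algorithm still walks away with only $1,000, with no backwards causation anywhere in the model.

```python
# Minimal sketch of the Newcomb payoff structure. Amounts and function names
# are illustrative assumptions, not part of the original problem text.

def one_boxer():
    return "B"       # take only box B

def two_boxer():
    return "AB"      # take both boxes

def play_newcomb(policy):
    # Omega's decision: it fills box B iff it predicts the agent will one-box.
    # Here the "nigh-perfect prediction" is just running the policy ahead of
    # time. This happens strictly before the agent chooses.
    prediction = policy()
    box_a = 1_000
    box_b = 1_000_000 if prediction == "B" else 0

    # The agent's decision: the boxes are already filled at this point.
    choice = policy()
    return box_b if choice == "B" else box_a + box_b

print(play_newcomb(one_boxer))   # 1000000
print(play_newcomb(two_boxer))   # 1000 -- box B is empty for the two-boxing algorithm
```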