At the point of decision, T2, you want box B to have the million dollars. But Omega’s decision was made at T1. If you want to affect T1 from T2, it seems to me like you’d need backwards causality.
Omega’s decision at T2 (I don’t see why you distinguish T1 from T2; T1 seems irrelevant) is based on its prediction of your decision algorithm in Newcomb problems, including what it predicts you’ll do at T3. It presents you with two boxes, and if it expects you to two-box at T3, then box B is empty. What is timing supposed to change about this?
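To make the point concrete, here is a minimal sketch in Python of the reply's picture: Omega's prediction is just a function of the agent's decision algorithm, so whenever it fills the boxes, the payoff is fixed by that algorithm alone and no backwards causation is needed. The agent names and payoffs are illustrative, and "prediction" is modeled in the simplest possible way, by running the agent's own procedure.

```python
def one_boxer():
    """Decision algorithm that takes only box B."""
    return ["B"]

def two_boxer():
    """Decision algorithm that takes both boxes."""
    return ["A", "B"]

def play_newcomb(agent):
    # Omega's prediction: it simply runs the agent's decision algorithm.
    # Whether this happens at T1 or T2 changes nothing below.
    predicted = agent()
    boxes = {"A": 1_000, "B": 1_000_000 if predicted == ["B"] else 0}
    # T3: the agent chooses for real. The same algorithm produced the
    # prediction, so the correlation requires no influence running backwards.
    choice = agent()
    return sum(boxes[b] for b in choice)

print(play_newcomb(one_boxer))  # 1000000
print(play_newcomb(two_boxer))  # 1001000
```

Nothing in `play_newcomb` depends on how much time passes between the prediction and the choice; the one-boxer walks away richer purely because of what its algorithm is.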