I don’t think this counterexample is actually a counterexample. When you-simulation decides in Scenario 1, he has no knowledge of Scenario 2. Yes, if people respond to your decisions in arbitrary and unexpected ways, this sort of thing can easily be set up; but ultimately the best you can do is to maximize expected utility. If you lose because Omega pulls such a move on you, that’s due to your lack of knowledge and poor calibration about his probable responses, not to a flaw in your decision theory. If you-simulation somehow knew what the result would be used for, he would take that into account when choosing.
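To make the point concrete, here is a minimal sketch (all decision names, outcome labels, and numbers are hypothetical, not from the scenarios above): if you-simulation has only a probability distribution over how Omega might respond to each decision, the best available policy is simply to pick the decision with the highest expected utility under those beliefs. Losing anyway reflects bad priors, not a bad decision procedure.

```python
def expected_utility(response_probs, utilities):
    """Expected utility of one decision, summed over Omega's possible responses."""
    return sum(p * utilities[outcome] for outcome, p in response_probs.items())

# Hypothetical beliefs in Scenario 1: with no knowledge of Scenario 2,
# an arbitrary punitive response from Omega gets only a small prior.
beliefs = {
    "decision_A": {"reward": 0.9, "punished": 0.1},
    "decision_B": {"reward": 0.2, "punished": 0.8},
}
utilities = {"reward": 100, "punished": -100}

best = max(beliefs, key=lambda d: expected_utility(beliefs[d], utilities))
print(best)  # decision_A under these beliefs; a loss here means miscalibration, not a flawed theory
```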