I think you have to consider what winning means more carefully.
A rational agent doesn’t buy a lottery ticket because it’s a bad bet. If that ticket ends up winning, does that contradict the principle that “rational agents win”?
That doesn’t seem at all analogous. At the time they had the opportunity to purchase the ticket, they had no way to know it was going to win.
An Irene who acts like your model of Irene will win slightly more in the rare cases where Omega makes an incorrect prediction (that is the analogue of winning the lottery), but will be given the million dollars far less often, because Omega is almost always correct. On average she loses. And rational agents win on average.
By average I don’t mean the average within a particular world (repeated iterations), but the average across all possible worlds.
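To make “on average across all possible worlds” concrete, here is a minimal sketch of the expected payoffs for the two strategies, assuming the standard Newcomb payoffs discussed above (a million dollars in the predicted box, 1,000 dollars in the other) and an illustrative 99% prediction accuracy for Omega. The 99% figure is only an assumption for the sake of the example; the argument just needs the accuracy to be close to 1.

```python
# Expected payoff across possible worlds for a fixed strategy,
# assuming Omega predicts correctly with probability p.

def expected_value(one_boxes: bool, p: float = 0.99) -> float:
    """Average winnings for an agent who always plays the given strategy."""
    if one_boxes:
        # Omega correctly predicts one-boxing with probability p -> 1,000,000.
        # With probability (1 - p) the predicted box is empty -> 0.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Omega correctly predicts two-boxing with probability p -> only 1,000.
        # With probability (1 - p) the prediction is wrong -> 1,001,000
        # (the "winning the lottery" case).
        return p * 1_000 + (1 - p) * 1_001_000

print(expected_value(one_boxes=True))   # 990000.0
print(expected_value(one_boxes=False))  # 11000.0
```

With any accuracy remotely close to 99%, the agent who refuses the 1,000 dollars comes out far ahead on average, even though she does slightly worse in the rare worlds where Omega errs.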
I agree with all of this. I’m not sure why you’re bringing it up?
I’m showing why a rational agent would not take the 1,000 dollars, and why that doesn’t contradict “rational agents win.”