Could Omega’s decision of which game to play depend on the algorithm I submit as an answer? One convenient ruling might be that if Omega tried to predict whether I would accept, that attempt would itself count as the simulation accepting/rejecting, and Omega would have to pay out at least $100.
One approach is worst-case analysis as employed in computer science: assume that Omega wants to minimize our reward, then choose the strategy that maximizes it. Here, that means always accepting, because that never yields less than $100.
If I had a random number oracle that Omega couldn’t predict, I could accept 10000/10900 of the time, because that always yields an expected reward of $100000/109 (about $917) regardless of which game Omega picks. But since Omega can simulate the world, such an oracle is unlikely to exist.
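The mixed strategy here is the standard trick of equalizing the branches: pick the accept probability p at which the expected reward is the same no matter which game Omega chooses. A minimal sketch, using a payoff table that is my own reconstruction (it is not stated in this comment; I chose it because it reproduces the quoted fractions exactly):

```python
from fractions import Fraction

# Hypothetical payoffs (a reconstruction, NOT stated in the comment):
# each branch is the game Omega runs given its prediction, written as
# (reward if I accept, reward if I reject).
predicted_accept = (Fraction(100), Fraction(10000))
predicted_reject = (Fraction(1000), Fraction(0))

def expected(branch, p):
    """Expected reward in one branch when accepting with probability p."""
    accept_pay, reject_pay = branch
    return p * accept_pay + (1 - p) * reject_pay

# Equalize the two branches: solve
#   expected(predicted_accept, p) == expected(predicted_reject, p)
# for p. Rearranging the linear equation gives:
a1, r1 = predicted_accept
a2, r2 = predicted_reject
p = (r1 - r2) / ((r1 - r2) + (a2 - a1))

print(p)                              # 100/109 (i.e. 10000/10900)
print(expected(predicted_accept, p))  # 100000/109 in both branches
print(expected(predicted_reject, p))  # 100000/109
```

Using `Fraction` keeps the arithmetic exact, so the equalized expectation comes out as precisely 100000/109 rather than a rounded float. The point is the technique, not these particular payoffs.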
Some interesting variants might have Omega able to predict random number generators within its simulations, but not in the real world...
I agree with Dagon’s first paragraph.