Let’s say we simulate Omega’s prediction by a coin toss and repeat the simulation (without payoffs) until the coin toss matches the agent’s decision.
It’s not quite clear to me what you have in mind here. Are you envisioning this with human agents or with programs? If with humans, how will they not remember that Omega got it wrong on the past run? If with programs, what’s the purpose of the coin?
If you substitute Omega with a repeated coin toss, there is no Omega, and no notion of Omega being always right. Instead of repeating the problem, you can also run several instances of the simulation with several agents simultaneously, counting only those instances in which the prediction matches the decision.
For this simulation, it is completely irrelevant whether the multiple agents are actually identical human beings, as long as their decision-making process is identical (and deterministic).
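To make the filtering version concrete, here is a minimal sketch in Python, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff the prediction is "one-box"); the function name and trial count are hypothetical, chosen only for illustration:

```python
import random

def run_filtered_simulation(decision, n_trials=100_000):
    """Replace Omega with a fair coin and keep only the trials where
    the random 'prediction' happens to match the agent's decision."""
    payoffs = []
    for _ in range(n_trials):
        prediction = random.choice(["one-box", "two-box"])  # the coin toss
        if prediction != decision:
            continue  # discard mismatching instances, as described above
        # The opaque box is filled iff the prediction says "one-box".
        box_b = 1_000_000 if prediction == "one-box" else 0
        # One-boxers take only box B; two-boxers take both boxes.
        payoff = box_b if decision == "one-box" else box_b + 1_000
        payoffs.append(payoff)
    return sum(payoffs) / len(payoffs)

# Among the surviving instances the coin is always "right", so one-boxers
# average $1,000,000 and two-boxers average $1,000:
print(run_filtered_simulation("one-box"))   # -> 1000000.0
print(run_filtered_simulation("two-box"))   # -> 1000.0
```

The point of the sketch is that conditioning on prediction == decision reproduces the "Omega is always right" statistics without any predictor at all, regardless of whether the agents are the same person or merely run the same deterministic decision procedure.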
Ah, that makes sense.