The agent in Newcomb’s problem needs to know that Omega’s prediction is caused by the same factors as his actual decision. The agent does not need to know any more detail than that, but does need to know at least that much. If there were no such causal path between prediction and decision then Omega would be unable to make a reliable prediction. When there is correlation, there must, somewhere, be causation (though not necessarily in the same place as the correlation).
If the agent believes that Omega is just pretending to be able to make that prediction, but really tossed a coin and intends to publicise only the cases where the agent’s decision happened to match, then the agent has no reason to one-box.
If the agent believes Omega’s story, but Omega is really tossing a coin and engaging in selective reporting, then the agent’s decision may be correct on the basis of his belief, but wrong relative to the truth. Such is life.
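The force of the selective-reporting worry is easy to see numerically. Here is a minimal sketch (the setup and numbers are my own illustration, not from the thread): a fake Omega guesses by coin toss, and only the matching cases get publicised.

```python
import random

def trial(rng=random):
    prediction = rng.choice(["one-box", "two-box"])  # Omega just tosses a coin
    decision = rng.choice(["one-box", "two-box"])    # agent decides independently
    return prediction, decision

trials = [trial() for _ in range(10_000)]
published = [t for t in trials if t[0] == t[1]]      # selective reporting

# Over all trials the fake Omega is right about half the time...
print(sum(p == d for p, d in trials) / len(trials))        # ~0.5
# ...but in the published record it is right every single time.
print(sum(p == d for p, d in published) / len(published))  # 1.0
```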
To simulate Newcomb’s problem with a real agent, you have the problem of convincing the agent you can predict his decision, even though in fact you can’t.
I only used Newcomb as an example to show that determining whether a simulation actually simulates a problem isn’t trivial. The issue here is not finding particular simulations for Newcomb or other problems, but the general concept of correctly linking problems to simulations. As I said, it’s a rather mathematical issue. Your last statement seems the most relevant one to me:
To simulate Newcomb’s problem with a real agent, you have the problem of convincing the agent you can predict his decision, even though in fact you can’t.
Can we generalize this to mean “if a problem can’t exist in reality, an accurate simulation of it can’t exist either” or something along those lines? Can we prove this?
Can we generalize this to mean “if a problem can’t exist in reality, an accurate simulation of it can’t exist either” or something along those lines? Can we prove this?
I would cast this sentence in this form, seeing that if a problem contains some infinity, it is impossible for it to exist in reality. Can an infinite transition system be simulated by a finite transition system? If there is even one that can be, this would disprove your conjecture. The converse, of course, is not true...
I’m not sure what you mean by an infinite transition system. Are you referring to circular causality, as in Newcomb, or to an actually infinite number of states, such as a variant of Sleeping Beauty in which the coin is tossed anew each day and the experiment ends only once it lands heads?
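For concreteness, here is a minimal sketch of that second reading (the function name and parameters are my own illustration): the number of awakenings has unbounded support, so the underlying transition system has infinitely many states, yet a finite program samples it exactly and halts with probability 1.

```python
import random

def sleeping_beauty_variant(p_heads=0.5, rng=random):
    """Toss a fair coin anew each day; the experiment ends the first
    time it lands heads. The day count is geometrically distributed,
    so it can exceed any fixed bound."""
    day = 1
    while rng.random() >= p_heads:  # tails: Beauty is woken again tomorrow
        day += 1
    return day  # total number of awakenings

print([sleeping_beauty_variant() for _ in range(10)])
```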
Regardless, I think I have already disproven the conjecture I made above in another comment:
Omega predicting an otherwise irrelevant random factor, such as a fair coin toss, can be reduced to the random factor itself, thereby getting rid of Omega. Equivalence is easy to prove: regardless of whether we allow for backwards causality and the like, a fair coin is always fair, and even if we assume that Omega may be wrong, by symmetry the probability of error must be the same for either side of the coin. So in the end Omega’s prediction is exactly as random as the coin itself, whatever Omega’s actual accuracy.
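A minimal sketch of that equivalence (the accuracy parameter is my own addition, used only to make the symmetry argument concrete): whatever Omega’s accuracy a, its prediction of a fair coin is itself distributed exactly like a fair coin, since P(predict heads) = 0.5·a + 0.5·(1−a) = 0.5.

```python
import random

def omega_predicts_fair_coin(accuracy, n=100_000, rng=random):
    """Omega predicts a fair coin toss and, by the symmetry argument,
    errs with the same probability on either side. Returns the
    fraction of trials on which Omega predicted heads."""
    heads = 0
    for _ in range(n):
        coin = rng.random() < 0.5           # True = heads, fair coin
        correct = rng.random() < accuracy   # does Omega get it right?
        heads += coin if correct else not coin
    return heads / n

# The marginal distribution of the prediction is 50/50 at any accuracy:
for a in (0.5, 0.9, 1.0):
    print(a, omega_predicts_fair_coin(a))
```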