To predict, Omega doesn’t need to simulate. You can predict that water will boil when put on fire without simulating the movement of 10^23 molecules.
Omega can’t even use simulation to arrive at its prediction in this scenario. If Omega demands money from simulated agents who then agree to pay, the simulation violates the formulation of the problem, according to which Omega should reward those agents.
If the problem is reformulated as “Omega demands payment only if the agent would counterfactually refuse to pay, OR the agent is in a simulation”, then we have a completely different problem. For example, if the agent is sufficiently confident about his own decision algorithm, then after Omega’s demand he could assign high probability to being in a simulation. The analysis would be more complicated in that case.
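To make that concrete, here is a rough Bayes-style sketch in Python, under assumed numbers that are purely illustrative (a 50% prior on being the simulated copy and a 1% chance that the agent’s self-model is wrong; neither figure comes from the discussion):

    # Rough posterior that the agent is in a simulation, given Omega's demand,
    # under the reformulated problem. All numbers are illustrative assumptions.
    p_sim = 0.5                 # prior probability of being the simulated copy
    p_demand_given_sim = 1.0    # the reformulation says simulations always get the demand
    p_demand_given_real = 0.01  # a real demand requires the agent's self-model to be wrong

    posterior = (p_demand_given_sim * p_sim) / (
        p_demand_given_sim * p_sim + p_demand_given_real * (1 - p_sim)
    )
    print(round(posterior, 3))  # 0.99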
In short, I am only saying that
Omega is trustworthy.
Omega can predict the agent’s behaviour with certainty.
Omega states that it demands money only from agents whom it predicted to reject the demand.
Omega demands the money.
The agent pays.
are together incompatible statements (a quick consistency check is sketched below).
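A minimal sketch of that incompatibility, assuming the prediction is modelled as perfectly accurate and the agent’s choice as a single boolean; the variable names are illustrative, not part of the problem statement:

    # Enumerate the agent's possible choices and Omega's possible actions and
    # check whether any assignment satisfies all the statements at once.
    from itertools import product

    satisfiable = False
    for agent_pays, omega_demands in product([True, False], repeat=2):
        predicted_to_reject = not agent_pays  # the prediction is certain
        policy_respected = (not omega_demands) or predicted_to_reject
        # "Omega demands the money" and "The agent pays" are the last two statements.
        if policy_respected and omega_demands and agent_pays:
            satisfiable = True

    print("All statements jointly satisfiable?", satisfiable)  # prints: False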
You can predict that water will boil when put on fire without simulating the movement of 10^23 molecules.
True but irrelevant. In order to make an accurate prediction, Omega needs, at the very least, to simulate my decision-making faculty in all significant aspects. If my decision-making process decides to recall some particular memory, then Omega needs to simulate that memory in all significant aspects. If my decision-making process decides to wander around the room conducting physics experiments, just to be a jackass, and to peg my decision to the results of those experiments—well, then Omega will need to convincingly simulate the results of those experiments. The anticipated experience will be identical for my actual decision-making process and for my simulated one.
Mind you, based on what I know of the brain, I think you’d actually need to run a pretty convincing, if somewhat coarse-grained, simulation of a good chunk of my light cone in order to predict my decision with any kind of certainty, but I’m being charitable here.
And yes, this seems to render the original formulation of the problem paradoxical. I’m trying to think of ways to suitably reformulate it without altering the decision theoretics, but I’m not sure it’s possible.
True but irrelevant. In order to make an accurate prediction, Omega needs, at the very least, to simulate my decision-making faculty in all significant aspects. If my decision-making process decides to recall some particular memory, then Omega needs to simulate that memory in all significant aspects. If my decision-making process decides to wander around the room conducting physics experiments, just to be a jackass, and to peg my decision to the results of those experiments—well, then Omega will need to convincingly simulate the results of those experiments.
I’m not convinced that all that actually follows from the premises. One of the features of Newcomblike problems is that they tend to appear intuitively obvious to the people exposed to them, which suggests rather strongly to me that the intuitive answer is linked to hidden variables in personality or experience, and in most cases isn’t sensitively dependent on initial conditions.
People don’t always choose the intuitive answer, of course, but augmenting that with information about the decision-theoretic literature you’ve been exposed to, any contrarian tendencies you might have, etc. seems like it might be sufficient to achieve fine-grained predictive power without actually running a full simulation of you. The better the predictive power, of course, the more powerful the model of your decision-making process has to be, but Omega doesn’t actually have to have perfect predictive power for Newcomblike conditions to hold. It doesn’t even have to have particularly good predictive power, given the size of the payoff.
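One way to cash out that last claim is a quick expected-value comparison between the two dispositions, under assumed stakes (a $100 demand, a $10,000 reward, and a predictor that is right with probability p; none of these numbers appear in the thread, and the reward is assumed to go to agents predicted to pay):

    # Expected value of being a pay-type vs. a refuse-type agent, given a
    # predictor of accuracy p. Stakes are illustrative assumptions.
    def ev_pay_type(p, payment=100, reward=10_000):
        # Predicted correctly (prob. p): rewarded. Mispredicted: demanded, and pays.
        return p * reward - (1 - p) * payment

    def ev_refuse_type(p, reward=10_000):
        # Predicted correctly (prob. p): demanded, refuses, gets nothing.
        # Mispredicted: rewarded.
        return (1 - p) * reward

    for p in (0.51, 0.6, 0.9, 0.99):
        better = "pay" if ev_pay_type(p) > ev_refuse_type(p) else "refuse"
        print(f"accuracy {p:.2f}: pay {ev_pay_type(p):8.1f}  refuse {ev_refuse_type(p):8.1f}  -> {better}")

With these stakes the pay-type already comes out ahead at 51% accuracy, which is the sense in which a large payoff compensates for a mediocre predictor.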
Er, I think we’re talking about two different formulations of the problem (both of which are floating around on this page, so this isn’t too surprising). In the original post, the constraint is given by P(o=award)=P(a=pay), rather than P(o=award)=qP(a=pay)+(1-q)P(a=refuse), which implies that Omega’s prediction is nearly infallible, as it usually is in problems starring Omega: any deviation from P(o=award)=0 or 1 will be due to “truly random” influences on my decision (e.g. quantum coin tosses). Also, I think the question is not “what are your intuitions?” but “what is the optimal decision for a rationalist in these circumstances?”
You seem to be suggesting that most of what determines my decision to pay or refuse could be boiled down to a few factors. I think the evidence weighs heavily against this: effect sizes in psychological studies tend to be small. Evidence also suggests that these kinds of cognitive processes are indeed sensitively dependent on initial conditions. Differences in the way questions are phrased, and what you’ve had on your mind lately, can have a significant impact, just to name a couple of examples.