Suppose Omega (the same superagent from Newcomb’s Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says:
“I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads—can I have $1000?”
Obviously, the only reflectively consistent answer in this case is “Yes—here’s the $1000”, because if you’re an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers “Yes” to this sort of question—just like with Newcomb’s Problem or Parfit’s Hitchhiker.
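For concreteness, here is a minimal Python sketch of the expected-value comparison that the self-modification argument rests on (assuming a fair coin, a perfect predictor, and utility linear in dollars; the function name is mine, not from the post):

```python
# Minimal sketch: per-encounter expected value of the two policies, evaluated
# before the coin is flipped. Assumes a fair coin, a perfect predictor, and
# utility linear in dollars.

def expected_value(pays_when_heads: bool) -> float:
    heads_outcome = -1000 if pays_when_heads else 0        # you hand over $1000 only if you pay
    tails_outcome = 1_000_000 if pays_when_heads else 0    # Omega pays only if it predicts you would have paid
    return 0.5 * heads_outcome + 0.5 * tails_outcome

print(expected_value(pays_when_heads=True))   # 499500.0
print(expected_value(pays_when_heads=False))  # 0.0
```

Evaluated before the flip, the paying policy is worth $499,500 per encounter in expectation, which is why an agent expecting repeated encounters would self-modify to pay.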
Compute, for each n, the probability P(n) that this deal will be offered to you again n times in the future. Sum 499500 * P(n) * n over all n and agree to pay if the sum is greater than $1000.
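A short sketch of that rule, assuming (as the figure 499,500 implies) that each future offer is worth 0.5 * $1,000,000 - 0.5 * $1,000 = $499,500 in expectation to an agent that pays; the probability table in the example is purely illustrative:

```python
# Decision rule sketch: pay the $1000 now iff the expected future gain from
# being a paying agent exceeds it.

def should_pay(p: dict[int, float]) -> bool:
    """p[n] = probability that the deal is offered to you again exactly n times."""
    expected_future_gain = sum(499_500 * n * prob for n, prob in p.items())
    return expected_future_gain > 1_000

# Even a 1% chance of a single future offer clears the threshold:
print(should_pay({0: 0.99, 1: 0.01}))  # True, since 499500 * 0.01 = 4995 > 1000
```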
What if it’s offered just once—but if the coin comes up tails, Omega simulates a universe in which it came up heads, asks you this question, then acts based on your response? (Do whatever you like to ignore anthropics… say, Omega always simulates the opposite result, at the appropriate time.)
To be clear:
Are both my simulation and I told this is a one-time offer?
Is a simulation generated whether the real coin is heads or tails?
Are both my simulation and I told that one of us is a simulation?
Does the simulation persist after the choice is made?
I suppose the second and fourth points don’t matter particularly… as long as the first and third are true, then I consider it plus EV to pay the $1000.
Should you pay the money even if you’re not told about the simulations, because Omega is a good predictor (perhaps because it’s using simulations)?
If I judge the probability that I am a simulation or equivalent construct to be greater than 1⁄499500, yes.
(EDIT: Er, make that 1⁄999000, actually. What’s the markup code for strikethrough ’round these parts?)
(EDIT 2: Okay, I’m posting too quickly. It should be just 10^-6, straight up. If I’m a figment then the $1000 isn’t real disutility.)
(EDIT 3: ARGH. Sorry. 24 hours without sleep here. I might not be the sim, duh. Correct calculations:
u(pay|sim) = 10^6; u(~pay|sim) = 0; u(pay|~sim) = −1000; u(~pay|~sim) = 0
u(~pay) = 0; u(pay) = P(sim) * 10^6 - P(~sim) * 1000 = 1001000 * P(sim) - 1000
pay if P(sim) > 1/1001.
Double-checking… triple-checking… okay, I think that’s got it. No… no… NOW that’s got it.)
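A quick check of that final calculation in Python (same assumptions as above: if I am the simulation, the $1000 I hand over isn't real disutility and paying is what gets the real me the $1,000,000; if I am real, paying just loses $1000; the helper names are mine):

```python
# Expected utility of paying vs. not paying, as a function of P(sim).

def u_pay(p_sim: float) -> float:
    return p_sim * 1_000_000 - (1 - p_sim) * 1_000   # = 1001000 * p_sim - 1000

def u_not_pay(p_sim: float) -> float:
    return 0.0

print(u_pay(1 / 1001))                  # ~0.0: the indifference point, P(sim) = 1/1001
print(u_pay(0.002) > u_not_pay(0.002))  # True: any P(sim) above 1/1001 favors paying
```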