So if Omega simulates several copies of me, it can’t be that each of them, by itself, has the power to make Omega decide to give real-me the money. So I have to give Omega money twice in simulation to get real-me the money once.
As for the absent-minded driver problem, the issue is that the probabilistic approach overcounts probability in some situations but not in others. It’s like playing the following game:
Phase 1: I flip a coin
If it was heads, there is a Phase 1.5 in which you get to guess its value and then are given an amnesia pill
Phase 2: You get to guess whether the coin came up heads and if you are right you get $1.
Using the bad analysis from the absent-minded driver problem, your strategy is to always guess heads with probability p. Suppose that there is a probability alpha that you are in Phase 1.5 when you guess, and 1 - alpha that you are in Phase 2.
Well, your expected payoff is then
alpha*p + (1 - alpha)*(1/2)
This is clearly silly: if the coin came up heads, your strategy gets counted twice, and the formula ends up depending on p even though your actual expected winnings are 1/2 whatever p is (only the Phase 2 guess pays, and it is independent of the coin). I guess to fix this in general, you need to pick a single time at which you do your averaging over possible yous (for example, only count it at Phase 2).
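Here is a quick sanity check of that claim, as a rough Python sketch under my reading that only the Phase 2 guess actually pays out; the choice alpha = 1/3 below is just one illustrative value, not anything the game fixes:

    import random

    def true_expected_payoff(p, trials=200_000):
        # Simulate the game as described, assuming only the Phase 2 guess pays.
        wins = 0
        for _ in range(trials):
            heads = random.random() < 0.5
            if heads:
                _ = random.random() < p  # the Phase 1.5 guess; no money attached
            if (random.random() < p) == heads:  # Phase 2 guess of "heads" vs the coin
                wins += 1
        return wins / trials

    def bad_formula(p, alpha):
        # The alpha-weighted average over "possible yous" from above.
        return alpha * p + (1 - alpha) * 0.5

    # alpha = 1/3 is one natural-looking choice (two guessing moments on heads, one on tails).
    for p in (0.0, 0.5, 1.0):
        print(p, round(true_expected_payoff(p), 3), round(bad_formula(p, 1/3), 3))
    # The simulation hovers around 0.5 for every p, while the formula moves with p.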
For the absent-minded driver problem (with the usual payoffs: 0 for exiting at the first intersection X, 4 for exiting at the second intersection Y, 1 for continuing past both, and p the probability of continuing at each intersection), you could either do your averaging at the first intersection you come to, in which case you have
1*(p^2 + 4p(1-p)) + 0*(p + 4(1-p))
Or at the last intersection you come to, in which case alpha = 1 - p and you have
(1-p)*0 + p*(p + 4(1-p))
(the 0 because if X is your last intersection you get 0)
Both give the correct answer.
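If it helps, here is a small Python check that the two places you could put the averaging really do agree, using the payoffs above; the p = 2/3, payoff 4/3 optimum it prints is the usual planning-optimal answer:

    import numpy as np

    p = np.linspace(0, 1, 1001)

    # Averaging at the first intersection: weight 1 on "this is X", weight 0 on "this is Y".
    first = 1 * (p**2 + 4 * p * (1 - p)) + 0 * (p + 4 * (1 - p))

    # Averaging at the last intersection: with probability 1-p the last one is X (payoff 0),
    # with probability p it is Y, from which the expected payoff is p + 4(1-p).
    last = (1 - p) * 0 + p * (p + 4 * (1 - p))

    assert np.allclose(first, last)  # both reduce to p^2 + 4p(1-p)
    print(p[np.argmax(first)], first.max())  # about 0.667 and 1.333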
Phase 1: I flip a coin
If it was heads, there is a Phase 1.5 in which you get to guess its value and then are given an amnesia pill
Phase 2: You get to guess whether the coin came up heads and if you are right you get $1.
This is (a variant of) the Sleeping Beauty problem. I’m guessing you must be new here—this is an old ‘chestnut’ that we’ve done to death several times. :-)
(1-p)*0 + p*(p + 4(1-p))
(the 0 because if X is your last intersection you get 0)
Both give the correct answer.
Good stuff. But now here’s a really stupid idea for you:
Suppose you’re going to play Counterfactual Mugging with an Omega who (for argument’s sake) doesn’t create a conscious simulation of you. But your friend Bill has a policy that if you ever have to play Counterfactual Mugging and the coin lands tails, then he will create a simulation of you as you were just prior to the game and make your copy have an experience indistinguishable from the one you would have had of Omega asking you for money (as though the coin had landed heads). Then following your approach, surely you ought now to pay up (whereas you wouldn’t have previously)? Despite the fact that your friend Bill is penniless, and his actions have no effect on Omega or your payoff in the real world?
I don’t see why you think I should pay if Bill is involved. Knowing Bill’s behavior, I think that there’s a 50% chance that I am real, in which case paying earns me -$1000, and a 50% chance that I am a Bill-simulation, in which case paying earns me $0. Hence paying earns me an expected -$500.
If you know there is going to be a simulation then your subjective probability for the state of the real coin is that it’s heads with probability 1⁄2. And if the coin is really tails then, assuming Omega is perfect, your action of ‘giving money’ (in the simulation) seems to be “determining” whether or not you receive money (in the real world).
(Perhaps you’ll simply take this as all the more reason to rule out the possibility that there can be a perfect Omega that doesn’t create a conscious simulation of you? Fair enough.)
I’m not sure I would buy this argument unless you could claim that my Bill-simulation’s actions would cause Omega to give or not give me money. At the very least it should depend on how Omega makes his prediction.
Perhaps a clearer variation goes as follows: Bill arranges so that if the coin is tails then (a) he will temporarily receive your winnings, if you get any, and (b) he will do a flawless imitation of Omega asking for money.
If you pay Bill then he returns both what you paid and your winnings (which you’re guaranteed to have, by hypothesis). If you don’t pay him then he has no winnings to give you.
Well look: If the real coin is tails and you pay up, then (assuming Omega is perfect, but otherwise irrespective of how it makes its prediction) you know with certainty that you get the prize. If you don’t pay up then you know with certainty that you don’t get the prize. The absence of a ‘causal arrow’ pointing from your decision to pay to Omega’s decision to pay becomes irrelevant in light of this.
(One complication which I think is reasonable to consider here is ‘what if physics is indeterministic and so knowing your prior state doesn’t permit Omega (or Bill) to calculate with certainty what you will do?’ Here I would generalize the game slightly so that if Omega calculates that your probability of paying up is p then you receive proportion p of the prize. Then everything else goes through unchanged—Omega and Bill will now calculate the same probability that you pay up.)
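To make that generalized game concrete, here is a rough expected-value sketch in Python. The $1000 payment is the figure mentioned above; the $10,000 prize is just an assumed placeholder, and the sketch takes for granted that Omega and Bill compute the same probability p that you pay up:

    def expected_take(p, prize=10_000, cost=1_000):
        # Average over the coin flip, before you know which way it landed.
        # Heads: Omega asks for money and you hand over `cost` with probability p.
        # Tails: Omega pays out proportion p of the prize, p being the probability
        #        it calculates that you would have paid.
        return 0.5 * (-cost) * p + 0.5 * prize * p

    for p in (0.0, 0.5, 1.0):
        print(p, expected_take(p))
    # As long as the prize exceeds the cost, this grows linearly in p, so from the
    # ex-ante standpoint the policy "always pay" does best.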
OK. I am uncomfortable with the idea of dealing with the situation where Omega is actually perfect.
I guess this boils down to me being not quite convinced by the arguments for one-boxing in Newcomb’s problem without further specification of how Omega operates.
Do you know about the “Smoking Lesion” problem?
At first sight it appears to be isomorphic to Newcomb’s problem. However, a couple of extra details have been thrown in:
A person’s decisions are a product of both conscious deliberation and predetermined unconscious factors beyond their control.
“Omega” only has access to the latter.
Now, I agree that when you have an imperfect Omega, even though it may be very accurate, you can’t rule out the possibility that it can only “see” the unfree part of your will, in which case you should “try as hard as you can to two-box (but perhaps not succeed).” However, if Omega has even “partial access” to the “free part” of your will then it will usually be best to one-box.
Or at least this is how I like to think about it.
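One rough way to put numbers on the “partial access” point, if we loosely treat that access as a probability a that Omega’s prediction matches what you actually do, and assume the usual Newcomb amounts ($1,000 visible, $1,000,000 in the opaque box; neither figure is given above):

    def newcomb_ev(one_box, accuracy, small=1_000, big=1_000_000):
        # `accuracy` = chance Omega's prediction matches what you actually do.
        if one_box:
            return accuracy * big               # opaque box filled iff one-boxing was predicted
        return small + (1 - accuracy) * big     # opaque box filled only if Omega guessed wrong

    for a in (0.5, 0.51, 0.9, 0.99):
        print(a, newcomb_ev(True, a), newcomb_ev(False, a))
    # One-boxing pulls ahead once the accuracy passes about 0.5005, i.e. with even a
    # slight correlation between the prediction and the choice actually made.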
I did not know about it, thanks for pointing it out. It’s Simpson’s paradox as a decision theory problem.
On the other hand (ignoring issues of Omega using magic or time travel, or you making precommitments), isn’t Newcomb’s problem always like this in that there is no direct causal relationship between your decision and his prediction, just that they share some common causation?