I think that playing this game is the right move in the contrived hypothetical circumstances where
You have already played a huge number of times (say >200).
Your priors only contain options for “totally safe for me” or “1/6 chance of death.”
I don’t think you would actually make that move much in the real world, because
You would never play the first few times.
You’re going to have some prior on “this is safer for me, but not totally safe; it actually has a 1/1000 chance of killing me.” This seems no less reasonable than the prior that it has no chance of killing you.
If, for some strange reason, you have already played a huge number of times, like billions, then you are already rich and the marginal utility of money is diminishing. An agent with logarithmic utility in money, a nonzero starting balance, a uniform prior over the lethality probability, and a fairly large disutility of death will never play.
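A minimal sketch (in Python) of the expected-utility claim in that last point. The starting balance, per-round payout, and disutility-of-death numbers are made up for illustration; the only structural assumptions are log utility and a uniform prior on the per-round death probability, updated by Laplace’s rule of succession.

```python
import math

W0 = 1_000.0    # starting balance (assumed)
R = 10_000.0    # payout per surviving round (assumed)
D = 100.0       # disutility of dying, in log-utility units; "fairly large" (assumed)

def expected_gain_from_playing(n_survived: int, wealth: float) -> float:
    """Expected utility change from playing one more round."""
    # A uniform prior on the death probability plus n observed survivals gives
    # a posterior mean death probability of 1 / (n + 2) (Laplace's rule).
    p_death = 1.0 / (n_survived + 2)
    gain_if_survive = math.log(wealth + R) - math.log(wealth)
    return (1 - p_death) * gain_if_survive - p_death * D

for n in [0, 10, 100, 10_000, 1_000_000]:
    wealth = W0 + n * R                      # already rich after n paid rounds
    print(n, expected_gain_from_playing(n, wealth))
# Every value printed is negative: the log-utility gain of one more payout
# shrinks like R / wealth ~ 1/n, the posterior death probability shrinks like
# 1/n too, so a large D always dominates and such an agent never plays.
```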
This isn’t really a problem if the rewards start out high and gradually diminish.
I.e., suppose that you value your life at $L (i.e., you’re willing to die if the heirs of your choice get L dollars), and you assign a probability of 10^-15 to H1 = “I am immune to losing at Russian roulette”, something like 10^-4 to H2 = “I intuitively twist the gun each time to avoid the bullet”, and a probability of something like 10^-3 to H3 = “they gave me an empty gun this time”. Then you are offered the chance to play rounds of Russian roulette at a payout of $L per round, with enough rounds on offer that you can update to arbitrary levels.
Now, if you play enough times, H3 becomes the dominant hypothesis with, say, 90% probability, so you’d accept a payout of, say, $L/2. Similarly, if you know that H3 isn’t the case, you’d still assign very high probability to something like H2 after enough rounds, so you’d still accept a payout of $L/2.
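To make that update concrete, here is a minimal sketch using the prior numbers above. Treating H1, H2 and H3 as making survival certain, and giving the ordinary hypothesis H0 the remaining prior mass and a 5/6 per-round survival probability, are simplifying assumptions.

```python
# Posterior over hypotheses after surviving n rounds of Russian roulette.
priors = {"H0": 1 - 1e-15 - 1e-4 - 1e-3, "H1": 1e-15, "H2": 1e-4, "H3": 1e-3}
survival = {"H0": 5 / 6, "H1": 1.0, "H2": 1.0, "H3": 1.0}

def posterior_after(n_rounds: int) -> dict:
    """P(H | survived n rounds) is proportional to P(H) * P(survive one round | H)^n."""
    unnorm = {h: priors[h] * survival[h] ** n_rounds for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

print(posterior_after(50))   # H3 ("empty gun") is already dominant (~0.83)
print(posterior_after(200))  # H0 is essentially gone; H3 ~ 0.91, H2 ~ 0.09, H1 still tiny
```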
Now, suppose that all the alternative hypotheses H2, H3, … are false, and your only other alternative hypothesis is H1 (magical intervention). Now the original dilemma has been recovered. What should one do?
You’re going to have some prior on “this is safer for me, but not totally safe; it actually has a 1/1000 chance of killing me.” This seems no less reasonable than the prior that it has no chance of killing you.
If you’ve survived often enough, this can go arbitrarily close to 0.
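A quick numerical sketch of this; the even prior split between the two hypotheses is just for illustration.

```python
# How the "1/1000 chance of killing me" hypothesis dies off with survived
# rounds, against a "totally safe for me" hypothesis.
prior_safe, prior_lethal = 0.5, 0.5
for n in [0, 1_000, 10_000, 100_000]:
    like_safe = 1.0                 # surviving n rounds is certain if safe
    like_lethal = 0.999 ** n        # chance of surviving n rounds at 1/1000 lethality
    post_lethal = (prior_lethal * like_lethal) / (
        prior_safe * like_safe + prior_lethal * like_lethal
    )
    print(n, post_lethal)
# 0 -> 0.5, 1000 -> ~0.27, 10000 -> ~4.5e-5, 100000 -> ~3.5e-44: the posterior
# on the lethal hypothesis really does go arbitrarily close to 0, though it
# takes on the order of thousands of survived rounds.
```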
I think that playing this game is the right move
Why? It seems to me like I have to pick between the theories “I am an exception to natural law, but only in ways that could also be produced by the anthropic effect” and “It’s just the anthropic effect”. The latter seems obviously more reasonable to me, and it implies I’ll die if I play.
Work out your prior on being an exception to natural law in that way. Pick a number of rounds such that the chance of your winning by luck is even smaller. By your current beliefs, the most likely way for you to end up in that situation is that you are an exception.
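A quick check of that recipe, borrowing the 10^-15 prior on being such an exception from the comment above purely as an illustrative number.

```python
import math

# Smallest n with (5/6)^n below the assumed prior on being an exception.
prior_exception = 1e-15
n = math.ceil(math.log(prior_exception) / math.log(5 / 6))
print(n)             # 190 rounds
print((5 / 6) ** n)  # ~9e-16: surviving this many rounds by luck is now
                     # less likely than the assumed prior on being an exception
```

That lands in the same ballpark as the “>200” rounds condition at the top of the thread.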
What if the game didn’t kill you, but just made you sick? Would your reasoning still hold? There is no hard and sharp boundary between life and death.
Hm. I think your reason here is more or less “because our current formalisms say so”. Which is fair enough, but I don’t think it gives me an additional reason—I already have my intuition despite knowing it contradicts them.
What if the game didn’t kill you, but just made you sick? Would your reasoning still hold?
No. The relevant gradual version here is forgetting rather than sickness. But yes, I agree there is an embedding question here.