Okay, so “Russian roulette” meaning the gun is held to your head and the trigger is pulled once.
And “pay” means in terms of candy bars (utility that’s only realized if you live), not malaria victims (utility that gets realized even if you’re shot).
Okay, sure. Seems reasonable. I think the intuitiveness has a lot to do with the phrasing.
I’m still not sure that the reasoning is correct. It may depend on your life goals. For example, if your only goal in life is saving weasels from avalanches, which requires you to be alive but doesn’t require any money, then case 2 lets you save 2x more future weasels than case 1, so I guess you’d pay more. On the other hand, if your utility function doesn’t mention any weasels and you care only about candy bars eaten by the surviving version of you, then I’m not sure why you’d want to pay to survive at all. In either case Landsburg’s conclusion seems to be wrong. Or am I missing something?
In the former case you’d pay infinity (or all you have) either way. In the latter case you’d pay zero either way. I don’t see how that contradicts Landsburg.
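To make the two degenerate cases concrete, here is a minimal sketch with invented symbols: p(x) is the survival probability after paying x, M is money on hand, and v_w, v_c are linear valuations for weasels and candy bars. The candy-bars-only case uses the survival-conditioned reading that gets made explicit a few comments below.

```latex
% Weasels only: utility is proportional to survival probability and money
% has no other use, so any payment that raises p beats keeping the money.
\[ U_{\text{weasels}}(x) = v_w\, p(x)
   \quad\Longrightarrow\quad \text{pay up to all of } M \text{ whenever } p(x) > p(0). \]

% Candy bars only, evaluated conditional on survival: paying only shrinks
% the candy budget in the branches that count, so the best payment is zero.
\[ U_{\text{candy}}(x) = v_c\,(M - x) \;\text{ given survival}
   \quad\Longrightarrow\quad x^{*} = 0. \]
```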
You’re right and I’m being stupid, thanks. But what if you value both weasels (proportional to the probability of survival) and candy bars (proportional to remaining money in case of survival)? Then each bullet destroys a fixed number of weasels and no candy bars, so you should pay 2x more candy bars to remove two bullets instead of one, no?
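A quick numerical sketch of that claim. Everything here is invented for illustration: a six-chamber revolver with one trigger pull and two bullets to start, weasels valued in proportion to survival probability, and candy bars counted only in the surviving branch (the quantum-suicide reading discussed below).

```python
# Sketch of the "pay 2x to remove two bullets" claim under a
# survival-conditioned utility. All numbers are invented.

M = 100.0         # money on hand (the candy-bar budget)
V_WEASEL = 300.0  # value of weasel-saving per unit of survival probability
V_CANDY = 1.0     # value per unit of money kept, in the branch where you survive
BASE_BULLETS = 2  # bullets in the six-chamber gun before any deal

def p_survive(bullets_removed):
    """Survival probability for one trigger pull of a six-chamber revolver."""
    return 1.0 - (BASE_BULLETS - bullets_removed) / 6.0

def utility(bullets_removed, payment):
    """Weasels scale with survival probability; candy bars are counted only
    in the surviving branch and are NOT discounted by that probability."""
    return V_WEASEL * p_survive(bullets_removed) + V_CANDY * (M - payment)

def max_payment(bullets_removed):
    """Largest payment x such that utility(b, x) >= utility(0, 0)."""
    # Solve V_WEASEL * (p(b) - p(0)) - V_CANDY * x = 0 for x.
    return V_WEASEL * (p_survive(bullets_removed) - p_survive(0)) / V_CANDY

x1, x2 = max_payment(1), max_payment(2)
assert abs(utility(2, x2) - utility(0, 0.0)) < 1e-9  # indifference check
print(round(x1, 6), round(x2, 6), round(x2 / x1, 6))  # 50.0 100.0 2.0
```

Under these assumptions the willingness to pay scales exactly with the number of bullets removed, which is the 2x claim above.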
The bullet does destroy candy bars. Unless you’re introducing some sort of quantum suicide assumption, where you average only over surviving future selves? I suppose then you’re correct: the argument cited by Landsburg fails, because it must be assuming somewhere that your utility function is a probability-weighted sum over future worlds.
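For contrast, a minimal sketch of the probability-weighted reading being pointed at here, with the same sort of invented symbols: p_b = p_0 + b/6 is the survival probability after removing b bullets from a six-chamber gun, x_b is the payment, and all value (weasels v_w, candy v_c per unit of money) accrues only in the surviving worlds.

```latex
% Standard expected utility: a probability-weighted sum over future worlds,
% with the dead worlds contributing zero. Candy bars are now discounted by
% the survival probability, so each bullet does "destroy candy bars"
% in expectation.
\[ \mathbb{E}[U] \;=\; \sum_i \Pr(\text{world}_i)\, U(\text{world}_i)
   \;=\; p_b \,\bigl( v_w + v_c\,(M - x_b) \bigr). \]

% Indifference between paying x_b to remove b bullets and doing nothing:
\[ p_b \,\bigl( v_w + v_c\,(M - x_b) \bigr) \;=\; p_0 \,\bigl( v_w + v_c\, M \bigr)
   \quad\Longrightarrow\quad
   \frac{x_2}{x_1} \;=\; \frac{2\,(p_0 + 1/6)}{p_0 + 2/6} \;<\; 2. \]
```

Under this reading the exact 2x proportionality from the previous sketch breaks down, which is the sense in which the bullet destroys candy bars.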
You’re right again, thanks again :-) I was indeed using a sort of quantum suicide assumption because I don’t understand why I should care about losing candy bars in the worlds where I’m dead. In such worlds it makes more sense to care only about external goals like saving weasels, or not getting your relatives upset over your premature quantum suicide, etc.
Specifically, I think the middle part of the argument would fail, because you’d go, “eh, if they’re executing half of my future selves, I can only save half the weasels at a given cost in average candy bars, so I’ll spend more of the money on candy bars”.