It seems to me that if you make a basic Bayes net with utilities at the end, the choice with the higher expected utility is to one-box. Say:
P(1,000,000 in box B and 10,000 in box A | I one-box) = 99%
P(box B is empty and 10,000 in box A | I two-box) = 99%
hence
P(box B is empty and 10,000 in box A | I one-box) = 1%
P(1,000,000 in box B and 10,000 in box A | I two-box) = 1%
So:
If I one-box, I should expect 99% × $1,000,000 + 1% × $0 = $990,000
If I two-box, I should expect 99% × $10,000 + 1% × $1,010,000 = $20,000
Expected utility(I one-box) / Expected utility(I two-box) = 49.5, so I should one-box by a landslide. This assumes that Omega has a 99% true-positive and true-negative rate; the result is more dramatic if we assume Omega is perfect. If P(1,000,000 in box B and 10,000 in box A | I one-box) = P(box B is empty and 10,000 in box A | I two-box) = 100%, then Expected utility(I one-box) / Expected utility(I two-box) = 100. So if Omega is perfect, by my calculation we should expect one-boxing to be 100 times as profitable as two-boxing.
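For concreteness, here is a minimal sketch of the calculation above in Python. The payoffs ($1,000,000 in box B, $10,000 in box A) and the 99% accuracy figure come from the post itself; the function name and structure are my own.

```python
# Sketch of the expected-utility calculation above, conditioning on
# the choice as evidence about Omega's prediction.

def expected_utility(one_box: bool, accuracy: float = 0.99) -> float:
    """Expected payoff given the choice, for a predictor of the given accuracy."""
    if one_box:
        # Omega most likely predicted one-boxing and filled box B.
        return accuracy * 1_000_000 + (1 - accuracy) * 0
    # Omega most likely predicted two-boxing and left box B empty,
    # so the usual take is just box A's $10,000.
    return accuracy * 10_000 + (1 - accuracy) * 1_010_000

print(expected_utility(True))                                       # 990000.0
print(expected_utility(False))                                      # 20000.0
print(expected_utility(True) / expected_utility(False))             # 49.5
print(expected_utility(True, 1.0) / expected_utility(False, 1.0))   # 100.0 (perfect Omega)
```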
This is the sort of math I usually use to decide. Is this nonstandard, did I make a mistake, or does this method produce stupid results elsewhere?
It’s true that one-boxing is the strategy that maximizes expected utility, and it is a fairly uncontroversial maxim in normative decision theory that one should pick the strategy that maximizes expected utility. However, it is also a fairly uncontroversial maxim in normative decision theory that if a dominant strategy exists, one should adopt it. In this case, two-boxing is dominant (assuming there is no backwards causation). Usually these two maxims do not conflict, but they do in Newcomb’s problem. The question you should ask yourself is why you think expected utility maximization is the one we should adhere to.
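To make the dominance point concrete, here is a minimal sketch using the same payoffs as the post, assuming the contents of box B are fixed before the choice:

```python
# Dominance sketch: if box B's contents are already settled when you
# choose (no backwards causation), two-boxing beats one-boxing by
# exactly $10,000 in every possible state of the world.

for box_b in (1_000_000, 0):      # Omega filled box B, or left it empty
    one_box = box_b               # take only box B
    two_box = box_b + 10_000      # take both boxes
    print(f"box B = {box_b:>9,}: one-box = {one_box:>9,}, two-box = {two_box:>9,}")
```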
Not saying it’s the wrong answer (I don’t think it is), but simply saying “We do this sort of math all the time. Why not here?” is insufficient justification because we also do this other sort of math all the time, so why not do that here?
Great, I’ll work on that. That’s exactly what I should ask myself. And if I find that the rule “do that which has the highest expected utility” fails on the Smoking Lesion problem, I’ll ask why I want to go with the dominant strategy (as I predict I will).
The only reason I have to trust expected utility in particular is that I have a geometric metaphor which forces me to believe the rule, provided I believe certain basic things about utility.
This looks like it loses in the Smoking Lesion problem.
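For reference, here is what the same evidential conditioning looks like on the Smoking Lesion. All the numbers below are hypothetical, chosen only to illustrate the structure: a lesion causes both a taste for smoking and cancer, smoking itself is harmless, and you would enjoy smoking.

```python
# Hypothetical Smoking Lesion numbers (not from the thread): smoking
# is worth +1,000 in utility, cancer costs -1,000,000, and the choice
# is treated as evidence about the lesion, just as the Newcomb
# calculation above treats the choice as evidence about the prediction.

p_cancer_if_smoke = 0.9     # assumed: smokers mostly have the lesion
p_cancer_if_abstain = 0.1   # assumed: abstainers mostly don't

eu_smoke = 1_000 + p_cancer_if_smoke * -1_000_000    # -899,000
eu_abstain = 0 + p_cancer_if_abstain * -1_000_000    # -100,000

# Naive conditioning says don't smoke, even though smoking cannot
# cause cancer here -- the sense in which the rule "looks like it loses".
print(eu_smoke, eu_abstain)
```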
I’ll work on that and edit my result in here. Thanks.