Some unintended humour from the linked essay:

Answer 1: If you take box A, you’ll probably get $100. If you take box B, you’ll probably get $700. You prefer $700 to $100, so you should take box A.
Verdict: WRONG!
That’s true. If B gives the $700 and you want the $700, you clearly pick B, not A!
This is exactly the reasoning that leads to taking one box in Newcomb’s problem, and one-boxing is wrong. (If you don’t agree, then you’re not going to be in the target audience for this post, I’m afraid.)
Oh! For this to make (limited) sense, it must be that Answer 1’s “so you should take box A” is a typo and he intended to say ‘B’.
It seems that two wrongs can make a right (when both errors happen to entail a binary inversion of the same bit).
The only alternative is to deny that B is even a little irrational. But that seems quite odd, since choosing B involves doing something that you know, when you do it, is less rewarding than something else you could just as easily have done.
So I conclude Answer 2 is correct. Either choice is less than fully rational. There isn’t anything that we can, simply and without qualification, say that you should do. This is a problem for those who think decision theory should aim for completeness, but cases like this suggest that this was an implausible aim.
Poor guy. He did all the work of identifying the problem, setting up scenarios to illustrate it and analysing the answers. But he just couldn’t manage to bite the bullet that was staring him in the face: that his decision theory of choice was just wrong.
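For concreteness, here is the expected-value arithmetic behind the quoted reasoning. The essay doesn’t state the predictor’s accuracy or what a mis-predicted box contains, so the sketch below assumes a 90%-accurate predictor and an empty box on a wrong prediction, purely for illustration; it shows the disputed evidential reasoning, not the paper’s own model.

```python
# Minimal sketch of the evidential reading of "Answer 1" above.
# Assumptions (mine, not the essay's): the predictor is right with
# probability p, and a mis-predicted box turns out to be empty ($0).

p = 0.9  # assumed predictor accuracy; the essay gives no number

# Evidential reasoning conditions on your own choice:
ev_take_a = p * 100 + (1 - p) * 0  # "you'll probably get $100"
ev_take_b = p * 700 + (1 - p) * 0  # "you'll probably get $700"

print(f"EV(take A) = ${ev_take_a:.0f}")  # EV(take A) = $90
print(f"EV(take B) = ${ev_take_b:.0f}")  # EV(take B) = $630
```

On any such numbers B wins, which is why “so you should take box A” reads as a typo for ‘B’. The essay’s point is that this conditioning on one’s own act is exactly the one-boxer’s move in Newcomb’s problem, which it takes to be a mistake.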
In the context, I think the author is talking about anti-prediction. If you want to be where Death isn’t, and Death knows you use CDT, should you choose the opposite of what CDT normally recommends?
I don’t think I endorse his reasoning, but I think you misread him.
It is not inconceivable that I misread him. Mind reading is particularly difficult when it comes to working out precisely which mistake someone is making when at least part of their reasoning is visibly broken. My subjectively experienced amusement applies to what seemed to be the least insane of the interpretations. Your explanation requires the essay’s explanation to be wrong (i.e. it wouldn’t be analogous to one-boxing at all), rather than merely the label.
Death knows you use CDT, should you choose the opposite of what CDT normally recommends?
That wouldn’t make much sense (for the reasoning in the paper).