With the two boxes with predetermined content right in front of me, two-boxing makes me 1,000 dollars richer than one-boxing.
From an outsider’s view, making the decision-maker one-box will make it 999,000 dollars richer than making it two-box (the arithmetic is sketched after this comment).
I think both are correct. Mixing the two analyses together is not.
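For concreteness, here is a minimal sketch (in Python) of the arithmetic behind the two analyses above, assuming the standard Newcomb payoffs of $1,000 in the transparent box and $1,000,000 in the opaque box iff Omega predicted one-boxing; the function names are illustrative, not part of the original exchange. The first-person comparison holds the box contents fixed, while the outsider comparison lets the prediction follow the decision-maker’s policy.

```python
# Assumed standard payoffs: the transparent box always holds $1,000;
# the opaque box holds $1,000,000 iff Omega predicted one-boxing.
TRANSPARENT = 1_000
MILLION = 1_000_000

def payoff(action, opaque_content):
    """Money received for a given action, with the opaque box's content held fixed."""
    return opaque_content + (TRANSPARENT if action == "two-box" else 0)

# First-person analysis: the contents are already fixed, so compare the two
# actions against the same (unknown but fixed) opaque content.
for content in (0, MILLION):
    gain = payoff("two-box", content) - payoff("one-box", content)
    print(f"contents fixed at {content}: two-boxing gains {gain}")  # always 1,000

# Outsider analysis: the predictor responds to the decision-maker's disposition,
# so the opaque content is a function of the policy, not independent of it.
def outcome(policy):
    content = MILLION if policy == "one-box" else 0
    return payoff(policy, content)

print(outcome("one-box") - outcome("two-box"))  # 999,000 in favour of one-boxing
```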
Neither analysis is correct. Both are incomplete. The outsider’s viewpoint is less wrong.
The first-person argument presented here finds a local maximum and stops there. Yes, if they ignore Omega’s abilities and climb the two-box hill then they can get 1,000 dollars more. There is no mention of whether they’re on the right hill, and no analysis of whether this is reasonable given their knowledge of Omega.
The outsider’s view as stated fails to account for a possibly limited ability of the decision-maker to choose what sort of decision-maker they can be. Omega knows (in this infallible predictor version), but the decision-maker might not. Or they might know but be powerless to update (not all possible agents have even local “free will”).
Wait. Will you and the outsider observe the same results? It’s hard not to think that one of the two people is simply incorrect in their prediction. They cannot both be correct.
They will observe the same result. Say the result is that the opaque box is empty.
From a first-person perspective, if I had chosen only this box, I would have gone empty-handed.
From an outsider’s perspective, making a one-boxing decision-maker would have caused the box to be filled with 1 million dollars.
This “disagreement” is due to the two having different starting points for their reasoning. In anthropics, the same reason leads to robust perspectivism, i.e. two people sharing all their information can give different answers to the same probability question.
I’m still missing something (which, from this observer’s standpoint, feels like disagreeing). Let me restate; tell me which statement is wrong. You are in front of two boxes which Omega has prepared based on its prediction of your decision. There is an observer watching you. You and the observer both assign very high probability to Omega having predicted correctly. Despite this, you believe you can two-box and both boxes will be filled, and the observer believes that if you two-box, only the smaller amount will be filled.
Fast-forward to after you’ve opened both boxes. The second box was empty. The observer feels vindicated. You feel your prediction was correct, even though it doesn’t match the reality you find yourself in.
I think you’re just wrong. You got less by two-boxing, so your prediction was incorrect.
Alternate fast-forward to after you’ve opened only one box and found $1M. I think you were wrong in expecting to find it empty.
I understand anthropic reasoning is difficult—both in understanding precisely what future experience is being predicted, and in enumerating the counterfactuals that let you apply evidence. Neither of those is relevant to Newcomb’s problem, since the probability of objective outcomes is given.
“You are in front of two boxes … you believe you can two-box and both boxes will be filled”
No, that is not the first-person decision. I do not think that if I choose to two-box, both boxes will be filled. I think the two boxes’ contents are predetermined; whatever I choose can no longer change what is already inside. Two-boxing is better because it gives me 1,000 dollars more, so my decision is right regardless of whether the second box is empty or not.
Outsiders and the first person give different counterfactuals even when facing the same outcome. Say the outcome is two-boxing with the second box empty. The outsider would think the counterfactual is to make the machine (the decision-maker) always one-box, so that the second box is filled. The first person would think the counterfactual is that I had chosen only the second box, which is empty (sketched below).
Facing the same outcome while giving different counterfactuals is the same reason for perspective disagreement in anthropics.
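For illustration, a minimal sketch of the two counterfactuals being contrasted here, under the same assumed payoffs as above (the agent two-boxed and the opaque box turned out to be empty); the function names are mine, not part of the discussion.

```python
TRANSPARENT = 1_000
OBSERVED_OPAQUE = 0  # the opaque box was observed to be empty
actual = OBSERVED_OPAQUE + TRANSPARENT  # what actually happened: $1,000

def outsider_counterfactual():
    # Replace the machine with an always-one-boxing decision-maker; Omega's
    # (assumed accurate) prediction then fills the opaque box with $1,000,000.
    return 1_000_000  # the one-boxer takes only the (now full) opaque box

def first_person_counterfactual():
    # Hold the world, including the already-empty opaque box, fixed and
    # change only my act to one-boxing.
    return OBSERVED_OPAQUE  # I walk away with nothing

print(actual, outsider_counterfactual(), first_person_counterfactual())
# 1000 1000000 0 -> same observed outcome, two different counterfactuals
```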
One more try: I’m misunderstanding your use of the words “correct” and/or “choose”. I understand the difficulties and disagreements in anthropics for unresolved probabilities with branching experience-measurement points. But I don’t see how that applies AFTER the results are known in a fairly linear choice.
Does your first-person decision include any actual decision? Do you think there are two universes you might find yourself in (or some other definition of “possible”)? If you think the boxes’ contents are determined, AND you think Omega predicted correctly, does this not imply that your “decision” is predetermined as well?
I totally get the “all choice is illusion” perspective, but I don’t see much reason to call it “anthropic reasoning”.
I think I am starting to get where our disagreement lies. You agree that “all choices are illusions”. By this view, there is no point in thinking about “how should I decide”. We can discuss what kind of decision-maker would benefit most in this situation, which is the “outsider perspective”. Obviously, one-boxing decision-makers are going to be better off.
The controversy arises if we reason as the first person when facing the two boxes. Regardless of the content of the opaque box, two-boxing should give me 1,000 dollars more; the causal analysis is quite straightforward. This seems to contradict the first paragraph.
What I am suggesting is that the two lines of reasoning are parallel to each other. They are based on different premises. The “god’s eye view” treats the decision-maker as an ordinary part of the environment, like a machine, whereas the first-person analysis treats the self as something unique: a primitively identified, irreducible perspective center, i.e. THE agent, as opposed to a part of the environment (similar to how a dualist agent considers itself). Here free will is a premise. I think they are both correct, yet because they are based on different perspectives (thus different premises) they cannot be mixed together, rather like how deductions from different axiomatic systems cannot be mixed. So from a first-person perspective, I cannot take into consideration how Omega has analyzed me (like a machine) and thus filled the box. For the same reason, from a god’s eye view, we cannot imagine being the decision-maker himself facing the two boxes and choosing.
If I understand correctly, what you have in mind is that these two approaches must be put together to arrive at a complete solution. Then the conflict must be resolved somehow, and it is done by letting the god’s eye view dominate over the first-person approach. This makes sense because, after all, treating oneself as special does not seem objective. Yet that would deny free will, which could call all causal decision-making processes into question. It also leads to a metaphysical debate about which is more fundamental: reasoning from a first-person perspective, or reasoning objectively?
I bring up anthropics because I think this exact same issue, mixing reasoning from different perspectives, is what leads to the paradoxes in that field. If you do not agree with treating perspectives as premises and keeping the two approaches separate, then there is indeed little connection between that and Newcomb’s paradox.