Phil, you said “if you didn’t know ahead of time that you’d be given this decision, choose both boxes”, which is the wrong answer. You didn’t know, but the predictor knew what you would do; if you one-box, that is the property of you the predictor knew about, and you’ll get your reward as a result.
The important part is what the predictor knows about your action, not even what you yourself know about your action, and it doesn’t matter how you convince the predictor. If the predictor just calculates your final action by physical simulation or whatnot, you don’t need anything else to convince it; you just need to make the right choice. Commitment is a way of convincing, either yourself to make the necessary choice, or your opponent of the fact that you’ll make that choice. In our current real world, a person usually can’t just say “I promise”, without any expected penalty for lying, however implicit, and expect to be trusted; that is what makes Newcomb’s paradox counterintuitive, and what makes cooperating in a one-off prisoner’s dilemma without pre-commitment unrealistic. But that’s a technical problem of communication, or of rationality, nothing more. If the predictor can verify that you’ll one-box (after you understand the rules of the game, yadda yadda), your property of one-boxing has been communicated, and that’s all it takes.
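For concreteness, here’s a minimal sketch (my own, not part of the original discussion) of what “calculating your final action by simulation” amounts to: the predictor runs the same decision procedure you will later run, so whatever you actually choose is exactly what it predicted, and the payoffs follow. The function and payoff numbers are the standard ones from the thought experiment, not anything Phil specified.

```python
# Hypothetical sketch: a predictor that fills the opaque box by simulating
# the agent's decision procedure before the agent actually chooses.

def play_newcomb(agent):
    """agent() returns 'one-box' or 'two-box'; returns the agent's payoff."""
    predicted = agent()                       # the predictor simulates you
    opaque = 1_000_000 if predicted == "one-box" else 0
    transparent = 1_000

    choice = agent()                          # now you actually choose
    if choice == "one-box":
        return opaque
    return opaque + transparent

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(play_newcomb(one_boxer))   # 1000000 -- the predictor saw you one-box
print(play_newcomb(two_boxer))   # 1000    -- the predictor saw you two-box
```

The point the toy model makes is just the one above: nothing about commitment or communication enters; the predictor reads off your disposition because it runs the same procedure you do, so the one-boxer ends up richer.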