True Path has already covered it (or most of it) extensively, but both Newcomb’s Problem and the distinction made in the post (if it were applied in a game-theoretic setting) contain too many inherent contradictions and do not seem to point out anything concrete.
You can’t talk about decision-making agents if they are basically not making any decisions (classical determinism, or effective precommitment in this case, enforces that). Also, you can’t have a 100% accurate predictor and have freedom of choice on the other hand, because that implies (at the very least) that the subset of phenomena in the universe that governs your decision is deterministic.
[Plus, even if you have a 99.9999… (… meaning some large number N of 9s, not infinitely many) percent accurate predictor, if Newcomb’s problem assumes perfect rationality, there’s really no paradox.
I think what this post exemplifies (and perhaps that was the intent from the get-go and I just completely missed it) is precisely that Newcomb’s is ambiguous about the type of precommitment taken (which follows from it being ambiguous about how Omega works), and is therefore self-contradictory rather than truly a paradox.]
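The bracketed point about an imperfect predictor can be made concrete with a quick expected-value calculation. This is a minimal sketch assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box if a one-box choice is predicted); the function name and the accuracy values chosen are illustrative, not taken from the comment above.

```python
def expected_values(p, small=1_000, big=1_000_000):
    """Return (EV of one-boxing, EV of two-boxing) for predictor accuracy p."""
    ev_one_box = p * big                 # opaque box filled iff one-boxing was predicted
    ev_two_box = (1 - p) * big + small   # opaque box filled only on a misprediction
    return ev_one_box, ev_two_box

for p in (0.5, 0.999999, 1.0):
    one, two = expected_values(p)
    print(f"p={p}: one-box EV={one:,.2f}, two-box EV={two:,.2f}")
```

For any accuracy p above roughly 0.5005 (with these payoffs), one-boxing has the higher expected value, so a perfectly rational expected-value maximizer facing a merely very accurate predictor has a straightforward answer and no paradox arises.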
“Also, you can’t have a 100% accurate predictor and have freedom of choice on the other hand”—yes, there is a classic philosophical argument that claims determinism means we don’t have libertarian free will, and I agree with that.
“You can’t talk about decision-making agents if they are basically not making any decisions”—my discussion of the student and the exam in this post may help clear things up. Decisions don’t require that there be multiple things you could have chosen, as per the libertarian free-will model; they simply require that you be able to construct counterfactuals. Alternatively, this post by Anna Salamon might help clarify how we can do this.