IMO, either the Oracle is wrong, or the choice is illusory
This is similar to determinism vs. free will, and it suggests the following example. The Oracle proclaims: “The world will follow the laws of physics!” But in a counterfactual where the agent makes a decision it won’t actually make, the fact of making that counterfactual decision contradicts the agent’s cognition following the laws of physics. Yet within the counterfactual we still want to think of the world as following the laws of physics.
Hmm. So does this only apply to CDT agents, who foolishly believe that their decision is not subject to predictions?
No, I suspect it’s a correct ingredient of counterfactuals, one I haven’t seen discussed before, not an error restricted to a particular decision theory. There is no contradiction in considering each counterfactual as having a given possible decision made by the agent while also satisfying the Oracle’s prediction, since the agent doesn’t know that it won’t make this exact decision. And if it does make this exact decision, the prediction will be correct, just as the possible decision indexing the counterfactual will be the decision actually taken. Most decision theories allow explicitly considering different possible decisions, and adding the correctness of the Oracle’s prediction into the mix doesn’t seem fundamentally different in any way; it’s similarly sketchy.
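As a toy illustration (my own sketch, not from the thread, assuming Newcomb-style payoffs and a perfectly accurate Oracle), each counterfactual can be indexed by a candidate decision, with the Oracle’s prediction set equal to that decision inside it:

```python
def payoff(decision, prediction):
    """Newcomb payoffs: the opaque box holds $1M iff the Oracle predicted one-boxing."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if decision == "two-box" else 0)

def evaluate_counterfactuals(decisions):
    # In each counterfactual, the agent doesn't yet know which decision it will
    # actually take, so assuming prediction == decision is consistent: if this
    # decision turns out to be the one actually taken, the Oracle was correct.
    return {d: payoff(d, prediction=d) for d in decisions}

utilities = evaluate_counterfactuals(["one-box", "two-box"])
best = max(utilities, key=utilities.get)  # the decision with the highest utility
```

Under these assumed payoffs, the counterfactual indexed by one-boxing evaluates to $1,000,000 and the one indexed by two-boxing to $1,000, so an agent reasoning this way one-boxes; the sketch only models the consistency point above, not any particular decision theory.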
Thanks for your patience with this. I’m still missing some fundamental assumption or framing about why this is non-obvious (IMO, either the Oracle is wrong, or the choice is illusory). I’ll continue to examine the discussions and examples in the hope that it will click.
I presume Vladimir and I are discussing this from within the determinist paradigm, in which “either the Oracle is wrong, or the choice is illusory” doesn’t apply (although I propose a similar idea in Why 1-boxing doesn’t imply backwards causation).