I guess it’s too late for this comment (no worries if you don’t feel like replying!), but are you basically saying that CDT doesn’t make sense because it considers impossible/zero-probability worlds (such as the one where you get 11 doses)?
If so: I agree! The paper on the evidentialist’s wager assumes that you should/want to hedge between CDT and EDT, given that the issue is contentious.
Does that make sense / relate at all to your question?
Not “CDT does not make sense”, but any argument that fights a hypothetical such as “the predictor knows what you will do” is silly. EDT does that sometimes. I don’t understand FDT (not sure anyone does, since people keep arguing about what it recommends), so maybe it fares better. Two-boxing in a perfect-predictor setup is a classic example: you can change the problem, but then it is not the same problem. The 11-doses outcome is simply not a possibility in Moral Newcomb’s.

I’ve been shouting into the void for a decade that all you need to do is enumerate the worlds, assign probabilities, and calculate expected utility. You throw away silliness like “dominant strategies”; they are not applicable in the twin PD, Newcomb’s, the Smoking Lesion, Parfit’s Hitchhiker, etc.

“Decision” is not a primitive concept but an emergent one. The correct question to ask is “given an agent’s actual actions (not thoughts, not decisions), what is the EV, and what kind of actions maximize it?” I wrote a detailed post about it, but it whooshed. People constantly and unwittingly try to smuggle libertarian free will into their logic.
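For concreteness, here is a minimal sketch of that recipe applied to standard Newcomb’s with a perfect predictor (my own illustration, not from the comment above; the $1,000,000 / $1,000 payoffs are the usual textbook numbers, assumed here). The point is that the world where the prediction mismatches the actual action never gets enumerated, so “dominant strategy” reasoning has nothing to latch onto:

```python
# Sketch: enumerate the worlds consistent with the agent's actual action,
# assign them probabilities, and compute expected value.

ACTIONS = ["one-box", "two-box"]

def worlds(action):
    """Yield (probability, payoff) pairs for each world consistent with the action.

    With a perfect predictor, the world where the prediction mismatches the
    actual action has probability zero, so it is simply not enumerated --
    just as the 11-doses world is not a possibility in Moral Newcomb's.
    """
    if action == "one-box":
        # Predictor foresaw one-boxing, so the opaque box holds $1,000,000.
        yield (1.0, 1_000_000)
    else:
        # Predictor foresaw two-boxing, so the opaque box is empty;
        # the agent only gets the $1,000 from the transparent box.
        yield (1.0, 1_000)

def expected_value(action):
    return sum(p * payoff for p, payoff in worlds(action))

for a in ACTIONS:
    print(a, expected_value(a))
# one-box 1000000.0
# two-box 1000.0
```

By construction, one-boxing comes out ahead, because the only worlds with nonzero probability are the ones in which the predictor is right.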