Are you saying that the “CDT as it is normally interpreted” cannot help but fight the hypothetical?
It doesn’t have to fight the hypothetical. CDT counterfactuals don’t have to be possible.
The standard CDT algorithm computes the value of each action by computing the expected utility conditional on a miraculous intervention that changes one’s decision to that action, severing it from its earlier deterministic causes, and then computing the causal consequences of that. See Anna’s discussion here, including modifications in which the miraculous intervention changes other things, like one’s earlier dispositions (perhaps before the Predictor scanned you) or the output of one’s algorithm (instantiated both in you and in the Predictor’s model).
Say that before the contents of the boxes are revealed, our CDTer assigns some probability p to the state of the world where box B is full and his internal makeup will deterministically lead him to one-box, and probability (1−p) to the state of the world where box B is empty and his internal makeup will deterministically lead him to two-box.
That notwithstanding, CDT does take probabilities into account, at least as described on Wikipedia. So the question is: what is the counterfactual probability that, if I were to two-box, I would get $1.001M, as opposed to the conditional probability of the same thing? The latter is very low; the former has to be evaluated on some other grounds.
Altering your action miraculously and exogenously would not causally change the box contents. So the CDTer uses the old probabilities for the box contents: the expected utility of one-boxing comes out to $1,000,000 × p, and the expected utility of two-boxing to $1,001,000 × p + $1,000 × (1 − p).
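The calculation above can be sketched in a few lines of Python (a minimal illustration, not from the original thread; the function name and payoff constants are just the standard Newcomb amounts used in this exchange):

```python
def cdt_expected_utilities(p):
    """Return (EU of one-boxing, EU of two-boxing) under CDT.

    p is the prior probability that box B contains $1,000,000.
    The miraculous intervention changes the action but not the
    already-fixed box contents, so both actions use the same p.
    """
    eu_one_box = 1_000_000 * p
    eu_two_box = 1_001_000 * p + 1_000 * (1 - p)
    return eu_one_box, eu_two_box

for p in (0.0, 0.5, 0.99):
    one, two = cdt_expected_utilities(p)
    print(f"p={p}: one-box=${one:,.0f}, two-box=${two:,.0f}")
```

Whatever p the CDTer holds, the two-boxing entry comes out ahead, since the same p multiplies both action values.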
If she is confident that she will apply CDT based on past experience, or introspection, she will have previously updated to thinking that p is very low.
Right, I forgot. The reasoning is “I’m a two-boxer because I follow a loser’s logic and Omega knows it, so I may as well two-box.” There is no anticipation of winning $1,001,000. No, that does not sound quite right...
The last bit about p going low with introspection isn’t necessary. The conclusion (two-boxing preferred, or at best indifference between one-boxing and two-boxing if one is certain one will two-box) follows under CDT with the usual counterfactuals for any value of p.
The reasoning is: “well, if the world is such that I am going to two-box, then I should two-box, and if the world is such that I am going to one-box, then I should two-box.” Optional extension: “hmm, sounds like I’ll be two-boxing then, alas! No million dollars for me...” (Unless I wind up changing my mind or the like, which keeps p above 0.)
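The point that the conclusion holds for any value of p can be checked numerically (a sketch under the same assumptions as above: the usual CDT counterfactuals and the standard Newcomb payoffs):

```python
def eu_one_box(p):
    # CDT expected utility of one-boxing: $1,000,000 with probability p.
    return 1_000_000 * p

def eu_two_box(p):
    # CDT expected utility of two-boxing: box A's $1,000 is added
    # either way, since the intervention can't change box B's contents.
    return 1_001_000 * p + 1_000 * (1 - p)

# The gap is $1,000 regardless of p:
# (1,001,000p + 1,000(1-p)) - 1,000,000p = 1,000.
for p in [0.0, 0.1, 0.5, 0.9, 1.0]:
    assert abs((eu_two_box(p) - eu_one_box(p)) - 1_000) < 1e-6
```

The algebra collapses to a constant $1,000 advantage for two-boxing, which is exactly the dominance reasoning quoted above.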