I’m not sure whether you’re saying that the proponent of CDT has a silly view or whether you’re saying you don’t understand their view.
The second… well, probably a bit of both. Anyway, I think that I understand my reservation about the classic presentation of CDT. From Wikipedia:
Because your choice of one or two boxes can’t causally affect the Predictor’s guess, causal decision theory recommends the two-boxing strategy.
It’s the first part of that statement that is false in the perfect-predictor version, because it fights the counterfactual (the predictor is perfect). So naive CDT in this case is not even self-consistent: it assigns non-zero odds (100%, in fact) to the predictor being imperfect.
It seems more reasonable to say that your choice of one or two boxes causally affects your self-assignment to one of the two groups, winners and losers.
I’m not convinced that this is a fair portrayal of what the proponent of CDT says. That’s not to weigh in on whether they’re right but I don’t think they fail to be self-consistent in the way you have outlined.
The proponent of CDT doesn’t assign non-zero odds to the predictor being imperfect. They just say that it doesn’t matter whether the predictor is perfect: given that the boxes are already filled, it is too late to influence the thing that determines whether you get the $M (your agent type at t=0, rather than your decision at t=1).
The CDT agent will agree that the predictor is perfect but deny that this is relevant: it doesn’t change the fact that Newcomb’s Problem (NP) rewards people based on agent type (at t=0) rather than decision, nor the fact that the decision now can’t causally influence the agent type at t=0.
Whether this is the right question to ask seems to me to be open to debate but I don’t think that the proponent of CDT has an internal inconsistency in their consideration of whether the predictor is perfect.
They just say that it doesn’t matter whether the predictor is perfect: given that the boxes are already filled, it is too late to influence the thing that determines whether you get the $M (your agent type at t=0, rather than your decision at t=1).
That’s where they lose me. By the definition of a perfect predictor, there is no option of “two-box and get $1000 and $1,000,000” in the problem setup, so why would they even consider it?
From their perspective, they don’t need to consider it.
The CDT agent can have a credence of 0 in the proposition that they will get $M + $1000. After all, if they have a credence of 1 that the predictor is perfect and a credence of 1 that they were a two-boxing sort of agent at t=0 then they should have a credence of 0 that they will get $M + $1000. The CDT agent won’t deny this.
They then say, however, that they have a credence of 1 in the world state where there is $0 in the second box. Given this credence, the smart decision is to two-box (and get $1000) rather than one-box (and get $0).
So the short answer is: they don’t even consider this possibility but this doesn’t change the fact that, on their view, the best decision is to two-box.
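The bookkeeping in the last few comments can be made concrete. Here is a minimal Python sketch (mine, not anything a CDT proponent has written) of the expected-value calculation under exactly the credences described: certainty that the opaque box is empty, with payoffs from the standard Newcomb setup.

```python
# Credences over world states at decision time. The boxes are already
# filled, so (on the CDT picture) credences attach to box contents and
# are not updated by the contemplated action.
credence = {"opaque_has_1M": 0.0, "opaque_empty": 1.0}  # two-boxer type at t=0

# payoff[(opaque_box_contents, action)], standard Newcomb payoffs.
payoff = {
    ("opaque_has_1M", "one_box"): 1_000_000,
    ("opaque_has_1M", "two_box"): 1_001_000,
    ("opaque_empty", "one_box"): 0,
    ("opaque_empty", "two_box"): 1_000,
}

def cdt_expected_value(action):
    # CDT: hold the credences over box contents fixed across actions.
    return sum(credence[w] * payoff[(w, action)] for w in credence)

print(cdt_expected_value("one_box"))  # 0.0
print(cdt_expected_value("two_box"))  # 1000.0
```

Given credence 1 that the opaque box is empty, two-boxing comes out $1000 ahead, which is the whole of the CDT agent's argument above.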
I’m not entirely sure what we’re discussing here, but what I’m saying is that the view isn’t internally inconsistent: they don’t have contradictory credences in world states, and they don’t think there is an option of two-boxing and getting $M and $1000 (they assign credence 0 to this possibility). Of course, presuming they have credence 1 that they were a two-boxing type of agent at t=0, they also assign credence 0 to the possibility of one-boxing and getting $M, because they hold that what matters is the decision type at t=0 and not the decision at t=1.
So if you’re saying that their view is internally inconsistent in one of the above ways then one or the other of us is confused. On the other hand, if you’re just saying that this way of thinking seems alien to you then what I’ve said in this comment is pretty much irrelevant...
They then say, however, that they have a credence of 1 in the world state where there is $0 in the second box. Given this credence, the smart decision is to two-box (and get $1000) rather than one-box (and get $0).
Ah, right: they never expect anything to be in the opaque box, so for them taking the opaque box is basically redundant (“might as well, no harm can come from it”). So they correctly assign a probability of zero to the event “I’m a two-boxer and there is $1M to be had”.
However, this is supplemented by “a CDTer must two-box” because “the predictor’s choice has already been made”, as if this choice were independent of what they decide. This strange loop can only be unwound by considering how the predictor might know what they will decide before they themselves think they have decided anything. And that requires taking the outside view and going into the free-will analysis.
Yeah, that’s right. So I think the proponent of CDT can be criticised for all sorts of reasons, but I don’t think they’re (straightforwardly) inconsistent.
As a note, the decision in NP is whether to take both the opaque and the transparent box or just the opaque box. So the CDT agent doesn’t merely think they “may as well” two-box; they think they do strictly better by two-boxing, because it gains them the $1000 in the transparent box.
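The “strictly better” point is just causal dominance, and it can be checked mechanically. A small illustrative sketch (standard Newcomb payoffs assumed):

```python
# For either fixed state of the opaque box, two-boxing pays exactly
# $1000 more than one-boxing: the dominance argument the CDT agent runs.
payoff = {
    ("opaque_has_1M", "one_box"): 1_000_000,
    ("opaque_has_1M", "two_box"): 1_001_000,
    ("opaque_empty", "one_box"): 0,
    ("opaque_empty", "two_box"): 1_000,
}

gains = {
    world: payoff[(world, "two_box")] - payoff[(world, "one_box")]
    for world in ("opaque_has_1M", "opaque_empty")
}
print(gains)  # two-boxing gains $1000 in every world state
```

Since the gain is positive in every world state, holding the box contents fixed, two-boxing dominates; the one-boxer's reply, of course, is that the world states aren't independent of the choice.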
And yes, I agree that considerations of free will are relevant to NP. People have all sorts of opinions about what conclusion we should draw from these considerations and how important they are.
It is, however, quite frustrating to realize, in retrospect, that I had already gone through this chain of reasoning at least once, and then forgot it completely :(
OK, thanks for clearing this CDT self-consistency stuff up for me.
That’s cool, glad I had something useful to say (and it’s nice to know we weren’t just talking at cross purposes but were actually getting somewhere!)