I don’t think that’s quite right. At no point is the CDT agent ignoring any evidence, or failing to consider the implications of a hypothetical choice to one-box. It knows that a choice to one-box would provide strong evidence that box B contains the million; it just doesn’t care, because if that’s the case then two-boxing still nets it an extra $1k. It doesn’t merely prefer two-boxing given its current beliefs about the state of the boxes, it prefers two-boxing regardless of its current beliefs about the state of the boxes. (Except, of course, for the belief that their contents will not change.)
It sounds like you’re having CDT think “If I one-box, the first box is full, so two-boxing would have been better.” I don’t think applying that reasoning consistently to the adversarial offer fixes the problem. CDT thinks “if I buy the first box, it only has a 25% chance of paying out, so it would be better for me to buy the second box.” It reasons the same way about the second box, and gets into an infinite loop in which it believes that each box is better than the other. Nothing ever makes it realize that it shouldn’t buy either box.
This is similar to the tickle defense version of CDT discussed here, which doesn’t make any defined decision in Death in Damascus.
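To spell out the loop I’m describing, here’s a rough sketch in Python. The specific stakes are my own filler (a $1 price and a $3 prize, with a box the agent currently intends to buy paying out 25% of the time and the other box 75% of the time); only the 25% figure comes from the scenario above. The point is just that the deliberation flips forever and never settles on “buy neither,” which is the actual best option.

```python
# A sketch of the deliberation loop described above. Assumed numbers: each box
# costs $1 and contains $3 if it pays out; a box the agent currently intends to
# buy pays out with probability 0.25, the other box with probability 0.75.
# Buying nothing is worth $0.

PRICE, PRIZE = 1.0, 3.0
P_PAYOUT_IF_INTENDED = 0.25   # chance "my" box pays out, given I intend to buy it
P_PAYOUT_IF_OTHER = 0.75      # chance the box I don't intend to buy pays out

def conditional_ev(box, intended):
    """Expected value of buying `box`, conditioning on currently intending `intended`."""
    p = P_PAYOUT_IF_INTENDED if box == intended else P_PAYOUT_IF_OTHER
    return p * PRIZE - PRICE

intention = "box 1"
for step in range(6):
    other = "box 2" if intention == "box 1" else "box 1"
    ev_mine = conditional_ev(intention, intention)   # always -0.25
    ev_other = conditional_ev(other, intention)      # always +1.25
    print(f"step {step}: intending {intention}: EV(mine)={ev_mine:+.2f}, EV(other)={ev_other:+.2f} -> switch")
    intention = other
# The loop never terminates in "buy neither" (EV $0), the best of the three options.
```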
My model of CDT in the Newcomb problem is that the CDT agent:
is aware that if it one-boxes, it will very likely make $1m, while if it two-boxes, it will very likely make only $1k;
but, when deciding what to do, only cares about the causal effect of each possible choice (and not the evidence it would provide about things that have happened in the past and are therefore, barring retrocausality, now out of the agent’s control).
So, at the moment of decision, it considers the two possible states of the world it could be in (boxes contain $1m and $1k; boxes contain $0 and $1k), sees that two-boxing gets it an extra $1k in both scenarios, and therefore chooses to two-box.
(Before the prediction is made, the CDT agent will, if it can, make a binding precommitment to one-box. But if, after the prediction has been made and the money is in the boxes, it is capable of two-boxing, it will two-box.)
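To make the two calculations concrete, here’s a minimal sketch. The 99% predictor accuracy is my own assumption (the setup above only says “very likely”); the payoffs are the standard $1k/$1m ones. It shows the split I’m describing: two-boxing is exactly $1k better in each fixed state of the boxes, even though conditioning the prediction on the choice makes one-boxing look far better.

```python
# Causal (dominance) vs evidential bookkeeping for Newcomb's problem.
# Box A always holds $1k; box B holds $1m iff one-boxing was predicted.

payoff = {  # (action, predicted_action) -> dollars received
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 0,
    ("two-box", "one-box"): 1_001_000,
    ("two-box", "two-box"): 1_000,
}

# Causal/dominance view: compare actions state by state (prediction already fixed).
for predicted in ("one-box", "two-box"):
    gain = payoff[("two-box", predicted)] - payoff[("one-box", predicted)]
    print(f"predicted {predicted}: two-boxing gains {gain:+,} over one-boxing")

# Evidential view: condition the prediction on the choice (assumed 99% accuracy).
ACCURACY = 0.99
for action in ("one-box", "two-box"):
    wrong_pred = "two-box" if action == "one-box" else "one-box"
    ev = ACCURACY * payoff[(action, action)] + (1 - ACCURACY) * payoff[(action, wrong_pred)]
    print(f"{action}: evidential EV = ${ev:,.0f}")
```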
I don’t have its decision process running along these lines:
“I’m going to one-box, therefore the boxes probably contain $1m and $1k, therefore one-boxing is worth ~$1m and two-boxing is worth ~$1.001m, therefore two-boxing is better, therefore I’m going to two-box, therefore the boxes probably contain $0 and $1k, therefore one-boxing is worth ~$0 and two-boxing is worth ~$1k, therefore two-boxing is better, therefore I’m going to two-box.”
Which would, as you point out, translate to this loop in your adversarial scenario:
“I’m going to choose A, therefore the predictor probably predicted A, therefore B is probably the winning choice, therefore I’m going to choose B, therefore the predictor probably predicted B, therefore A is probably the winning choice, [repeat until meltdown]”
My model of CDT in your Aaronson oracle scenario, with the stipulation that the player is helpless against an Aaronson oracle, is that the CDT agent:
is aware that on each play, if it chooses A, it is likely to lose money, while if it chooses B, it is (as far as it knows) equally likely to lose money;
therefore, if it can choose whether to play this game or not, will choose not to play.
If it’s forced to play, then, at the moment of decision, it considers the two possible states of the world it could be in (oracle predicted A; oracle predicted B). It sees that in the first case B is the profitable choice and in the second case A is the profitable choice, so—unlike in the Newcomb problem—there’s no dominance argument available this time.
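Here’s a quick sketch of that dominance check, with assumed stakes of plus or minus $1 per round (lose $1 if the oracle called your choice, win $1 otherwise), which aren’t specified above:

```python
# Check for a dominant choice against the oracle's two possible predictions.
# Assumed stakes: lose $1 if predicted correctly, win $1 otherwise.

payoff = {  # (player_choice, oracle_prediction) -> dollars
    ("A", "A"): -1, ("A", "B"): +1,
    ("B", "A"): +1, ("B", "B"): -1,
}

def dominates(x, y):
    """True if choice x is at least as good as y in every state and better in some."""
    diffs = [payoff[(x, s)] - payoff[(y, s)] for s in ("A", "B")]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

print(dominates("A", "B"), dominates("B", "A"))  # False False: no dominant choice,
# unlike Newcomb's problem, where two-boxing is better in both states.
```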
This is where things potentially get tricky, and some versions of CDT could get themselves into trouble in the way you described. But I don’t think anything I’ve said above, either about the CDT approach to Newcomb’s problem or the CDT decision not to play your game, commits CDT in general to any principles that will cause it to fail here.
How to play depends on the precise details of the scenario. If we were facing a literal Aaronson oracle, the correct decision procedure would be:
If you know a strategy that beats an Aaronson oracle, play that.
Else if you can randomise your choice (e.g. flip a coin), do that.
Else just try your best to randomise your choice, taking into account the ways that human attempts to simulate randomness tend to fail.
I don’t think any of that requires us to adopt a non-causal decision theory.
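As a sanity check on the “flip a coin” branch, here’s a toy sketch. The predictor below is a much-simplified stand-in for an Aaronson oracle (it only tracks 2-move histories), and the 80% bias of the non-random player is an arbitrary choice of mine; the point is just that genuine randomisation holds the predictor to roughly 50% accuracy, while a biased player gets exploited.

```python
# Toy pattern-tracking predictor vs. a biased player and a coin-flipping player.
# Moves 0/1 stand for choices A/B.

import random
from collections import defaultdict

def predictor_accuracy(player, rounds=20_000, seed=0):
    rng = random.Random(seed)
    counts = defaultdict(lambda: [0, 0])  # last-2-moves history -> counts of next move
    history, correct = (0, 0), 0
    for _ in range(rounds):
        c = counts[history]
        prediction = 0 if c[0] >= c[1] else 1  # predict the historically likelier follow-up
        move = player(rng)
        correct += (prediction == move)
        c[move] += 1
        history = (history[1], move)
    return correct / rounds

biased = lambda rng: 1 if rng.random() < 0.8 else 0  # heavily biased toward B
coin_flip = lambda rng: rng.randrange(2)             # the "flip a coin" branch

print(f"vs biased player:    {predictor_accuracy(biased):.1%}")     # well above 50%
print(f"vs coin-flip player: {predictor_accuracy(coin_flip):.1%}")  # roughly 50%
```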
In the version of your scenario where the predictor is omniscient and the universe is 100% deterministic (as in the version of Newcomb’s problem where the predictor isn’t just extremely good at predicting but guaranteed to be infallible), I don’t think CDT has much to say. In my view, CDT represents rational decision-making under the assumption of libertarian-style free will; it models a choice as a causal intervention on the world, rather than as just another link in the chain of causes and effects.
“is aware that on each play, if it chooses A, it is likely to lose money, while if it chooses B, it is (as far as it knows) equally likely to lose money;”
Isn’t that conditioning on its future choice, which CDT doesn’t do?
green_leaf, please stop interacting with my posts if you’re not willing to actually engage. Your ‘I checked, it’s false’ stamp is, again, inaccurate. The statement “if box B contains the million, then two-boxing nets an extra $1k” is true. Do you actually disagree with this?