To me, the fact that I have been told to assume that I believe the Predictor seems extremely relevant. If I really could come to believe that, it would likely be the single most important fact I had ever observed, and to say that it would cause a significant update to my beliefs about causality would be an understatement. With strong reason to believe that causality can flow backwards, I would likely take just the one box.
If you tell me that, somehow, I still also believe that causality always flows forward with respect to time, then I must strain to accept the premises (really, nobody has tried to trip the Predictor up by choosing according to a source of quantum randomness?), but in that case I would either take both boxes or choose randomly myself, depending on how certain I felt about causality.
The standard formulation sidesteps that by stipulating that the Predictor treats choosing a mixed strategy as two-boxing.
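Under that stipulation, a quick expected-value sketch shows why randomizing buys you nothing. This uses the standard $1,000 / $1,000,000 payoffs; the 0.99 predictor accuracy is an assumption of mine, since the paradox only says the Predictor is "almost always" right:

```python
# Expected payoffs in Newcomb's problem, using the standard amounts
# (box A: $1,000, always visible; box B: $1,000,000 iff one-boxing was
# predicted) and an assumed predictor accuracy of 0.99 on pure strategies.
A, B = 1_000, 1_000_000
p = 0.99  # assumed accuracy; not fixed by the paradox itself

ev_one_box = p * B             # if predicted correctly, box B is full
ev_two_box = A + (1 - p) * B   # box B is full only if the Predictor erred

# Mixed strategy: the Predictor treats randomizing as two-boxing and
# leaves box B empty, so with probability q you one-box and get nothing,
# and with probability 1 - q you take both boxes and get only A.
def ev_mixed(q):
    return (1 - q) * A

print(ev_one_box, ev_two_box, ev_mixed(0.5))
```

On these assumptions randomizing is strictly dominated: it caps your expected take below $1,000, while one-boxing expects $990,000.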
My initial reaction is to find that aggravating and to try to devise another experiment that would let me poke at the universe by exploiting the Predictor, but it seems likely that any such experiment would be sidestepped by the same tactic. So we can generalize: any experiment involving the Predictor that would yield evidence about the temporal direction of causation will be sidestepped so as to give you no new information.
But intuitively, this condition itself seems to carry new information within the paradox, and I haven't yet wrapped my head around what evidence can be drawn from it.
On another note, even if causality always flows forward, it is possible that humans are insufficiently affected by nondeterministic phenomena to produce significantly nondeterministic behavior, at least on the time scale we're talking about. If so, human reasoning might have approximate t-symmetry over short time scales, and that symmetry could be exploited to “violate causality” with respect to humans without actually violating causality with respect to the universe at large.
That gives me a more general hypothesis, “human reasoning causality can be violated”, for which violation of causality in general would be strong evidence, but non-violation of causality in general would be only weak counter-evidence. And in learning of the Predictor's success, I have observed evidence strongly supporting this hypothesis.
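This asymmetric-evidence structure can be made concrete with a toy Bayesian update. The numbers below are invented for illustration only; the point is that an observation which is unsurprising under the hypothesis but very surprising otherwise produces a large posterior shift even from a small prior:

```python
# Toy Bayes update for H = "human reasoning causality can be violated".
# Every number here is an illustrative assumption, not derived from the problem.
prior_h = 0.01             # assumed small prior on H
p_obs_given_h = 0.9        # the Predictor's record is unsurprising if H holds
p_obs_given_not_h = 0.001  # and very surprising if it does not

posterior_h = (p_obs_given_h * prior_h) / (
    p_obs_given_h * prior_h + p_obs_given_not_h * (1 - prior_h)
)
print(round(posterior_h, 3))
```

With these made-up likelihoods, a 1% prior jumps to roughly 90% after one observation of the Predictor's track record.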
So upon further consideration, I think that one-boxing is probably the way to go regardless, and it must simply be accepted that once you have actually observed the Predictor, you can no longer rely on causal decision theory (CDT) when you know that such an entity might be involved.
The only part of the paradox that still bugs me, then, is the hand-waving in “assume you believe the Predictor's claims”. It is hard for me to imagine evidence that would both clearly distinguish the “the Predictor is honest” hypothesis from the “I'm being cleverly deceived” and “I've gone crazy” hypotheses, and also not directly tip the Predictor's hand as to whether human reasoning causality can be violated.