Just take causal decision theory and then crank it with an account of counterfactuals whereby there is probably a counterfactual dependency between your box-choice and your earlier disposition.
Arntzenius called something like this “counterfactual decision theory” in 2002. The counterfactual decision theorist would assign high probability to the dependency hypotheses “if I were to one-box now then my past disposition was one-boxing” and “if I were to two-box now then my past disposition was two-boxing.” She would assign much lower probability to the dependency hypotheses on which her current action is independent of her past disposition (these would be the cognitive glitch/spasm sorts of cases).
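To make that concrete, here is a minimal sketch of the expected-utility calculation such a counterfactual decision theorist might run. The payoffs, credences, and function names are my own illustrative assumptions, not anything from the exchange itself.

```python
# A minimal sketch (illustrative assumptions only) of the calculation a
# counterfactual decision theorist of the sort described above might run.

MILLION = 1_000_000   # opaque box contents if the predictor foresaw one-boxing
THOUSAND = 1_000      # transparent box contents

def counterfactual_eu(act, p_dependency=0.99, p_full_if_glitch=0.5):
    """Expected utility of `act` under the credences described above:
    p_dependency on the hypotheses 'if I were to X now, my past disposition
    was X', and p_full_if_glitch on the opaque box being full under the
    residual glitch/spasm hypothesis, where act and disposition come apart."""
    if act == "one-box":
        # Dependency hypothesis: one-boxing counterfactually implies a
        # one-boxing disposition, so the opaque box is full.
        return (p_dependency * MILLION
                + (1 - p_dependency) * p_full_if_glitch * MILLION)
    else:
        # Dependency hypothesis: two-boxing counterfactually implies a
        # two-boxing disposition, so the opaque box is empty.
        return (p_dependency * THOUSAND
                + (1 - p_dependency) * (p_full_if_glitch * MILLION + THOUSAND))

for act in ("one-box", "two-box"):
    print(act, counterfactual_eu(act))
```

With most of the credence on the dependency hypotheses, one-boxing maximizes this counterfactual expected utility, which is the sense in which such a causal decision theorist would one-box.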
I agree that this fact [you can’t have a one-boxing disposition and then two-box] could appear as a premise in an argument, together with an alternative proposed decision theory, for the conclusion that two-boxing is a bad idea. If that was the implicit argument, then I now understand the point.
To be clear: I have not been trying to argue that you ought to take two boxes in Newcomb’s problem.
But I thought this fact [you can’t have a one-boxing disposition and then two-box] was supposed to be a part of an argument that did not use a decision theory as a premise. Maybe I was misreading things, but I thought it was supposed to be clear that two-boxers were irrational, and that this should be pretty clear once we point out that you can’t have the one-boxing disposition and then take two boxes.
Not irrational by their own lights. “Take the action such that an unanticipated local miracle causing me to perform that action would be at least as good news as local miracles causing me to perform any of the alternative actions” is a coherent normative principle, even though such miracles do not occur. Other principles with different miracles are coherent too. Arguments for one decision theory or another only make sense for humans because we aren’t clean implementations of any of these theories, and can be swayed by considerations like “agents following this rule regularly get rich.”
I agree with all of this.
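For contrast, the “local miracle” principle in the last reply can be given the same treatment. In this sketch (again with assumed numbers and names), the miracle is taken to leave the past, and so the opaque box’s contents, untouched, so both acts are evaluated against the same credence that the box is full:

```python
# A minimal sketch, assuming the local miracle leaves the predictor's past
# prediction (and so the opaque box's contents) fixed. Numbers and names
# are illustrative assumptions.

MILLION = 1_000_000   # opaque box contents if the predictor foresaw one-boxing
THOUSAND = 1_000      # transparent box contents

def miracle_news_value(act, p_box_full):
    """How good the news would be if an unanticipated local miracle caused
    `act`, holding the credence that the opaque box is full fixed at p_box_full."""
    expected_opaque = p_box_full * MILLION
    return expected_opaque + (THOUSAND if act == "two-box" else 0)

for p in (0.01, 0.5, 0.99):
    print(f"p(full)={p}: one-box={miracle_news_value('one-box', p):,.0f}, "
          f"two-box={miracle_news_value('two-box', p):,.0f}")
```

Two-boxing comes out ahead by exactly the thousand at every credence, so the principle coherently recommends two-boxing; that is the sense in which two-boxers are not irrational by their own lights, even though agents who reason with the dependency hypotheses above regularly walk away richer.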