Everyone agrees about what the best disposition to have is. The disagreement is about what to do. I have uniformly meant “ought” in the action sense, not the dispositional sense. (FYI: this is always the sense in which philosophers (incl. Richard) mean “ought”, unless otherwise specified.)
BTW: I still don’t understand the relevance of the fact that it is impossible for people with one-boxing dispositions to two-box. If you don’t like the arguments that I formalized for you, could you tell me what other premises you are using to reach your conclusion?
That sense is entirely uninteresting, as I explained in my first comment in this thread. It’s the sense in which one “ought” to two-box after having been predicted by Omega to one-box—a stipulated impossibility.
Philosophers who, after having considered the distinction, remain concerned with the “action” sense would tend to be—shall we say—vehemently suspected of non-reductionist thinking, of forgetting that actions are completely determined by dispositions (i.e. the algorithms running in the mind of the agent).
Having said that, if one does use “ought” in the action sense, then there should be no difficulty in saying that one “ought” to two-box in the situation where Omega has predicted you will one-box. That’s just a restatement of the assumption that the outcome of (one-box predicted, two-box) is higher in the preference ordering than that of (one-box predicted, one-box).
Normally, the two meanings of “ought” coincide, because outcomes normally depend on actions that happen to be determined by dispositions, not directly on dispositions themselves. Hence it’s easy to be deceived into thinking that the action sense is the appropriate sense of “ought”. But this breaks down in situations of the Newcomb type. There, the dispositional sense is clearly the right one, because that’s the sense in which you ought to one-box; since the dispositional sense also gives the same answers as the action sense for “normal” situations, we may as well say that the dispositional sense is what we mean by “ought” in general.
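To make the contrast concrete, here is a minimal sketch in Python. It assumes the standard Newcomb payoffs, which the thread never spells out: $1,000,000 in the opaque box iff one-boxing was predicted, plus $1,000 in the transparent box for two-boxing. Holding the prediction fixed, two-boxing comes out ahead either way; that is the action sense. Letting the prediction track the disposition, the one-boxing disposition comes out ahead; that is the dispositional sense.

```python
# Minimal sketch of the two senses of "ought" in Newcomb's problem.
# Assumed payoffs (standard in the literature, not stated in this thread):
# $1,000,000 in the opaque box iff one-boxing was predicted, and $1,000 in
# the transparent box whenever the agent two-boxes.

def payoff(predicted, action):
    opaque = 1_000_000 if predicted == "one-box" else 0
    transparent = 1_000 if action == "two-box" else 0
    return opaque + transparent

# "Action" sense: hold the prediction fixed and compare actions.
for predicted in ("one-box", "two-box"):
    best = max(("one-box", "two-box"), key=lambda a: payoff(predicted, a))
    print(f"prediction fixed at {predicted}: best action is {best}")
# -> two-boxing dominates under either fixed prediction.

# Dispositional sense: a perfectly accurate Omega means the prediction just
# is the disposition, so compare dispositions (action = prediction).
for disposition in ("one-box", "two-box"):
    print(f"disposition {disposition}: outcome {payoff(disposition, disposition)}")
# -> the one-boxing disposition nets $1,000,000; the two-boxing one nets $1,000.
```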
So, you’re really interested in this question: what is the best decision algorithm? And then you’re interested, in a subsidiary way, in what you ought to do. You think the “action” sense is silly, since you can’t run one algorithm and make some other choice.
Your answer to my objection involving the parody argument is that you ought to do something else (not go with loss aversion) because there is some better decision algorithm (that you could, in some sense of “could”, use?) that tells you to do something else.
What do you do with cases where it is impossible for you to run a different algorithm? You can’t exactly use your algorithm to switch to some other algorithm, unless your original algorithm told you to do that all along, so these cases won’t be that rare. How do you avoid the result that you should just always use whatever algorithm you started with? However you answer this objection, why can’t two-boxers who care about the “action sense” of ought answer your objection analogously?
Just take causal decision theory and then crank it with an account of counterfactuals whereby there is probably a counterfactual dependency between your box-choice and your earlier disposition.
Arntzenius called something like this “counterfactual decision theory” in 2002. The counterfactual decision theorist would assign high probability to the dependency hypotheses “if I were to one-box now then my past disposition was one-boxing” and “if I were to two-box now then my past disposition was two-boxing.” She would assign much lower probability to the dependency hypotheses on which her current action is independent of her past disposition (these would be the cognitive glitch/spasm sorts of cases).
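A rough sketch of how that calculation might go: the payoffs and the probabilities assigned to the dependency hypotheses below are illustrative assumptions of mine, not anything from Arntzenius or this thread. The point is only that once the covariation hypothesis gets most of the probability mass, the causal expected utility of one-boxing comes out on top.

```python
# Sketch of CDT evaluated over counterfactual dependency hypotheses.
# Probabilities and payoffs are illustrative assumptions, not from the thread.

def payoff(predicted, action):
    opaque = 1_000_000 if predicted == "one-box" else 0
    transparent = 1_000 if action == "two-box" else 0
    return opaque + transparent

# Each dependency hypothesis maps the present action to the prediction that
# would (counterfactually) have been made if that action were performed.
hypotheses = [
    (0.99,  lambda action: action),      # choice covaries with past disposition
    (0.005, lambda action: "one-box"),   # glitch case: prediction fixed at one-box
    (0.005, lambda action: "two-box"),   # glitch case: prediction fixed at two-box
]

def causal_expected_utility(action):
    return sum(p * payoff(dep(action), action) for p, dep in hypotheses)

for action in ("one-box", "two-box"):
    print(action, causal_expected_utility(action))
# -> one-boxing: 995,000; two-boxing: 6,000, so this CDT variant one-boxes.
```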
I agree that this fact [you can’t have a one-boxing disposition and then two-box] could appear as a premise in an argument, together with an alternative proposed decision theory, for the conclusion that one-boxing is a bad idea. If that was the implicit argument, then I now understand the point.
To be clear: I have not been trying to argue that you ought to take two boxes in Newcomb’s problem.
But I thought this fact [you can’t have a one-boxing disposition and then two-box] was supposed to be part of an argument that did not use a decision theory as a premise. Maybe I was misreading things, but I thought the claim was supposed to be that two-boxers are irrational, and that this should be pretty clear once we point out that you can’t have the one-boxing disposition and then take two boxes.
Not irrational by their own lights. “Take the action such that an unanticipated local miracle causing me to perform that action would be at least as good news as local miracles causing me to perform any of the alternative actions” is a coherent normative principle, even though such miracles do not occur. Other principles with different miracles are coherent too. Arguments for one decision theory or another only make sense for humans because we aren’t clean implementations of any of these theories, and can be swayed by considerations like “agents following this rule regularly get rich.”
I agree with all of this.
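For what it’s worth, the “agents following this rule regularly get rich” consideration mentioned above is easy to make concrete. Here is a small simulation sketch; the 90% predictor accuracy, the payoffs, and the trial count are illustrative assumptions rather than anything stipulated in the thread.

```python
# Simulation sketch: average winnings of agents who reliably one-box vs.
# reliably two-box against an imperfect predictor. Accuracy, payoffs, and
# trial count are illustrative assumptions.
import random

def average_winnings(disposition, accuracy=0.9, trials=10_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # The predictor usually, but not always, predicts the agent's actual choice.
        other = "two-box" if disposition == "one-box" else "one-box"
        predicted = disposition if rng.random() < accuracy else other
        opaque = 1_000_000 if predicted == "one-box" else 0
        transparent = 1_000 if disposition == "two-box" else 0
        total += opaque + transparent
    return total / trials

for disposition in ("one-box", "two-box"):
    print(disposition, average_winnings(disposition))
# -> one-boxers average roughly $900,000 per round; two-boxers roughly $101,000.
```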