My claim is purely theoretical: we need to distinguish, conceptually, between desirable dispositions and rational actions. It seems to me that many on LW fail to make this conceptual distinction, which can lead to mistaken (or at least under-argued) theorizing about rationality.
This is because actions only ever arise from dispositions. Yes, given that Omega has predicted you will one-box, it would (as an abstract fact) be to your benefit to two-box; but in order for you to actually two-box, you would have to execute some instruction in your source code, and if that instruction were present, Omega would have read it and thus would not have predicted that you would one-box. Hence only dispositions are of interest.
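To make this concrete, here is a minimal sketch on two simplifying assumptions (not stated anywhere in this thread): that Omega predicts by simulating the agent’s decision function, and that the payoffs are the usual $1,000 in the transparent box and $1,000,000 in the opaque box iff one-boxing is predicted.

```python
# Toy Newcomb setup: the prediction and the action both come from the same
# decision function, i.e. from the agent's disposition. (The payoffs and the
# "predict by simulation" mechanism are illustrative assumptions.)

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def newcomb_payoff(agent):
    prediction = agent()  # Omega "reads the source" by running it
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    action = agent()      # the actual choice is produced by that same code
    return opaque_box if action == "one-box" else opaque_box + 1_000

print(newcomb_payoff(one_boxer))  # 1000000
print(newcomb_payoff(two_boxer))  # 1000
```

There is no way, in this setup, to be the agent whose code gets predicted as one-boxing and then return "two-box": that would require different code, and hence a different prediction.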
Is this the argument?

1. It is impossible to have the one-boxing disposition and then two-box.
2. Ought implies can.
3. Therefore, it is false that someone with a one-boxing disposition ought to two-box.

Or are you agreeing that you ought to two-box, but claiming that this fact isn’t interesting because of premise 1?
At any rate, it seems like a bad argument, since analogous arguments will entail that whenever you have some decisive disposition, it is false that you ought to act differently. (It will entail, for instance, NOT[people who have a decisive loss aversion disposition should follow expected utility theory].)
> Or are you agreeing that you ought to two-box, but claiming that this fact isn’t interesting because of premise 1?

Yes, if “ought” merely means the outcome would be better, and doesn’t imply “can”.
> At any rate, it seems like a bad argument, since analogous arguments will entail that whenever you have some decisive disposition, it is false that you ought to act differently.

As far as I can tell, it would only have that implication in situations where an outcome depended directly on one’s disposition (as opposed to one’s actions).
> As far as I can tell, it would only have that implication in situations where an outcome depended directly on one’s disposition (as opposed to one’s actions).

I don’t think so:
1. John has the loss-aversion disposition.
2. It is impossible to have the loss-aversion disposition and maximize expected utility in case C.
3. Ought implies can.
4. Therefore, it is false that John ought to maximize expected utility in case C.

Or, for Newcomb:

1. It is impossible for someone with the two-boxing disposition to one-box.
2. Ought implies can.
3. Therefore, it is false that someone with the two-boxing disposition ought to one-box.
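
Nothing above pins down what “case C” is, so here is one purely hypothetical case of the relevant kind: a gamble with positive expected value that a sufficiently loss-averse valuation turns down. The 50/50 odds, the stakes, the loss-weighting factor of 2, and the treatment of utility as linear in money are all illustrative assumptions.

```python
# Hypothetical "case C": accept or decline a 50/50 gamble (win $150 / lose $100).
p_win, gain, loss = 0.5, 150.0, 100.0
loss_weight = 2.0  # the loss-averse disposition weighs losses twice as heavily

expected_value = p_win * gain - (1 - p_win) * loss                   # +25.0: the maximizer accepts
loss_averse_value = p_win * gain - (1 - p_win) * loss_weight * loss  # -25.0: the loss-averse agent declines

print(expected_value, loss_averse_value)
```

An agent who decides by the second formula cannot, in that case, also take the expected-value-maximizing action, which is all premise 2 needs.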
Either “ought” applies to dispositions or to actions, but one mustn’t equivocate. If “what John ought to do” means “the disposition John should have”, then perhaps John ought to maximize expected utility even if he’s not currently so disposed. If the outcomes depend on John’s disposition only indirectly via his actions, and his current disposition will lead to a suboptimal action, then we may very well say that John “ought” to do something different, meaning that he should have a different disposition.
If, however, John is involved in a Newcomblike problem where there is a causal arrow leading directly from his disposition to the outcome, and his current disposition is optimal with respect to outcome, then one cannot say that he “ought” to do differently, on this (dispositional) usage of “ought”.
Everyone agrees about what the best disposition to have is. The disagreement is about what to do. I have uniformly meant “ought” in the action sense, not the dispositional sense. (FYI: this is always the sense in which philosophers (incl. Richard) mean “ought”, unless otherwise specified.)
BTW: I still don’t understand the relevance of the fact that it is impossible for people with one-boxing dispositions to two-box. If you don’t like the arguments that I formalized for you, could you tell me what other premises you are using to reach your conclusion?
> The disagreement is about what to do. I have uniformly meant “ought” in the action sense, not the dispositional sense. (FYI: this is always the sense in which philosophers (incl. Richard) mean “ought”, unless otherwise specified.)

That sense is entirely uninteresting, as I explained in my first comment in this thread. It’s the sense in which one “ought” to two-box after having been predicted by Omega to one-box—a stipulated impossibility.
Philosophers who, after having considered the distinction, remain concerned with the “action” sense, would tend to be—shall we say—vehemently suspected of non-reductionist thinking; of forgetting that actions are completely determined by dispositions (i.e. the algorithms running in the mind of the agent).
Having said that, if one does use “ought” in the action sense, then there should be no difficulty in saying that one “ought” to two-box in the situation where Omega has predicted you will one-box. That’s just a restatement of the assumption that the outcome of (one-box predicted, two-box) is higher in the preference ordering than that of (one-box predicted, one-box).
Normally, the two meanings of “ought” coincide, because outcomes normally depend on actions that happen to be determined by dispositions, not directly on dispositions themselves. Hence it’s easy to be deceived into thinking that the action sense is the appropriate sense of “ought”. But this breaks down in situations of the Newcomb type. There, the dispositional sense is clearly the right one, because that’s the sense in which you ought to one-box; since the dispositional sense also gives the same answers as the action sense for “normal” situations, we may as well say that the dispositional sense is what we mean by “ought” in general.
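For concreteness, and again assuming the standard payoffs rather than anything stated in this thread, here are all four (prediction, action) outcomes. Holding the prediction fixed, two-boxing is always $1,000 better, which is the action-sense reading; the disposition that gets predicted as one-boxing is the one that ends up with the million, which is the dispositional-sense reading.

```python
# All four (prediction, action) outcomes under the standard Newcomb payoffs.
payoffs = {
    ("one-box predicted", "one-box"): 1_000_000,
    ("one-box predicted", "two-box"): 1_001_000,
    ("two-box predicted", "one-box"): 0,
    ("two-box predicted", "two-box"): 1_000,
}
for (prediction, action), value in payoffs.items():
    print(f"{prediction}, {action}: ${value:,}")
```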
So, you’re really interested in this question: what is the best decision algorithm? And then you’re interested, in a subsidiary way, in what you ought to do. You think the “action” sense is silly, since you can’t run one algorithm and make some other choice.
Your answer to my objection involving the parody argument is that you ought to do something else (not go with loss aversion) because there is some better decision algorithm (that you could, in some sense of “could”, use?) that tells you to do something else.
What do you do with cases where it is impossible for you to run a different algorithm? You can’t exactly use your algorithm to switch to some other algorithm, unless your original algorithm told you to do that all along, so these cases won’t be that rare. How do you avoid the result that you should just always use whatever algorithm you started with? However you answer this objection, why can’t two-boxers who care about the “action sense” of ought answer your objection analogously?
Just take causal decision theory and then crank it with an account of counterfactuals whereby there is probably a counterfactual dependency between your box-choice and your early disposition.
Arntzenius called something like this “counterfactual decision theory” in 2002. The counterfactual decision theorist would assign high probability to the dependency hypotheses “if I were to one-box now then my past disposition was one-boxing” and “if I were to two-box now then my past disposition was two-boxing.” She would assign much lower probability to the dependency hypotheses on which her current action is independent of her past disposition (these would be the cognitive glitch/spasm sorts of cases).
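As a rough sketch of the expected-utility comparison such a counterfactual decision theorist would make: the 0.99 weight on the disposition-tracking dependency hypotheses, the 50/50 chance that the box was filled under the “glitch” hypotheses, and the standard payoffs are all illustrative assumptions here, not anything from Arntzenius.

```python
# Expected utility taken over the two kinds of dependency hypotheses above.
p_dep = 0.99     # weight on "if I were to X now, my past disposition was X" (assumed)
q_filled = 0.5   # chance the opaque box was filled, under the glitch hypotheses (assumed)

def expected_utility(action):
    if action == "one-box":
        dep_value = 1_000_000                # disposition was one-boxing, so the box is full
        glitch_value = q_filled * 1_000_000  # prediction independent of this choice
    else:
        dep_value = 1_000                    # disposition was two-boxing, so the box is empty
        glitch_value = q_filled * 1_000_000 + 1_000
    return p_dep * dep_value + (1 - p_dep) * glitch_value

print(expected_utility("one-box"), expected_utility("two-box"))  # 995000.0 vs 6000.0
```

With the dependency hypotheses weighted this heavily, one-boxing comes out ahead on this evaluation.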
I agree that this fact [you can’t have a one-boxing disposition and then two-box] could appear as a premise in an argument, together with an alternative proposed decision theory, for the conclusion that two-boxing is a bad idea. If that was the implicit argument, then I now understand the point.
To be clear: I have not been trying to argue that you ought to take two boxes in Newcomb’s problem.
But I thought this fact [you can’t have a one-boxing disposition and then two-box] was supposed to be a part of an argument that did not use a decision theory as a premise. Maybe I was misreading things, but I thought it was supposed to be clear that two-boxers were irrational, and that this should be pretty clear once we point out that you can’t have the one-boxing disposition and then take two boxes.
Not irrational by their own lights. “Take the action such that an unanticipated local miracle causing me to perform that action would be at least as good news as local miracles causing me to perform any of the alternative actions” is a coherent normative principle, even though such miracles do not occur. Other principles with different miracles are coherent too. Arguments for one decision theory or another only make sense for humans because we aren’t clean implementations of any of these theories, and can be swayed by considerations like “agents following this rule regularly get rich.”
I agree with all of this.