Yes, I’m willing to concede the possibility that I could be using words in unclear ways and that may lead to problems.
I am interested, though, in how you define a rational decision, if not in terms of which one leads to the better outcome?
Maybe the focus shouldn’t be on the decision (or action) that leads to the best outcome, but on the decision procedure (or theory or algorithm) that leads to the best outcome.
If the outcome is entirely independent of the procedure, the difference is unimportant, so you can speak of “rational decision” and “rational decision procedure” interchangeably. But in Newcomb’s problem, that’s not the case.
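To make this concrete, here is a minimal sketch (my construction, not from the thread; it assumes the standard Newcomb payoffs of $1,000,000 and $1,000 and a perfect predictor). Under perfect prediction, the payoff is a function of the agent’s decision procedure, because the prediction tracks whatever that procedure outputs:

```python
# Sketch: under a perfect predictor, the payoff depends on the agent's
# decision procedure, since Omega loads the boxes based on what that
# procedure will output.

def payoff(procedure):
    choice = procedure()               # the action the procedure yields
    prediction = choice                # perfect predictor: prediction == choice
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000
    return opaque_box if choice == "one-box" else opaque_box + transparent_box

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))   # 1000000
print(payoff(two_boxer))   # 1000
```

On this toy model the action can’t even be evaluated apart from the procedure: which procedure you run fixes how the boxes were loaded.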
Yes, that’s my basic view.
The difficulty in part is that people seem to have different ideas of what it means to be rational.
That sounds fine to me. (Well, technically I think it’s a primitive concept, but that’s not important here.) It’s applying the term ‘rational’ to decision theories that I found ambiguous in the way noted.
Which means that one-boxing is the better choice because it leads to the better outcome. I say that slightly tongue-in-cheek because I know you know that but, at the same time, I don’t really understand the position that says:
1.) The rational decision is the one that leads to the better outcome.
2.) In Newcomb’s Problem, one-boxing would actually lead to the better outcome.
3.) But the principle of strong dominance suggests that this shouldn’t be the case.
I don’t understand how 3, a statement about how things should be, outweighs 2, a statement about how things are.
It seems like the sensible thing to do is to say: due to point 2, one-boxing does lead to the better outcome, and due to point 1, that means one-boxing is rational. A corollary is that strong dominance must not be a rational way of making decisions (in all cases).
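As a worked illustration of where dominance reasoning and the actual outcomes come apart (my numbers, assuming the standard payoffs): two-boxing dominates one-boxing in every row of the payoff matrix, yet a perfect predictor confines play to the diagonal, where one-boxing wins.

```python
# Standard Newcomb payoff matrix, indexed by (prediction, actual choice).
payoffs = {
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

# Strong dominance: holding the prediction fixed, two-boxing always pays more.
for prediction in ("one-box", "two-box"):
    assert payoffs[(prediction, "two-box")] > payoffs[(prediction, "one-box")]

# But a perfect predictor makes only the diagonal cells reachable,
# and on the diagonal one-boxing pays more.
assert payoffs[("one-box", "one-box")] > payoffs[("two-box", "two-box")]
```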
No, the choice of one-boxing doesn’t lead to the better outcome. It’s one’s prior possession of the disposition to one-box that leads to the good outcome. It would be best of all to have the general one-boxing disposition and yet (somehow, perhaps flukily) manage to choose both boxes.
(Compare Parfit’s case. Ignoring threats doesn’t lead to better outcomes. It’s merely the possession of the disposition that does so.)
Okay, so your dispositions are basically the counterfactual “If A occurred then I would do B” and your choice, C, is what you actually do when A occurs.
In the perfect-predictor version of Newcomb’s, Omega perfectly predicts the choice you make, not your disposition. It may generate its own counterfactual for this, “If A occurs then this person will do B,” but that’s not to say it cares about your disposition just because the two counterfactuals look similar. Because Omega’s prediction of C is perfect, if a stray bolt of lightning hits you and switches your decision, Omega will have taken that lightning into account. You will always be sad if it changes your choice, C, to two-boxing, because Omega perfectly predicts C and so will punish you.
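Here is a small simulation sketch of that point (my construction; the fluke probability is illustrative). Omega predicts the realized choice, disposition plus fluke, so a disposed one-boxer who flukily two-boxes still loses:

```python
import random

def realized_choice(disposition, fluke_prob=0.01):
    # The actual choice C: the disposition's output, possibly flipped
    # by a fluke (e.g. a stray bolt of lightning).
    if random.random() < fluke_prob:
        return "two-box" if disposition == "one-box" else "one-box"
    return disposition

def play(disposition):
    choice = realized_choice(disposition)
    prediction = choice                # Omega predicts C itself, fluke included
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if choice == "one-box" else opaque + 1_000

random.seed(0)
results = [play("one-box") for _ in range(10_000)]
print(sorted(set(results)))            # typically [1000, 1000000]: fluke games pay only 1,000
```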
Conversely, the rational disposition in Newcomb’s isn’t to one-box. Your disposition has no bearing on Newcomb’s except insofar as it is related to C (if you always act in line with your dispositions, for example, then your dispositions matter). It isn’t a disposition to one-box that leads to Omega loading the boxes a certain way; it’s the choice to one-box. So your disposition, by itself, neither helps nor hinders you.
As such, your choice of whether to one- or two-box is what is relevant, and hence the choice of one-boxing is what leads to the better outcome. Your disposition to one-box plays no role whatsoever. Hence, on the utility-maximising definition of rationality, the rational choice is to one-box, because it’s this choice itself that leads to the boxes being loaded in a certain way (note on causality at the bottom of the post).
So to restate it in the terms used in the above comments: prior possession of the disposition to one-box is irrelevant to Newcomb’s, because Omega is interested in your choices, not your dispositions to choices, and is perfect at predicting your choices, not your dispositions. Flukily choosing two boxes would be bad, because Omega would have perfectly predicted the fluky choice and so you would end up losing.
It seems like dispositions distract from the issue here, because as humans we think, “Omega must use dispositions to predict the choice.” But that need not be true. In fact, if dispositions and choices can differ (by a fluke, for example), then it cannot be true: Omega cannot be relying on dispositions alone to predict choices. It simply predicts the choice using whatever means work.
If you use “disposition” simply to mean the decision you would make before you actually make the decision, then you’re denying one of the parts of the problem itself in order to solve it: you’re denying that Omega is a perfect predictor of choices, and suggesting he’s only able to predict how your choice stands at a certain time, not the choice you actually make.
This can be extended to the imperfect predictor version of Newcomb’s easily enough.
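For instance (my worked numbers, assuming the standard payoffs and a predictor that is right with probability p whichever choice you make), the expected values can be compared directly, and one-boxing comes out ahead whenever p exceeds 0.5005:

```python
# Expected payoffs against an imperfect predictor of accuracy p
# (assumed equal for both choices), standard Newcomb payoffs.

def ev_one_box(p):
    return p * 1_000_000                  # opaque box full iff predicted one-boxing

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000    # opaque box full iff prediction was wrong

# One-boxing wins when p * 1e6 > (1 - p) * 1e6 + 1e3,
# i.e. when p > 1_001_000 / 2_000_000 = 0.5005.
for p in (0.5, 0.5005, 0.51, 0.9, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
```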
I’ll grant you it leaves open the need for some causal explanation, but we can’t simply retreat from difficult questions by suggesting that they’re not really questions. That is, we can’t avoid needing to account for causality in Newcomb’s by simply suggesting that Omega predicts by reading your dispositions, rather than predicting C using whatever means get it right (e.g. taking your dispositions and then factoring in freak lightning strikes).
So far, everything I’ve said has been weakly defended, so I’m interested to see whether this is any stronger or whether I’ll be spending some more time thinking tomorrow.
We’re going in circles a little, aren’t we (my fault, I’ll grant). Okay, so there are two questions:
1.) Is it a rational choice to one-box? Answer: No.
2.) Is it rational to have a disposition to one-box? Answer: Yes.
As mentioned earlier, I think I’m more interested in creating a decision theory that wins than one that’s rational. But let’s say you are interested in a decision theory that captures rationality: it still seems arbitrary to say that the rationality of the choice is more important than the rationality of the disposition. Yes, you could argue that choice is the domain of study for decision theory, but the number of decision theorists that would one-box (outside of LW) suggests that other people have a different idea of what decision theory should be.
I guess my question is this: is the whole debate over one- or two-boxing on Newcomb’s just a disagreement over which question decision theory should be studying, or are there people who use “choice” to mean the same thing that you do and still think one-boxing is the rational choice?
I don’t understand the distinction between choosing to one-box and being the sort of person who chooses to one-box. Can you formalize that difference?
The latter, I think. (Otherwise, one-boxers would not really be disagreeing with two-boxers. We two-boxers already granted that one-boxing is the better disposition. So if they’re merely aiming to construct a theory of desirable dispositions, rather than rational choice, then their claims would be utterly uncontroversial.)
I thought that debate was about free will.