Hi Adam, can I ask for a little more clarification here? You write:
My argument is basically that Newcomb’s Problem shows that strong dominance is an irrational way to make decisions, because you do not in fact benefit regardless of circumstances by following strong dominance.
Newcomb’s Problem is a case where Omega punishes those who are disposed to follow strong dominance reasoning. But how, exactly, does it follow from this that dominance reasoning isn’t rational? It may just be a case where Omega punishes those who are disposed to reason rationally. (If dominance reasoning is indeed rational, then this is the right way to describe the case.)
Edit: Hang on, let me try that again before you respond.
I suppose it depends on what you mean by rationality, but it seems to me that the same argument often used to make people favour strong dominance (regardless of the world state, strong dominance leads to better outcomes) can actually be used to argue that it’s not a very good decision procedure (because there are world states where using this decision procedure does not lead to a better outcome), at least as long as there are decision theories that do lead to better outcomes in general: regardless of the world state, these decision theories lead to better outcomes than other decision theories (or, on the weaker but more realistic claim, they lead to sensible outcomes in more world states).
Just as the rationality of a strong dominance decision is justified by it leading to better outcomes than other decisions, the rationality of a decision theory could be justified by whether it leads to better outcomes than other decision theories.
If that’s not what you mean by rationality, that’s fine, but then what establishes strong dominance as a rational way of acting, and hence what makes two-boxing on Newcomb’s rational? I’m not saying there’s no answer to that, but I am saying that I will struggle to respond to your question without knowing how you think about rationality in that situation.
I’m confident you know more about this topic than I do, so I will try to understand your points, but so far I haven’t seen anything which would:
a.) establish a decision based on strong dominance at an individual point in time as rational
without:
b.) establishing strong dominance as an irrational decision procedure, by a similar argument applied to decision procedures rather than individual decisions.
I’d be interested to know whether you think this is flawed, as I’d be happy either to change my mind or to learn to explain my reasoning better, depending on what the flaw was.
Rationality and winning may not be the same thing, but I do think they’re linked. If we’re asked to judge whether the principle of strong dominance is rational, we say yes because it always leads to the best outcome (it leads to “winning”). If we were asked to choose between a 10% chance of winning $100 and a 20% chance, we would say it was rational to choose the 20% chance, once again because there’s a higher chance of winning.
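To put numbers on that lottery comparison, here is a trivial sketch (Python, purely illustrative; the amounts are just the ones from the example):

```python
# Judge each lottery by its expected winnings.
ev_low = 0.10 * 100    # 10% chance of $100 -> $10 expected
ev_high = 0.20 * 100   # 20% chance of $100 -> $20 expected

# The "rationality tracks winning" standard picks the higher expected value.
assert ev_high > ev_low
```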
In fact, it seems to me that people do judge whether a decision is rational based on whether it leads to “winning”; they just get confused by multiple possible meanings of winning in the case of Newcomb’s Problem, which I think comes from conflating two possible questions about the rationality of a decision in the problem (discussed later).
Regardless, even if that’s not true, it seems that rationality and winning are at least related.
Now I believe that, in just the same way, the rationality of a decision theory or procedure can be judged on the same basis. So it may be rational to follow TDT instead of CDT (as an example; I’m not getting into which is better here) because it may lead to a greater chance of winning. The justification here is just the same as in the strong dominance and lottery examples in the first paragraph.
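As an illustration of judging whole procedures by the outcomes they accumulate, here is a minimal sketch. (Everything in it is a stipulated toy: the scenario names, payoffs, and the two policies are assumptions made up for the example, not renderings of TDT or CDT.)

```python
# Score decision *procedures* by total payoff across scenarios,
# rather than scoring each decision in isolation.

def payoff(scenario: str, action: str) -> int:
    if scenario == "newcomb":
        # Perfect predictor: the opaque box tracks the action itself.
        return 1_000_000 if action == "one-box" else 1_000
    if scenario == "simple-bet":
        # An ordinary case where dominance reasoning is uncontroversial.
        return 20 if action == "take" else 0
    raise ValueError(scenario)

# Two toy procedures, each a mapping from scenario to action.
procedures = {
    "procedure-A": {"newcomb": "one-box", "simple-bet": "take"},
    "procedure-B": {"newcomb": "two-box", "simple-bet": "take"},
}

scores = {name: sum(payoff(s, a) for s, a in policy.items())
          for name, policy in procedures.items()}
print(scores)  # {'procedure-A': 1000020, 'procedure-B': 1020}
# The procedures agree on the ordinary case; they are separated only by
# the world state (Newcomb's) where dominance reasoning loses.
```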
Which means there are two questions:
1.) What is the rational decision to make in the circumstance? The answer here may well be the strongly dominant decision (two-boxing).
2.) What is the rational decision theory to follow? The answer here might be (for example) TDT, and hence the decision that flows from this is one-boxing.
But that means the question of whether one-boxing or two-boxing is the rational decision in the case of Newcomb’s Problem can mean one of two things:
1.) Is it a rational decision?
2.) Did it follow from a rational decision theory?
Previously, I gave more weight to the second of these and said that, as it followed from a rational decision theory, that was what mattered. I still feel like that’s right (like the meta level should override the object level), but I need to think on it more to figure out whether I have a real justification for it. So let’s say both levels are equally important. Given that, I would agree that two-boxing is the rational decision.
However, when it comes to creating better or worse decision theories, I think the relevant question is whether the decision theory is rational, not whether the decisions it entails are. After all, we are judging between decision theories and hence the decision theory perspective seems more relevant.
But let’s say you totally disagree with my definition of rationality. My first question would then be: how do you define rationality, and how does that lead to strong dominance being seen as rational rather than just as a winning technique? Which is to say, I wonder whether your question can be applied as easily to many things we already see as rational as it can to these contested issues (though maybe I’m wrong there). Regardless, I think I’m getting too caught up in this question of rationality.
Rationality is an important issue, but I think a decision theory should be about making “winning” decisions. If rationality and winning aren’t even linked in your definitions, then I would say that decision theories are meant to be about how to make decisions, and that their success should be measured by whether they lead to the best outcomes, not by an arbitrary (or non-arbitrary, for that matter) definition of rationality.
So let’s say two-boxing is the rational decision in Newcomb’s Problem. I’m not sure I care. I’m more interested in whether we can come up with a decision theory that produces better outcomes, and I will personally judge such a decision theory more highly than one that meets so-and-so’s definition of rationality but doesn’t lead to such results.
decision theory should be about making “winning” decisions
But remember, in Newcomb the one-boxer wins in virtue of her disposition, not in virtue of her decision per se.
On your broader point, I agree that we need to distinguish the two questions you note, though I find it a little obscure to talk of a “rational decision theory” (since by this I had previously taken you to mean the theory which correctly specifies rational decisions, when you really mean something more like what I’m calling desirable dispositions). I agree with you that one-boxing is the more desirable disposition (or decision-procedure to have inculcated). But it’s a separate question what the rational act is; and I think it’d be a mistake to assume that two-boxing can’t be a rational choice just because a disposition to so choose would not be rational to inculcate.
when it comes to creating better or worse decision theories, I think the relevant question is whether the decision theory is rational [desirable to inculcate], not whether the decisions it entails are.
Well, I think that depends on one’s purposes. If you’re interested in creature-building, then I guess you want to know what decision procedure would be best (regardless of the rationality of the decisions it leads to). But if, like me, you’re just interested in understanding rationality, then what you want is a criterion or general theory of which particular actions are rational (and why), regardless of whether we can reliably implement or follow it.
(See also my previous contrast between the projects of constructing theoretical ‘accounts’ vs. practical ‘instruction manuals’.)
Yes, I’m willing to concede the possibility that I could be using words in unclear ways and that may lead to problems.
I am interested, though, in how you define a rational decision, if not in terms of which decision leads to the better outcome.
Maybe the focus shouldn’t be on the decision (or action) that leads to the best outcome, but on the decision procedure (or theory or algorithm) that leads to the best outcome.
If the outcome is entirely independent of the procedure, the difference is unimportant, so you can speak of “rational decision” and “rational decision procedure” interchangeably. But in Newcomb’s Problem, that’s not the case.
Yes, that’s my basic view.
The difficulty in part is that people seem to have different ideas of what it means to be rational.
That sounds fine to me. (Well, technically I think it’s a primitive concept, but that’s not important here.) It’s applying the term ‘rational’ to decision theories that I found ambiguous in the way noted.
Which means that one-boxing is the better choice, because it leads to the better outcome. I say that slightly tongue in cheek, because I know you know that, but at the same time I don’t really understand the position that says:
1.) The rational decision is the one that leads to the better outcome.
2.) In Newcomb’s Problem, one-boxing would actually lead to the better outcome.
3.) But the principle of strong dominance suggests that this shouldn’t be the case.
I don’t understand how 3, a statement about how things should be, outweighs 2, a statement about how things are.
It seems like the sensible thing is to say: due to point 2, one-boxing does lead to the better outcome, and due to point 1, this means one-boxing is rational. A corollary is that strong dominance must not be a rational way of making decisions (in all cases).
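Spelling out point 2 with the standard payoffs, as a sketch (assuming the perfect-predictor version, with the usual $1,000 visible box and $1,000,000 opaque box):

```python
# Perfect predictor: the opaque box's contents track the actual choice.
def newcomb_payoff(one_boxes: bool) -> int:
    opaque = 1_000_000 if one_boxes else 0  # Omega's (perfect) prediction
    visible = 1_000
    return opaque if one_boxes else opaque + visible

print(newcomb_payoff(True))   # 1000000  (one-boxing)
print(newcomb_payoff(False))  # 1000     (two-boxing)
# Dominance compares the two choices with the box contents held fixed, but
# under perfect prediction the contents are never fixed independently of
# the choice.
```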
No, the choice of one-boxing doesn’t lead to the better outcome. It’s one’s prior possession of the disposition to one-box that leads to the good outcome. It would be best of all to have the general one-boxing disposition and yet (somehow, perhaps flukily) manage to choose both boxes.
(Compare Parfit’s case. Ignoring threats doesn’t lead to better outcomes. It’s merely the possession of the disposition that does so.)
Okay, so your dispositions are basically the counterfactual “If A occurred then I would do B” and your choice, C, is what you actually do when A occurs.
In the perfect predictor version of Newcomb’s, Omega perfectly predicts the choice you make, not your disposition. It may generate its own counterfactual for this (“If A occurs then this person will do B”), but that’s not to say it cares about your disposition just because the two counterfactuals look similar. Because Omega’s prediction of C is perfect, if a stray bolt of lightning hits you and switches your decision, Omega will have taken that lightning into account. You will always be sad if the lightning changes your choice, C, to two-boxing, because Omega perfectly predicts C and so will punish you.
Conversely, the rational disposition in Newcomb’s isn’t to one-box. Rather, your disposition has no bearing on Newcomb’s except insofar as it is related to C (if you always act in line with your dispositions, for example, then your dispositions matter). It isn’t a disposition to one-box that leads to Omega loading the boxes a certain way; it’s the choice to one-box. So your disposition, by itself, neither helps nor hinders you.
As such, your choice of whether to one-box or two-box is what is relevant, and hence the choice of one-boxing is what leads to the better outcome; your disposition to one-box plays no role whatsoever. So, based on the utility-maximising definition of rationality, the rational choice is to one-box, because it’s this choice itself that leads to the boxes being loaded a certain way (see the note on causality at the bottom of the post).
So, to restate it in the terms used in the above comments: prior possession of the disposition to one-box is irrelevant to Newcomb’s, because Omega is interested in your choices, not your dispositions to choose, and is perfect at predicting your choices, not your dispositions. Flukily choosing two boxes would be bad, because Omega would have perfectly predicted the fluky choice and so you would end up losing.
It seems like dispositions distract from the issue here, because as humans we think “Omega must use dispositions to predict the choice.” But that need not be true. In fact, if dispositions and choices can differ (by a fluke, for example), then it cannot be true: Omega cannot be relying on dispositions alone to predict choices. It simply predicts the choice using whatever means work.
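To make that concrete, here is a minimal sketch of the picture being described (the 1% “lightning” rate and all the names are invented for illustration). The point it encodes: Omega predicts the choice C itself, fluke included, so the payoff tracks C and the disposition does no independent work.

```python
import random

def actual_choice(disposition: str, lightning: bool) -> str:
    """The choice C: the disposition's output, possibly flipped by a fluke."""
    if lightning:
        return "two-box" if disposition == "one-box" else "one-box"
    return disposition

def play(disposition: str) -> int:
    lightning = random.random() < 0.01         # rare last-moment interference
    c = actual_choice(disposition, lightning)  # the choice C
    # A perfect predictor of C has already taken the lightning into account:
    opaque = 1_000_000 if c == "one-box" else 0
    return opaque if c == "one-box" else opaque + 1_000

# Whatever the disposition, the payoff depends only on C:
# if C is "one-box" you get $1,000,000; if C is "two-box", $1,000.
```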
If you use dispositions simply to mean the decision you would make before you actually make the decision, then you’re denying one of the parts of the problem itself in order to solve it: you’re denying that Omega is a perfect predictor of choices, and suggesting it can only predict how your choice stood at a certain time, not the choice you actually make.
This can be extended to the imperfect predictor version of Newcomb’s easily enough.
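As a sketch of that extension (assuming a predictor of accuracy p, where accuracy means the prediction matches your actual choice with probability p, and computing the expectations evidentially):

```python
def expected_payoffs(p: float) -> tuple[float, float]:
    # One-box: with probability p Omega predicted it, so the opaque box is full.
    ev_one = p * 1_000_000
    # Two-box: you always get the visible $1,000, and with probability (1 - p)
    # Omega wrongly predicted one-boxing, so the opaque box is full as well.
    ev_two = 1_000 + (1 - p) * 1_000_000
    return ev_one, ev_two

print(expected_payoffs(0.99))  # approximately (990000.0, 11000.0)
# One-boxing has the higher expected payoff whenever p > 1001/2000 = 0.5005.
```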
I’ll grant you this leaves open the need for some causal explanation, but we can’t simply retreat from difficult questions by suggesting that they’re not really questions. That is, we can’t avoid needing to account for causality in Newcomb’s by simply suggesting that Omega predicts by reading your dispositions, rather than predicting C by whatever means get it right (i.e. taking your dispositions and then factoring in freak lightning strikes).
So far, everything I’ve said has been weakly defended, so I’m interested to see whether this is any stronger or whether I’ll be spending some more time thinking tomorrow.
We’re going in circles a little, aren’t we? (My fault, I’ll grant.) Okay, so there are two questions:
1.) Is it a rational choice to one-box? Answer: No.
2.) Is it rational to have a disposition to one-box? Answer: Yes.
As mentioned earlier, I think I’m more interested in creating a decision theory that wins than one that’s rational. But let’s say you are interested in a decision theory that captures rationality: it still seems arbitrary to say that the rationality of the choice is more important than the rationality of the disposition. Yes, you could argue that choice is the proper domain of study for decision theory, but the number of decision theorists who would one-box (outside of LW) suggests that other people have a different idea of what decision theory should be.
I guess my question is this: is the whole debate over one-boxing or two-boxing on Newcomb’s just a disagreement over which question decision theory should be studying, or are there people who use “choice” to mean the same thing that you do and still think one-boxing is the rational choice?
I don’t understand the distinction between choosing to one-box and being the sort of person who chooses to one-box. Can you formalize that difference?
The latter, I think. (Otherwise, one-boxers would not really be disagreeing with two-boxers. We two-boxers already granted that one-boxing is the better disposition. So if they’re merely aiming to construct a theory of desirable dispositions, rather than rational choice, then their claims would be utterly uncontroversial.)
I thought that debate was about free will.