You can’t have a disposition to act in a certain way without counterfactually acting that way. You can’t counterfactually act a certain way without actually acting that way in a situation indistinguishable from the counterfactual. What you seem to be talking about appears to be pretending to have a certain disposition (e.g. acting according to that disposition unless the stakes are really high, and trying to hide that fact). In other words you are talking about signalling, and I don’t think the decision theory discussions here have progressed far enough for it to be productive at this point to complicate matters by trying to incorporate a theory of signalling.
(or perhaps you believe in magical acausal free will)
No, neither. It’s more the idea that certain identifiable dispositions needn’t be 100% determinative. I may be disposed to X in C so long as I do X in a sufficiently high proportion of C-situations. But if (say) an unpredictable mental glitch leads me to do otherwise one day, that may well be all the better. My point is then that it would be a mistake to condemn this more-fortunate choice as “irrational”, in such cases.
If it’s due to a random glitch and not to any qualities that you regard as part of defining who you are, I don’t see how it could possibly be described as a choice. Randomness is incompatible with any sensibly defined notion of choice (of course the act of deciding to leave something up to chance is itself a choice, but which of the possible outcomes actually comes about is not).
If your disposition is to flip a quantum coin in certain situations, that is a fact about your disposition. If your disposition is to decide differently in certain high-stakes situations, that also is a fact about your disposition. You may choose to try to hide such facts and pretend your disposition is simpler than it actually is, but that’s a question of signalling, not of what your disposition is like. (Of course a disposition to signal in a certain way is also a disposition.)
It’s an interesting question how to draw the line between chosen action and mere behaviour. If the “glitch” occurs at an early enough stage, and the subsequent causal process includes enough of my usual reasons-responsive mechanisms (and so isn’t wildly contrary to my core values, etc.), then I don’t see why the upshot couldn’t, in principle, qualify as “my” choice—even if it’s rather surprising, at least to a casual observer, that I ended up acting contrary to my usual disposition.
Your second point involves the notion of a kind of totalizing, all-things-considered disposition, such that your total disposition + environmental stimuli strictly entails your response (modulo quantum complications). Granted, the kind of distinction I’m wanting to draw won’t be applicable when we’re talking about such total dispositions.
But there are cases where it is applicable. In particular, there are cases where everyone involved is less than omniscient (even about such local matters as the precise arrangement of matter in my head). They might have some fantastic knowledge—e.g. they might know everything there is to know about my brain that can be captured using the language of ordinary folk psychology. This can include various important dispositional facts about me. But if folk psychology is too coarse-grained to capture my total disposition, then we need to distinguish (and separately evaluate) my coarse-grained dispositions from my actual actions.
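To make the coarse-grained vs. total distinction concrete, here is a minimal sketch (my own illustration, not anything from the thread; the function names, the 1% figure, and the use of a random number as a stand-in for unmodelled brain detail are all assumptions): a “total disposition” is modelled as a deterministic function of the full state plus stimulus, while a folk-psychological model sees only a coarse summary and so predicts most, but not all, of the resulting actions.

```python
import random

random.seed(0)

def total_disposition(full_brain_state, stimulus):
    """Deterministic: the full state plus the stimulus strictly entails the response."""
    # 'noise' stands in for fine-grained physical detail that folk psychology
    # does not describe; it is not meant to model genuine randomness.
    if full_brain_state["prefers_one_boxing"] and full_brain_state["noise"] > 0.01:
        return "one-box"
    return "two-box"

def folk_psych_prediction(coarse_state):
    """Coarse-grained: only belief/desire-level facts are visible, so this
    predicts the typical action, which is usually but not always what happens."""
    return "one-box" if coarse_state["prefers_one_boxing"] else "two-box"

trials, mismatches = 10_000, 0
for _ in range(trials):
    state = {"prefers_one_boxing": True, "noise": random.random()}
    if total_disposition(state, "newcomb") != folk_psych_prediction(state):
        mismatches += 1

print(f"coarse-grained disposition mispredicts {mismatches / trials:.1%} of actions")
# The coarse-grained disposition describes a real pattern, yet a small
# fraction of actual actions diverge from it.
```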
It’s an interesting question how to draw the line between chosen action and mere behaviour. If the “glitch” occurs at an early enough stage, and the subsequent causal process includes enough of my usual reasons-responsive mechanisms (and so isn’t wildly contrary to my core values, etc.), then I don’t see why the upshot couldn’t, in principle, qualify as “my” choice—even if it’s rather surprising, at least to a casual observer, that I ended up acting contrary to my usual disposition.
If your normal decision making apparatus continues to work afterwards, has the chance to compensate for the glitch, doesn’t, and the glitch changes the result, it would have to be almost exactly balanced in a counterfactual case without the glitch. How likely is that? And even so it doesn’t strike me as conceptually all that different from unconsciously incorporating a small random element in the decision making process right from the start. In either case, the more important the random element, the less accurately the outcome is described as your choice, as far as I’m concerned (maybe some would define the random element as the real you, and not the parts that include your values, experiences, your reasoning ability and so on; or possibly argue that for mysterious reasons they are so conveniently entangled that they are somehow the same thing).
But there are cases where it is applicable. In particular, there are cases where everyone involved is less than omniscient (even about such local matters as the precise arrangement of matter in my head). They might have some fantastic knowledge—e.g. they might know everything there is to know about my brain that can be captured using the language of ordinary folk psychology. This can include various important dispositional facts about me. But if folk psychology is too coarse-grained to capture my total disposition, then we need to distinguish (and separately evaluate) my coarse-grained dispositions from my actual actions.
But that’s just a map-territory difference. If you use disposition as your word for “map of the decision making process”, of course that map will sometimes have inaccuracies. But calling the difference between map and territory “choice” strikes me as… well… it matches the absolutely crazy way some people think about free will, but is worse than useless. Unless you want to outlaw psychology because it’s akin to slavery, trying to take away people’s choice by understanding them, oh the horror!
calling the difference between map and territory “choice”
Eh? That’s not what I’m doing. I’m pointing out that there’s a respectable (coarse-grained) sense of ‘disposition’ (i.e. tendency) according to which one can have a disposition to X without this necessarily entailing that one will actually do X. (There’s another sense of ‘total disposition’ where the entailment does hold. N.B. We make choices either way, but it only makes sense to separately evaluate choices from coarse-grained dispositions.)
I take these general dispositions to accurately correspond to real facts in the world—they’re just at a sufficiently high level of abstraction that they allow for various exceptions. (Ceteris paribus laws are not, just for that reason, “inaccurate”.)
My take on this is the following: it’s easier to see what is meant by disposition if you look at it in terms of AI. Replace the human with an AI, replace “disposition” with “source code”, and replace “change your disposition to do some action X” with “rewrite your source code so that it does action X”. Of course it would still want to incorporate the probability of a glitch, as someone else already suggested.
If an AI running CDT expects to encounter a Newcomb-like problem, it would be rational for it to self-modify (in advance) to use a decision theory which one-boxes (i.e. the AI will change its disposition).
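A rough sketch of that self-modification point, under toy assumptions (the Agent class, the procedure names, and the string labels are invented for illustration; a real predictor/agent setup would be far more involved):

```python
def cdt_decide(problem):
    # Causal reasoning at decision time: the boxes are already filled,
    # so taking both always yields $1000 more than taking one.
    return "two-box"

def one_box_decide(problem):
    return "one-box"

class Agent:
    def __init__(self, decision_procedure):
        # The decision procedure plays the role of the agent's "source code".
        self.decision_procedure = decision_procedure

    def anticipate(self, problem):
        # A CDT agent that expects a Newcomb-like problem, and knows the
        # predictor will read its code as it stands at prediction time,
        # does better by rewriting itself in advance, i.e. by changing
        # its disposition.
        if problem == "newcomb" and self.decision_procedure is cdt_decide:
            self.decision_procedure = one_box_decide

    def act(self, problem):
        return self.decision_procedure(problem)

agent = Agent(cdt_decide)
agent.anticipate("newcomb")   # self-modification happens before the prediction
print(agent.act("newcomb"))   # -> one-box
```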
Likewise, an AI surrounded by threat-fulfillers would rationally self-modify to become a threat-ignorer. (The debate is not about whether these are desirable dispositions to acquire—that’s common ground.) Do you think it follows from this that the act of ignoring a doomsday threat is also rational?
But you use “disposition” as a word for the map, right? Otherwise why would you have mentioned folk psychology? If so, talking about dispositions in games involving other players is talking about signalling.
If not, what would it even mean to act contrary to one’s disposition? That there exists a possible coarse-grained model of one’s decision making process that predicts a majority of one’s actions (where is the cutoff? 50%? 90%?), but doesn’t predict that particular action? How do you know that’s not the case for most actions? Or that the mathematically simplest model of one’s decision making process that predicts a high enough percentage of one’s actions doesn’t predict that particular action?
No, I referenced folk psychology just to give a sense of the appropriate level of abstraction. I assume that beliefs and desires (etc.) correspond to real (albeit coarse-grained) patterns in people’s brains, and so in that sense concern the ‘territory’ and not just the ‘map’. But I take it that these are also not exhaustive of one’s total disposition—human brains also contain a fair bit of ‘noise’ that the above descriptions fail to capture.
Regardless, this isn’t anything to do with signalling, since there’s no possibility of manipulated or false belief: it’s stipulated that your standing beliefs, desires, etc. are all completely transparent. (And we may also stipulate, in a particular case, that the remaining ‘noise’ is not something that the agents involved have any changing beliefs about. Let’s just say it’s common knowledge that the noise leads to unpredictable outcomes in a very small fraction of cases. But don’t think of it as the agent building randomness into their source code—as that would presumably have a folk-psychological analogue. It’s more a matter of the firmware being a little unreliable at carrying out the program.)
The upshot, as I see things, is as follows: the vast majority of people who “win” at Newcomb’s will be one-boxers. After all, it’s precisely the disposition to one-box that is being rewarded. But the predictor (in the variation I’m considering) is not totally omniscient: she can accurately see the patterns in people’s brains that correspond to various folk-psychological attributes (beliefs, desires, etc.), but is sometimes confounded by the remaining ‘noise’. So it’s compatible with having a one-boxing disposition (in the specified sense) that one go on to choose two boxes. And an individual who does this gains the most of all.
(Though obviously one couldn’t plan on winning this way, or their disposition would be for two-boxing. But if they have an unexpected and unpredictable ‘change of heart’ at the moment of decision, my claim is that the resulting decision to two-box is more rather than less rational.)
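A small simulation of this variant, under assumed numbers (a 1% chance that “firmware noise” overrides the folk-psychological disposition, and the usual $1,000/$1,000,000 payoffs); it is only meant to illustrate the claim that nearly all winners are one-boxers, while the rare agent whose noise overrides a one-boxing disposition ends up with the most:

```python
import random

random.seed(1)
NOISE = 0.01  # assumed probability that noise overrides the disposition

def run_trial(disposition):
    # The predictor reads only the folk-psychological disposition...
    big_box = 1_000_000 if disposition == "one-box" else 0
    # ...but the actual choice is occasionally flipped by noise.
    choice = disposition
    if random.random() < NOISE:
        choice = "two-box" if disposition == "one-box" else "one-box"
    payoff = big_box + (1_000 if choice == "two-box" else 0)
    return payoff, choice

results = [run_trial("one-box") for _ in range(100_000)]
glitched = [payoff for payoff, choice in results if choice == "two-box"]
typical = [payoff for payoff, choice in results if choice == "one-box"]

print(len(glitched), max(glitched))   # a few agents walk away with $1,001,000
print(sum(typical) / len(typical))    # everyone else gets $1,000,000
# No one could have planned to be in the glitched group: planning to two-box
# would have been a visible (and punished) two-boxing disposition.
```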
I still don’t see how statements about disposition in your sense are supposed to have an objective truth value (what does someone look like “visually simplified”?), and why you think this disposition is supposed to correlate better with people’s predictions about decisions than the non-random component of the decision making process (total disposition) does (or why you think this concept is useful if it doesn’t), but I suspect discussing this further won’t lead anywhere.
Let’s try leaving the disposition discussion aside for a moment: you are postulating a scenario where someone spontaneously changes from a one-boxer into a two-boxer after the predictor has already made the prediction, just long enough to open the right-hand box and collect the $1000. Is that right? And the question is whether I should regret not being able to change myself back into a one-boxer in time to refuse the $1000?
Obviously, if my behavior in this case were completely uncorrelated with the odds of finding the $1,000,000 box empty, I should not. But the normal assumption for cases where your behavior is unpredictable (e.g. when you are using a quantum coin) is that P(two-box) = P(left box empty). Otherwise I would try to contrive to one-box with a probability of just over 0.5. So the details depend on P.
If P > 0.001 (I’m assuming constant utility per dollar, which is unrealistic), my expected dollars before opening the left box have been reduced, and I bitterly regret my temporary lapse from sanity since it might have cost me $1,000,000. The rationale is the same as in the normal Newcomb problem.
If P < 0.001, my expected dollars right at that point have increased, and according to some possible decision theories that one-box I should not regret the spontaneous change, since I already know I was lucky. But nevertheless my overall expected payoff across all branches is lower than it would be if temporary lapses like that were not possible. Since I’m a counterfactual muggee, I regret not being able to prevent the two-boxing, but am happy enough with the outcome for that particular instance of me.
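The 0.001 threshold in the two preceding paragraphs can be checked directly. This is only a sketch of the arithmetic under the stated assumptions (constant utility per dollar, P(two-box) = P(left box empty)) plus an independence assumption between the lapse and the predictor’s choice that the comment doesn’t spell out:

```python
def expected_after_lapse(p):
    # I have already pocketed the $1000; the left box is empty with probability p.
    return 1_000 + (1 - p) * 1_000_000

def expected_overall(p):
    # Before anything happens: lapse (two-box) with probability p,
    # left box filled with probability 1 - p, independently.
    one_box_branch = (1 - p) * (1 - p) * 1_000_000
    two_box_branch = p * (1_000 + (1 - p) * 1_000_000)
    return one_box_branch + two_box_branch

for p in (0.0005, 0.001, 0.002):
    print(p, expected_after_lapse(p) > 1_000_000, expected_overall(p) < 1_000_000)
# expected_after_lapse exceeds the glitch-free $1,000,000 exactly when p < 0.001
# (break-even at p = 0.001), while the overall expectation is below $1,000,000
# for every p > 0, matching the "lower than it would be if temporary lapses
# like that were not possible" remark above.
```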
You can’t have a disposition to act in a certain way without counterfactually acting that way. You can’t counterfactually act a certain way without actually acting that way in a situation indistinguishable from the counterfactual.
What is the relevance of this? Are you using this argument? (See comment above.)*
1. It is impossible to have the one-boxing disposition and then two-box.
2. Ought implies can.
3. Therefore, it is false that someone with a one-boxing disposition ought to two-box.
If that isn’t your argument, what is the force of the quoted text?
At any rate, it seems like a bad argument, since analogous arguments will entail that whenever you have some decisive disposition, it is false that you ought to act differently. (It will entail, for instance, NOT[people who have a decisive loss aversion disposition should follow expected utility theory].)
Notice that an analogous argument also cuts the other way:
1. It is impossible for someone with the two-boxing disposition to one-box.
2. Ought implies can.
3. Therefore, it is false that someone with the two-boxing disposition ought to one-box.
*I made a similar comment above, but I don’t know how to link to it. Help appreciated.
Making a decision means discovering your disposition with respect to a certain action (if we are using that word; we could call it something else if that avoids terminological confusion. What I mean is the non-random element of how you react to a specific input). In a certain sense you are your dispositions, and everything else is just meaningless extras (that is, your values, experiences, non-value preferences, reasoning ability etc. collectively form your dispositions and are part of them). Controlling your dispositions is how you control your actions. And your dispositions are what is doing that controlling.

Making a choice between A and B doesn’t mean letting disposition a and disposition b fight and pick a winner; it means that your preferences regarding A and B are the cause of your disposition being what it is. You can change your disposition towards act X in the sense that your disposition towards any X before time t is Y and your disposition towards any X after t is Z, but not in the sense that you can change your disposition towards X at time t from Y to Z. Whatever you actually do (modulo randomness) at time t, that’s your one and only disposition towards X at time t.
Assume you prefer red to blue, but more strongly prefer cubes to spheres. When given the choice between a red sphere and a blue cube, and you can take only one of them, you can’t just pick a red cube. And it’s not the case that you ought to pick the red one once you already have the cube; that’s just nonsense. The problem is more than just impossibility.
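As a toy version of the cube/sphere point (the numerical weights are arbitrary assumptions, chosen only so that shape outweighs colour): a choice here is just a maximisation over the options actually on offer, and a red cube is not among them, so there is nothing for “you ought to have picked the red cube” to attach to.

```python
# Red is preferred to blue, but cube vs. sphere matters more.
COLOUR_SCORE = {"red": 1, "blue": 0}
SHAPE_SCORE = {"cube": 10, "sphere": 0}

def preference(option):
    colour, shape = option
    return COLOUR_SCORE[colour] + SHAPE_SCORE[shape]

options = [("red", "sphere"), ("blue", "cube")]  # a red cube is not on offer
choice = max(options, key=preference)
print(choice)  # -> ('blue', 'cube')
# The preference ordering over the available options fixes the choice;
# "pick the red cube" doesn't correspond to any available option.
```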
Whatever you actually do (modulo randomness) at time t, that’s your one and only disposition towards X at time t.
Okay, I understand how you use the word “disposition” now. This is not the way I was using the word, but I don’t think that is relevant to our disagreement. I hereby resolve to use the phrase “disposition to A” in the same way as you for the rest of our conversation.
I still don’t understand how this point suggests that people with one-boxing dispositions ought not to two-box. I can only understand it in one way: as in the argument in my original reply to you. But that argument form leads to this absurd conclusion:
(a) whenever you have a disposition to A and you do A, it is false that you ought to have done something else
In particular, it backfires for the intended argumentative purpose, since it entails that two-boxers shouldn’t one-box.
No. When you have disposition a and do A, it may be the case that you ought to have disposition b and do B; perhaps disposition a was formed by habit, and disposition b would counterfactually have resulted if the disposition had been formed on the basis of likely effects and your preferences. What is false is that you ought to have disposition a and do B.
What is false is that you ought to have disposition a and do B.
OK. So the argument is this one:
1. According to two-boxers, you ought to (i) have the disposition to one-box, and (ii) take two boxes.
2. It is impossible to do (i) and (ii).
3. Ought implies can.
4. So two-boxers are wrong.
But, on your use of “disposition”, two-boxers reject 1. They do not believe that you should have a FAWS-disposition to one-box, since having a FAWS-disposition to one-box just means “actually taking one box, where this is not a result of randomness”. Two-boxers think you should non-randomly choose to take two boxes.
ETA: Some two-boxers may hesitate to agree that you “ought to have a disposition to one-box”, even in the philosopher’s sense of “disposition”. This is because they might want “ought” to only apply to actions; such people would, at most, agree that you ought to make yourself a one-boxer.
From the original post:

Rachel does not envy Irene her choice at all. What she wishes is to have the one-boxer’s dispositions, so that the predictor puts a million in the first box, and then to confound all expectations by unpredictably choosing both boxes and reaping the most riches possible.
Richard is probably using “disposition” in a different sense (possibly the model someone has of someone’s disposition in my sense), but I believe Eliezer’s usage was closer to mine, and either way disposition in my sense is what she would need to actually get the million dollars.