If it’s due to a random glitch and not to any qualities that you regard as part of defining who you are, I don’t see how it could possibly be described as a choice. Randomness is incompatible with any sensibly defined notion of choice (of course, the act of deciding to leave something up to chance is itself a choice, but which of the possible outcomes actually comes about is not).
If your disposition is to flip a quantum coin in certain situations, that is a fact about your disposition. If your disposition is to decide differently in certain high-stakes situations, that is also a fact about your disposition. You may choose to try to hide such facts and pretend your disposition is simpler than it actually is, but that’s a question of signaling, not of what your disposition is like. (Of course, a disposition to signal in a certain way is also a disposition.)
It’s an interesting question how to draw the line between chosen action and mere behaviour. If the “glitch” occurs at an early enough stage, and the subsequent causal process includes enough of my usual reasons-responsive mechanisms (and so isn’t wildly contrary to my core values, etc.), then I don’t see why the upshot couldn’t, in principle, qualify as “my” choice—even if it’s rather surprising, at least to a casual observer, that I ended up acting contrary to my usual disposition.
Your second point involves the notion of a kind of totalizing, all-things-considered disposition, such that your total disposition + environmental stimuli strictly entails your response (modulo quantum complications). Granted, the kind of distinction I’m wanting to draw won’t be applicable when we’re talking about such total dispositions.
But there are cases where it is applicable. In particular, there are cases where everyone involved is less than omniscient (even about such local matters as the precise arrangement of matter in my head). They might have some fantastic knowledge—e.g. they might know everything there is to know about my brain that can be captured using the language of ordinary folk psychology. This can include various important dispositional facts about me. But if folk psychology is too coarse-grained to capture my total disposition, then we need to distinguish (and separately evaluate) my coarse-grained dispositions from my actual actions.
It’s an interesting question how to draw the line between chosen action and mere behaviour. If the “glitch” occurs at an early enough stage, and the subsequent causal process includes enough of my usual reasons-responsive mechanisms (and so isn’t wildly contrary to my core values, etc.), then I don’t see why the upshot couldn’t, in principle, qualify as “my” choice—even if it’s rather surprising, at least to a casual observer, that I ended up acting contrary to my usual disposition.
If your normal decision-making apparatus continues to work afterwards, has the chance to compensate for the glitch, doesn’t, and the glitch still changes the result, then the decision would have to be almost exactly balanced in the counterfactual case without the glitch. How likely is that? And even so, it doesn’t strike me as conceptually all that different from unconsciously incorporating a small random element into the decision-making process right from the start. In either case, the more important the random element, the less accurately the outcome is described as your choice, as far as I’m concerned (maybe some would define the random element as the real you, and not the parts that include your values, experiences, your reasoning ability and so on; or possibly argue that for mysterious reasons they are so conveniently entangled that they are somehow the same thing).
But there are cases where it is applicable. In particular, there are cases where everyone involved is less than omniscient (even about such local matters as the precise arrangement of matter in my head). They might have some fantastic knowledge—e.g. they might know everything there is to know about my brain that can be captured using the language of ordinary folk psychology. This can include various important dispositional facts about me. But if folk psychology is too coarse-grained to capture my total disposition, then we need to distinguish (and separately evaluate) my coarse-grained dispositions from my actual actions.
But that’s just a map-territory difference. If you use disposition as your word for “map of the decision-making process”, of course that map will sometimes have inaccuracies. But calling the difference between map and territory “choice” strikes me as … well … it matches the absolutely crazy way some people think about free will, but is worse than useless. Unless you want to outlaw psychology because it’s akin to slavery, trying to take away people’s choice by understanding them, oh the horror!
calling the difference between map and territory “choice”
Eh? That’s not what I’m doing. I’m pointing out that there’s a respectable (coarse-grained) sense of ‘disposition’ (i.e. tendency) according to which one can have a disposition to X without this necessarily entailing that one will actually do X. (There’s another sense of ‘total disposition’ where the entailment does hold. N.B. We make choices either way, but it only makes sense to evaluate choices separately from coarse-grained dispositions.)
I take these general dispositions to accurately correspond to real facts in the world—they’re just at a sufficiently high level of abstraction that they allow for various exceptions. (Ceteris paribus laws are not, just for that reason, “inaccurate”.)
My take on this is the following: it’s easier to see what is meant by disposition if you look at it in terms of AI. Replace the human with an AI, replace “disposition” with “source code”, and replace “change your disposition to do some action X” with “rewrite your source code so that it does action X”. Of course it would still want to incorporate the probability of a glitch, as someone else already suggested.
If an AI running CDT expects to encounter a Newcomb-like problem, it would be rational for it to self-modify (in advance) to use a decision theory which one-boxes (i.e. the AI will change its disposition).
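To make the analogy concrete, here is a minimal toy sketch of that self-modification move, assuming the standard $1000 / $1,000,000 Newcomb payoffs; the agent, the chooser functions, and the perfectly code-reading predictor are illustrative assumptions of mine, not anything specified in the thread:

```python
# Minimal toy sketch of the "disposition = source code" analogy.
# All names here are illustrative assumptions.

def cdt_choose():
    # CDT reasoning at the moment of choice: the boxes are already
    # filled, so taking both dominates taking one.
    return "two-box"

def one_box_choose():
    # The modified disposition: always take only the opaque box.
    return "one-box"

def predictor_fills_box(decision_procedure):
    # The predictor reads the agent's current "source code" (its
    # disposition) and fills the opaque box only if that code one-boxes.
    return decision_procedure() == "one-box"

class Agent:
    def __init__(self):
        self.choose = cdt_choose  # initial disposition: CDT

    def self_modify_for_newcomb(self):
        # Rewriting the source code in advance of the prediction.
        self.choose = one_box_choose

def payoff(agent):
    big_box_full = predictor_fills_box(agent.choose)  # prediction comes first
    decision = agent.choose()
    small = 1_000 if decision == "two-box" else 0
    big = 1_000_000 if big_box_full else 0
    return small + big

unmodified = Agent()
modified = Agent()
modified.self_modify_for_newcomb()

print(payoff(unmodified))  # 1000:    CDT disposition, opaque box left empty
print(payoff(modified))    # 1000000: one-boxing disposition, opaque box filled
```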
Likewise, an AI surrounded by threat-fulfillers would rationally self-modify to become a threat-ignorer. (The debate is not about whether these are desirable dispositions to acquire—that’s common ground.) Do you think it follows from this that the act of ignoring a doomsday threat is also rational?
But you use disposition as a word for the map, right? Otherwise why would you have mentioned folk psychology? If so, talking about disposition in games involving other players is talking about signaling.
If not, what would it even mean to act contrary to one’s disposition? That there exists a possible coarse-grained model of one’s decision-making process that predicts a majority of one’s actions (where is the cutoff? 50%? 90%?), but doesn’t predict that particular action? How do you know that’s not the case for most actions? Or that the mathematically simplest model of one’s decision-making process that predicts a high enough percentage of one’s actions doesn’t predict that particular action?
No, I referenced folk psychology just to give a sense of the appropriate level of abstraction. I assume that beliefs and desires (etc.) correspond to real (albeit coarse-grained) patterns in people’s brains, and so in that sense concern the ‘territory’ and not just the ‘map’. But I take it that these are also not exhaustive of one’s total disposition—human brains also contain a fair bit of ‘noise’ that the above descriptions fail to capture.
Regardless, this isn’t anything to do with signalling, since there’s no possibility of manipulated or false belief: it’s stipulated that your standing beliefs, desires, etc. are all completely transparent. (And we may also stipulate, in a particular case, that the remaining ‘noise’ is not something that the agents involved have any changing beliefs about. Let’s just say it’s common knowledge that the noise leads to unpredictable outcomes in a very small fraction of cases. But don’t think of it as the agent building randomness into their source code—as that would presumably have a folk-psychological analogue. It’s more a matter of the firmware being a little unreliable at carrying out the program.)
The upshot, as I see things, is as follows: the vast majority of people who “win” at Newcomb’s will be one-boxers. After all, it’s precisely the disposition to one-box that is being rewarded. But the predictor (in the variation I’m considering) is not totally omniscient: she can accurately see the patterns in people’s brains that correspond to various folk-psychological attributes (beliefs, desires, etc.), but is sometimes confounded by the remaining ‘noise’. So it’s compatible with having a one-boxing disposition (in the specified sense) that one go on to choose two boxes. And an individual who does this gains the most of all.
(Though obviously one couldn’t plan on winning this way, or their disposition would be for two-boxing. But if they have an unexpected and unpredictable ‘change of heart’ at the moment of decision, my claim is that the resulting decision to two-box is more rather than less rational.)
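A rough simulation can make the claimed payoff structure vivid; the noise rate, population size, and all the names below are illustrative assumptions, not part of the scenario as stated:

```python
import random

# Rough simulation of the variant described above: the predictor reads the
# agent's folk-psychological disposition perfectly, but low-level "noise"
# occasionally makes the actual choice come apart from that disposition.

NOISE = 0.001   # fraction of cases where the glitch flips the choice (assumed)
N = 100_000     # simulated agents, all with a one-boxing disposition (assumed)
random.seed(0)

results = []
for _ in range(N):
    disposition = "one-box"
    big_box_full = (disposition == "one-box")      # prediction tracks the disposition
    glitched = random.random() < NOISE
    act = "two-box" if glitched else disposition   # the act can come apart from it
    payoff = (1_000_000 if big_box_full else 0) + (1_000 if act == "two-box" else 0)
    results.append((act, payoff))

one_boxers = [p for a, p in results if a == "one-box"]
glitchers = [p for a, p in results if a == "two-box"]
print(sum(one_boxers) / len(one_boxers))  # 1,000,000: the filled box only
print(sum(glitchers) / len(glitchers))    # 1,001,000: the rare glitchers do best of all
```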
I still don’t see how statements about disposition in your sense are supposed to have an objective truth value (what does someone objectively look like ‘visually simplified’?), or why you think this disposition is supposed to correlate better with people’s predictions about decisions than the non-random component of the decision-making process (the total disposition) does (or why you think this concept is useful if it doesn’t), but I suspect discussing this further won’t lead anywhere.
Let’s try leaving the disposition discussion aside for a moment: you are postulating a scenario where someone spontaneously changes from a one-boxer into a two-boxer after the predictor has already made the prediction, just long enough to open the right-hand box and collect the $1000. Is that right? And the question is whether I should regret not being able to change myself back into a one-boxer in time to refuse the $1000?
Obviously, if my behavior in this case was completely uncorrelated with the odds of finding the $1,000,000 box empty, I should not. But the normal assumption for cases where your behavior is unpredictable (e.g. when you are using a quantum coin) is that P(two-box) = P(left box empty). Otherwise I would try to contrive to one-box with a probability of just over 0.5. So the details depend on P.
If P>0.001 (I’m assuming constant utility per dollar, which is unrealistic), my expected dollars before opening the left box have been reduced, and I bitterly regret my temporary lapse from sanity, since it might have cost me $1,000,000. The rationale is the same as in the normal Newcomb problem.
If P<0.001, my expected dollars right at that point have increased, and according to some possible decision theories that one-box, I should not regret the spontaneous change, since I already know I was lucky. But nevertheless my overall expected payoff across all branches is lower than it would be if temporary lapses like that were not possible. Since I’m a Counterfactual muggee, I regret not being able to prevent the two-boxing, but am happy enough with the outcome for that particular instance of me.
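One way to reconstruct the arithmetic behind the 0.001 threshold, assuming as above that the left box is empty with probability P independently of whether the lapse actually occurs, and taking the no-lapse case ($1,000,000) as the baseline:

```python
# Sketch of the expected-dollar comparison, under the assumptions just stated:
# P(two-box) = P(left box empty) = P, the two events independent, $1,000 in the
# right-hand box, $1,000,000 in the left-hand box when it is filled.

def expected_after_lapse(p):
    # I have already pocketed the $1,000; the left box, not yet opened,
    # is empty with probability p.
    return 1_000 + (1 - p) * 1_000_000

def expected_over_all_branches(p):
    # Before knowing whether the lapse happens at all.
    return p * expected_after_lapse(p) + (1 - p) * (1 - p) * 1_000_000

BASELINE = 1_000_000  # payoff if temporary lapses were simply impossible

for p in (0.0005, 0.001, 0.002):
    print(p,
          expected_after_lapse(p) > BASELINE,        # True only for p < 0.001
          expected_over_all_branches(p) < BASELINE)  # True for any p > 0
```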