But you use ‘disposition’ as a word for the map, right? Otherwise why would you have mentioned folk psychology? If so, talking about disposition in games involving other players is talking about signaling.
If not, what would it even mean to act contrary to one’s disposition? That there exists a possible coarse-grained model of one’s decision-making process that predicts a majority of one’s actions (where is the cutoff? 50%? 90%?), but doesn’t predict that particular action? How do you know that’s not the case for most actions? Or that the mathematically simplest model of one’s decision-making process that predicts a high enough percentage of one’s actions doesn’t predict that particular action?
No, I referenced folk psychology just to give a sense of the appropriate level of abstraction. I assume that beliefs and desires (etc.) correspond to real (albeit coarse-grained) patterns in people’s brains, and so in that sense concern the ‘territory’ and not just the ‘map’. But I take it that these are also not exhaustive of one’s total disposition—human brains also contain a fair bit of ‘noise’ that the above descriptions fail to capture.
Regardless, this has nothing to do with signalling, since there’s no possibility of manipulated or false belief: it’s stipulated that your standing beliefs, desires, etc. are all completely transparent. (And we may also stipulate, in a particular case, that the remaining ‘noise’ is not something that the agents involved have any changing beliefs about. Let’s just say it’s common knowledge that the noise leads to unpredictable outcomes in a very small fraction of cases. But don’t think of it as the agent building randomness into their source code—as that would presumably have a folk-psychological analogue. It’s more a matter of the firmware being a little unreliable at carrying out the program.)
The upshot, as I see things, is as follows: the vast majority of people who “win” at Newcomb’s will be one-boxers. After all, it’s precisely the disposition to one-box that is being rewarded. But the predictor (in the variation I’m considering) is not totally omniscient: she can accurately see the patterns in people’s brains that correspond to various folk-psychological attributes (beliefs, desires, etc.), but is sometimes confounded by the remaining ‘noise’. So it’s compatible with having a one-boxing disposition (in the specified sense) that one go on to choose two boxes. And an individual who does this gains the most of all.
(Though obviously one couldn’t plan on winning this way, or one’s disposition would be for two-boxing. But if one has an unexpected and unpredictable ‘change of heart’ at the moment of decision, my claim is that the resulting decision to two-box is more rather than less rational.)
I still don’t see how statements about disposition in your sense are supposed to have an objective truth value (what does someone look like in visually simplified form?), and why you think this disposition is supposed to correlate better with people’s predictions about decisions than the non-random component of the decision-making process (the total disposition) does (or why you think this concept is useful if it doesn’t), but I suspect discussing this further won’t lead anywhere.
Let’s try leaving the disposition discussion aside for a moment: you are postulating a scenario where someone spontaneously changes from a one-boxer into a two-boxer after the predictor has already made the prediction, just long enough to open the right-hand box and collect the $1,000. Is that right? And the question is whether I should regret not being able to change myself back into a one-boxer in time to refuse the $1,000?
Obviously, if my behavior in this case were completely uncorrelated with the odds of finding the $1,000,000 box empty, I should not. But the normal assumption for cases where your behavior is unpredictable (e.g. when you are using a quantum coin) is that P(two-box) = P(left box empty). Otherwise I would try to contrive to one-box with a probability of just over 0.5. So the details depend on P.
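To make that concrete, here is a minimal Python sketch of the two predictor models (the payoffs are the standard ones; the function names and sample mixing probabilities are my own illustrative choices). It shows why the probability-matching assumption matters: against a predictor who merely predicted the modal action, mixing at just over 0.5 one-boxing would beat pure one-boxing, whereas under matching, mixing only hurts:

```python
# Illustrative sketch only: compares the two predictor models discussed
# above. BOX_A is the transparent $1,000 box, BOX_B the opaque box.
BOX_A = 1_000
BOX_B = 1_000_000

def ev_matching(p_two_box: float) -> float:
    """Expected payoff under the stated assumption
    P(left box empty) = P(two-box), treating the predictor's error and
    the quantum coin as independent draws with the same probability."""
    p_empty = p_two_box
    ev_one_box = (1 - p_empty) * BOX_B
    ev_two_box = (1 - p_empty) * BOX_B + BOX_A
    return (1 - p_two_box) * ev_one_box + p_two_box * ev_two_box

def ev_modal(p_two_box: float) -> float:
    """Expected payoff if the predictor instead filled the box iff
    one-boxing were the *more likely* action."""
    box_b = BOX_B if p_two_box < 0.5 else 0
    return (1 - p_two_box) * box_b + p_two_box * (box_b + BOX_A)

print(ev_matching(0.0))    # 1,000,000 -> pure one-boxing is best here
print(ev_matching(0.499))  # ~501,499  -> mixing only hurts
print(ev_modal(0.499))     # ~1,000,499 -> the 'contrived' mix would win
```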
If P > 0.001 (I’m assuming constant utility per dollar, which is unrealistic), my expected dollars before opening the left box have been reduced, and I bitterly regret my temporary lapse from sanity, since it might have cost me $1,000,000. The rationale is the same as in the normal Newcomb problem.
If P < 0.001, my expected dollars right at that point have increased, and according to some possible one-boxing decision theories I should not regret the spontaneous change, since I already know I was lucky. But nevertheless my overall expected payoff across all branches is lower than it would be if temporary lapses like that were not possible. Since I’m a Counterfactual muggee, I regret not being able to prevent the two-boxing, but am happy enough with the outcome for that particular instance of me.
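For concreteness, here is a worked version of the 0.001 threshold under the same assumptions (constant utility per dollar; the variable names are just for illustration). The break-even point falls at $1,000 / $1,000,000:

```python
# Illustrative sketch of the threshold above: after a lapse I hold the
# $1,000 and the left box is empty with probability P, versus the
# $1,000,000 a pure one-boxer would (almost surely) get.
SMALL = 1_000
BIG = 1_000_000

def ev_after_lapse(p_empty: float) -> float:
    """Expected dollars after two-boxing, before opening the left box."""
    return SMALL + (1 - p_empty) * BIG

for p in (0.01, 0.001, 0.0001):
    delta = ev_after_lapse(p) - BIG  # = SMALL - p * BIG
    print(f"P = {p}: lapse shifts expected dollars by {delta:+,.0f}")

# P = 0.01:   -9,000 -> regret the lapse
# P = 0.001:  +0     -> break-even, exactly SMALL / BIG
# P = 0.0001: +900   -> lucky for this particular instance
```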