So, what does it mean for a brain to do one thing 99% of the time and something else 1% of the time?
If the 1% case is a genuinely random event, or the result of some mysterious sort of unpredictable free will, or otherwise something that isn’t the effect of the causes that precede it, and therefore can’t be predicted short of some mysterious acausal precognition, then I agree: a good-but-not-perfect Omega cannot predict the 1% case, Newcomb’s problem in its standard form can’t be implemented even in principle, and all the consequences previously discussed follow.
Conversely, if brain events—even rare ones—are instead the effects of causes that precede them, then a good-but-not-perfect predictor can make good-but-not-perfect predictions of the 1% case just as readily as the 99% case, and these problems don’t arise.
Personally, I consider brain events the effects of causes that precede them. So if I’m the sort of person who one-boxes 99% of the time and two-boxes 1% of the time, and Omega has a sufficient understanding of the causes of human behavior to make 95% accurate predictions of what I do, then Omega will predict 95% of my (common) one-boxing as well as 95% of my (rare) two-boxing. Further, if I somehow come to believe that Omega has such an understanding, then I will predict that Omega will predict my (rare) two-boxing, and therefore I will predict that two-boxing loses me money, and therefore I will one-box stably.
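To put numbers on that last step, here is a minimal sketch of the expected-value arithmetic. The dollar amounts are the usual Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one), which this thread doesn’t actually spell out, and I’m treating the 95% accuracy as applying to whichever choice I actually make:

```python
# Expected value of each choice against a 95%-accurate Omega.
# Payoff amounts are assumed standard-Newcomb values, not stated
# anywhere in this thread.
ACCURACY = 0.95
BIG, SMALL = 1_000_000, 1_000

# One-boxing: I get the big box, which is full only when Omega
# correctly predicted that I would one-box.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0

# Two-boxing: I always get the small box, plus the big box in the
# cases where Omega wrongly predicted that I would one-box.
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(ev_one_box)  # ~950,000
print(ev_two_box)  # ~51,000 -- so one-boxing is the stable choice
```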
So, what does it mean for a brain to do one thing 99% of the time and something else 1% of the time?
For the sake of the least convenient possible world, assume that the brain is particularly sensitive to quantum noise. This applies in the actual world too, albeit at a far, far lower rate than 1% (but hey… the predictor is supposed to be perfect). That leaves a perfect predictor perfectly predicting that in the branches with most of the quantum goo (pick a word) the brain will make one choice, while in the others it will make the other.
In this case it becomes a matter of how the counterfactual is specified. The most appropriate specification seems to be Omega filling the large box with an amount of money proportional to how much of the brain will be one boxing. A brain that actively flips a quantum coin would then be granted a large box with half the million.
The only other obvious alternative specification of Omega that wouldn’t break the counterfactual in this context is a hard cutoff at some specific degree of ‘probability’.
As you say, one boxing remains stable under this uncertainty, and even under imperfect predictors.
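To make the proportional specification concrete, a minimal sketch (the $1,000 small box is an assumed standard amount; only the million is given here): Omega fills the large box with w × $1,000,000, where w is the weight of the branches in which the brain one boxes, so the contents are fixed before the choice, and the expected take still rises with w:

```python
# Proportional specification: Omega fills the large box with
# w * $1,000,000, where w is the weight of the Everett branches
# in which the agent one-boxes.  The $1,000 small box is an
# assumed standard-Newcomb amount.
BIG, SMALL = 1_000_000, 1_000

def expected_take(w):
    """Winnings averaged over branches, given one-boxing weight w."""
    box = w * BIG                          # fixed before the choice is made
    one_boxing = w * box                   # weight w: take the big box only
    two_boxing = (1 - w) * (box + SMALL)   # weight 1-w: take both boxes
    return one_boxing + two_boxing

print(expected_take(0.5))   # the quantum coin flipper: 500,500
print(expected_take(0.99))  # the 99% one-boxer:       ~990,010
print(expected_take(1.0))   # maximised by one boxing with all the weight
```

Since the expected take is increasing in w, the one boxing disposition stays stable under this specification too.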
I’m not sure what the quantum-goo explanation is adding here.
If Omega can’t predict the 1% case (whether because it’s due to unpredictable quantum goo or for whatever other reason… picking a specific explanation only subjects me to a conjunction fallacy), then Omega’s behavior will not reflect the 1% case, and that completely changes the math. Someone for whom the 1% case is two-boxing is then entirely justified in two-boxing in the 1% case, since they ought to predict that Omega cannot predict their two-boxing. (Assuming they can recognize that they are in such a case. If not, they are best off one-boxing in all cases, though it follows from our premises that they will two-box 1% of the time anyway, perhaps without any idea why they did it. That said, compatibilist decision theory makes my teeth ache.)
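To spell out how the math changes in that case (same assumed payoffs as above): if Omega simply can’t see the rare deviation, it fills the box off the 99% disposition, so someone who recognizes they’re in the deviant case faces a box whose contents no longer depend on their choice:

```python
# If Omega cannot predict the 1% case at all, the box is filled
# from the 99% disposition (one-boxing), whatever actually happens.
# Dollar amounts are the same assumed Newcomb payoffs as above.
BIG, SMALL = 1_000_000, 1_000

box = BIG                       # filled, because the disposition is to one-box
take_one_boxing = box           # 1,000,000
take_two_boxing = box + SMALL   # 1,001,000 -- two-boxing wins in this case

print(take_one_boxing, take_two_boxing)
```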
Anyway, yeah, this is assuming some kind of hard cutoff strategy, where Omega puts a million dollars in a box for someone it has > N% confidence will one-box.
If instead Omega puts N% of $1M in the box when it has N% confidence that the subject will one-box, the result isn’t terribly different, provided Omega is a good predictor.
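As a rough sketch of why the two specifications barely come apart for a good predictor (the 99% confidence figure, the 90% cutoff threshold, and the $1,000 small box are all assumed numbers for illustration):

```python
# Hard cutoff vs. proportional fill, for a good predictor.
# Assumed for illustration: Omega is 99% confident the subject
# will one-box, the subject really does one-box 99% of the time,
# the cutoff threshold is 90%, and payoffs are standard Newcomb.
BIG, SMALL = 1_000_000, 1_000
p = 0.99

box_cutoff = BIG if p > 0.90 else 0    # hard cutoff: all or nothing
box_proportional = p * BIG             # proportional: p% of the million

def expected_take(box):
    # Subject one-boxes with probability p, two-boxes otherwise.
    return p * box + (1 - p) * (box + SMALL)

print(expected_take(box_cutoff))        # ~1,000,010
print(expected_take(box_proportional))  # ~990,010 -- not terribly different
```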
I’m completely lost by the “proportional to how much of the brain will be one boxing” strategy. Can you say more about what you mean by this? It seems likely to me that most of the brain neither one-boxes nor two-boxes (that is, is not involved in this choice at all) and most of the remainder does both (that is, performs the same operations in the two-boxing case as in the one-boxing case).
I’m not sure what the quantum-goo explanation is adding here.
A perfect predictor will predict, correctly and perfectly, that the brain both one boxes and two boxes in different Everett branches (with vastly different weights). This is different in nature from an imperfect predictor that isn’t able to model the behavior of the brain with complete certainty, yet, given preferences that add up to normal, it requires the same math. It means you do not have to abandon the premise “perfect predictor” for the probabilistic reasoning to be necessary.
I’m completely lost by the “proportional to how much of the brain will be one boxing” strategy.
How much weight the Everett branches in which it one boxes have, relative to the Everett branches in which it two boxes.
Allow me to emphasise:
As you say, one boxing remains stable under this uncertainty, and even under imperfect predictors.
(I think we agree?)
Ah, I see what you mean.
Yes, I think we agree. (I had previously been unsure.)