I’m not sure what the quantum-goo explanation is adding here.
If Omega can’t predict the 1% case (whether because it’s due to unpredictable quantum goo, or for whatever other reason… picking a specific explanation only subjects me to a conjunction fallacy), then Omega’s behavior will not reflect the 1% case, and that completely changes the math. Someone for whom the 1% case is two-boxing is then entirely justified in two-boxing in the 1% case, since they ought to predict that Omega cannot predict their two-boxing. (Assuming that they can recognize that they are in such a case. If not, they are best off one-boxing in all cases. It still follows from our premises that they will two-box 1% of the time anyway, though they might not have any idea why they did it. That said, compatibilist decision theory makes my teeth ache.)
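To make “completely changes the math” concrete, here is a rough expected-value sketch. The 99%/1% split is from above; the $1m/$1,000 payoffs are the standard Newcomb amounts rather than anything stated in this thread, and the assumption that the agent can recognize the unpredictable case is flagged in the comments.

```python
# Rough sketch, not anything from the thread: assume Omega's prediction tracks
# only the predictable 99% of cases, so in the unpredictable 1% case the box
# contents already reflect the 99% behavior (one-boxing), and assume the agent
# can recognize when it is in that case.

MILLION = 1_000_000   # opaque box, if Omega predicted one-boxing
THOUSAND = 1_000      # transparent box (standard Newcomb payoff, assumed here)

# Predictable 99% of the time: agent one-boxes, Omega predicted one-boxing.
predictable_payoff = MILLION

# Unpredictable 1% of the time: Omega could not foresee the deviation, so the
# opaque box is still full and taking both boxes adds $1,000.
unpredictable_two_box = MILLION + THOUSAND
unpredictable_one_box = MILLION

ev_two_box_when_unpredictable = 0.99 * predictable_payoff + 0.01 * unpredictable_two_box
ev_one_box_always             = 0.99 * predictable_payoff + 0.01 * unpredictable_one_box

print(ev_two_box_when_unpredictable)  # 1000010.0
print(ev_one_box_always)              # 1000000.0
```

Under those assumptions, two-boxing in the recognized-unpredictable case strictly dominates, which is the sense in which the math changes.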
Anyway, yeah, this is assuming some kind of hard cutoff strategy, where Omega puts a million dollars in a box for someone it has > N% confidence will one-box.
If instead Omega puts N% of $1m in the box when it has N% confidence the subject will one-box, the result isn’t terribly different, provided Omega is a good predictor.
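A minimal sketch of those two filling rules, for what it’s worth. The function names, the 0.5 threshold, and the example confidence values are mine, purely for illustration; the point is just that for a good predictor the two rules put nearly the same amount in the box.

```python
# Hedged sketch of the two filling rules mentioned above; the threshold and
# example confidences are illustrative assumptions, not from the discussion.

MILLION = 1_000_000

def cutoff_fill(confidence: float, threshold: float = 0.5) -> float:
    """Hard cutoff: the full $1m iff Omega's confidence exceeds the threshold."""
    return float(MILLION) if confidence > threshold else 0.0

def proportional_fill(confidence: float) -> float:
    """Proportional: N% of $1m when Omega is N% confident of one-boxing."""
    return confidence * MILLION

# A good predictor's confidence about a committed one-boxer is near 1, so the
# two rules barely differ in what ends up in the box.
for confidence in (0.99, 0.999):
    print(confidence, cutoff_fill(confidence), proportional_fill(confidence))
# 0.99  1000000.0 990000.0
# 0.999 1000000.0 999000.0
```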
I’m completely lost by the “proportional to how much of the brain will be one boxing” strategy. Can you say more about what you mean by this? It seems likely to me that most of the brain neither one-boxes nor two-boxes (that is, is not involved in this choice at all) and most of the remainder does both (that is, performs the same operations in the two-boxing case as in the one-boxing case).
I’m not sure what the quantum-goo explanation is adding here.
A perfect predictor will predict, correctly and perfectly, that the brain both one-boxes and two-boxes in different Everett branches (with vastly different weights). This is different in nature from an imperfect predictor that isn’t able to model the behavior of the brain with complete certainty, yet given preferences that add up to normal, it requires that you use the same math. It means you do not have to abandon the premise “perfect predictor” for the probabilistic reasoning to be necessary.
I’m completely lost by the “proportional to how much of the brain will be one boxing” strategy.
How much weight the Everett branches in which it one-boxes have relative to the Everett branches in which it two-boxes.
Allow me to emphasise:
As you say, one-boxing remains stable under this uncertainty, and even under imperfect predictors.
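A minimal sketch of that “same math” point, assuming the proportional filling rule from above and treating the one-boxing branch weight exactly as one would treat an imperfect predictor’s confidence. The example weights and the $1,000 second box are illustrative assumptions.

```python
# Sketch: whether the weight w comes from a perfect predictor over Everett
# branches or an imperfect predictor's confidence, the proportional rule gives
# the same expected values, so the same math applies in both cases.

MILLION, THOUSAND = 1_000_000, 1_000

def branch_payoffs(one_box_weight: float) -> tuple[float, float]:
    """Payoffs in the one-boxing and two-boxing branches when the opaque box
    holds one_box_weight * $1m under the proportional rule."""
    box_b = one_box_weight * MILLION
    return box_b, box_b + THOUSAND  # (one-box branches, two-box branches)

# A disposition that one-boxes in almost all branches does far better overall
# than one that two-boxes in almost all branches:
mostly_one_box = 0.99 * branch_payoffs(0.99)[0] + 0.01 * branch_payoffs(0.99)[1]
mostly_two_box = 0.01 * branch_payoffs(0.01)[0] + 0.99 * branch_payoffs(0.01)[1]
print(mostly_one_box)  # 990010.0
print(mostly_two_box)  # 10990.0
```

Which is the stability: pushing the one-boxing weight toward 1 is what pays, regardless of whether the predictor is perfect over branches or merely probabilistic.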
(I think we agree?)
Ah, I see what you mean.
Yes, I think we agree. (I had previously been unsure.)