For the record, I read Nate’s comments again, and I now think of it like this:
To the extent that the predictor was accurate in her line of reasoning, your left-boxing does NOT result in you slowly burning to death. It results in, well, the problem statement being wrong, because the following can’t all be true (see the sketch after this list):
The predictor is accurate
The predictor predicts you right-box, and places the bomb in left
You left-box
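To make the inconsistency explicit, here is a minimal brute-force check. The encoding is mine, not part of the problem statement: I model the prediction and your action as two binary choices, with “accurate” meaning the prediction matches the action, and the bomb placed in Left exactly when right-boxing is predicted.

```python
from itertools import product

# Enumerate every combination of prediction and action and test whether
# the three statements above can hold simultaneously.
for prediction, action in product(["left", "right"], repeat=2):
    accurate = prediction == action                        # 1. the predictor is accurate
    right_predicted_bomb_in_left = prediction == "right"   # 2. predicts right-box, bomb in Left
    you_left_box = action == "left"                        # 3. you left-box
    if accurate and right_predicted_bomb_in_left and you_left_box:
        print("consistent:", prediction, action)
        break
else:
    # The loop never breaks: no assignment satisfies all three at once.
    print("no assignment satisfies all three statements")
```

Running it prints the latter: the three statements are jointly unsatisfiable, which is the sense in which left-boxing makes the problem statement wrong rather than making you burn.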
And yes, apparently the predictor can be wrong, but I’d say, who even cares? The probability of the predictor being wrong is supposed to be virtually zero anyway (although, as Nate notes, the problem description isn’t complete in that regard).
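To put a rough number on “who even cares”: a back-of-the-envelope expected-cost comparison. The error rate and the cost figures here are assumptions on my part (the $100 fee for right-boxing is from the standard Bomb setup; the problem description, as noted, doesn’t pin the error rate down).

```python
ERROR_RATE = 1e-24        # assumed probability that the predictor errs
COST_OF_BURNING = 1e12    # assumed dollar-equivalent disvalue of burning to death
RIGHT_BOX_FEE = 100       # fee for right-boxing in the standard Bomb setup

# Policy "left-box": if the predictor is accurate she predicted left-boxing,
# so Left is empty; you burn only in the error case.
expected_cost_left = ERROR_RATE * COST_OF_BURNING

# Policy "right-box": you always pay the fee; any bomb in Left never triggers.
expected_cost_right = RIGHT_BOX_FEE

print(f"left-box : expected cost {expected_cost_left:.2e}")   # ~1e-12
print(f"right-box: expected cost {expected_cost_right:.2e}")  # 1e+02
```

Under any remotely sane numbers the left-boxing policy is cheaper in expectation, which is why the near-zero error probability is doing the real work here.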
How do we know it? If the predictor is malevolent, then it can “err” as much as it wants.
We know it because it is stipulated in the problem description; if the predictor ‘can “err” as much as it wants’, that stipulation is violated.