As to inspection, maybe I’m not familiar enough with the terminology there.
Re your last point: I was just thinking about that too. And strangely enough I missed that the boxes are open. But wouldn’t the note be useless in that case?
I will think about this more, but it seems to me your decision theory can’t recommend “Left-box, unless you see a bomb in Left”, and FDT doesn’t do this. The problem is that, in that case, the prediction influences what you end up doing. What if the predictor is malevolent, predicts you’ll choose Right, and places the bomb in Left? It could easily make you lose $100. Maybe it would be different if you believed the predictor to be benevolent?
And strangely enough I missed that the boxes are open.
Well, uh… that is rather an important aspect of the scenario…
… it seems to me your decision theory can’t recommend “Left-box, unless you see a bomb in Left” …
Why not?
The problem is that, in that case, the prediction influences what you end up doing.
Yes, it certainly does. And that’s a problem for the predictor, perhaps, but why should it be a problem for me? People condition their actions on knowledge of past events (including predictions of their actions!) all the time.
What if the predictor is malevolent, predicts you’ll choose Right, and places the bomb in Left? It could easily make you lose $100.
Indeed, the predictor doesn’t have to predict anything to make me lose $100; it can just place the bomb in the left box, period. This then boils down to a simple threat: “pay $100 or die!”. Hardly a tricky decision theory problem…
Well, uh… that is rather an important aspect of the scenario…
Sure. But given the note, it seems I already had the knowledge I needed. Anyway.
Indeed, the predictor doesn’t have to predict anything to make me lose $100; it can just place the bomb in the left box, period. This then boils down to a simple threat: “pay $100 or die!”. Hardly a tricky decision theory problem…
I didn’t say it was a tricky decision theory problem. My point was that your strategy is easily exploitable, and may therefore not be a good one.
If your strategy is “always choose Left”, then a malevolent “predictor” can put a bomb in Left and be guaranteed to kill you. That seems much worse than being mugged for $100.
I don’t see how that’s relevant. In the original problem, you’ve been placed in this weird situation against your will, where something bad will happen to you (either the loss of $100 or … death). If we’re supposing that the predictor is malevolent, she could certainly do all sorts of things… are we assuming that the predictor is constrained in some way? Clearly, she can make mistakes, so that opens up her options to any kind of thing you like. In any case, your choice (by construction) is as stated: pay $100, or die.
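A minimal sketch, under assumed numbers, can tabulate the trade-off being weighed here. The numeric utilities and the “always bombs Left” predictor behavior below are assumptions added for illustration, not taken from the problem statement (and, as comes up below, the problem description arguably rules that behavior out):

```python
# Illustrative sketch. Assumptions (not from the problem statement):
#   - a malevolent predictor that simply always puts the bomb in Left;
#   - DEATH as a numeric stand-in for the disutility of dying.

DEATH = -1_000_000   # assumed stand-in disutility of taking the bomb
RIGHT_COST = -100    # taking Right costs $100 (this is from the problem)

def payoff(choice: str, bomb_in_left: bool) -> int:
    """Payoff of taking a box, given whether Left contains a bomb."""
    if choice == "Left":
        return DEATH if bomb_in_left else 0
    return RIGHT_COST

def always_left(sees_bomb: bool) -> str:
    return "Left"

def left_unless_bomb(sees_bomb: bool) -> str:
    # The boxes are open, so the agent can condition on seeing the bomb.
    return "Right" if sees_bomb else "Left"

bomb_in_left = True  # the assumed malevolent predictor bombs Left, period
for name, strategy in [("always Left", always_left),
                       ("Left unless bomb seen", left_unless_bomb)]:
    choice = strategy(sees_bomb=bomb_in_left)
    print(f"{name}: takes {choice}, payoff {payoff(choice, bomb_in_left)}")
```

On these assumed numbers, “Left unless bomb seen” is exploitable for $100, while “always Left” can be made to die; that asymmetry is exactly what the two sides weigh differently.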
If your strategy is “always choose Left”, then a malevolent “predictor” can put a bomb in Left and be guaranteed to kill you. That seems much worse than being mugged for $100.
The problem description explicitly states the predictor doesn’t do that, so no.
I don’t see how that’s relevant. In the original problem, you’ve been placed in this weird situation against your will, where something bad will happen to you (either the loss of $100 or … death). If we’re supposing that the predictor is malevolent, she could certainly do all sorts of things… are we assuming that the predictor is constrained in some way? Clearly, she can make mistakes, so that opens up her options to any kind of thing you like. In any case, your choice (by construction) is as stated: pay $100, or die.
You don’t see how the problem description preventing it is relevant?
The description doesn’t prevent malevolence, but it does prevent putting a bomb in Left if the agent left-boxes.
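The constraint just stated can be made concrete with a second sketch. It is again illustrative: treating a prediction as “consistent” when it matches the agent’s actual choice is an assumption about how prediction works here, not part of the original text. The constraint itself is from the problem description: the bomb is in Left iff the predictor predicted Right.

```python
# Illustrative sketch. Per the problem description, the bomb is in Left
# iff the predictor predicted Right; a prediction is treated as
# "consistent" when it matches what the agent then actually does.

def choice(strategy: str, bomb_in_left: bool) -> str:
    if strategy == "always Left":
        return "Left"
    # "Left unless bomb seen": the boxes are open, so the bomb is visible.
    return "Right" if bomb_in_left else "Left"

for strategy in ("always Left", "Left unless bomb seen"):
    for prediction in ("Left", "Right"):
        bomb_in_left = (prediction == "Right")   # the stated constraint
        actual = choice(strategy, bomb_in_left)
        status = "consistent" if actual == prediction else "predictor erred"
        print(f"{strategy:22} predicted {prediction:5} -> takes {actual:5} ({status})")
```

Against “always Left”, only the Left prediction is consistent, so (barring predictor error) there is no bomb and the agent pays nothing. Against “Left unless bomb seen”, both predictions are consistent, so a malevolent predictor can correctly predict Right and cost the agent $100. On this reading, the exploitability worry and the description’s constraint can both be true at once.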