Yes, FDT insists that actually, you must choose in advance (by “choosing your algorithm” or what have you), and must stick to the choice no matter what. But that is a feature of FDT, it is not a feature of the scenario!
FDT doesn’t insist on this at all. FDT recognizes that IF your decision procedure is modelled prior to your current decision, then you did in fact choose in advance. If an FDT’er playing Bomb doesn’t believe her decision procedure was being modelled this way, she wouldn’t take Left!
FDT recognizes it if and only if it is a feature of the scenario. FDT doesn’t insist that the world be a certain way; I wouldn’t be a proponent of it if it did.
If a model of you predicts that you will choose A, but in fact you can choose B, and want to choose B, and do choose B, then clearly the model was wrong. Thinking “the model says I will choose A, therefore I have to (???) choose A” is total nonsense.
(Is there some other way to interpret what you’re saying? I don’t see it.)
“Thinking “the model says I will choose A, therefore I have to (???) choose A” is total nonsense.”
I choose whatever I want, knowing that it means the predictor predicted that choice.
In Bomb, if I choose Left, the predictor will have predicted that (given subjunctive dependence). Yes, the predictor said it predicted Right in the problem description; but if I choose Left, that simply means the problem ran differently from the start. It means that, starting from the beginning, the predictor predicts I will choose Left, doesn’t put a bomb in Left, doesn’t leave the “I predicted you will pick Right” note (but maybe leaves an “I predicted you will pick Left” note), and then I indeed choose Left, letting me live for free.
If the model is in fact (near) perfect, then choosing B means the model chose B too. That may seem like changing the past, but it really isn’t, that’s just the confusing way these problems are set up.
Claiming you can choose something a (near) perfect model of you didn’t predict is like claiming two identical calculators can give a different answer to 2 + 2.
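To make the calculator analogy concrete, here is a minimal sketch (my own illustration, not from the original discussion; the function names are hypothetical). It assumes the predictor works by running an exact copy of the agent’s deterministic decision procedure, which is what subjunctive dependence amounts to here. Under that assumption the prediction and the actual choice cannot come apart, so an agent whose procedure outputs Left never faces a bomb in Left:

```python
def decision_procedure():
    # Hypothetical FDT-style agent: its procedure outputs Left,
    # reasoning that the predictor modelled this very procedure.
    return "Left"

def predictor():
    # The predictor predicts by running a copy of the agent's procedure.
    predicted = decision_procedure()
    # Per the scenario: a bomb goes in Left only if Right is predicted.
    bomb_in_left = (predicted == "Right")
    note = f"I predicted you will pick {predicted}"
    return predicted, bomb_in_left, note

predicted, bomb_in_left, note = predictor()
actual = decision_procedure()

# Identical procedures cannot disagree, any more than two identical
# calculators can give different answers to 2 + 2:
assert actual == predicted
# So choosing Left means the predictor predicted Left and placed no bomb:
assert not bomb_in_left
```

The point of the sketch is only this: under the stated assumption of a (near) perfect model, the branch where you choose Left while facing a bomb-in-Left never arises, because that branch requires the copy and the original to compute different outputs from the same procedure.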