Well here is what Bryan said in the 80k Podcast about what would change his mind:
Now the thought experiment is to tell me that unconditional prediction — and you should be able to go and tell me the prediction in such a way that it incorporates all my reactions and secondary reactions and so on, all the way to infinity. And now guess what I’m going to do? I’m going to do the opposite of what you said I’m going to do. All right? Again, it’s not ironclad, and I know there’s a lot of people who say, no, no, no, feedback loops, and it doesn’t count. But it sure seems like if determinism was true, you should be able to give me unconditional predictions about what I’m going to do. And then intuitively, it seems like I could totally not do them.
There is of course an obvious way to make this work: make the prediction so close to the future that you don’t have enough time to change your mind; hence rock-paper-scissors.
I honestly don’t know how this would fit into his worldview.
Caplan’s thought experiment does seem confused to me, so I’m not sure exactly what his position is and I’m not confident that it’s coherent. But his being told of the prediction in advance is a very deliberate feature of the thought experiment, so I don’t think you can make it testable by removing that.
As for whether being owned at RPS should surprise him, or should in general shake the confidence of a free-will libertarian—I can’t imagine anyone having failed to notice that better-than-chance predictions of human behaviour are often possible, so I still don’t see why a direct demonstration of this would threaten their beliefs. Any thoughtful free-will libertarian must have a theory that is (believed to be) compatible with partial predictability.
The question is whether he would break the prediction. He can certainly imagine breaking it, in principle, but would he actually break it? That’s something that a thought experiment can’t possibly address. Since we don’t yet have any way of predicting human behaviour to the required extent, we can’t actually conduct the experiment.
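To make the point about partial predictability concrete: even a tiny pattern-matching bot plays better than chance against most humans at rock-paper-scissors, because people fall into patterns. A minimal sketch (the function name, the `order` parameter, and the move encoding are mine, not anything from the podcast):

```python
import random
from collections import Counter

# Each move is beaten by the next one in this mapping.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def rps_bot(history, order=2):
    """Predict the opponent's next move from their move history and
    return the move that beats it.

    `history` is the list of the opponent's past moves. The bot looks
    at their last `order` moves, finds what they played after that
    same pattern before, and counters the most common follow-up.
    Falls back to a uniformly random move when it has no data.
    """
    if len(history) < order:
        return random.choice(list(BEATS))
    pattern = tuple(history[-order:])
    followers = Counter(
        history[i + order]
        for i in range(len(history) - order)
        if tuple(history[i:i + order]) == pattern
    )
    if not followers:
        return random.choice(list(BEATS))
    predicted = followers.most_common(1)[0][0]
    return BEATS[predicted]
```

Against a player who keeps repeating a pattern, e.g. always playing rock, `rps_bot(["rock"] * 10)` returns `"paper"`. Nothing here is a full model of the opponent, which is exactly the point: better-than-chance prediction needs no such model.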
Of course, the thought experiment is rubbish. It applies just as well to a deterministic computer program that prints “What will I print next?”, reads the input, and then prints something else (e.g. by printing “Fooled you!” in response to anything except “Fooled you!”, to which it prints “Nope.”). Would Bryan argue that this program is not deterministic? I’m not a foolproof Bryan-predictor, but I’m going to predict “no”.
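The program just described fits in a few lines. A sketch (the function name is mine; the behaviour is exactly the one specified above):

```python
def defy_prediction(predicted_output: str) -> str:
    """Return an output that differs from the predicted one.

    Fully deterministic, yet no prediction fed to it can be correct:
    any input other than "Fooled you!" gets "Fooled you!", and
    "Fooled you!" itself gets "Nope."
    """
    if predicted_output == "Fooled you!":
        return "Nope."
    return "Fooled you!"

if __name__ == "__main__":
    print("What will I print next?")
    print(defy_prediction(input()))
```

For every possible input `p`, `defy_prediction(p) != p`, which is the whole trick: systematically contradicting a stated prediction requires only determinism plus access to the prediction, not freedom.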
There is an obvious class of predictions, like killing your own family or yourself, and such predictions are a good example of what the absence of free will feels like from the inside. There are things I care about, and I am not free to throw them away.
This sounds like his confusion could be resolved by someone explaining to him what determinism is.