I agree with e.g. Rafael that Caplan’s argument is just silly and wrong and doesn’t deserve this much analysis.
But: I don’t see how any version of this could possibly be the thing that makes Bryan Caplan say “oh, silly me, obviously my argument was no good”. It doesn’t amount to saying that there must be a way of saying “you will raise your hand” or “you will not raise your hand” that guarantees that he will do what you say. It’s more like this: if you say “the probability that you will raise your hand is p” and he responds by raising his hand with probability q, and if q is a continuous function of p, then there is a choice of p for which q=p. True, but again: can you imagine this observation being convincing to him? How? Won’t he just say “well, duh, you replaced my thought experiment with a different one and it came out differently; so what?”?
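The fixed-point claim above is just the intermediate value theorem applied to f(p) = q(p) − p. A minimal sketch (the function names and the example responder are mine, purely for illustration):

```python
def fixed_point(q, lo=0.0, hi=1.0, tol=1e-9):
    """Find p in [0, 1] with q(p) = p, assuming q is continuous
    and maps [0, 1] into [0, 1] (so q(0) >= 0 and q(1) <= 1)."""
    f = lambda p: q(p) - p
    # f(lo) >= 0 and f(hi) <= 0, so a root exists by the IVT;
    # bisection homes in on it.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: a contrarian-but-continuous responder who raises his hand
# less often the more confidently you predict it.
q = lambda p: 1 - 0.8 * p
p_star = fixed_point(q)  # p* = 1/1.8, where q(p*) = p*
```

The crucial assumption is continuity of q in p; a responder who jumps discontinuously (as in the strategy discussed below) has no fixed point, which is exactly why this feels like a different thought experiment from Caplan's.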
Of course the argument may not be convincing to Caplan specifically, since I don’t have a very good model of how his mind works. I don’t see why this matters, though.
For some reason people seem to be reading my post as an attempt to say Caplan’s argument is actually good, when I’m just saying that Caplan’s argument is what got me thinking about this issue. The rest of what I write is my argument and doesn’t have much to do with what Caplan may or may not have thought.
As for the argument being convincing to libertarian free will advocates generally: to me the existence of such a probabilistic oracle that can inform you of your own actions in advance seems like a solid argument against it. There’s no way this oracle can be well-calibrated against someone who deliberately chooses to mess with it. For example, suppose your action space has two elements, 0 and 1: whenever the oracle assigns one of them at least 75% probability, you pick the other one, and whenever it assigns both options more than 25% probability, you just always pick 0. Even a cursory statistical analysis of the resulting data will show that the oracle is failing to predict your behavior correctly.
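The anti-oracle strategy can be written out explicitly; a short sketch (my own encoding of the strategy, with p denoting the oracle's announced probability of action 1):

```python
def respond(p):
    """The anti-oracle strategy: if the oracle gives either action
    >= 75% chance, pick the other one; if it gives both actions
    > 25% chance, always pick 0. Returns the chosen action."""
    if p >= 0.75:   # oracle confident you'll pick 1 -> pick 0
        return 0
    if p <= 0.25:   # oracle confident you'll pick 0 -> pick 1
        return 1
    return 0        # both options > 25% -> always pick 0

# However the oracle sets p, its announced probability of action 1
# is off from your actual (here deterministic) frequency by more
# than 0.25, so it can never be calibrated against this strategy.
for p in [0.0, 0.1, 0.25, 0.4, 0.6, 0.75, 0.9, 1.0]:
    assert abs(p - respond(p)) >= 0.25
```

Note that respond is deliberately discontinuous in p, which is what rules out the fixed point the continuity argument relies on.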
If you can’t even implement such a simple strategy to beat this oracle, what grounds do we have for saying that you have libertarian free will? The whole concept seems to become incoherent if you say that this oracle can correctly anticipate your behavior no matter how incentivized you are to beat it, and yet you actually “could choose otherwise”.