Not really; all that is necessary is that Omega is a sufficiently accurate predictor that the payoff matrix, taking this accuracy into account, still amounts to a win for the given choice. There is no need for a perfect predictor. And if an imperfect, 99.999% predictor violates free will, then free will is clearly a lost cause anyway (I can predict many behaviours of people with similar precision from no more evidence than their past behaviour and speech, never mind godlike brain introspection). Do you have no “choice” in deciding to come to work tomorrow, if I predict based on your record that you’re 99.99% reliable? Where is the cut-off at which free will is lost?
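To make “still amounts to a win” concrete, here is a minimal back-of-the-envelope sketch in Python, assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one (those amounts are my assumption; the thread never states them):

```python
# Expected payoff of each strategy against a predictor with accuracy p.
# Payoff amounts are the usual Newcomb figures (an assumption here):
# $1,000,000 in the opaque box, $1,000 in the transparent one.

BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p: float) -> float:
    # With probability p, Omega correctly foresaw one-boxing and filled the box.
    return p * BIG

def ev_two_box(p: float) -> float:
    # With probability 1 - p, Omega wrongly expected one-boxing, so the
    # opaque box is full; the transparent $1,000 comes either way.
    return (1 - p) * BIG + SMALL

for p in (0.5, 0.51, 0.9, 0.99999):
    print(f"p={p}: one-box ${ev_one_box(p):,.0f}, two-box ${ev_two_box(p):,.0f}")

# One-boxing wins whenever p*BIG > (1-p)*BIG + SMALL, i.e. p > 0.5005.
```

So a predictor barely better than a coin flip already tilts the payoff matrix toward one-boxing; 99.999% accuracy is overkill.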
Do you have no “choice” in deciding to come to work tomorrow, if I predict based on your record that you’re 99.99% reliable?
Humans are subtle beasts. If you tell me that you have predicted I will go to work based upon my 99.99% attendance record, the probability that I will go drops dramatically the moment I receive that information, because there is a good chance I’ll stay home just to be awkward. This option of “taking your prediction into account, I’ll do the opposite to be awkward” is why it feels like you have free will.
Chances are I can predict such a response too, and so won’t tell you of my prediction (or will tell you in such a way that you become more likely to attend: e.g. “I’ve a $50 bet that you’ll attend tomorrow. Be there and I’ll split it 50:50”). It doesn’t change the fact that in this particular instance I can foretell the future with a high degree of accuracy. Why then would it violate free will if Omega could predict your actions in this different situation (one where he’s also able to predict the effects of telling you) with similar precision?
Why then would it violate free will if Omega could predict your actions in this different situation (one where he’s also able to predict the effects of telling you) with similar precision?
Because that’s pretty much our intuitive definition of free will: that it is not possible for someone to predict your actions, announce the prediction publicly, and still be correct. If you disagree, then we are disagreeing about the intuitive definition of “free will” that most people carry around in their heads. At least admit that most people would be unsurprised if someone predicted that they would, e.g., brush their teeth in the morning (without telling them of the prediction in advance), but would be surprised if someone predicted that they would knock a vase over and then, as a result of that announced prediction, the vase actually got knocked over.
Then take my bet situation. I announce your attendance, and cut you in with a $25 stake in attending. I don’t think it would be unusual to find someone who would indeed show up 99.99% of the time. Does that mean that person has no free will?
People are highly, though not perfectly, predictable in a large number of situations. Revealing the prediction complicates things by adding feedback to the system, but there are lots of cases where it still doesn’t change matters much (or even increases predictability). There are obviously some situations where this doesn’t happen, but Newcomb’s paradox needs only a predictor for the particular situation described, not for any arbitrary situation. (In fact Newcomb’s paradox is equally broken by a similar revelation of knowledge: if Omega were to reveal its prediction before the boxes are chosen, a person determined to do the opposite of that prediction turns it into a simple Epimenides paradox.)
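The Epimenides point can be made mechanical. A minimal sketch (the contrarian policy and the box labels are illustrative, not from the thread): once Omega announces its prediction, a determined contrarian leaves it no consistent prediction to make, i.e. no fixed point.

```python
# A contrarian agent: whatever prediction Omega announces, do the opposite.
def contrarian(announced: str) -> str:
    return "two-box" if announced == "one-box" else "one-box"

# An announced prediction is only correct if it is a fixed point of the
# agent's response. For the contrarian, neither announcement is:
for prediction in ("one-box", "two-box"):
    actual = contrarian(prediction)
    verdict = "consistent" if actual == prediction else "self-refuting"
    print(f"announce {prediction!r} -> agent plays {actual!r} ({verdict})")
```

Both announcements are self-refuting, which is exactly why the standard formulation keeps the prediction hidden until after the boxes are chosen.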