I really fail to see why you’re all so fascinated by Newcomb-like problems.
Agreed. This problem seems uninteresting to me too. Though more realistic Newcomb-like problems are interesting, for there are parts of life where Newcombian reasoning works for real.
On second thoughts, since many clever philosophers spend careers on these problems, I may be missing something.
The obvious complaint about “would you choose X or Y given that Omega already knows your actions” is that it is logically inconsistent; if Omega already knows your actions, the word “choose” is nonsense. Strictly speaking, “choose” is nonsense anyway, since in its everyday usage it presupposes the naive free-will point of view.
In order to untangle this, a sophisticated understanding of what we mean by “choose” is needed. I may post on this. My intuition is that if we stick to a rigorous meaning of “choose”, the question will have a well-defined answer that no one will dispute; however, what that answer is will depend on the definition of “choose” that you, um, choose, so to speak…
This problem seems uninteresting to me too. Though more realistic Newcomb-like problems are interesting, for there are parts of life where Newcombian reasoning works for real.
I find the problem interesting, so I’ll try to explain why I find it interesting.
So there are these blogs called Overcoming Bias and Less Wrong, and the people posting on them seem like very smart people, and they say very reasonable things. They offer to teach how to become rational, in the sense of “winning more often”. I want to win more often too, so I read the blogs.
Now a lot of what these people are saying sounds very reasonable, but it’s also clear that the people saying these things are much smarter than me; so much so that although their conclusions sound very reasonable, I can’t always follow all the arguments or steps used to reach those conclusions. As part of my rationalist training, I try to notice when I can follow the steps to a conclusion and when I can’t, and to remember which conclusions I believe because I fully understand them, and which conclusions I am “tentatively believing” because someone smart said them and I’m just taking their word for it for now.
So now Vladimir Nesov presents this puzzle, and I realize that I must not have understood one of the conclusions (or I did understand them, and the smart people were mistaken), because it sounds like if I were to follow the advice of this blog, I’d be doing something really stupid (depending on how you answered VN’s problem, the stupid thing is either “wasting $100” or “wasting $4950”).
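A minimal sketch of where those two figures might come from, assuming VN’s puzzle is the counterfactual-mugging setup (Omega’s fair coin came up against you and it asks for $100; had it landed the other way, anyone Omega predicted would pay gets $10,000; the $10,000 amount is an assumption here, not something stated in this thread):

```python
# Hedged sketch: assumes a $100 ask and a hypothetical $10,000 prize
# (the prize amount is an assumption, not stated in this thread).
ASK, PRIZE = 100, 10_000

# Expected value, before the coin flip, of being the kind of agent who pays:
ev_payer = 0.5 * PRIZE - 0.5 * ASK   # 4950.0
# Expected value of being the kind of agent who refuses:
ev_refuser = 0.0

print(ev_payer - ev_refuser)  # 4950.0: what a refuser gives up in expectation
print(ASK)                    # 100: what a payer hands over once the coin has gone against them
# So the "stupid thing" is either wasting $100 (if paying is the mistake)
# or wasting $4950 in expectation (if refusing is the mistake).
```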
So how do I reconcile this with everything I’ve learned on this blog?
Think of most of the blog as a textbook, with VN’s post being an “exercise to the reader” or a “homework problem”.
Not really—all that is necessary is that Omega is a sufficiently accurate predictor that the payoff matrix, taking this accuracy into account, still amounts to a win for the given choice. There is no need to be a perfect predictor. And if an imperfect, 99.999% predictor violates free will, then it’s clearly a lost cause anyway (I can predict many behaviours of people with similar precision based on no more evidence than their behaviour and speech, never mind godlike brain introspection). Do you have no “choice” in deciding to come to work tomorrow, if I predict based on your record that you’re 99.99% reliable? Where is the cut-off beyond which free will gets lost?
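A minimal sketch of the “sufficiently accurate predictor” point, assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one (amounts not stated in this thread):

```python
# Hedged sketch: assumes the standard Newcomb amounts of $1,000,000
# (opaque box, filled only if one-boxing was predicted) and $1,000
# (transparent box, always there).
BIG, SMALL = 1_000_000, 1_000

def ev_one_box(p):
    # The opaque box is filled exactly when Omega (accuracy p) predicts correctly.
    return p * BIG

def ev_two_box(p):
    # You always get the small box; the big box is filled only when Omega
    # wrongly predicted one-boxing (probability 1 - p).
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.51, 0.6, 0.999):
    print(f"accuracy {p}: one-box {ev_one_box(p):,.0f}  two-box {ev_two_box(p):,.0f}")
# One-boxing pulls ahead once p exceeds 0.5005, so nothing close to a
# perfect predictor is needed for the payoff matrix to favour one-boxing.
```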
Do you have no “choice” in deciding to come to work tomorrow, if I predict based on your record that you’re 99.99% reliable?
Humans are subtle beasts. If you tell me that you have predicted that I will go to work based upon my 99.99% attendance record, the probability that I will go to work drops dramatically upon me receiving that information, because there is a good chance that I’ll not go just to be awkward. This option of “taking your prediction into account, I’ll do the opposite to be awkward” is why it feels like you have free will.
Chances are I can predict such a response too, and so won’t tell you of my prediction (or will tell you in such a way that you will be more likely to attend: e.g. “I’ve a $50 bet you’ll attend tomorrow. Be there and I’ll split it 50:50”). It doesn’t change the fact that in this particular instance I can foretell the future with a high degree of accuracy. Why then would it violate free will if Omega could predict your actions in this different situation (one where he’s also able to predict the effects of telling you) to a similar precision?
Why then would it violate free will if Omega could predict your actions in this different situation (one where he’s also able to predict the effects of telling you) to a similar precision?
Because that’s pretty much our intuitive definition of free will: that it is not possible for someone to predict your actions, announce the prediction publicly, and still be correct. If you disagree, then we are disagreeing about the intuitive definition of “free will” that most people carry around in their heads. At least admit that most people would be unsurprised if someone predicted that they would (e.g.) brush their teeth in the morning (without being told of the prediction in advance), but would be surprised if someone predicted that they would knock a vase over and then, as a result of that prediction, the vase actually got knocked over.
Then take my bet situation. I announce your attendance, and cut you in with a $25 stake in attendance. I don’t think it would be unusual to find someone who would indeed appear 99.99% of the time—does that mean that person has no free will?
People are highly, though not perfectly, predictable in a large number of situations. Revealing knowledge about the prediction complicates things by adding feedback to the system, but there are lots of cases where it still doesn’t change matters much (or even increases predictability). There are obviously some situations where this doesn’t happen, but for Newcomb’s paradox, all that is needed is a predictor for the particular situation described, not for any general situation. (In fact Newcomb’s paradox is equally broken by a similar revelation of knowledge. If Omega were to reveal its prediction before the boxes are chosen, a person determined to do the opposite of that prediction opens it up to a simple Epimenides paradox.)
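A minimal sketch of that last point: if the prediction must be announced first, a chooser determined to do the opposite leaves no announcement that can come out true.

```python
# Hedged sketch of the announced-prediction problem: a chooser who always
# does the opposite of whatever Omega announces.
def contrarian(announced):
    return "two-box" if announced == "one-box" else "one-box"

for announced in ("one-box", "two-box"):
    actual = contrarian(announced)
    print(f"Omega announces {announced!r}, chooser does {actual!r}, "
          f"prediction correct: {announced == actual}")
# Neither announcement can come out true, which is the Epimenides-style
# inconsistency: revealing the prediction in advance makes a perfect
# predictor impossible for this chooser.
```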
The primary reason for resolving Newcomb-like problems is to explore the fundamental limitations of decision theories.
It sounds like you are still confused about free will. See Righting a Wrong Question, Possibility and Could-ness, and Daniel Dennett’s lecture here.
Yes, I am confused about free will, but I think that this confusion is legitimate given our current lack of knowledge about how the human mind works.
I hope I’m not making obvious errors about free will. But if I am, then I’d like to know...
I think I’m not confused about free will, and that the links I gave should help to resolve most of the confusion. Maybe you should write a blog post/LW article where you formulate the nature of your confusion (if you still have it after reading the relevant material), I’ll respond to that.
On second thoughts, since many clever philosophers spend careers on these problems, I may be missing something.
Nah, they just need something to talk about.