Do you really think that merely deciding to one-box in such a situation would change your personality in a way that gets picked up by the test? If it does, do you want to modify your personality in a measurable way just so that you can win if you happen to run into a Newcomb problem?
Suppose for example it had been determined empirically that whether or not one was religious correlated well with the number of boxes you took. This could then be one of the things that the personality test measures. Are you saying that a precommitment would change your religious beliefs, or that you would change them in addition to deciding to one-box (in which case, why are you changing the latter at all)?
The point in case 1 is that they are not making a direct measurement of your decision. They are merely measuring external factors so that for 99% of people these factors agree with their decision (I think that this is implausible, but not significantly more implausible than the existence of Omega in the first place). It seems to me very unlikely that just changing your mind on whether you should one-box would also automatically change these other factors. And if it does, do you necessarily want to be messing around with your personality just to win this game that will almost certainly never come up?
If merely deciding to one-box is not picked up by the test, and does not offer even a slight increase in the probability that the money is there (even 51% as opposed to 50% would be enough), then the test is not very good, in which case I would two-box. However, this seems to contradict the stated fact that Omega is a very good predictor of decisions.
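The 51% claim checks out with a quick expected-value sketch, assuming the standard payoffs of $1,000 in the visible box and $1,000,000 in the opaque box:

```python
# Expected value of each choice given predictor accuracy p,
# under the standard Newcomb payoffs.
def ev_one_box(p):
    # You get the $1,000,000 only if Omega correctly predicted one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # You always get the $1,000; the $1,000,000 is there only if
    # Omega wrongly predicted one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.50, 0.51, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
```

At p = 0.50 two-boxing wins ($501,000 vs $500,000), but already at p = 0.51 one-boxing is ahead ($510,000 vs $491,000); the break-even accuracy is p = 0.5005.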
As a general principle, I am most definitely interested in modifying my personality to increase the number of situations in which I win. If I weren't, I probably wouldn't be on LW. The religion example is a strawman: it seems clear that applying the modification "believe in God" will cause me to do worse in many other, much more common situations, whereas "one-box in Newcomb-type dilemmas" doesn't seem likely to have many side effects.
If Omega really is just measuring external factors, then how do you know he won't pick up on my decision to always one-box? The decision was not made in a vacuum; it was caused by my personality, my style of thinking, and my level of intelligence, all of which are things that any reasonably competent predictor should pick up on.
As long as the test is reasonably good, I will still get my million with a higher probability, and that's all that really matters to me.
How about this version of Omega (and this is one that I think could actually be implemented to be 90% accurate)? First off, box A is painted with pictures of snakes and box B with pictures of bananas. Omega's prediction procedure is (and you are told this by the people running the experiment) that if you are a human he predicts that you two-box, and if you are a chimpanzee he predicts that you one-box.
I don’t think that 10% of people would give up $1000 to prove Omega wrong, and if you think so, why not make it $10^6 and $10^9 instead of $10^3 and $10^6?
I feel like this version satisfies the assumptions of the problem and makes it clear that you should two-box in this situation. Therefore any claims that one-boxing is the correct solution need to at least be qualified by extra assumptions about how Omega operates.
In this version Omega may be predicting decisions in general with some accuracy, but it does not seem like he is predicting mine.
So it appears there are cases where I two-box. I think in general my specification of a Newcomb-type problem has two requirements:
An outside observer who observed me to two-box would predict with high probability that the money is not there.
An outside observer who observed me to one-box would predict with high probability that the money is there.
The above version of the problem clearly does not meet the second requirement.
If this is what you meant by your statement that the problem is ambiguous, then I agree. This is one of the reasons I favour a formulation involving a brain-scanner rather than a nebulous godlike entity, since it seems more useful to focus on the particularly paradoxical cases rather than the easy ones.
I don’t think that your change of just that decision would be picked up by a personality test. Changing that decision is unlikely to change how you answer questions not directly relating to Newcomb’s problem. The test would pick up the style of thinking that led you to this decision, but making the decision differently would not change your style of thinking. Perhaps an example that illustrates my point even better:
Omega #1.1: Bases his prediction on a genetic test.
Now I agree that it is unlikely that this will get 99% accuracy, but I think it could plausibly obtain, say, 60% accuracy, which shouldn’t really change the issue at hand. Remember that Omega does not need to measure things that cause you to decide one way or another; he just needs to measure things that have a positive correlation with your decision.
As for modifying your personality… Should I really believe that you believe the arguments that you are making here, or are you just worried that you are going to be in this situation and that Omega will base his prediction on your posts?
Good point with the genetic test argument; in that situation I probably would two-box. The same might apply to any sufficiently poor personality test, or to a version of Omega that bases his decision on the posts I make on Less Wrong (although I think if my sole reason for being here were signalling my willingness to make certain choices in certain dilemmas, I could probably find better ways to do it).
I usually imagine Omega does better than that, and that his methods are at least as sophisticated as figuring out how I make decisions and then applying that algorithm to the problem at hand (the source of this assumption is that the first time I saw the problem, Omega was a supercomputer that scanned people’s brains).
As for the personality modification thing, I really don’t see what you find so implausible about the idea that I’m not attached to my flaws, and would eliminate them if I had the chance.
I agree that the standard interpretation of Omega generally involves brain scans. But there is still a difference between running a simulation (Omega #2) and checking for relevant correlating personality traits. The latter, I would claim, is at least somewhat analogous to genetic testing, though admittedly the case is murkier. I guess perhaps the Omega that is most in the spirit of the question is one who does a brain scan and searches for your cached answer of “this is what I do in Newcomb problems”.
As for personality modification, I don’t see why changing my stored values for how to behave in Newcomb situations would change how I behave in non-Newcomb situations. I also don’t see why these changes would necessarily be an improvement.
“I don’t see why changing my stored values for how to behave in Newcomb situations would change how I behave in non-Newcomb situations.”
It wouldn’t; that’s the point. But it would improve your performance in Newcomb situations, so there’s no downside. (For an example of a Newcomb-type paradox which could happen in the real world, see Parfit’s hitch-hiker; given that I am not a perfect liar, I would not consider it too unlikely that I will face a situation of that general type, if not that exact situation, at some point in my life.)
My point was that if it didn’t change your behavior in non-Newcomb situations, no reasonable version of Omega #1 (or really any Omega that does not use either brain scans or lie detection) could tell the difference.
As for changing my actions in the case of Parfit’s hitch-hiker: say that the chance of actually running into this situation (with someone who can actually detect lies, in a situation with no third alternatives, and where my internal sense of fairness wouldn’t just cause me to give him the $100 anyway) is, say, 10^-9. This means that changing my behavior would save me an expected, say, 3 seconds of life. So if you have a way that I can actually precommit myself that takes less than 3 seconds to do, I’m all ears.
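The "3 seconds" figure follows from a one-line expected-value computation, assuming a remaining lifespan of roughly 95 years (about 3×10^9 seconds):

```python
# Expected life saved by precommitting: the probability of facing a
# Parfit's-hitch-hiker situation where precommitment saves your life,
# times your remaining lifespan (assumed ~95 years here).
p_situation = 1e-9
remaining_lifespan_s = 95 * 365.25 * 24 * 3600  # about 3.0e9 seconds
expected_gain_s = p_situation * remaining_lifespan_s
print(expected_gain_s)  # about 3 seconds
```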
It wouldn’t have to be that exact situation.

In fact, it is applicable in any situation where you need to make a promise to someone who has a reasonable chance of spotting it if you lie (I don’t know about you, but I often get caught out when I lie), where you prefer following through on the promise to not making it, but also prefer going back on the promise to following through on it (technically, they need to have a good enough chance of spotting you, with “good enough” determined by your relative preferences).
That’s quite a generic situation, and I would estimate at least 10% probability that you encounter it at some point, although the stakes will hopefully be lower than your life.
Perhaps. Though I believe that in the vast majority of these cases my internal (and perhaps irrational) sense of fairness would cause me to keep my word anyway.