Good point with the genetic test argument; in that situation I probably would two-box. The same might apply to any sufficiently poor personality test, or to a version of Omega that bases his decision on the posts I make on Less Wrong (although I think that if my sole reason for being here were signalling my willingness to make certain choices in certain dilemmas, I could probably find better ways to do it).
I usually imagine Omega does better than that, and that his methods are at least as sophisticated as figuring out how I make decisions and then applying that algorithm to the problem at hand (the source of this assumption is that the first time I saw the problem, Omega was a supercomputer that scanned people's brains).
As for the personality modification thing, I really don’t see what you find so implausible about the idea that I’m not attached to my flaws, and would eliminate them if I had the chance.
I agree that the standard interpretation of Omega generally involves brain scans. But there is still a difference between running a simulation (Omega #2) and checking for relevant correlated personality traits. The latter, I would claim, is at least somewhat analogous to genetic testing, though admittedly the case is murkier. I guess the Omega most in the spirit of the question is one who does a brain scan and searches for your cached answer of “this is what I do in Newcomb problems”.
As for personality modification, I don’t see why changing my stored values for how to behave in Newcomb situations would change how I behave in non-Newcomb situations. I also don’t see why these changes would necessarily be an improvement.
“I don’t see why changing my stored values for how to behave in Newcomb situations would change how I behave in non-Newcomb situations.”
It wouldn’t; that’s the point. But it would improve your performance in Newcomb situations, so there’s no downside. (For an example of a Newcomb-type paradox that could happen in the real world, see Parfit’s hitch-hiker; given that I am not a perfect liar, I would not consider it too unlikely that I will face a situation of that general type (if not that exact situation) at some point in my life.)
My point was that if it didn’t change your behavior in non-Newcomb situations, then no reasonable version of Omega #1 (or really any Omega that does not use either brain scans or lie detection) could tell the difference.
As for changing my actions in the case of Parfit’s hitch-hiker: say the chance of actually running into this situation (with someone who can actually detect lies, in a situation with no third alternatives, and where my internal sense of fairness wouldn’t just cause me to give him the $100 anyway) is about 10^-9. That means changing my behavior would save me an expected 3 seconds or so of life. So if you have a way I can actually precommit myself that takes less than 3 seconds to do, I’m all ears.
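(For what it’s worth, here is the arithmetic behind the “3 seconds” figure, as a minimal sketch; the 10^-9 probability and the ~95 years of remaining life at stake are placeholder assumptions chosen only to reproduce that rough number.)

```python
# Rough expected-value check for the "3 seconds of life" figure above.
# All numbers are illustrative assumptions, not claims about anyone's actual lifespan.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

p_situation = 1e-9      # assumed chance of facing a genuine Parfit's-hitch-hiker case
remaining_years = 95    # assumed remaining lifespan at stake (placeholder)
remaining_seconds = remaining_years * SECONDS_PER_YEAR

# Expected life lost by NOT being precommitted, if losing the case costs your life:
expected_loss_seconds = p_situation * remaining_seconds
print(f"Expected loss: {expected_loss_seconds:.1f} seconds")  # ~3 seconds
```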
It wouldn’t have to be that exact situation.
In fact, it is applicable in any situation where you need to make a promise to someone who has a reasonable chance of spotting it if you lie (I don’t know about you, but I often get caught out when I lie), where you prefer following through on the promise to not making it at all, but also prefer going back on the promise to following through on it. (Technically, they need a good enough chance of spotting you, with “good enough” determined by your relative preferences.)
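(To spell out that parenthetical under one simplifying assumption of mine — that a detected liar is treated exactly as if the promise had never been made — the “good enough” threshold falls out of the three utilities involved. The sketch below uses invented numbers purely to illustrate the formula.)

```python
# Hedged sketch of the "good enough chance of spotting you" condition above.
# The utility values are arbitrary placeholders; only the ordering
# renege > keep > no_promise matters.

u_renege = 10.0   # make the promise, break it, and get away with it
u_keep = 7.0      # make the promise and follow through
u_none = 0.0      # promise never made (or not believed)

def detection_threshold(u_renege, u_keep, u_none):
    """Minimum detection probability q at which genuinely intending to keep
    the promise beats planning to renege, assuming a detected liar is
    treated as if no promise had been made.

    Planning to renege pays q*u_none + (1-q)*u_renege; keeping pays u_keep.
    Keeping wins once q >= (u_renege - u_keep) / (u_renege - u_none)."""
    return (u_renege - u_keep) / (u_renege - u_none)

q = detection_threshold(u_renege, u_keep, u_none)
print(f"Detection chance must exceed {q:.0%} for commitment to pay off here.")  # 30%
```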
That’s quite a generic situation, and I would estimate at least 10% probability that you encounter it at some point, although the stakes will hopefully be lower than your life.
Perhaps. Though I believe that in the vast majority of these cases my internal (and perhaps irrational) sense of fairness would cause me to keep my word anyway.