On this point, I’m going to have to agree with what EY said here (which I repeated here).
In short: Omega’s strategy and its consequences for you are not, in any sense, atypical. Omega is treating you based upon what you would do, given full (or approximate) knowledge of the situation. This is quite normal: people do in fact treat you differently based upon their estimation of “what you would do”, which is also known as your “character”.
Your point would be valid if Omega were basing the reward profile on your genetics, or how you got to your decision, or some other strange factor. But here, Omega is someone who just bases its treatment of you on things that are normal to care about in normal problems.
You’re just emphasizing the fact that you have full knowledge of the situation.
I currently believe that if I am ever in a position where I believe myself to be confronted with Newcomb’s problem, then no matter how convinced I am at that time, it will be a hoax in some way; for example, Omega may have limited prediction capability, or there may not actually be $1 million in the box.
I’m not saying “you should two-box because the money is already in there”; I’m saying “maybe you should JUST take the $1000 box, because you’ve seen that money, and if you don’t think ve’s lying, you’re probably hallucinating.”
True: you will probably never be in the epistemic state in which you would justifiably believe you are in Newcomb’s problem. Nevertheless, you will frequently be in probabilistic variants of the problem, and a sane decision theory that wins in those cases will imply one-boxing in the limit as all of the variables approach the values that make it the literal Newcomb’s problem.
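As a rough illustration of that limit claim, here is a small expected-value sketch for a probabilistic variant. The setup is an assumption for illustration, not part of the original comment: a predictor that is correct with probability p, $1000 in the visible box, and $1,000,000 placed in the opaque box iff one-boxing was predicted. Under those numbers, one-boxing has the higher expected value whenever p > 0.5005, and at p = 1 the payoffs are exactly those of the literal Newcomb’s problem.

```python
# Expected-value sketch for a probabilistic Newcomb variant.
# Assumptions (illustrative, not from the original comment): the predictor
# is correct with probability p, the visible box holds $1,000, and the
# opaque box holds $1,000,000 iff one-boxing was predicted.

def expected_values(p, small=1_000, big=1_000_000):
    # One-boxing: you get the big prize only if you were predicted correctly.
    ev_one_box = p * big
    # Two-boxing: you always get the small prize, plus the big prize
    # only if the predictor wrongly expected you to one-box.
    ev_two_box = small + (1 - p) * big
    return ev_one_box, ev_two_box

for p in (0.5, 0.5005, 0.6, 0.9, 0.99, 1.0):
    one, two = expected_values(p)
    print(f"p={p:.4f}  one-box EV=${one:,.0f}  two-box EV=${two:,.0f}")

# At p = 1.0 the payoffs match the literal Newcomb's problem:
# one-boxing yields $1,000,000, two-boxing yields $1,000.
```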