How about this version of Omega (one that I think could actually be implemented with 90% accuracy)? First, box A is painted with pictures of snakes and box B with pictures of bananas. Omega’s prediction procedure, which the people running the experiment tell you up front, is this: if you are a human, he predicts that you two-box; if you are a chimpanzee, he predicts that you one-box.
I don’t think that even 10% of people would give up $1000 just to prove Omega wrong, and if you think they would, why not raise the stakes to $10^6 and $10^9 instead of $10^3 and $10^6?
This version seems to satisfy the assumptions of the problem, and it makes clear that you should two-box in this situation. Any claim that one-boxing is the correct solution therefore needs to be qualified, at a minimum, by extra assumptions about how Omega operates.
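Here is a minimal simulation of this species-based Omega, assuming the standard Newcomb payoffs ($1000 always in box A; $10^6 in box B iff Omega predicted one-boxing) and a 10% "spite" rate among humans. The function names and numbers are illustrative assumptions of mine, not part of the original setup:

```python
import random

PAYOFF_A = 1_000       # box A always contains $1,000 (assumed standard payoff)
PAYOFF_B = 1_000_000   # box B contains $10^6 iff Omega predicted one-boxing

def omega_predicts_one_box(species: str) -> bool:
    # Omega's rule depends only on species, never on the individual.
    return species == "chimpanzee"

def payout(one_boxes: bool, predicted_one_box: bool) -> int:
    box_b = PAYOFF_B if predicted_one_box else 0
    return box_b if one_boxes else box_b + PAYOFF_A

def trial(species: str, spite_rate: float = 0.10) -> tuple[bool, int]:
    predicted = omega_predicts_one_box(species)
    if species == "human":
        one_boxes = random.random() < spite_rate  # a few humans defy Omega
    else:
        one_boxes = True  # chimps reach for the banana-painted box B
    return one_boxes == predicted, payout(one_boxes, predicted)

random.seed(0)
results = [trial("human") for _ in range(100_000)]
accuracy = sum(correct for correct, _ in results) / len(results)
print(f"Omega's accuracy on humans: {accuracy:.0%}")  # ~90%

# Box B is empty for every human, so two-boxing strictly dominates:
print(payout(one_boxes=False, predicted_one_box=False))  # $1,000
print(payout(one_boxes=True,  predicted_one_box=False))  # $0
```

Since Omega's rule never consults the individual, box B is empty for every human regardless of choice, which is why two-boxing strictly dominates here.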
In this version Omega may be predicting decisions in general with some accuracy, but he does not seem to be predicting mine.
So it appears there are cases where I two-box. In general, I think my specification of a Newcomb-type problem has two requirements:
1. An outside observer who saw me two-box would predict with high probability that the money is not there.
2. An outside observer who saw me one-box would predict with high probability that the money is there.
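One way to make these two requirements precise (my own formalization, so the notation is an assumption rather than anything given in the problem): write $M$ for "box B contains the money". Then the requirements are

$$P(\neg M \mid \text{I am observed to two-box}) \approx 1, \qquad P(M \mid \text{I am observed to one-box}) \approx 1.$$

In the species-based version the prediction depends only on species, so for a human who one-boxes out of spite, $P(M \mid \text{one-box}) \approx 0$.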
The above version of the problem clearly does not meet the second requirement.
If this is what you meant by your statement that the problem is ambiguous, then I agree. This is one of the reasons I favour a formulation involving a brain-scanner rather than a nebulous godlike entity, since it seems more useful to focus on the particularly paradoxical cases rather than the easy ones.