In this version Omega may be predicting decisions in general with some accuracy, but it does not seem like he is predicting mine.
So it appears there are cases where I two-box. I think that, in general, my specification of a Newcomb-type problem has two requirements:
1. An outside observer who observed me to two-box would predict with high probability that the money is not there.
2. An outside observer who observed me to one-box would predict with high probability that the money is there.
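Stated in terms of conditional probabilities (a rough sketch of what I mean; ε is just a small threshold I am introducing here, not part of the original setup):

Pr(money is there | observer sees me two-box) ≤ ε
Pr(money is there | observer sees me one-box) ≥ 1 − ε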
The above version of the problem clearly does not meet the second requirement.
If this is what you meant by your statement that the problem is ambiguous, then I agree. This is one of the reasons I favour a formulation involving a brain-scanner rather than a nebulous godlike entity, since it seems more useful to focus on the particularly paradoxical cases than on the easy ones.