A reference to a paper by David Wolpert and Gregory Benford on Newcomb’s paradox
Isn’t the whole issue with Newcomb’s paradox the fact that if you take two boxes Omega will predict it, and if you take one box Omega will predict it? It doesn’t matter if both boxes are transparent: you’ll only take one box if you did indeed precommit to taking only one (or if you’re the kind of person who one-boxes ‘naturally’). Since I first read about it, I’ve been puzzled by why people think there is a paradox or that the problem is difficult. Maybe I just don’t get it.
In my interview of Gregory Benford I wrote:
If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!
If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!
It sounds like you find the second argument so unconvincing that you don’t see why people consider it a paradox.
For what it’s worth, I’d take only one box.
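To make the tension between the two arguments concrete, here is a minimal sketch in Python, assuming the standard payoffs (a fixed $1,000 in box A, and $1,000,000 in box B iff Omega predicted one-boxing); the payoff function and its names are illustrative, not taken from the interview. Conditional on either prediction, two-boxing pays exactly $1,000 more (the dominance argument), yet whenever Omega predicts correctly, one-boxers walk away with a million and two-boxers with a thousand (the track-record argument).

```python
# Payoffs in Newcomb's problem (standard assumption: $1,000 in box A,
# $1,000,000 in box B iff Omega predicted you would take only box B).
SMALL = 1_000        # box A, always present
BIG = 1_000_000      # box B, filled only if one-boxing was predicted

def payoff(choice, prediction):
    """Money received for a given choice and a given prediction by Omega."""
    box_b = BIG if prediction == "one-box" else 0
    return box_b + (SMALL if choice == "two-box" else 0)

# Dominance argument: whatever Omega predicted, two-boxing pays $1,000 more.
for prediction in ("one-box", "two-box"):
    print(prediction,
          payoff("two-box", prediction) - payoff("one-box", prediction))  # 1000 both times

# Track-record argument: if Omega predicts correctly, one-boxers get $1,000,000
# while two-boxers get only $1,000.
print(payoff("one-box", "one-box"), payoff("two-box", "two-box"))
```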
It doesn’t make sense given the rules. The rules say that there will be a million dollars in box B iff you take only box B. I’m not the kind of person who calls the police when faced with the trolley problem thought experiment. Besides that, the laws of physics obviously do not permit you to deliberately take both boxes if a nearly perfect predictor knows that you’ll take only box B. Therefore considering that counterfactual makes no sense (and even less so with a perfect rather than a nearly perfect predictor).
It mostly seems to be confusion about the impossibility of a perfect predictor. On LW we accept the concept of a philosophical Superintelligence, but among mainstream philosophers many disavow the notion of a perfect predictor, even when that is specified very clearly.
Steve+Anna at SIAI did a pretty thorough dissolution of Newcomb’s problem with variable accuracy for Omega as part of the problem definition.
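As a hypothetical illustration of what a variable-accuracy treatment looks like (a sketch under the standard payoffs, not the SIAI write-up itself): treat Omega’s accuracy p as a parameter and compare the evidential expected payoff of the two choices. One-boxing comes out ahead for any p above 0.5005, so nothing hinges on the predictor being literally perfect.

```python
# Expected payoff of each choice as a function of Omega's accuracy p,
# under the evidential reading (your choice is evidence about the prediction).
# Assumes the standard payoffs: $1,000 in box A, $1,000,000 in box B.
SMALL = 1_000
BIG = 1_000_000

def ev_one_box(p):
    # With probability p Omega correctly predicted one-boxing and filled box B.
    return p * BIG

def ev_two_box(p):
    # Box A is guaranteed; box B is filled only if Omega wrongly predicted one-boxing.
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.5005, 0.6, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box {ev_one_box(p):>12,.1f}  two-box {ev_two_box(p):>12,.1f}")

# Break-even accuracy: p = (BIG + SMALL) / (2 * BIG) = 0.5005. Above that,
# one-boxing has the higher expected payoff, so even a merely "nearly perfect"
# predictor is far more than enough.
```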