We might ask whether it is preferable to be the type of person who two-boxes or the type of person who one-boxes. As it turns out, it seems preferable to be a one-boxer.
No. What is preferable is to be the kind of person Omega will predict will one-box, and then actually two-box. As long as you “trick” Omega, you get strictly more money. But I guess your point is you can’t trick Omega this way.
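To spell the "strictly more money" out in dollars (assuming the usual $1,000,000 / $1,000 Newcomb payoffs; the little table below is just my own illustration):

```python
# Standard Newcomb payoffs; the table is only my illustration of the point above.
payoffs = {
    ("predicts one-box", "one-box"): 1_000_000,
    ("predicts one-box", "two-box"): 1_001_000,   # the "trick Omega" outcome
    ("predicts two-box", "one-box"): 0,
    ("predicts two-box", "two-box"): 1_000,
}
# Holding Omega's prediction fixed, two-boxing always pays $1,000 more,
# which is exactly why the trick would be worth pulling off if you could.
```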
Which brings me back to whether Omega is feasible. I just don’t share the intuition that Omega is capable of the sort of predictive capacity required of it.
Well, I guess my response to that would be that it's a thought experiment. Omega is really just an extreme, hypothetical case of a powerful predictor, one that makes problems in CDT easier to see by amplifying them. If we were to talk about the prisoner's dilemma, we could easily have roughly the same underlying discussion.
See my and orthonormal's comments on the PD on this post for my view of that.
The point I'm struggling to express is that I don't think we should worry about the thought experiment, because I have the feeling that Omega is somehow impossible. The suggestion is that Newcomb's problem makes a problem with CDT clearer. But I argue that Newcomb's problem creates the problem. The flaw is not with the decision theory, but with the concept of such a predictor. So you can't use CDT's "failure" in this circumstance as evidence that CDT is wrong.
Here's a related point: Omega will never put the money in the box. Smith acts like a one-boxer. Omega predicts that Smith will one-box. So the million is put in the opaque box. Now Omega reasons as follows: "Wait though. Even if Smith is a one-boxer, now that I've fixed what will be in the boxes, Smith is better off two-boxing. Smith is smart enough to realise that two-boxing is dominant, once I can't causally affect the contents of the boxes." So Omega doesn't put the money in the box.
Would one-boxing ever be advantageous if Omega were reasoning like that? No. The point is Omega will always reason that two-boxing dominates once the contents are fixed. There seems to be something unstable about Omega’s reasoning. I think this is related to why I feel Omega is impossible. (Though I’m not sure how the points interact exactly.)
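To make the instability I have in mind concrete, here's a toy sketch. It's entirely my own construction (the names dominance_smith and second_guessing_omega don't come from anywhere), but it models the second-guessing loop I described above:

```python
# A toy model of the second-guessing reasoning above, purely illustrative.

def dominance_smith(box_is_full: bool) -> str:
    # Once the contents are fixed, two-boxing yields $1,000 more whatever
    # happens to be in the opaque box, so a pure dominance reasoner two-boxes.
    return "two-box"

def second_guessing_omega(initial_prediction: str, rounds: int = 5) -> bool:
    """Omega acts on its current prediction, then re-derives what a
    dominance reasoner would do given that the boxes are now fixed."""
    prediction = initial_prediction
    for _ in range(rounds):
        box_is_full = (prediction == "one-box")
        prediction = dominance_smith(box_is_full)
    return prediction == "one-box"  # does Omega end up filling the box?

print(second_guessing_omega("one-box"))  # False: it never fills the box
```

However Omega starts out, the loop collapses to predicting two-boxing, which is the sense in which the money never goes in the box.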
By that logic, you can never win in Kavka’s toxin/Parfit’s hitchhiker scenario.
So I agree. It’s lucky I’ve never met a game theorist in the desert.
Less flippantly: the logic is pretty much the same, yes. But I don't see that as a problem for the point I'm making, which is that the perfect predictor isn't a thought experiment we should worry about.
“Wait though. Even if Smith is a one-boxer, now that I’ve fixed what will be in the boxes, Smith is better off two-boxing. Smith is smart enough to realise that two-boxing is dominant, once I can’t causally affect the contents of the boxes.” So Omega doesn’t put the money in the box.
That line of reasoning is available to Smith as well, though, so he can choose to one-box because he knows that Omega is a perfect predictor. You're right to say that the interplay between Omega's prediction of Smith and Smith's prediction of Omega is in a meta-stable state, BUT: Smith has to decide. He is going to make a decision, so whatever algorithm he implements, if it ever goes down this line of meta-stable reasoning, it must have a way to get out and choose something, even if that's just bounded computational power (or the limit step of computation in Hamkins' infinite-time Turing machine). But since Omega is a perfect predictor, it will know that and choose accordingly.
I have the feeling that Omega's existence is something like an axiom: you can accept or refuse it, and both stances are coherent.
Well, I can implement Omega by scanning your brain and simulating you. The other 'non-implementations' of Omega, though, are IMO best ignored entirely. You can't really blame a decision theory for failure if there's no sensible model of the world for it to use.
My decision theory, personally, allows me to ignore the unknowns and edit my expected-utility formula in an ad-hoc way if I'm sufficiently convinced that Omega will work as described. I think that's practically useful, because effective heuristics often have to be invented on the spot without a sufficient model of the world.
Edit: albeit, if I were convinced that Omega works as described, I'd be convinced that it has scanned my brain and is emulating my decision procedure, or is using time travel, or is deciding randomly and then destroying the universes where it was wrong… With more time I could probably come up with other implementations; the common thing about them, though, is that I should one-box.
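To be concrete about the scan-and-simulate case, here's a minimal sketch, assuming my decision procedure is deterministic and using the usual payoffs (the names omega, payout, and agent are just mine):

```python
# Minimal sketch of "Omega scans your brain and simulates you", assuming the
# scanned decision procedure is deterministic. Names and structure are mine.

def omega(agent):
    """Fill the boxes by running the agent's own decision procedure."""
    predicted_choice = agent()            # the simulation *is* the prediction
    opaque = 1_000_000 if predicted_choice == "one-box" else 0
    return {"opaque": opaque, "transparent": 1_000}

def payout(agent):
    boxes = omega(agent)                  # boxes are fixed first...
    choice = agent()                      # ...then the real decision is made
    return boxes["opaque"] + (0 if choice == "one-box" else boxes["transparent"])

print(payout(lambda: "one-box"))   # 1000000
print(payout(lambda: "two-box"))   # 1000
```

Since the same deterministic procedure runs inside the simulation and at decision time, the "trick Omega" option never arises: whatever I actually do is what was predicted.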
Provided my brain's choice isn't affected by quantum noise; otherwise I don't think you can. :-)
People with memory problems tend to repeat “spontaneous” interactions in essentially the same way, which is evidence that quantum noise doesn’t usually sway choices.
Good point. Still, the brain's choice can be quite deterministic if you give it enough thought, averaging out the noise.
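To put a rough number on "deterministic enough" (standard payoffs again; the accuracy parameter p is just my way of modelling a noisy implementation):

```python
# Expected value of each choice if Omega's implementation is right with
# probability p (standard payoffs; the parameterisation is my own).

def expected_values(p):
    one_box = p * 1_000_000                      # full box iff predicted correctly
    two_box = p * 1_000 + (1 - p) * 1_001_000    # full box iff predicted *in*correctly
    return one_box, two_box

print(expected_values(1.0))   # (1000000.0, 1000.0)
print(expected_values(0.9))   # (900000.0, 101000.0)
```

So even a fairly noisy scan leaves one-boxing well ahead in expectation; the prediction only has to be right more than about 50.05% of the time.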