Here’s an example of a related kind of “reflexivity makes prediction meaningless” argument. Let’s say Omega bets you $100 that she can predict what you will eat for breakfast. Once you accept this bet, you then try to think of something that you would never otherwise think to eat for breakfast, in order to win the bet. The fact that your actions and the prediction of your actions have been connected in this way by the bet makes your actions unpredictable.
Your actions have been determined in part by the bet that Omega has made with you; I do not see how that is supposed to make them unpredictable any more than adding any other variable would. Remember: you only appear to have free will from within the algorithm. You may decide to think of something you’d never otherwise think about, but Omega is advanced enough to model you down to the most basic level; it can predict your more complex behaviours from the combination of far simpler rules. You cannot necessarily just decide to think of something random, which is what would be required in order to be unpredictable.
Similarly, the whole question of whether you should choose to two-box or one-box is a bit iffy. Strictly speaking there’s no SHOULD about it. You will one-box or you will two-box. The question phrased as a should question, as a choice, is meaningless unless you’re treating choice as a high-level abstraction of lower-level rules; and if you do that, then the difficulty disappears, just as you don’t ask a rock whether it should or shouldn’t crush someone when it falls down a hill.
Meaningfully, we might ask whether it is preferable to be the type of person who two-boxes or the type of person who one-boxes. As it turns out, it seems preferable to one-box and make stinking great piles of dosh. And as it turns out, I’m the sort of person who, holding a desire for filthy lucre, will do so.
It’s really difficult to sidestep your intuitions, your illusion that you actually get a free choice here. And I think the phrasing of the problem and its answers has a lot to do with that. If you think that people genuinely get a choice, while the mechanism of Omega’s prediction hinges upon your being strongly determined, then the question just ceases to make sense. You’ve got to jettison one of the two: either Omega’s predictive ability or your ability to make a choice in the sense conventionally meant.
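To make the “preferable to one-box” arithmetic concrete, here is a minimal sketch, assuming the standard Newcomb amounts ($1,000 in the visible box, $1,000,000 possibly in the opaque box) and a predictor that is right with probability p; the amounts and the accuracy parameter are illustrative assumptions rather than anything stated in the thread.

```python
# Expected payoffs in Newcomb's problem, assuming the standard amounts and a
# predictor that is correct with probability p (both assumed for illustration).

def expected_payoffs(p: float) -> dict:
    one_box = p * 1_000_000 + (1 - p) * 0          # paid only if the prediction was right
    two_box = p * 1_000 + (1 - p) * 1_001_000      # the big payout requires a wrong prediction
    return {"one-box": one_box, "two-box": two_box}

for p in (0.5, 0.9, 0.99, 1.0):
    print(p, expected_payoffs(p))

# For any accuracy above roughly 0.5005 the one-boxer's expectation is higher,
# even though two-boxing dominates once the contents of the boxes are fixed.
```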
we might ask whether it is preferable to be the type of person who two-boxes or the type of person who one-boxes. As it turns out, it seems preferable to one-box
No. What is preferable is to be the kind of person Omega will predict will one-box, and then actually two-box. As long as you “trick” Omega, you get strictly more money. But I guess your point is you can’t trick Omega this way.
Which brings me back to whether Omega is feasible. I just don’t share the intuition that Omega is capable of the sort of predictive capacity required of it.
Which brings me back to whether Omega is feasible. I just don’t share the intuition that Omega is capable of the sort of predictive capacity required of it.
Well, I guess my response to that would be that it’s a thought experiment. Omega is really just an extreme, hypothetical case of a powerful predictor, one that makes problems with CDT easier to see by amplifying them. If we were to talk about the prisoner’s dilemma, we could easily have roughly the same underlying discussion.
See my and orthonormal’s comments on the PD on this post for my view of that.
The point I’m struggling to express is that I don’t think we should worry about the thought experiment, because I have the feeling that Omega is somehow impossible. The suggestion is that Newcomb’s problem makes a problem with CDT clearer. But I argue that Newcomb’s problem creates the problem. The flaw is not with the decision theory, but with the concept of such a predictor. So you can’t use CDT’s “failure” in this circumstance as evidence that CDT is wrong.
Here’s a related point: Omega will never put the money in the box. Smith acts like a one-boxer. Omega predicts that Smith will one-box. So the million is put in the opaque box. Now Omega reasons as follows: “Wait though. Even if Smith is a one-boxer, now that I’ve fixed what will be in the boxes, Smith is better off two-boxing. Smith is smart enough to realise that two-boxing is dominant, once I can’t causally affect the contents of the boxes.” So Omega doesn’t put the money in the box.
Would one-boxing ever be advantageous if Omega were reasoning like that? No. The point is Omega will always reason that two-boxing dominates once the contents are fixed. There seems to be something unstable about Omega’s reasoning. I think this is related to why I feel Omega is impossible. (Though I’m not sure how the points interact exactly.)
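One way to see the instability, offered as a sketch rather than anything argued above: if Omega models Smith as someone who always takes the dominant action once the contents are fixed, then “fill the box” can never be a stable prediction, so Omega ends up never filling it; if Smith one-boxes unconditionally, “fill the box” is stable. Both toy models of Smith below are assumptions for illustration.

```python
# Omega fills the opaque box exactly when doing so is consistent with its own
# prediction of Smith's choice. With a dominance-reasoning Smith there is no
# stable "filled" state; with an unconditional one-boxer there is.

def smith_cdt(box_filled: bool) -> str:
    return "two-box"        # dominant once the contents are fixed, either way

def smith_one_boxer(box_filled: bool) -> str:
    return "one-box"        # one-boxes regardless of beliefs about the contents

def omega_deliberates(smith) -> bool:
    box_filled = True                       # tentatively treat Smith as a one-boxer
    for _ in range(10):
        consistent = smith(box_filled) == "one-box"
        if consistent == box_filled:        # prediction and contents agree: stable
            return box_filled
        box_filled = consistent             # otherwise revise and reconsider
    return False

print(omega_deliberates(smith_cdt))         # False: the million is never put in
print(omega_deliberates(smith_one_boxer))   # True: the one-boxer gets the million
```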
Here’s a related point: Omega will never put the money in the box. Smith acts like a one-boxer. Omega predicts that Smith will one-box. So the million is put in the opaque box. Now Omega reasons as follows: “Wait though. Even if Smith is a one-boxer, now that I’ve fixed what will be in the boxes, Smith is better off two-boxing. Smith is smart enough to realise that two-boxing is dominant, once I can’t causally affect the contents of the boxes.” So Omega doesn’t put the money in the box.
By that logic, you can never win in Kavka’s toxin/Parfit’s hitchhiker scenario.
So I agree. It’s lucky I’ve never met a game theorist in the desert.
Less flippantly: the logic is pretty much the same, yes. But I don’t see that as a problem for the point I’m making, which is that the perfect predictor isn’t a thought experiment we should worry about.
“Wait though. Even if Smith is a one-boxer, now that I’ve fixed what will be in the boxes, Smith is better off two-boxing. Smith is smart enough to realise that two-boxing is dominant, once I can’t causally affect the contents of the boxes.” So Omega doesn’t put the money in the box.
That line of reasoning is available to Smith as well, though, so he can choose to one-box because he knows that Omega is a perfect predictor. You’re right to say that the interplay between Omega’s prediction of Smith and Smith’s prediction of Omega is in a meta-stable state, BUT: Smith has to decide. He is going to make a decision, and so whatever algorithm he implements, if it ever goes down this line of meta-stable reasoning, must have a way to get out and choose something, even if that is only because of bounded computational power (or the limit step of computation in Hamkins’ infinite-time Turing machines). But since Omega is a perfect predictor, it will know that and choose accordingly.

I have the feeling that Omega’s existence is something like an axiom: you can refuse or accept it, and both stances are coherent.
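A sketch of that “way to get out”, under assumptions of my own (the depth bound and the particular bottoming-out rule are not specified above): however Smith’s deliberation regress terminates, a predictor that can run the same procedure just reads the answer off.

```python
# Smith's deliberation about "Omega modelling me modelling Omega..." has to halt
# somewhere; here a depth bound forces it to bottom out, and this Smith commits
# to one-boxing at that point. A perfect predictor with access to the same
# procedure simply runs it.

def smith_decides(depth: int = 0, max_depth: int = 3) -> str:
    if depth >= max_depth:
        return "one-box"                    # the regress bottoms out here
    omegas_prediction = smith_decides(depth + 1, max_depth)
    # Since the prediction tracks the actual choice, stick with the choice the
    # prediction was based on rather than defecting from it.
    return omegas_prediction

def omega_predicts() -> str:
    return smith_decides()                  # the predictor just runs Smith's procedure

print(smith_decides(), omega_predicts())    # one-box one-box
```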
Well, I can implement Omega by scanning your brain and simulating you. The other ‘non-implementations’ of Omega, though, are IMO best ignored entirely. You can’t really blame a decision theory for failure if there’s no sensible model of the world for it to use.
My decision theory, personally, allows me to ignore unknowns and edit my expected-utility formula in an ad-hoc way if I’m sufficiently convinced that Omega will work as described. I think that’s practically useful, because effective heuristics often have to be invented on the spot without a sufficient model of the world.
Edit: albeit, if I was convinced that Omega works as described, I’d be convinced that it has scanned my brain and is emulating my decision procedure, or is using time travel, or is deciding randomly and then destroying the universes where it was wrong… With more time I can probably come up with other implementations; the common thing about the implementations, though, is that I should one-box.
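Here is a minimal sketch of that scan-and-simulate implementation, with the standard payoff amounts assumed: Omega runs a copy of the agent’s decision procedure to set the boxes, and the real agent then runs the same procedure and collects the payoff. For a deterministic agent the “trick Omega” outcome simply never occurs, which is why one-boxing comes out ahead.

```python
# Omega as scan-and-simulate: the prediction is made by running a copy of the
# agent's own decision function. Payoff amounts are the standard ones, assumed
# here for illustration.

from typing import Callable

def play_newcomb(agent: Callable[[], str]) -> int:
    predicted = agent()                     # Omega simulates the scanned brain
    opaque = 1_000_000 if predicted == "one-box" else 0
    actual = agent()                        # the real agent runs the same procedure
    return opaque if actual == "one-box" else opaque + 1_000

print(play_newcomb(lambda: "one-box"))      # 1000000
print(play_newcomb(lambda: "two-box"))      # 1000
```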
Provided my brain’s choice isn’t affected by quantum noise; otherwise I don’t think you can. :-)

People with memory problems tend to repeat “spontaneous” interactions in essentially the same way, which is evidence that quantum noise doesn’t usually sway choices.

Good point. Still, the brain’s choice can be quite deterministic, if you give it enough thought, averaging out the noise.
You cannot necessarily just decide to think of something random, which is what would be required in order to be unpredictable.
Presented with this scenario, I’d come up with a scheme describing a table of as many different options as I could manage—ideally a very large number, but the combinatorics would probably get unwieldy after a while—and pull numbers from http://www.fourmilab.ch/hotbits/ to make a selection. I might still lose, but knowing (to some small p-value) that it’s possible to predict radioactive decay would easily be worth $100.
Of course, that’s the smartassed answer.
Well, the smartarse response is that Omega’s just plugged himself in on the other end of your hotbits request =p
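Joking aside, a concrete rendering of the selection scheme described a couple of comments up might look like the sketch below; secrets.randbelow stands in for numbers fetched from the HotBits service, since the exact request format isn’t given above, and the option table is a placeholder.

```python
# Pick a breakfast option from a large enumerated table using externally sourced
# randomness. secrets.randbelow is a stand-in for digits pulled from
# http://www.fourmilab.ch/hotbits/ (radioactive-decay randomness).

import secrets

options = [f"breakfast option {i}" for i in range(10_000)]   # the enumerated table

def pick(table):
    return table[secrets.randbelow(len(table))]

print(pick(options))
```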