In a Newcombless problem, where you can either have $1,000 or refuse it and have $1,000,000, you could argue that the rational choice is to take the $1,000,000 and then go back for the $1,000 once people’s backs are turned, but that would seem to go against the nature of the problem.
In much the same way, if Omega is a perfect predictor, there is no possible world where you receive the $1,000,000 and still end up going back for the second box. Either Rachel wouldn’t have objected, or the argument would’ve taken more than 5 minutes and the boxes would have disappeared, or something.
I’m not sure how Omega factors the boxes’ contents into this “delayed decision” version. Like, let’s say Irene will, absent external forces, one-box, and Rachel, if Irene receives $1,000,000, will threaten Irene sufficiently that she takes the second box, and will do nothing if Irene receives nothing. (Also, they’re automatons, and these are descriptions of their source code, so no other unstated factors can be taken into account.)
Omega simulates reality A, with the box full, and sees that Irene will two-box after Rachel’s threat.
Omega simulates reality B, with the box empty, and sees that Irene will one-box.
Omega, the perfect predictor, cannot make a consistent prediction, and, like the unstoppable force meeting the immovable object, vanishes in a puff of logic.
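A toy way to see this (the policies below are just my own encoding of the source-code descriptions above, not anything from the original problem):

```python
# Toy encoding of the automaton policies described above (my own sketch).

def rachel_threatens(box_full):
    # Rachel threatens iff Irene receives the $1,000,000.
    return box_full

def irene(threatened):
    # Irene one-boxes absent external forces; under Rachel's threat she
    # goes back for the second box.
    return "two-box" if threatened else "one-box"

# Omega fills the box iff it predicts one-boxing; a consistent prediction
# has to match what Irene then actually does.
for predicts_one_box in (True, False):
    box_full = predicts_one_box
    action = irene(rachel_threatens(box_full))
    consistent = (action == "one-box") == predicts_one_box
    print(f"Omega predicts one-box={predicts_one_box}: Irene plays {action}, consistent={consistent}")

# Both branches print consistent=False: there is no fixed point, hence the
# puff of logic.
```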
I think, if you want to aim at this sort of thing, the better formulation is to just claim that Omega is 90% accurate. Then there’s no (immediate) logical contradiction in receiving the $1,000,000 and going back for the second box. And the expected payoffs still come out in favor of one-boxing:
One-box: 0.9*1,000,000 + 0.1*0 = 900,000
Two-box: 0.9*1,000 + 0.1*1,001,000 = 101,000
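The same arithmetic as a quick check in code (nothing new, just the two expectations above):

```python
# Expected payoffs against a 90%-accurate Omega (same numbers as above).
ev_one_box = 0.9 * 1_000_000 + 0.1 * 0
ev_two_box = 0.9 * 1_000 + 0.1 * 1_001_000

print(ev_one_box, ev_two_box)  # 900000.0 101000.0
```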
I expect that this formulation runs afoul of what was discussed in this post around the Smoking Lesion problem, where repeated trials may let you change things you shouldn’t be able to (in their example, if you choose to smoke every time, and the correlation between smoking and lesions is held fixed, then you can change the base rate of the lesions).
That is, I expect that if you ran repeated simulations to try things out, then strategies like “I will one-box, and iff the box is full, I will go back for the second one” would make it so Omega is incapable of predicting at the proposed 90% rate.
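Here’s a sketch of why (my own enumeration; the problem doesn’t say how the 90% accuracy is achieved, but whichever prediction Omega makes, this strategy falsifies it):

```python
# The "one-box, but iff the box turns out to be full, go back for the second"
# strategy, against an Omega that fills the box iff it predicts one-boxing.

def conditional_strategy(box_full):
    return "two-box" if box_full else "one-box"

for prediction in ("one-box", "two-box"):
    box_full = (prediction == "one-box")
    action = conditional_strategy(box_full)
    print(f"Omega predicts {prediction}, box full: {box_full}, "
          f"agent plays {action}, prediction correct: {action == prediction}")

# Both predictions come out wrong, so against this strategy Omega's accuracy
# is 0%, not the stipulated 90%.
```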
I think all of these things might be related to the problem of embedded agency, and to people being confused (even if they don’t put it in these terms) into thinking that they have an atomic free will that can think about things without affecting or being affected by the world. I’m having trouble resolving this confusion myself, because I can’t figure out what Omega’s prediction looks like instead of vanishing in a puff of logic. It may just be that statements like “I will turn the lever on if, and only if, I expect the lever to be off at the end” are a nonsensical decision criterion. But the problem as stated doesn’t seem like it should be impossible, so… I am confused.
“Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies ‘beliefs’, and the latter thingy ‘reality’.”
I think this is a fine response to Mr. Carrico, but not to the post-modernists. They can still fall back to something like “Why are you drawing a line between ‘predictions’ and ‘results’? Both are simply things in your head, and since you can’t directly observe reality, your ‘results’ are really just your predictions of the results based off of the adulterated model in your head! You’re still just asserting your belief is better.”
The tack I came up with in the meditation was that the “everything is a belief” framing might be a bit falsely dichotomous. I mean, it would seem odd, given that everything is a belief, to say that Anne telling you the marble is in the basket is just as good evidence as actually checking the basket yourself. It would imply weird things, like that once you check and find the marble in the box, you should be only 50% sure of where it is, because Anne’s statement is weighted equally.
(And though it’s difficult to put my mind in this state, I can think of this not as being in service of determining reality, but instead as trying to inform my belief that, after I reach into the box, I will believe that I am holding a marble.)
Once you concede that different beliefs can carry different evidential weight, you can use Bayesian ideas to reconcile things. Something like: “Nothing is ‘true’ in the sense of deserving 100% credence (saying something is true really just means that you really, really believe it, or, more charitably, that the belief has informed your later beliefs better than before you held it), but you can take actions to become more ‘accurate’ in the sense of anticipating your future beliefs better. While both are guesses (you could be hallucinating, or something), your guess before checking is likely to be worse, more diluted, filtered through more layers away from direct reality, than your guess after checking.”
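To put toy numbers on “different weights” (the likelihoods below are entirely made up, and the `update` helper is just my own illustration; the point is only the asymmetry):

```python
# Toy Bayes update for the marble example; all numbers are made up.
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | evidence) from Bayes' rule."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

prior = 0.5  # starting credence that the marble is in the basket

# Anne's say-so: somewhat reliable testimony.
after_testimony = update(prior, p_e_given_h=0.8, p_e_given_not_h=0.2)

# Checking the basket yourself: much more reliable, though still not 100%
# (you could be hallucinating, as above).
after_checking = update(prior, p_e_given_h=0.99, p_e_given_not_h=0.01)

print(after_testimony)  # ~0.8
print(after_checking)   # ~0.99

# Both posteriors are "just beliefs", but they are not equally good: the
# direct check moves your credence much further, which is all "more accurate"
# needs to mean here.
```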
I may be off the mark if the post-modernist claim is that reality doesn’t exist, not just that no one’s beliefs about it can be said to be better than anyone else’s.