That is, at that point it sounds like it’s no longer “Omega will ask me to pay him $100 if he predicts I wouldn’t pay him if he asked,” it’s “Omega will ask me to pay him $100 if he predicts I wouldn’t pay him if he asked OR if I’m a simulation.”
Not that any of this is necessary. Willingness to pay Omega depends on having arbitrarily high confidence in his predictions; it’s not clear that I could ever arrive at such a high level of confidence, but it doesn’t matter.
We’re just asking: if it’s true, what decision on my part maximizes expected results for the entities whose expected results I wish to maximize? Perhaps I could never actually be expected to make that decision, because I could never have sufficient confidence that it’s true. That changes nothing.
Also worth noting that if I pay him $100 in this scenario I ought to update my confidence level sharply downward: he only asks if he predicts I won’t pay, so my paying means that prediction failed. That is, if I’ve previously seen N predictions and Omega has been successful in each of them, I have now seen (N+1) predictions and he’s been successful in only N of them.
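(To put rough numbers on it, with a uniform prior over his accuracy: a Laplace-style estimate goes from (N+1)/(N+2) after N successes in N predictions to (N+1)/(N+3) after N successes in (N+1) predictions; for N = 10 that is roughly 0.92 dropping to 0.85. The exact figures depend on the prior; this is just a sketch of the direction and size of the update.)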
Of course, by then I’ve already paid; there’s no longer a choice to make.
(Presumably I should believe, prior to agreeing, that my choosing to pay him will not actually result in my paying him, or something like that… I ought not expect to pay him, given that he’s asked me for $100, regardless of what choice I make. In which case I might as well choose to pay him. This is absurd, of course, but the whole situation is absurd.)
ETA: I am apparently confused on more fundamental levels than I had previously understood, not least about what is being presumed about Omega in these cases. Apparently I am not presumed to be as confident of Omega’s predictions as I’d thought, which makes the rest of this comment fairly irrelevant. Oops.
Doesn’t that contradict the original assertion?
No. Paying is the winning strategy in the version where the predictor is correct only with probability, say, 0.8, too.
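E.g., suppose the payoffs are the usual counterfactual-mugging ones (a fair coin; $10,000 on heads iff Omega predicted you would pay on tails; a $100 request on tails); those exact numbers aren’t fixed anywhere above, so take them only as an illustration. A committed payer then expects 0.5 x 0.8 x $10,000 - 0.5 x $100 = $3,950, while a committed refuser expects only 0.5 x 0.2 x $10,000 = $1,000. Under those payoffs, paying comes out ahead for any accuracy above roughly 0.505, so nothing like arbitrarily high confidence in the predictions is required.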
(blink)
You’re right; I’m wrong. Clearly I haven’t actually been thinking carefully about the problem.
Thanks.