Yes, then, following the utility function you specified, I would gladly risk $100 for an even chance at $10000. Since Omega’s omniscient, I’d be honest about it, too, and cough up the money if I lost.
If it’s rational to do this when Omega asks you in advance, isn’t it also rational to make such a commitment right now? Whether you make the commitment in response to Omega’s notification, or on a whim while considering the thought experiment in a blog post, makes no difference to the payoff.
If you now commit to a rule of “if this exact situation comes up, I will pay the $100 if I lose the coin flip”, and p(x) is the probability of this situation occurring, you will achieve a net gain of $4950*p(x) over a non-committer (a very small number admittedly given that p(x) is tiny, but for the sake of the thought experiment all that matters is that it’s positive). The $4950 is the committed player’s expected value per occurrence: 0.5 × $10000 − 0.5 × $100.
Given that someone who makes such a precommitment comes out ahead of someone who doesn’t, shouldn’t you make such a commitment right now? Extend this and precommit to always perform the action that would maximise your average returns in all such Newcomblike situations, and you’re going to come off even better on average.
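To make the arithmetic concrete, here is a minimal sketch in Python; the payoffs are from the thought experiment, while the value of p_x is a made-up placeholder:

```python
# Expected value of precommitting to pay in Omega's coin-flip offer.
# Payoffs come from the thought experiment; p_x is a hypothetical,
# arbitrarily tiny probability that the situation ever arises.
p_x = 1e-6
win, lose = 10_000, 100

# A committer receives $10000 on heads and pays $100 on tails.
ev_committer = 0.5 * win - 0.5 * lose   # = $4950 per occurrence
# A non-committer is predicted not to pay, so Omega offers nothing.
ev_non_committer = 0.0

net_gain = (ev_committer - ev_non_committer) * p_x
print(f"expected gain per occurrence: ${ev_committer:,.0f}")  # $4,950
print(f"edge over a non-committer:    ${net_gain:.6f}")       # tiny but positive
```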
No, I will not precommit to giving up my $100 for cases where Omega demands the money after the coin flip has occurred. There is no incentive to precommit in those cases, because the outcome is already against me and there’s not a chance that it “would” go in my favour.
At that point, it’s no longer a precommittal—it’s how you face the consequences of your decision whether to precommit or not. Note that the hypothetical loss case presented in the post is not in fact the decision point—that point is when you first consider the matter, which is exactly what you are doing right now. If you would really change your answer after considering the matter, then having now done so, have you changed it?
If you want to obtain the advantage of someone who makes such a precommittal (and sticks to it), you must be someone who would do so. If you are not such a person (and given your answer, you are not), it is advantageous to change yourself into such a person, by making that precommitment (or better, a generalised “I will always take the path that would have maximised returns across the distribution of counterfactual outcomes in Newcomblike situations”) immediately.
Such commitments change the dynamics of many such thought experiments, but usually they require that the commitment be known to the other party, and enforced in some way (the way to win at Chicken is to throw your steering wheel out the window). Here, though, Omega’s knowledge of us removes the need for an explicit announcement, and it is in our own interests to be self-enforcing (or rather, we wish to reliably enforce the decision on our future selves), or we will not receive the benefit. For that reason, a silent decision is as effective as having a conversation with Omega and telling it how we decide.
Explicitly announcing your decision thus only has an effect insofar as it keeps your future self honest. E.g. if you know you wouldn’t keep to a decision idly arrived at, but value your word such that you would stick to doing what you said despite its irrationality in that case, then it is currently in your interest to give your word. It’s just as much in your interest to give your word now, though: make some public promise that you would keep. Alternatively, if you have sufficient mechanisms in your mind to commit to such future irrational behaviour without a formal promise, a promise becomes unnecessary.
Maybe in thought-experiment-world. But if there’s a significant chance that you’ll misidentify a con man as Omega, then this tendency makes you lose on average.
Sure—all bets are off if you aren’t absolutely sure Omega is trustworthy.
I think this is a large part of the reason why the intuitive answer we jump to is rejection. Being told we believe a being making such extraordinary claims is different from actually believing it (especially when the claims may have unpleasant implications for our beliefs about ourselves), so we have a tendency to consider the problem with the implicit doubt we hold for everyday interactions lurking in our minds.
you will achieve a net gain of $4950*p(x) over a non-committer (a very small number admittedly given that p(x) is tiny, but for the sake of the thought experiment all that matters is that it’s positive.)
Given that someone who makes such a precommitment comes out ahead of someone who doesn’t—shouldn’t you make such a commitment right now?
Right now, yes, I should precommit to pay the $100 in all such situations, since the expected value is p(x)*$4950.
If Omega just walked up to me and asked for $100, and I had never considered this before, the value of this commitment is now p(x)*$4950 − $100, so I would not pay unless I thought there was more than a roughly 2% chance this would happen again ($100/$4950 ≈ 2.02%).
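As a quick sketch of that break-even point (again Python, purely illustrative):

```python
# Paying the $100 after an unanticipated loss only buys the committer's
# expected gain on *future* occurrences, so it is worthwhile only if
# p(x) * $4950 - $100 > 0.
gain_per_occurrence = 0.5 * 10_000 - 0.5 * 100   # $4950
cost_now = 100

break_even_p = cost_now / gain_per_occurrence
print(f"break-even p(x): {break_even_p:.4f}")    # ~0.0202, i.e. about 2%
```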
Brianm understands reflective consistency!