Whether I give Omega the $100 depends entirely on whether there will be multiple iterations of coin-flipping. If there will be multiple iterations, giving Omega the $100 is indeed winning, just like buying a financial instrument that increases in value is winning.
No, there are no iterations. Omega flies away from your galaxy right after finishing the transaction. (Added to P.S.)
In that case, I’d hate to disappoint Omega, but there’s no incentive for me to give up my $100. A utility of 0 is better than a negative utility, and if the coin-flip is deterministic, I won’t be serving the interests of my alternate-universe self. Why would I choose otherwise?
Would you prefer to choose otherwise if you had considered the deal before the actual coin toss, and arranged a precommitment to that end?
Yes, then, following the utility function you specified, I would gladly risk $100 for an even chance at $10000. Since Omega’s omniscient, I’d be honest about it, too, and cough up the money if I lost.
If it’s rational to do this when Omega asks you in advance, isn’t it also rational to make such a commitment right now? Whether you make the commitment in response to Omega’s notification, or on a whim when considering the thought experiment in response to a blog post, makes no difference to the payoff. If you now commit to “if this exact situation comes up, I will pay the $100 if I lose the coin flip”, and p(x) is the probability of this situation occurring, you will achieve a net gain of p(x) × $4950 over a non-committer (admittedly a very small number, given that p(x) is tiny, but for the sake of the thought experiment all that matters is that it’s positive).
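To spell out where the $4950 comes from: it is just the expected value of the bet, conditional on the situation arising,

$$E[\text{commit}] = \tfrac{1}{2}(+\$10000) + \tfrac{1}{2}(-\$100) = \$4950,$$

while a non-committer’s expected value is $0, since Omega predicts the refusal and never offers the payout.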
Given that someone who makes such a precommitment comes out ahead of someone who doesn’t, shouldn’t you make such a commitment right now? Extend this further: precommit to always performing the action that would maximise your average returns in all such Newcomblike situations, and you will come off even better on average, as the sketch below illustrates.
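Here is a minimal simulation sketch of that averaging claim, using the post’s payoff numbers; the `average_return` function and the encoding of the policy as a boolean are my own framing, not anything specified in the thread:

```python
import random

def average_return(commits: bool, trials: int = 100_000) -> float:
    """Average payoff per encounter, given whether you would pay on a loss.

    Omega only offers the $10000 on heads if it predicts you would hand
    over the $100 on tails, so a non-committer never sees a payout at all.
    """
    total = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if commits:
            total += 10_000 if heads else -100
        # a non-committer gains and loses nothing either way
    return total / trials

print(average_return(True))   # ~4950 on average
print(average_return(False))  # exactly 0
```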
No, I will not precommit to giving up my $100 for cases where Omega demands the money after the coin flip has occurred. There is no incentive to precommit in those cases, because the outcome is already against me and there’s not a chance that it “would” go in my favour.
At that point, it’s no longer a precommittal—it’s how you face the consequences of your decision whether to precommit or not.
Note that the hypothetical loss case presented in the post is not in fact the decision point—that point is when you first consider the matter, which is exactly what you are doing right now. If you would really change your answer after considering the matter, then having now done so, have you changed it?
If you want to obtain the advantage of someone who makes such a precommittal (and sticks to it), you must be someone who would do so. If you are not such a person (and given your answer, you are not), it is advantageous to change yourself into such a person, by making that precommitment (or better, a generalised “I will always take the path that would have maximised returns across the distribution of counterfactual outcomes in Newcomblike situations”) immediately.
Such commitments change the dynamics of many such thought experiments, but usually they require that the commitment be known to the other party, and enforced in some way (the way to win at Chicken is to throw your steering wheel out the window). Here, though, Omega’s knowledge of us removes the need for an explicit announcement, and it is in our own interests to be self-enforcing (or rather, we wish to reliably enforce the decision on our future selves), or we will not receive the benefit. For that reason, a silent decision is as effective as having a conversation with Omega and telling it how we decide.
Explicitly announcing our decision thus only has an effect insofar as it keeps our future selves honest. E.g. if you know you wouldn’t keep to a decision idly arrived at, but value your word such that you would stick to doing what you said you would do despite its irrationality in that case, then it is currently in your interest to give your word. It’s just as much in your interest to give your word now, though: make some public promise that you would keep. Alternatively, if you have sufficient mechanisms in your mind to commit to such future irrational behaviour without a formal promise, the promise becomes unnecessary.
Maybe in thought-experiment-world. But if there’s a significant chance that you’ll misidentify a con man as Omega, then this tendency makes you lose on average.
Sure—all bets are off if you aren’t absolutely sure Omega is trustworthy.
I think this is a large part of the reason why the intuitive answer we jump to is rejection. Being told that we believe a being making such extraordinary claims is different from actually believing them (especially when the claims may have unpleasant implications for our beliefs about ourselves), so we have a tendency to consider the problem with the implicit doubt we hold in everyday interactions lurking in our minds.
Brianm understands reflective consistency!
Right now, yes, I should precommit to pay the $100 in all such situations, since the expected value is p(x) × $4950.
If Omega just walked up to me and asked for $100, and I had never considered this before, the value of this commitment is now p(x) × $4950 − $100, so I would not pay unless I thought there was more than a 2% chance this would happen again.
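Making the 2% threshold explicit: paying is worthwhile exactly when

$$p(x) \times \$4950 - \$100 > 0, \quad\text{i.e.}\quad p(x) > \frac{100}{4950} \approx 0.0202 \approx 2\%.$$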
So after you observe the coin toss, and find yourself in a position where you’ve lost, you’ll give Omega your money? Why would you? It won’t ever reciprocate, and it won’t enforce the deal; its only leverage is the $10000 that you know got away anyway, because you didn’t win the coin toss.
Yes, I’ll give Omega the money, because if I’m going to refuse to give Omega the money after the coin toss occurs, Omega knows that ahead of time, on account of its omniscience. If I had won, Omega could look at me and say, “You get no money, because I know you wouldn’t really have given me the $100 if you’d lost. Your pre-commitment wasn’t genuine.”
My answer to this is that integrity is a virtue, and breaking one’s promises reduces one’s integrity. And being a person with integrity is vital to the good life.
Then I repeat the question with MBlume’s corrections, to make the problem less convenient. Would you still follow through and murder 15 people, to preserve your personal integrity? It’s not a question of values; it’s a question of decision theory.
This thread assumes a precommitment. I would not precommit to murder.
I’m not sure what your point is here.
The point is that the distinction between $0.02 and a trillion lives is irrelevant to the discussion, which is about the structure of the preference ordering assigned to actions, whatever your values are. If you are determined to pay off Omega, the reason for that must lie in your decision algorithm, not in an exquisite balance between $100, personal integrity, and murder. If you are willing to carry the deal through (note that there isn’t even any deal, only your premeditated decision), the reason for that must lie elsewhere, not in the value of personal integrity.
To make that claim, you first need to establish that he would accept a bet of 15 lives vs. some reward in the first place, which I think is exactly what he is claiming he would not do. There’s a difference between making a bet and then reneging, and never accepting the bet at all. If you would not commit murder to save a million lives in the first place, then the refusal is for a different reason than just the fact that the stakes are raised.
Integrity is a virtue, not a value.
The values aren’t necessarily relevant after I’ve precommitted to the bet, but they’re absolutely relevant to whether I’d precommit to the bet. If murder is one of the options, count me out.
My reason for carrying the deal through is (partially) that it promotes virtue. I do not see any arguments that it cannot be so.
Too vague.
What’s vague? Let me try to spell this out in excruciating detail:
Making good on one’s commitments promotes the virtue of integrity.
Integrity is constitutive of good character.
One cannot consistently act as a person of good character without having it.
To act ethically is to act as a person of good character does.
Ethics specifies what one has most reason to do or want.
So, if you ask me what I have most reason to do in a circumstance where I’ve made a commitment, ceteris paribus, I’ll respond that I’ll make good on my commitments.