I would one-box on Newcomb's problem, and I believe I would give the $100 here as well (assuming I believed Omega).
With Newcomb's problem, if I want to win, my optimal strategy is to mimic as closely as possible the type of person Omega would predict would take one box. However, I have no way of knowing what would fool Omega: indeed, if it is a sufficiently good predictor, there may be no such way. Clearly, then, the way to be “as close as possible” to a one-boxer is to be a one-boxer. A person seeking to optimise their returns will be a person who wants their response to such a stimulus to be “take one box”. I do want to win, so I do want my response to be that, and so it is: I’m capable of locking in my decisions (making promises) in ways that forgo short-term gain for longer-term benefit.
The situation here is the same, even though I have already lost. It is beneficial for me to be that type of person in general (a fact obscured by the situation being so unlikely to occur). Were I not the type of person who made the decision to pay out on a loss, I would be the type of person who missed out on $10000 in an equally unlikely circumstance. Locking that response in now, as a general response to such occurrences, means I’m more likely to benefit than those who don’t.
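To make the arithmetic behind that concrete, here is a minimal sketch, assuming the stakes discussed in this thread ($10000 if the coin lands in your favour and Omega predicted you would pay, $100 handed over if it doesn’t), a fair coin, and that Omega predicts your disposition perfectly; the function and policy names are my own:

```python
P_WIN = 0.5  # fair coin, as stipulated in the problem

def expected_value(pays_on_loss: bool) -> float:
    """Expected payoff of a disposition, assuming Omega predicts it perfectly."""
    win_payoff = 10000 if pays_on_loss else 0   # Omega rewards only predicted payers
    loss_payoff = -100 if pays_on_loss else 0   # payers hand over $100 on a loss
    return P_WIN * win_payoff + (1 - P_WIN) * loss_payoff

print(expected_value(pays_on_loss=True))   # 4950.0
print(expected_value(pays_on_loss=False))  # 0.0
```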
> I would one-box on Newcomb's problem, and I believe I would give the $100 here as well (assuming I believed Omega).

> With Newcomb's problem, if I want to win, my optimal strategy is to mimic as closely as possible the type of person Omega would predict would take one box.
Well, the other way to look at it is “What action leads me to win?” In the Newcomb problem, one-boxing wins, so you and I are in agreement there.
But in this problem, not giving away the $100 wins. Sure, I want to be the “type of person who one-boxes”, but why do I want to be that person? Because I want to win. Being that type of person in this problem actually makes you lose.
The problem states that this is a one-shot bet, and that after you do or don’t give Omega the $100, he flies away from this galaxy and will never interact with you again. So why give him the $100? It won’t make you win in the long term.
Yes, but Omega isn’t really here yet, and you, Nebu, deciding right now that you will give him $100 does make you win, since it gives you a shot at $10000.
Right, so if a normal person offered me the bet (and assuming I could somehow know it was a fair coin), then yes, I would accept the bet.
If it were Omega instead of a normal person offering the bet, we run into some problems...
But if Omega doesn’t actually offer the bet, and just does what is described by Vladimir Nesov, then I wouldn’t give him the $100. [1]
In other words, I do different things in different situations.
Edit 1: (Or maybe I would. I haven’t figured it out yet.)
The problem only asks about what you would do in the failure case, and I think this obscures the fact that the relevant decision point is right now. If you would refuse to pay, that means you are the type of person who would not have won had the coin flip turned out differently, either because you haven’t considered the matter (and by luck happen to be in the situation where your choice worked out better), or because you would renege on such a commitment when it occurred in reality.
However, at this point the coin flip hasn’t been made. The globally optimal person to be right now is one who does precommit and doesn’t renege. Such a person will come out behind in the hypothetical case, since the commitment locks them into the bad choice for that situation, but by being someone who would act “irrationally” at that point, they will outperform a non-committer or a reneger on average.
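As a rough check on that “on average” claim, here is a small simulation sketch under the same assumptions as the earlier arithmetic (fair coin, $10000/$100 stakes, Omega predicting the agent’s disposition perfectly); the names are mine, not the thread’s:

```python
import random

def run_once(pays_on_loss: bool) -> int:
    """One instance of the scenario for an agent with a fixed disposition."""
    if random.random() < 0.5:                 # the winning flip
        return 10000 if pays_on_loss else 0   # Omega pays only predicted payers
    return -100 if pays_on_loss else 0        # the losing flip: payers hand over $100

random.seed(0)
TRIALS = 1_000_000
committer = sum(run_once(True) for _ in range(TRIALS)) / TRIALS
reneger = sum(run_once(False) for _ in range(TRIALS)) / TRIALS
print(f"precommitter average: {committer:.2f}")  # approaches 4950
print(f"reneger average:      {reneger:.2f}")    # exactly 0
```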
What if there is no “on average”? What if the choice to give away the $100 is the only choice you are given in your life? There is no value in being the kind of person who globally optimizes because of the expectation to win on average. You only make this choice because it’s what you are, not because you expect reality, on average, to be the way you want it to be.
From my perspective now, I expect to be in the winning case 50% of the time, because we are told this as part of the question: Omega is trustworthy and said it tossed a fair coin. Across the possible futures in which such an event could happen, my strategy would pay off in the 50% where the coin favours me, and by more than it would lose in the other 50%. If Omega did not toss a fair coin, then the situation is different, and my choice would be too.
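The fairness of the coin is doing real work in that calculation. Here is a quick sketch (same assumed stakes as above; the threshold algebra is mine) of how biased the coin would have to be before paying stops being the better disposition:

```python
def ev_pay(p_win: float) -> float:
    """Expected value of the paying disposition when the coin favours you with probability p_win."""
    return p_win * 10000 - (1 - p_win) * 100

# Paying beats refusing (EV 0) whenever p_win * 10000 > (1 - p_win) * 100,
# i.e. whenever p_win > 100 / 10100, roughly 0.0099.
print(100 / 10100)    # ~0.0099, the break-even probability
print(ev_pay(0.5))    # 4950.0: the fair coin of the problem
print(ev_pay(0.005))  # -49.5: only a hugely biased coin makes refusing better
```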
> There is no value in being the kind of person who globally optimizes because of the expectation to win on average.

There is no value in being such a person if they happen to lose, but that’s like saying there’s no value in being a person who avoids bets that lose on average, by pointing only to the one-in-several-million time they would have won the lottery. On average they’ll come out ahead, just not in the specific situation that was described.
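To spell the lottery analogy out with numbers (the ticket price, prize, and odds below are invented purely for illustration):

```python
# A made-up lottery: $1 ticket, $1,000,000 prize, one-in-ten-million odds.
ticket = 1.0
prize = 1_000_000.0
p_win = 1 / 10_000_000

# The bet loses on average, even though the rare winner comes out far ahead;
# judging the policy by that one winning world is the mistake described above.
print(p_win * prize - ticket)  # -0.9: expected loss of $0.90 per ticket
```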