That’s not the situation in question. The scenario laid out by Vladimir_Nesov does not allow for an equal probability of getting $10000 and paying $100. Omega has already flipped the coin, and it’s already been decided that I’m on the “losing” side. Join that with the fact that me giving $100 now does not increase the chance of me getting $10000 in the future because there is no repetition.
Perhaps there’s something fundamental I’m missing here, but the linearity of events seems pretty clear. If Omega really did calculate that I would give him the $100 then either he miscalculated, or this situation cannot actually occur.
-- EDIT --
There is a third possibility after reading Cameron’s reply… If Omega is correct and honest, then I am indeed going to give up the money.
But it’s a bit of a trick question, isn’t it? I’m going to give up the money because Omega says I’m going to give up the money, and everything Omega says is gospel truth. However, if Omega hadn’t said that I would give up the money, then I wouldn’t have given up the money. Which makes this a bit of an impossible situation.
Assuming the existence of Omega, his intelligence, and his honesty, this scenario is an impossibility.
I feel like a man in an Escher painting, with all these recursive hypothetical mes, hypothetical kuriges, and hypothetical omegas.
I’m saying, go ahead and start by imagining a situation like the one in the problem, except it’s all happening in the future—you don’t yet know how the coin will land.
You would want to decide in advance that if the coin came up against you, you would cough up $100.
The ability to precommit in this way gives you an advantage. It gives you half a chance at $10000 you would not otherwise have had.
So it’s a shame that in the problem as stated, you don’t get to precommit.
But the fact that you don’t get advance knowledge shouldn’t change anything. You can just decide for yourself, right now, to follow this simple rule:
If there is an action to which my past self would have precommitted, given perfect knowledge and my current preferences, I will take that action.
By adopting this rule, in any problem in which the opportunity for precommitting would have given you an advantage, you wind up gaining that advantage anyway.
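For concreteness, here is a minimal sketch (mine, not part of the comment) of the expected-value comparison the rule is pointing at, evaluated from before the coin flip; the payoffs are the ones from the problem, and the function and parameter names are purely illustrative:

```python
# Compare, from before the coin flip, the two policies an agent could
# precommit to: pay the $100 if the coin comes up against you, or refuse.
# Omega only awards the $10,000 on the winning flip to an agent whose
# policy is to pay on the losing flip.

def expected_value(pay_if_lose: bool) -> float:
    p_win = 0.5
    prize = 10_000 if pay_if_lose else 0   # received on the winning flip
    cost = -100 if pay_if_lose else 0      # paid on the losing flip
    return p_win * prize + (1 - p_win) * cost

print(expected_value(pay_if_lose=True))    # 4950.0
print(expected_value(pay_if_lose=False))   # 0.0
```

Adopting the rule just means acting on the policy with the higher pre-flip expected value, even though you only learn about the setup after losing.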
That one sums it all up nicely!
I’m actually not quite satisfied with it. Probability is in the mind, which makes it difficult to know what I mean by “perfect knowledge”. Perfect knowledge would mean I also knew in advance that the coin would come up tails.
I know giving up the $100 is right, I’m just having a hard time figuring out what worlds the agent is summing over, and by what rules.
ETA: I think “if there was a true fact which my past self could have learned, which would have caused him to precommit etc.” should do the trick. Gonna have to sleep on that.
ETA2: “What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
...and that’s an even better way of putting it.
Note that this doesn’t apply here. It’s “What would you do if you were counterfactually mugged?” versus “What would you like to pre-commit to doing, should you ever be told about the coin flip before you knew the result?”. X isn’t the same.
MBlume: “What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
This phrasing sounds about right. Whatever decision-making algorithm you have producing your decision D when it is in situation X should also come to the same conditional decision, “if(X) then D”, before situation X appears. If you actually don’t give away $100 in situation X, you should also plan not to give away $100 in case of X, before (or irrespective of whether) X happens. Whichever decision is the right one, there should be no inconsistency of this form. This grows harder if you must preserve the whole preference order.
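A minimal sketch of that consistency condition (my illustration with made-up names, not code from the thread): the decision taken once X is actual should equal the conditional plan “if(X) then D” endorsed beforehand.

```python
# A reflectively consistent agent: its in-situation decision is simply the
# conditional plan its earlier self would have adopted for that situation.

def plan_before(situation: str) -> str:
    # Conditional decision "if(X) then D", formed before X occurs.
    return "pay $100" if situation == "coin came up against me" else "keep $100"

def act_in(situation: str) -> str:
    # Decision actually taken once the situation has arrived.
    return plan_before(situation)  # no inconsistency of the form described above

X = "coin came up against me"
assert act_in(X) == plan_before(X)
```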
“Perfect knowledge would mean I also knew in advance that the coin would come up tails.”
This seems crucial to me.
Given what I know when asked to hand over the $100, I would want to have pre-committed to not pre-committing to hand over the $100 if offered the original bet.
Given what I would know if I were offered the bet before discovering the outcome of the flip I would wish to pre-commit to handing it over.
From which information set should I evaluate this? The information set I am actually at seems the most natural choice, and it also seems to be the one that WINS (at least in this world).
What am I missing?
I’ll give you the quick and dirty patch for dealing with Omega: there is no way to know that, at that moment, you are not inside his simulation. By giving him the $100, there is a chance you are transferring that money from within a simulation (which is about to be terminated) to outside the simulation, with a nice big multiplier.
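Rough numbers for that patch (my sketch; the probability of being the simulated copy and the payoff accounting are assumptions, not anything given in the comment):

```python
# If you are the simulated copy (probability s), the $100 is about to be
# terminated along with the simulation, and paying is what gets the real
# you the $10,000 on the other side. If you are the real one (probability
# 1 - s), paying simply costs $100.

def ev_of_paying(s: float) -> float:
    return s * 10_000 + (1 - s) * (-100)

print(ev_of_paying(0.01))  # ≈ 1.0 -- even a 1% credence roughly breaks even
print(ev_of_paying(0.50))  # 4950.0
```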
“What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
Not if precommitting potentially has other negative consequences. As Caspian suggested elsewhere in the thread, you should also consider the possibility that the universe contains No-megas who punish people who would cooperate with Omega.
...why should you also consider that possibility?
Because if that possibility exists, you should not necessarily precommit to cooperate with Omega, since that risks being punished by No-mega. In a universe of No-megas, precommitting to cooperate with Omega loses. This seems to me to create a distinction between the questions “what would you do upon encountering Omega?” and “what will you now precommit to doing upon encountering Omega?”
I suppose my real objection is that some people seem to have concluded in this thread that the correct thing to do is to, in advance, make some blanket precommitment to do the equivalent of cooperating with Omega should they ever find themselves in any similar problem. But I feel like these people have implicitly made some assumptions about what kind of Omega-like entities they are likely to encounter: for instance that they are much more likely to encounter Omega than No-mega.
But No-mega also punishes people who didn’t precommit but would have chosen to cooperate after meeting Omega. If you think No-mega is more likely than Omega, then you shouldn’t be that kind of person either. So it still doesn’t distinguish between the two questions.
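To make the disagreement concrete, a small sketch (mine; the No-mega penalty and the priors are made-up assumptions) of how the value of the cooperate-with-Omega disposition depends on which entity you expect to meet:

```python
# Expected value of being disposed to cooperate with Omega, given priors
# over meeting Omega versus No-mega. The -1000 punishment is illustrative.

def ev_cooperative_disposition(p_omega: float, p_nomega: float) -> float:
    ev_if_omega = 0.5 * 10_000 + 0.5 * (-100)   # 4950, as in the problem
    ev_if_nomega = -1_000                        # assumed No-mega punishment
    return p_omega * ev_if_omega + p_nomega * ev_if_nomega

print(ev_cooperative_disposition(p_omega=0.9, p_nomega=0.1))  # ≈ 4355
print(ev_cooperative_disposition(p_omega=0.1, p_nomega=0.9))  # ≈ -405
```

Either way the calculation ranges over dispositions, which is why it does not by itself distinguish the two questions.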
“Perfect knowledge”
Use a quantum coin; it conveniently comes up both.
I don’t see this situation as impossible, but I think it’s because I’ve interpreted it differently from you.
First of all, I’ll assume that everyone agrees that given a 50/50 bet to win $10,000 versus losing $100, everyone would take the bet. That’s a straightforward application of utilitarianism + probability theory = expected utility, right?
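For concreteness, a quick check of that arithmetic (my sketch, not part of the original comment; the payoffs are the ones from the problem):

```python
# Expected value of taking a 50/50 bet: win $10,000 on heads, lose $100 on tails.
ev_take = 0.5 * 10_000 + 0.5 * (-100)   # 4950.0
ev_decline = 0.0
assert ev_take > ev_decline
```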
So Omega correctly predicts that you would have taken the bet if he had offered it to you (a real no brainer; I too can predict that you would have taken the bet had he offered it).
But he didn’t offer it to you. He comes up now, telling you that he predicted that you would accept the bet, and then carried out the bet without asking you (since he already knew you would accept the bet), and it turns out you lost. Now he’s asking you to give him $100. He’s not predicting that you will give him that number, nor is he demanding or commanding you to give it. He’s merely asking. So the question is, do you do it?
I don’t think there’s any inconsistency in this scenario regardless of whether you decide to give him the money or not, since Omega hasn’t told you what his prediction would be (though if we accept that Omega is infallible, then his prediction is obviously exactly whatever you would actually do in that situation).
Omega hasn’t told you his predictions in the given scenario.
“If Omega really did calculate that I would give him the $100 then either he miscalculated, or this situation cannot actually occur.”
That’s absolutely true. In exactly the same way, if the Omega really did calculate that I wouldn’t give him the $100, then either he miscalculated or this situation cannot actually occur.
The difference between your counterfactual instance and my counterfactual instance is that yours just has a weird guy hassling you with a deal you want to reject, while my counterfactual is logically inconsistent for all values of ‘me’ that I identify as ‘me’.
Thank you. Now I grok.
So, if this scenario is logically inconsistent for all values of ‘me’ then there really is nothing that I can learn about ‘me’ from this problem. I wish I hadn’t thought about it so hard.
Logically inconsistent for all values of ‘me’ that would hand over the $100. For all values of ‘me’ that would keep the $100 it is logically consistent but rather obfuscated. It is difficult to answer a multiple choice question when considering the correct answer throws null.