That one sums it all up nicely!
I’m actually not quite satisfied with it. Probability is in the mind, which makes it difficult to know what I mean by “perfect knowledge”. Perfect knowledge would mean I also knew in advance that the coin would come up tails.
I know giving up the $100 is right, I’m just having a hard time figuring out what worlds the agent is summing over, and by what rules.
ETA: I think “if there was a true fact which my past self could have learned, which would have caused him to precommit etc.” should do the trick. Gonna have to sleep on that.
ETA2: “What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
...and that’s an even better way of putting it.
Note that this doesn’t apply here. It’s “What would you do if you were counterfactually mugged?” versus “What would you like to pre-commit to doing, should you ever be told about the coin flip before you knew the result?”. X isn’t the same.
MBlume:
This phrasing sounds about right. Whatever decision-making algorithm arrives at decision D when it is in situation X should also arrive at the same conditional decision, “if(X) then D”, before situation X appears. If you actually don’t give away the $100 in situation X, you should also plan not to give it away in case of X, before (or irrespective of whether) X happens. Whichever decision is the right one, there should be no inconsistency of this form. This grows harder if you must preserve the whole preference order.
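For concreteness, here is a minimal sketch of that consistency condition, assuming the usual counterfactual-mugging stakes (hand over $100 on tails; Omega pays $10,000 on heads only to agents whose policy is to pay) and a fair coin. None of these numbers appear in the comments above, so treat them as illustrative.

```python
# Minimal sketch: the conditional decision fixed before X should match the
# decision made in X. Stakes assumed: $100 cost on tails, $10,000 reward on
# heads (paid only to agents whose policy is "pay"), fair coin.

P_HEADS = 0.5
REWARD = 10_000   # paid on heads, but only to agents whose policy is "pay"
COST = 100        # handed over on tails if the policy is "pay"

def expected_value(policy_pays: bool) -> float:
    """Ex-ante expected value of adopting the conditional policy before the flip."""
    gain_on_heads = REWARD if policy_pays else 0
    loss_on_tails = COST if policy_pays else 0
    return P_HEADS * gain_on_heads - (1 - P_HEADS) * loss_on_tails

# The conditional decision fixed before X: whichever policy has the higher ex-ante EV.
precommitted_decision = max([True, False], key=expected_value)  # -> True (pay)

# Consistency ("if(X) then D"): the decision made *in* situation X (tails,
# asked for $100) should match what the ex-ante policy prescribes for X.
decision_in_situation_X = precommitted_decision

print(expected_value(True), expected_value(False))  # 4950.0 0.0
print("if(tails) then pay:", decision_in_situation_X)
```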
“Perfect knowledge would mean I also knew in advance that the coin would come up tails.”
This seems crucial to me.
Given what I know when asked to hand over the $100, I would want to have pre-committed to not pre-committing to hand over the $100 if offered the original bet.
Given what I would know if I were offered the bet before discovering the outcome of the flip, I would wish to pre-commit to handing it over.
From which information set should I evaluate this? The information set I am actually at seems the most natural choice, and it also seems to be the one that WINS (at least in this world); see the worked comparison below.
What am I missing?
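A small sketch of the two information sets described in the comment above, under the same assumed stakes as before (not stated in the thread): ex ante, the policy of paying looks like a bet worth +$4,950; ex post, with tails already observed, it looks like a certain loss of $100.

```python
# Expected value of paying, evaluated from the two information sets.
P_HEADS = 0.5
REWARD = 10_000   # paid on heads to agents who would hand over $100 on tails
COST = 100

# Ex ante (before the flip is known): committing to pay is a bet worth +$4,950.
ev_ex_ante_pay = P_HEADS * REWARD - (1 - P_HEADS) * COST   # 4950.0
ev_ex_ante_refuse = 0.0

# Ex post (tails already observed): paying is a certain loss in this world.
ev_ex_post_pay = -COST      # -100
ev_ex_post_refuse = 0.0

print("before the flip:", ev_ex_ante_pay, "vs", ev_ex_ante_refuse)
print("after seeing tails:", ev_ex_post_pay, "vs", ev_ex_post_refuse)
```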
I’ll give you the quick and dirty patch for dealing with Omega: there is no way to know that, at that moment, you are not inside his simulation. By giving him the $100, there is a chance you are transferring that money from within a simulation (which is about to be terminated) to outside the simulation, with a nice big multiplier.
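One rough way to formalize this patch (my reading, not the commenter's): let p be your credence that you are currently Omega's simulation, so that the $100 you hand over is about to be deleted along with you while your choice steers the assumed $10,000 to the real you.

```python
# Sketch of the "you might be the simulation" patch, under the assumed stakes.
def ev_of_paying(p_sim: float, reward: float = 10_000, cost: float = 100) -> float:
    # If you are the simulation, the payment is costless (the $100 is deleted
    # with you) and wins `reward` for the real you; if you are real (tails),
    # paying is a plain loss of `cost`.
    return p_sim * reward - (1 - p_sim) * cost

break_even = 100 / (10_000 + 100)                # ~0.0099: paying wins above this credence
print(ev_of_paying(0.5), round(break_even, 4))   # 4950.0 0.0099
```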
“What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
Not if precommitting potentially has other negative consequences. As Caspian suggested elsewhere in the thread, you should also consider the possibility that the universe contains No-megas who punish people who would cooperate with Omega.
...why should you also consider that possibility?
Because if that possibility exists, you should not necessarily precommit to cooperate with Omega, since that risks being punished by No-mega. In a universe of No-megas, precommitting to cooperate with Omega loses. This seems to me to create a distinction between the questions “what would you do upon encountering Omega?” and “what will you now precommit to doing upon encountering Omega?”
I suppose my real objection is that some people seem to have concluded in this thread that the correct thing to do is to, in advance, make some blanket precommitment to do the equivalent of cooperating with Omega should they ever find themselves in any similar problem. But I feel like these people have implicitly made some assumptions about what kind of Omega-like entities they are likely to encounter: for instance that they are much more likely to encounter Omega than No-mega.
But No-mega also punishes people who didn’t precommit but would have chosen to cooperate after meeting Omega. If you think No-mega is more likely than Omega, then you shouldn’t be that kind of person either. So it still doesn’t distinguish between the two questions.
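A sketch of how this exchange turns on the assumed relative likelihood of Omega-like and No-mega-like encounters; the $1,000 No-mega penalty is an arbitrary illustrative figure, since only its sign matters to the argument.

```python
# Whether being the kind of person who pays Omega wins depends on how likely
# Omega and No-mega encounters are assumed to be.
def ev_of_being_a_payer(p_omega: float, p_nomega: float,
                        reward: float = 10_000, cost: float = 100,
                        penalty: float = 1_000) -> float:
    # Omega encounter: fair coin, reward on heads, pay `cost` on tails.
    ev_omega = 0.5 * reward - 0.5 * cost            # +4950 per Omega encounter
    # No-mega encounter: punished merely for being the kind of person who would pay.
    ev_nomega = -penalty
    return p_omega * ev_omega + p_nomega * ev_nomega

print(ev_of_being_a_payer(p_omega=0.9, p_nomega=0.1))   # 4355.0: payer wins
print(ev_of_being_a_payer(p_omega=0.1, p_nomega=0.9))   # -405.0: payer loses
```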
“Perfect knowledge would mean I also knew in advance that the coin would come up tails.”
Use a quantum coin: it conveniently comes up both.