Can you please explain the reasoning behind this? Given all of the restrictions mentioned (no iterations, no possible benefit to this self) I can’t see any reason to part with my hard earned cash. My “gut” says “Hell no!” but I’m curious to see if I’m missing something.
There are various intuition pumps to explain the answer.
The simplest is to imagine that a moment from now, Omega walks up to you and says “I’m sorry, I would have given you $10000, except I simulated what would happen if I asked you for $100 and you refused”. In that case, you would certainly wish you had been the sort of person to give up the $100.
Which means that right now, with both scenarios equally probable, you should want to be the sort of person who will give up the $100, since if you are that sort of person, there’s half a chance you’ll get $10000.
If you want to be the sort of person who’ll do X given Y, then when Y turns up, you’d better bloody well do X.
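The pre-flip arithmetic behind this can be made concrete with a short sketch (Python is my choice here, not the thread's; the $10,000 / $100 / fair-coin numbers are the problem's own):

```python
# Expected value of each disposition in the counterfactual mugging,
# evaluated before the coin is flipped.
PRIZE, COST = 10_000, 100

def expected_value(gives_100: bool) -> float:
    # Heads: Omega pays the prize only if you are the sort who gives.
    heads = PRIZE if gives_100 else 0
    # Tails: you are asked for $100; givers pay, refusers keep it.
    tails = -COST if gives_100 else 0
    return 0.5 * heads + 0.5 * tails

print(expected_value(True))   # giver: 4950.0
print(expected_value(False))  # refuser: 0.0
```

Viewed from before the flip, the "give" disposition is worth $4950 in expectation and the "refuse" disposition $0, which is the sense in which you should want to be the giving sort.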
Thanks, it’s good to know I’m on the right track =)
I think this core insight is one of the clearest changes in my thought process since starting to read OB/LW—I can’t imagine myself leaping to “well, I’d hand him $100, of course” a couple years ago.
If you want to be the sort of person who’ll do X given Y, then when Y turns up, you’d better bloody well do X.
I think this describes one of the core principles of virtue theory under any ethical system.
I wonder how much it depends upon accidents of human psychology, like our tendency to form habits, and how much of it is definitional (if you don’t X when Y, then you’re simply not the sort of person who Xes when Y)
That’s not the situation in question. The scenario laid out by Vladimir_Nesov does not allow for an equal probability of getting $10000 and paying $100. Omega has already flipped the coin, and it’s already been decided that I’m on the “losing” side. Join that with the fact that me giving $100 now does not increase the chance of me getting $10000 in the future because there is no repetition.
Perhaps there’s something fundamental I’m missing here, but the linearity of events seems pretty clear. If Omega really did calculate that I would give him the $100 then either he miscalculated, or this situation cannot actually occur.
-- EDIT --
There is a third possibility after reading Cameron’s reply… If Omega is correct and honest, then I am indeed going to give up the money.
But it’s a bit of a trick question, isn’t it? I’m going to give up the money because Omega says I’m going to give up the money and everything Omega says is gospel truth. However, if Omega hadn’t said that I would give up the money, then I wouldn’t have given up the money. Which makes this a bit of an impossible situation.
Assuming the existence of Omega, his intelligence, and his honesty, this scenario is an impossibility.
I feel like a man in an Escher painting, with all these recursive hypothetical mes, hypothetical kuriges, and hypothetical omegas.
I’m saying, go ahead and start by imagining a situation like the one in the problem, except it’s all happening in the future—you don’t yet know how the coin will land.
You would want to decide in advance that if the coin came up against you, you would cough up $100.
The ability to precommit in this way gives you an advantage. It gives you half a chance at $10000 you would not otherwise have had.
So it’s a shame that in the problem as stated, you don’t get to precommit.
But the fact that you don’t get advance knowledge shouldn’t change anything. You can just decide for yourself, right now, to follow this simple rule:
If there is an action to which my past self would have precommitted, given perfect knowledge and my current preferences, I will take that action.
By adopting this rule, in any problem in which the opportunity for precommitting would have given you an advantage, you wind up gaining that advantage anyway.
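As an illustration of that rule (a sketch with the thread's numbers; the function name is mine), "what would I have precommitted to?" reduces to ranking whole-game policies by expected value, then acting out the winner when the situation actually arrives:

```python
def best_precommitment(outcomes):
    """outcomes maps each policy to a list of (probability, payoff) pairs."""
    def ev(policy):
        return sum(p * payoff for p, payoff in outcomes[policy])
    return max(outcomes, key=ev)

# The counterfactual mugging, evaluated from before the coin flip:
outcomes = {
    "give":   [(0.5, 10_000), (0.5, -100)],  # heads: get paid; tails: pay $100
    "refuse": [(0.5, 0), (0.5, 0)],          # Omega pays refusers nothing
}

# When you are actually asked for the $100, you act out this policy:
print(best_precommitment(outcomes))  # give
```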
I’m actually not quite satisfied with it. Probability is in the mind, which makes it difficult to know what I mean by “perfect knowledge”. Perfect knowledge would mean I also knew in advance that the coin would come up tails.
I know giving up the $100 is right, I’m just having a hard time figuring out what worlds the agent is summing over, and by what rules.
ETA: I think “if there was a true fact which my past self could have learned, which would have caused him to precommit etc.” should do the trick. Gonna have to sleep on that.
ETA2: “What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
ETA2: “What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
Note that this doesn’t apply here. It’s “What would you do if you were counterfactually mugged?” versus “What would you like to pre-commit to doing, should you ever be told about the coin flip before you knew the result?”. X isn’t the same.
“What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
This phrasing sounds about right. Whatever decision-making algorithm you have drawing your decision D when it’s in situation X, should also come to the same conditional decision before the situation X appeared, “if(X) then D”. If you actually don’t give away $100 in situation X, you should also plan to not give away $100 in case of X, before (or irrespective of whether) X happens. Whichever decision is the right one, there should be no inconsistency of this form. This grows harder if you must preserve the whole preference order.
“Perfect knowledge would mean I also knew in advance that the coin would come up tails.”
This seems crucial to me.
Given what I know when asked to hand over the $100, I would want to have pre-committed to not pre-committing to hand over the $100 if offered the original bet.
Given what I would know if I were offered the bet before discovering the outcome of the flip, I would wish to pre-commit to handing it over.
From which information set should I evaluate this? The information set I am actually at seems the most natural choice, and it also seems to be the one that WINS (at least in this world). What am I missing?
I’ll give you the quick and dirty patch for dealing with Omega:
There is no way to know that, at that moment, you are not inside of his simulation. By giving him the $100, there is a chance you are transferring that money from within a simulation (which is about to be terminated) to outside of the simulation, with a nice big multiplier.
“What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
Not if precommitting potentially has other negative consequences. As Caspian suggested elsewhere in the thread, you should also consider the possibility that the universe contains No-megas who punish people who would cooperate with Omega.
Because if that possibility exists, you should not necessarily precommit to cooperate with Omega, since that risks being punished by No-mega. In a universe of No-megas, precommitting to cooperate with Omega loses. This seems to me to create a distinction between the questions “what would you do upon encountering Omega?” and “what will you now precommit to doing upon encountering Omega?”
I suppose my real objection is that some people seem to have concluded in this thread that the correct thing to do is to, in advance, make some blanket precommitment to do the equivalent of cooperating with Omega should they ever find themselves in any similar problem. But I feel like these people have implicitly made some assumptions about what kind of Omega-like entities they are likely to encounter: for instance that they are much more likely to encounter Omega than No-mega.
But No-mega also punishes people who didn’t precommit but would have chosen to cooperate after meeting Omega. If you think No-mega is more likely than Omega, then you shouldn’t be that kind of person either. So it still doesn’t distinguish between the two questions.
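How much the No-mega possibility matters depends on numbers the thread leaves open. As a sketch (the $10,000 No-mega penalty and the mixture probability are my assumptions, not Caspian's), the value of the cooperating disposition flips with the expected mix of predictors:

```python
def cooperator_ev(p_omega: float, nomega_penalty: float = 10_000) -> float:
    """EV of being the sort of person who pays Omega, when the predictor
    you meet is Omega with probability p_omega, else a No-mega."""
    omega_ev = 0.5 * 10_000 + 0.5 * (-100)  # 4950, as computed elsewhere
    return p_omega * omega_ev + (1 - p_omega) * (-nomega_penalty)

print(cooperator_ev(1.0))  # all-Omega universe: cooperating wins
print(cooperator_ev(0.0))  # all-No-mega universe: cooperating loses
```

Which is exactly the point about implicit assumptions: the blanket precommitment is only clearly right if you expect Omegas to be much more common than No-megas.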
I don’t see this situation as impossible, but I think it’s because I’ve interpreted it differently from you.
First of all, I’ll assume that everyone agrees that given a 50/50 bet to win $10,000 versus losing $100, everyone would take the bet. That’s a straightforward application of utilitarianism + probability theory = expected utility, right?
So Omega correctly predicts that you would have taken the bet if he had offered it to you (a real no brainer; I too can predict that you would have taken the bet had he offered it).
But he didn’t offer it to you. He comes up now, telling you that he predicted that you would accept the bet, and then carried out the bet without asking you (since he already knew you would accept the bet), and it turns out you lost. Now he’s asking you to give him $100. He’s not predicting that you will give him that number, nor is he demanding or commanding you to give it. He’s merely asking. So the question is, do you do it?
I don’t think there’s any inconsistency in this scenario regardless of whether you decide to give him the money or not, since Omega hasn’t told you what his prediction would be (though if we accept that Omega is infallible, then his prediction is obviously exactly whatever you would actually do in that situation).
Perhaps there’s something fundamental I’m missing here, but the linearity of events seems pretty clear. If Omega really did calculate that I would give him the $100 then either he miscalculated, or this situation cannot actually occur.
That’s absolutely true. In exactly the same way, if the Omega really did calculate that I wouldn’t give him the $100 then either he miscalculated, or this situation cannot actually occur.
The difference between your counterfactual instance and my counterfactual instance is that yours just has a weird guy hassling you with a deal you want to reject, while my counterfactual is logically inconsistent for all values of ‘me’ that I identify as ‘me’.
So, if this scenario is logically inconsistent for all values of ‘me’ then there really is nothing that I can learn about ‘me’ from this problem. I wish I hadn’t thought about it so hard.
Logically inconsistent for all values of ‘me’ that would hand over the $100. For all values of ‘me’ that would keep the $100 it is logically consistent but rather obfuscated. It is difficult to answer a multiple choice question when considering the correct answer throws null.
The simplest is to imagine that a moment from now, Omega walks up to you and says “I’m sorry, I would have given you $10000, except I simulated what would happen if I asked you for $100 and you refused”. In that case, you would certainly wish you had been the sort of person to give up the $100.
I liked this position—insightful, so I’m definitely upvoting.
But I’m not altogether convinced it’s a completely compelling argument. With the amounts reversed, Omega could have walked up to you and said “I would have given you $100, except if I asked you for $10,000 you would have refused.” You’d then certainly wish to have been the sort of person to counterfactually have given up the $10,000, because in the real world it’d mean you’d get $100, even though you’d certainly REJECT that bet if you had the choice in advance.
Not necessarily; it depends on relative frequency. If Omega has a 10^-9 chance of asking me for $10000 and otherwise will simulate my response to judge whether to give me $100, and if I know that (perhaps Omega earlier warned me of this), I would want to be the type of person who gives the money.
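The crossover frequency here is easy to pin down (a sketch with the reversed amounts from this exchange; the linear-EV framing is mine):

```python
def giver_ev(p_demand: float) -> float:
    # With probability p_demand Omega demands $10,000; otherwise it pays
    # $100 iff it predicts you are the type who would have paid.
    return p_demand * -10_000 + (1 - p_demand) * 100

# Being a giver beats refusing (EV 0) only when demands are rare enough:
threshold = 100 / 10_100           # giver_ev(threshold) == 0, about 0.0099
print(giver_ev(1e-9) > 0)          # True: at 10^-9 you want to be a giver
print(giver_ev(0.5) > 0)           # False: at 50/50 you'd reject the bet
```

So both replies are consistent: below roughly a 1-in-101 demand frequency the giving disposition wins, and above it you'd rather be a refuser.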
If we’re going to invent someone who can read thoughts perfectly, we may as well invent someone who can conceal thoughts perfectly.
Anyway, there aren’t any beings like Omega running around to my knowledge. If you think that concealing motivations is harder than I think, and that the only way to make another human think you’re a certain way is to be that way, say that.
And if Omega comes up to me and says “I was going to kill you if you gave me $100. But since I’ve worked out that you won’t, I’ll leave you alone.” then I’ll be damn glad I wouldn’t agree.
This really does seem like pointless speculation.
Of course, I live in a world where there is no being like Omega that I know of. If I knew otherwise, and knew something of their properties, I might govern myself differently.
We’re not talking Pascal’s Wager here, you’re not guessing at the behaviour of capricious omnipotent beings. Omega has told you his properties, and is assumed to be trustworthy.
You are stating that. But as far as I can tell Omega is telling me it’s a capricious omnipotent being. If there is a distinction, I’m not seeing it. Let me break it down for you:
1) Capricious → I am completely unable to predict its actions. Yes.
2) Omnipotent → Can do the seemingly impossible. Yes.
So, what’s the difference?
It’s not capricious in the sense you give: you are capable of predicting some of its actions: because it’s assumed Omega is perfectly trustworthy, you can predict with certainty what it will do if it tells you what it will do.
So, if it says it’ll give you $10k in some condition (say, if you one-box its challenge), you can predict that it’ll give you the money if that condition arises.
If it were capricious in the sense of complete inability of being predicted, it might amputate three of your toes and give you a flower garland.
Note that the problem supposes you do have certainty that Omega is trustworthy; I see no way of reaching that epistemological state, but then again I see no way Omega could be omnipotent, either.
On a somewhat unrelated note, why would Omega ask you for $100 if it had simulated you wouldn’t give it the money? Also, why would it do the same if it had simulated you would give it the money? What possible use would an omnipotent agent have for $100?
And his asking you for $100 could always be PART of the simulation.
Yes, it’s quite reasonable that if it was curious about you it would simulate you and ask the simulation a question. But once it did that, since the simulation was perfect, why would it waste the time to ask the real you? After all, in the time it takes you to understand Omega’s question it could probably simulate you many times over.
So I’m starting to think that encountering Omega is actually pretty strong evidence for the fact that you’re simulated.
Maybe Omega recognizes in advance that you might think this way, doesn’t want it to happen, and so precommits to asking the real you. With the existence of this precommitment, you may not properly make this reasoning. Moreover, you should be able to figure out that Omega would precommit, thus making it unnecessary for him to explicitly tell you he’s doing so.
Maybe Omega [...] doesn’t want it to happen [...] Moreover, you should be able to figure out that Omega would precommit
(Emphasis mine.)
I don’t think, given the usual problem formulation, that one can figure out what Omega wants without Omega explicitly saying it, and maybe not even in that case.
It’s a bit like a deal with a not-necessarily-evil devil. Even if it tells you something and you’re sure it’s not lying and you think the wording is perfectly clear, you should still assign a very high probability that you have no idea what’s really going on and why.
If we assume I’m rational, then I’m not going to assume anything about Omega. I’ll base my decisions on the given evidence. So far, that appears to be described as being no more and no less than what Omega cares to tell us.
I realize this is fighting the problem, but: If I remember playing a billion rounds of the game with Omega, that is pretty strong evidence that I’m a (slightly altered) simulation. An average human takes about ten million breaths each year...
OK, so assume that I’m a transhuman and can actually do something a billion times. But if Omega can simulate me perfectly, why would it actually waste the time to ask you a question, once it simulated you answering it? Let alone do that a billion times… This also seems like evidence that I’m actually simulated. (I notice that in most statements of the problem, the wording is such that it is implied but not clearly stated that the non-simulated version of you is ever involved.)
I work on AI. In particular, on decision systems stable under self-modification. Any agent who does not give the $100 in situations like this will self-modify to give $100 in situations like this. I don’t spend a whole lot of time thinking about decision theories that are unstable under reflection. QED.
If you need special cases, your decision theory is not consistent under reflection. In other words, it should simply always do the thing that it would precommit to doing, because, as MBlume put it, the decision theory is formulated in such fashion that “What would you precommit to?” and “What will you do?” work out to be one and the same question.
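A toy version of that self-modification argument (entirely my sketch, not code from the thread): an agent that scores dispositions by pre-flip expected value will rewrite "refuse" into "give".

```python
def ev(disposition: str) -> float:
    # Pre-flip expected value of each disposition, as computed in the thread.
    return {"give": 0.5 * 10_000 - 0.5 * 100, "refuse": 0.0}[disposition]

class Agent:
    def __init__(self, disposition: str):
        self.disposition = disposition

    def self_modify(self) -> None:
        # Adopt whichever disposition the agent itself scores higher.
        self.disposition = max(("give", "refuse"), key=ev)

agent = Agent("refuse")
agent.self_modify()
print(agent.disposition)  # give
```

A decision theory that needs this rewrite step is exactly one that is unstable under reflection; the stable one gives the $100 from the start.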
But this is precisely what humans don’t do, because we respond to a “near” situation differently than a “far” one. Your advance prediction of your decision is untrustworthy unless you can successfully simulate the real future environment in your mind with sufficient sensory detail to invoke “near” reasoning. Otherwise, you will fail to reach a consistent decision in the actual situation.
Unless of course, In the actual situation, you’re projecting back, “What would I have decided in advance to do had I thought about this in advance?”—and you successfully mitigate all priming effects and situationally-motivated reasoning.
Or to put all of the above in short, common-wisdom form: “that’s easy for you to say NOW...” ;-)
Before tossing the coin, the Omega perfectly emulates my decision making process. In this emulation he tells me that I lost the coin toss, explains the deal and asks me to give him $100. If this emulated me gives up the $100 then he has a good chance of getting $10,000.
I have absolutely no way of knowing whether I am the ‘emulated me’ or the real me. Vladimir’s specification is quite unambiguous. I, me, the one doing the deciding right now in this real world, am the same me as the one inside the Omega’s head. If the emulation is in any way different to me then the Omega isn’t the Omega. The guy in the Omega’s head has been offered a deal that any rational man would accept, and I am that man.
So, it may sound stupid that I’m giving up $100 with no hope of getting anything back. But that’s because the counterfactual is stupid, not me.
So, it may sound stupid that I’m giving up $100 with no hope of getting anything back. But that’s because the counterfactual is stupid, not me.
(Disclaimer: I’m going to use the exact language you used, which means I will call you “stupid” in this post. I apologize if this comes off as trollish. I will admit that I am also quite torn about this decision, and I feel quite stupid too.)
No offense, but assuming free will, you are the one who is deciding to actually hand over the $100. The counterfactual isn’t the one making the decision. You are. You are in a situation, and there are two possible actions (lose $100 or don’t lose $100), and you are choosing to lose $100.
And now I try to calculate what you should treat as being the probability that you’re being emulated. Assume that Omega only emulates you if the coin comes up heads.
Suppose you decide beforehand that you are going to give Omega the $100, as you ought to. The expected value of this is $4950, as has been calculated.
Suppose that instead, you decide beforehand that E is the probability you’re being emulated, given that you hear that the coin came up tails. You’ll still decide to give Omega the $100; therefore, your expected value if you hear that it came up heads is $10,000. Your expected value if you hear that the coin came up tails is -$100(1-E) + $10,000E.
The probability that you hear that the coin comes up tails should be given by P(H) + P(T and ~E) + P(T and E) = 1, P(H) = P(T and ~E), P(T and ~E) = P(T) - P(T and E), P(T and E) = P(E|T) * P(T). Solving these equations, I get P(E|T) = 2, which probably means I’ve made a mistake somewhere. If not, c’est l’Omega?
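One way to make that calculation come out consistent (the setup is my assumption, not established in the thread: Omega flips once, runs exactly one emulation on heads, and asks the real you on tails, so "hearing tails" happens in both worlds):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_tails = Fraction(1, 2)

# Experiences of being told "tails": the emulation inside every
# heads-world, plus the real you in every tails-world.
told_tails_emulated = p_heads * 1
told_tails_real = p_tails * 1

p_emulated = told_tails_emulated / (told_tails_emulated + told_tails_real)
print(p_emulated)  # 1/2
```

Under those assumptions P(E|told tails) is 1/2, not the impossible 2; the trap in the original equations is treating "hearing tails" as evidence only about the coin rather than also about which copy you are.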
To REALLY evaluate that, we technically need to know how long Omega runs the simulation for.
Now, we have two options: one, assume Omega keeps running the simulation indefinitely; two, assume that Omega shuts the simulation down once he has the info he’s looking for (and before he has to worry about debugging the simulation).
In #1, what we are left with is p(S)=1/3, p(H)=1/3, p(T)=1/3, which means we’re moving $200/3 from part of our possibility cloud to gain $10,000/3 in another part. In #2, we’re moving a total of $100/2 to gain $10,000/2. The $100 in the simulation is quantum-virtual.
So, unless you have reason to suspect Omega is running a LOT of simulations of you, AND not terminating them after a minute or so... (aka, is not inadvertently simulation-mugging you)...
You can generally treat Omega’s simulation capacity as a dashed causality arrow from one universe to another, sort of like the shadow produced by the simulation...
Can you please explain the reasoning behind this? Given all of the restrictions mentioned (no iterations, no possible benefit to this self) I can’t see any reason to part with my hard earned cash. My “gut” says “Hell no!” but I’m curious to see if I’m missing something.
There are various intuition pumps to explain the answer.
The simplest is to imagine that a moment from now, Omega walks up to you and says “I’m sorry, I would have given you $10000, except I simulated what would happen if I asked you for $100 and you refused”. In that case, you would certainly wish you had been the sort of person to give up the $100.
Which means that right now, with both scenarios equally probable, you should want to be the sort of person who will give up the $100, since if you are that sort of person, there’s half a chance you’ll get $10000.
If you want to be the sort of person who’ll do X given Y, then when Y turns up, you’d better bloody well do X.
Well said. That’s a lot of the motivation behind my choice of decision theory in a nutshell.
Thanks, it’s good to know I’m on the right track =)
I think this core insight is one of the clearest changes in my thought process since starting to read OB/LW—I can’t imagine myself leaping to “well, I’d hand him $100, of course” a couple years ago.
I think this describes one of the core principles of virtue theory under any ethical system.
I wonder how much it depends upon accidents of human psychology, like our tendency to form habits, and how much of it is definitional (if you don’t X when Y, then you’re simply not the sort of person who Xes when Y)
That’s not the situation in question. The scenario laid out by Vladimir_Nesov does not allow for an equal probability of getting $10000 and paying $100. Omega has already flipped the coin, and it’s already been decided that I’m on the “losing” side. Join that with the fact that me giving $100 now does not increase the chance of me getting $10000 in the future because there is no repetition.
Perhaps there’s something fundamental I’m missing here, but the linearity of events seems pretty clear. If Omega really did calculate that I would give him the $100 then either he miscalculated, or this situation cannot actually occur.
-- EDIT --
There is a third possibility after reading Cameron’s reply… If Omega is correct and honest, then I am indeed going to give up the money.
But it’s a bit of a trick question, isn’t it? I’m going to give up the money because Omega says I’m going to give up the money and everything Omega says is gospel truth. However, if Omega hadn’t said that I would give up the money, then I wouldn’t of given up the money. Which makes this a bit of an impossible situation.
Assuming the existence of Omega, his intelligence, and his honesty, this scenario is an impossibility.
I feel like a man in an Escher painting, with all these recursive hypothetical mes, hypothetical kuriges, and hypothetical omegas.
I’m saying, go ahead and start by imagining a situation like the one in the problem, except it’s all happening in the future—you don’t yet know how the coin will land.
You would want to decide in advance that if the coin came up against you, you would cough up $100.
The ability to precommit in this way gives you an advantage. It gives you half a chance at $10000 you would not otherwise have had.
So it’s a shame that in the problem as stated, you don’t get to precommit.
But the fact that you don’t get advance knowledge shouldn’t change anything. You can just decide for yourself, right now, to follow this simple rule:
If there is an action to which my past self would have precommited, given perfect knowledge, and my current preferences, I will take that action.
By adopting this rule, in any problem in which the oppurtunity for precommiting would have given you an advantage, you wind up gaining that advantage anyway.
That one sums it all up nicely!
I’m actually not quite satisfied with it. Probability is in the mind, which makes it difficult to know what I mean by “perfect knowledge”. Perfect knowledge would mean I also knew in advance that the coin would come up tails.
I know giving up the $100 is right, I’m just having a hard time figuring out what worlds the agent is summing over, and by what rules.
ETA: I think “if there was a true fact which my past self could have learned, which would have caused him to precommit etc.” should do the trick. Gonna have to sleep on that.
ETA2: “What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
...and that’s an even better way of putting it.
Note that this doesn’t apply here. It’s “What would you do if you were counterfactually mugged?” versus “What would you like to pre-commit to doing, should you ever be told about the coin flip before you knew the result?”. X isn’t the same.
MBlume:
This phrasing sounds about right. Whatever decision-making algorithm you have drawing your decision D when it’s in situation X, should also come to the same conditional decision before the situation X appeared, “if(X) then D”. If you actually don’t give away $100 in situation X, you should also plan to not give away $100 in case of X, before (or irrespective of whether) X happens. Whichever decision is the right one, there should be no inconsistency of this form. This grows harder if you must preserve the whole preference order.
“Perfect knowledge would mean I also knew in advance that the coin would come up tails.”
This seems crucial to me.
Given what I know when asked to hand over the $100, I would want to have pre-committed to not pre-committing to hand over the $100 if offered the original bet.
Given what I would know if I were offered the bet before discovering the outcome of the flip I would wish to pre-commit to handing it over.
From which information set I should evaluate this? The information set I am actually at seems the most natural choice, and it also seems to be the one that WINS (at least in this world).
What am I missing?
I’ll give you the quick and dirty patch for dealing with omega: There is no way to know that, at that moment, you are not inside of his simulation. by giving him the 100$, there is a chance you are tranfering that money from within a simulation-which is about to be terminated-to outside of the simulation, with a nice big multiplier.
Not if precommiting potentially has other negative consequences. As Caspian suggested elsewhere in the thread, you should also consider the possibility that the universe contains No-megas who punish people who would cooperate with Omega.
...why should you also consider that possibility?
Because if that possibility exists, you should not necessarily precommit to cooperate with Omega, since that risks being punished by No-mega. In a universe of No-megas, precommiting to cooperate with Omega loses. This seems to me to create a distinction between the questions “what would you do upon encountering Omega?” and “what will you now precommit to doing upon encountering Omega?”
I suppose my real objection is that some people seem to have concluded in this thread that the correct thing to do is to, in advance, make some blanket precommitment to do the equivalent of cooperating with Omega should they ever find themselves in any similar problem. But I feel like these people have implicitly made some assumptions about what kind of Omega-like entities they are likely to encounter: for instance that they are much more likely to encounter Omega than No-mega.
But No-mega also punishes people who didn’t precommit but would have chosen to cooperate after meeting Omega. If you think No-mega is more likely than Omega, then you shouldn’t be that kind of person either. So it still doesn’t distinguish between the two questions.
|Perfect knowledge
use a Quantum coin-it conveniently comes up both.
I don’t see this situation is impossible, but I think it’s because I’ve interpreted it differently from you.
First of all, I’ll assume that everyone agrees that given a 50⁄50 bet to win $10′000 versus losing $100, everyone would take the bet. That’s a straightforward application of utilitarianism + probability theory = expected utility, right?
So Omega correctly predicts that you would have taken the bet if he had offered it to you (a real no brainer; I too can predict that you would have taken the bet had he offered it).
But he didn’t offer it to you. He comes up now, telling you that he predicted that you would accept the bet, and then carried out the bet without asking you (since he already knew you would accept the bet), and it turns out you lost. Now he’s asking you to give him $100. He’s not predicting that you will give him that number, nor is he demanding or commanding you to give it. He’s merely asking. So the question is, do you do it?
I don’t think there’s any inconsistency in this scenario regardless of whether you decide to give him the money or not, since Omega hasn’t told you what his prediction would be (though if we accept that Omega is infallible, then his prediction is obviously exactly whatever you would actually do in that situation).
Omega hasn’t told you his predictions in the given scenario.
That’s absolutely true. In exactly the same way, if the Omega really did calculate that I wouldn’t give him the $100 then either he miscalculated, or this situation cannot actually occur.
The difference between your counterfactual instance and my counterfactual instance is that yours just has a weird guy hassling you with deal you want to reject while my counterfactual is logically inconsistent for all values of ‘me’ that I identify as ‘me’.
Thank you. Now I grok.
So, if this scenario is logically inconsistent for all values of ‘me’ then there really is nothing that I can learn about ‘me’ from this problem. I wish I hadn’t thought about it so hard.
Logically inconsistent for all values of ″ that would hand over the $100. For all values of ″ that would keep the $100 it is logically consistent but rather obfuscated. It is difficult to answer a multiple choice question when considering the correct answer throws null.
I liked this position—insightful, so I’m definitely upvoting.
But I’m not altogether convinced it’s a completely compelling argument. With the amounts reversed, Omega could have walked up to you and said “I would have given you $100 except if I asked you for $10.000 you would have refused.” You’d then certainly wish to have been the sort of person to counterfactually have given up the $10000, because in the real world it’d mean you’d get $100, even though you’d certainly REJECT that bet if you had a choice for it in advance.
Not necessarily; it depends on relative frequency. If Omega has a 10^-9 chance of asking me for $10000 and otherwise will simulate my response to judge whether to give me $100, and if I know that (perhaps Omega earlier warned me of this), I would want to be the type of person who gives the money.
Is that an acceptable correction?
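The relative-frequency point can be made concrete with a quick expected-value sketch (the probability and payoffs here are the hypothetical ones from the comment above, not anything Omega has actually announced):

```python
# Hedged sketch: suppose Omega asks you for $10,000 with probability p,
# and otherwise awards you $100 iff a simulation of you would have paid.
# Expected value of being the type of person who pays:
def ev_of_paying(p: float) -> float:
    return p * (-10_000) + (1 - p) * 100

print(ev_of_paying(1e-9))  # just under $100: paying is worthwhile
print(ev_of_paying(0.5))   # -$4,950: paying is a losing bet
```

Whether you should want to be a payer thus depends entirely on the relative frequency of the two branches, which is exactly the point being made above.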
Well, with a being like Omega running around, the two become more or less identical.
If we’re going to invent someone who can read thoughts perfectly, we may as well invent someone who can conceal thoughts perfectly.
Anyway, there aren’t any beings like Omega running around to my knowledge. If you think that concealing motivations is harder than I think, and that the only way to make another human think you’re a certain way is to be that way, say that.
And if Omega comes up to me and says “I was going to kill you if you gave me $100. But since I’ve worked out that you won’t, I’ll leave you alone.” then I’ll be damn glad I wouldn’t agree.
This really does seem like pointless speculation.
Of course, I live in a world where there is no being like Omega that I know of. If I knew otherwise, and knew something of their properties, I might govern myself differently.
We’re not talking Pascal’s Wager here, you’re not guessing at the behaviour of capricious omnipotent beings. Omega has told you his properties, and is assumed to be trustworthy.
You are stating that. But as far as I can tell Omega is telling me it’s a capricious omnipotent being. If there is a distinction, I’m not seeing it. Let me break it down for you:
1) Capricious → I am completely unable to predict its actions. Yes.
2) Omnipotent → Can do the seemingly impossible. Yes.
So, what’s the difference?
It’s not capricious in the sense you give: you are capable of predicting some of its actions, because it’s assumed Omega is perfectly trustworthy; you can predict with certainty what it will do if it tells you what it will do.
So, if it says it’ll give you 10k$ in some condition (say, if you one-box its challenge), you can predict that it’ll give you the money if that condition arises.
If it were capricious in the sense of complete inability of being predicted, it might amputate three of your toes and give you a flower garland.
Note that the problem supposes you do have certainty that Omega is trustworthy; I see no way of reaching that epistemological state, but then again I see no way Omega could be omnipotent, either.
On a somewhat unrelated note, why would Omega ask you for 100$ if it had simulated you wouldn’t give it the money? Also, why would it do the same if it had simulated you would give it the money? What possible use would an omnipotent agent have for 100$?
Omega is assumed to be mildly bored and mildly anthropic. And his asking you for 100$ could always be PART of the simulation.
Yes, it’s quite reasonable that if it was curious about you it would simulate you and ask the simulation a question. But once it did that, since the simulation was perfect, why would it waste the time to ask the real you? After all, in the time it takes you to understand Omega’s question it could probably simulate you many times over.
So I’m starting to think that encountering Omega is actually pretty strong evidence for the fact that you’re simulated.
Maybe Omega recognizes in advance that you might think this way, doesn’t want it to happen, and so precommits to asking the real you. With the existence of this precommitment, you may not properly make this reasoning. Moreover, you should be able to figure out that Omega would precommit, thus making it unnecessary for him to explicitly tell you he’s doing so.
(Emphasis mine.)
I don’t think, given the usual problem formulation, that one can figure out what Omega wants without Omega explicitly saying it, and maybe not even in that case.
It’s a bit like a deal with a not-necessarily-evil devil. Even if it tells you something and you’re sure it’s not lying and you think the wording is perfectly clear, you should still assign a very high probability that you have no idea what’s really going on and why.
If we assume I’m rational, then I’m not going to assume anything about Omega. I’ll base my decisions on the given evidence. So far, that appears to be described as being no more and no less than what Omega cares to tell us.
Fine, then interchange “assume Omega is honest” with, say, “I’ve played a billion rounds of one-box two-box with him” …It should be close enough.
I realize this is fighting the problem, but: If I remember playing a billion rounds of the game with Omega, that is pretty strong evidence that I’m a (slightly altered) simulation. An average human takes about ten million breaths each year...
OK, so assume that I’m a transhuman and can actually do something a billion times. But if Omega can simulate me perfectly, why would it actually waste the time to ask me a question, once it simulated me answering it? Let alone do that a billion times… This also seems like evidence that I’m actually simulated. (I notice that in most statements of the problem, the wording is such that it is implied but not clearly stated that the non-simulated version of you is ever involved.)
I work on AI. In particular, on decision systems stable under self-modification. Any agent who does not give the $100 in situations like this will self-modify to give $100 in situations like this. I don’t spend a whole lot of time thinking about decision theories that are unstable under reflection. QED.
Even considering situations like this and having special cases for them sounds like it would add a bit much cruft to the system.
Do you have a working AI that I could look at to see how this would work?
If you need special cases, your decision theory is not consistent under reflection. In other words, it should simply always do the thing that it would precommit to doing, because, as MBlume put it, the decision theory is formulated in such fashion that “What would you precommit to?” and “What will you do?” work out to be one and the same question.
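As a toy illustration of that consistency (my own framing, using the standard $10,000/$100 payoffs from this thread), an agent that scores whole policies rather than isolated acts selects “give” both when precommitting and when actually asked:

```python
# Toy counterfactual mugging: fair coin; on heads Omega pays $10,000
# iff the agent's policy would hand over $100 on tails; on tails the
# policy either pays $100 or keeps it.
def payoff(gives: bool, heads: bool) -> int:
    if heads:
        return 10_000 if gives else 0
    return -100 if gives else 0

def score(gives: bool) -> float:
    # Expected payoff of adopting this policy before the coin is tossed.
    return 0.5 * payoff(gives, True) + 0.5 * payoff(gives, False)

# "What would you precommit to?" and "What will you do?" are the same
# maximization, so there is no special case to bolt on:
best = max([True, False], key=score)
print(best, score(best))  # True 4950.0
```

Since the agent evaluates the policy rather than the individual hand-over, no special case for Omega-style problems is ever needed.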
But this is precisely what humans don’t do, because we respond to a “near” situation differently than a “far” one. Your advance prediction of your decision is untrustworthy unless you can successfully simulate the real future environment in your mind with sufficient sensory detail to invoke “near” reasoning. Otherwise, you will fail to reach a consistent decision in the actual situation.
Unless, of course, in the actual situation, you’re projecting back, “What would I have decided in advance to do had I thought about this in advance?”—and you successfully mitigate all priming effects and situationally-motivated reasoning.
Or to put all of the above in short, common-wisdom form: “that’s easy for you to say NOW...” ;-)
Here is one intuitive way of looking at it:
Before tossing the coin, the Omega perfectly emulates my decision making process. In this emulation he tells me that I lost the coin toss, explains the deal and asks me to give him $100. If this emulated me gives up the $100 then he has a good chance of getting $10,000.
I have absolutely no way of knowing whether I am the ‘emulated me’ or the real me. Vladimir’s specification is quite unambiguous. I, me, the one doing the deciding right now in this real world, am the same me as the one inside the Omega’s head. If the emulation is in any way different to me then the Omega isn’t the Omega. The guy in the Omega’s head has been offered a deal that any rational man would accept, and I am that man.
So, it may sound stupid that I’m giving up $100 with no hope of getting anything back. But that’s because the counterfactual is stupid, not me.
(Disclaimer: I’m going to use the exact language you used, which means I will call you “stupid” in this post. I apologize if this comes off as trollish. I will admit that I am also quite torn about this decision, and I feel quite stupid too.)
No offense, but assuming free will, you are the one who is deciding to actually hand over the $100. The counterfactual isn’t the one making the decision. You are. You are in a situation, and there are two possible actions (lose $100 or don’t lose $100), and you are choosing to lose $100.
So again, are you sure you are not stupid?
And now I try to calculate what you should treat as being the probability that you’re being emulated. Assume that Omega only emulates you if the coin comes up heads.
Suppose you decide beforehand that you are going to give Omega the $100, as you ought to. The expected value of this is $4950, as has been calculated.
Suppose that instead, you decide beforehand that E is the probability you’re being emulated given that you hear the coin came up tails. You’ll still decide to give Omega the $100; therefore, your expected value if you hear that it came up heads is $10,000. Your expected value if you hear that the coin came up tails is -$100(1-E) + $10,000E.
The probability that you hear that the coin comes up tails should be given by P(H) + P(T and ~E) + P(T and E) = 0, P(H) = P(T and ~E), P(T and ~E) = P(T) - P(T and E), P(T and E) = P(E|T) * P(T). Solving these equations, I get P(E|T) = 2, which probably means I’ve made a mistake somewhere. If not, c’est l’Omega?
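One hedged way to make this bookkeeping come out consistent (assuming, as above, that Omega runs exactly one emulation, on heads, and tells it the coin came up tails) is to normalize the probabilities to 1 rather than 0 and condition on the experience of hearing “tails”:

```python
p_heads, p_tails = 0.5, 0.5

# "I hear tails" happens in two ways: the coin really came up tails,
# or it came up heads and I am the emulation being told "tails".
p_real_tails = p_tails
p_emulated = p_heads

# Normalizing over all hear-tails moments (the weights must sum to 1,
# not 0, which is where the P(E|T) = 2 anomaly comes from):
E = p_emulated / (p_real_tails + p_emulated)
print(E)  # 0.5

# Expected value on hearing "tails", using -$100(1-E) + $10,000E:
print(-100 * (1 - E) + 10_000 * E)  # 4950.0
```

On this accounting the hear-tails branch recovers the same $4950 expected value as precommitting, which is at least suggestive that the conditioning, not the decision, was where the mistake crept in.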
Um… let’s see…
To REALLY evaluate that, we technically need to know how long Omega runs the simulation for.
Now, we have two options: one, assume Omega keeps running the simulation indefinitely. Two, assume that Omega shuts the simulation down once he has the info he’s looking for (and before he has to worry about debugging the simulation).
In #1, what we are left with is p(S)=1/3, p(H)=1/3, p(T)=1/3, which means we’re moving $200/3 from one part of our possibility cloud to gain $10,000/3 in another part.
In #2, we’re moving a total of $100/2 to gain $10,000/2. The $100 in the simulation is quantum-virtual.
So, unless you have reason to suspect Omega is running a LOT of simulations of you, AND not terminating them after a minute or so… (aka, is not inadvertently simulation-mugging you)…
You can generally treat Omega’s simulation capacity as a dashed causality arrow from one universe to another, sort of like the shadow produced by the simulation…
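For what it’s worth, the two accounting options can be tallied explicitly (a sketch of the arithmetic above, taking the branch weights at face value):

```python
# Option 1: the simulation keeps running, so simulated-you, real-heads
# and real-tails are three equally weighted branches. Both the real
# and the simulated you pay $100 in their respective branch.
pay_1, gain_1 = (100 + 100) / 3, 10_000 / 3

# Option 2: the simulation is shut down, so only the two real branches
# count; the simulated $100 is never "really" paid.
pay_2, gain_2 = 100 / 2, 10_000 / 2

print(pay_1, gain_1)  # about 66.67 paid for about 3333.33 gained
print(pay_2, gain_2)  # 50.0 paid for 5000.0 gained
```

Under either accounting the expected gain dwarfs the expected payment, so the conclusion doesn’t hinge on how long Omega runs the simulation, only on whether he runs enormous numbers of them.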