It changes the structure tremendously. Suddenly, a world in which Omega predicts you will give it $5 and you don’t has a non-zero probability.
If Omega is perfect, you may as well hand over the $5 right now. If he isn’t, you still know that you will most likely hand over the $5, but you might as well wait around to see why. And the decision “I will not hand over $5” is no longer inconsistent.
That feels just like being mugged. I KNOW that eventually I will give Omega $5, but I prefer that it not happen by some unforeseeable process that may cause irreparable damage to me, like an epileptic seizure or a lightning strike. So I just hand over the cash. By the way, this reasoning applies regardless of Omega’s accuracy level.
Then you’re much more likely to be told this by Omega in the first place, for no better reason than that you were frightened enough to hand over the cash.
What do you mean by the likelihood of Omega saying something? You condition on something different from what I condition on, but I don’t understand what it is. Anyway, what I wrote stands even if we explicitly state that Omega does not say anything except “I am Omega. You will soon give me 5 dollars.”
He conditions on your response. It is like a simplified version of Newcomb’s paradox. You choose a decision theory, then Omega tells you to give him $5 iff your decision theory is such that you will give him $5 upon being told that. If you think the way you talked in the grandparent, then you will pay up.
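A minimal sketch of that structure, with made-up Python names (omega_asks, pay_if_asked, never_pay are illustrations, not anything from the thread): Omega asks for the $5 exactly when your decision procedure would pay upon being asked.

```python
def omega_asks(policy) -> bool:
    """Omega simulates your policy and only shows up if you would pay when asked."""
    return policy(asked_by_omega=True)

def pay_if_asked(asked_by_omega: bool) -> bool:
    return asked_by_omega          # "if Omega asks, I hand over the $5"

def never_pay(asked_by_omega: bool) -> bool:
    return False                   # "I never hand over the $5"

print(omega_asks(pay_if_asked))    # True  -> Omega appears and collects $5
print(omega_asks(never_pay))       # False -> Omega never bothers you
```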
tut, that’s correct, and I don’t feel bad about your conclusion at all. We have no disagreement, although I think your terminology obscures the fact that “my chosen decision theory” can in fact be a sudden, unforeseen brain hemorrhage during my conversation with Omega. So let me simply ask:
If Omega appeared right now, and said “I am Omega. You will give me 5 dollars in one minute.”, what would you actually do during that minute? (Please don’t answer that this is impossible because of your chosen decision theory. You can’t know your own decision theory.)
Of course you can’t predict any of the strange or not-so-strange things that could happen to you during that minute, all of them perfectly transparent to Omega. But that’s not what I’m asking. I’m asking about your current plan.
I would try to get Omega to teach me psychology. Or just ask questions.
I would not give him anything if he wouldn’t answer.
All right, you are committed. :) At least admit that you would be frightened in the last five seconds of the minute. Does it change anything if Omega tells you in advance that it will not help you with any sort of information or goods?
I can only think about Omega in far mode, so I cannot predict that accurately. But I feel that I would be more curious than anything else.
Good point. That’s a terrifying thought—and may be enough to get me to hand over the cash right away.
I might put the cash in one of twenty black boxes, and hand one of them over to Omega at random.
It shouldn’t feel like being mugged. All that making Omega a perfect predictor does is prevent it from bugging you if you are not willing to pay $5. It means Omega will ask less, not that you will pay more.
Your analysis is one-sided. Please try to imagine the situation with a one-minute time limit. Omega appears and tells you that you will give it 5 dollars in one minute. You decide that you will not give it the money. You are very determined about this, maybe because you are curious about what will happen. The clock is ticking...
The fewer seconds remain of the minute, the more worried you should objectively be, because eventually you WILL hand over the money, and the fewer seconds remain, the more disruptive the change will have to be that eventually causes you to reconsider.
Note that Omega didn’t make any promises about your safety during that minute. If you think that e.g. causing you brain damage would be unfair of Omega, then we are already in the territory of ethics, not decision theory. Maybe it wasn’t Omega that caused the brain damage; maybe it appeared before you exactly because it predicted that the damage would happen to you. With Omegas, it is not always possible to disentangle cause and effect.
Whoop, sorry, I deleted the comment before you replied.
Let us assume that you will never, under any circumstances, hand over $5 unless you feel good and happy and marvelous about it. Omega can easily pick a circumstance where you feel good, happy, and marvelous about handing it $5. In this scenario, by definition, you will not feel mugged.
On the other hand, let us assume that you can be bullied into handing over $5 by Omega appearing and demanding $5 in one minute. If this works, which we are assuming it does, Omega can appear and get its $5. You will feel like you were just mugged, but the only way this can happen is if you are the sort of person who will actually hand over $5 without understanding why. Omega is a “jerk” in the sense that it made you feel like you were being mugged, but this doesn’t imply anything about the scenario or Omega. It implies something about the situations in which you would hand Omega $5. (And that Omega doesn’t care about being a jerk.)
The point is this: If you made a steadfast decision to never hand Omega $5 without feeling happy about it, Omega would never ask you for $5 without making you feel happy about it. If you decide to never, ever hand over $5 while feeling happy about it, then you will never see a non-mugging scenario.
Note: This principle is totally limited to the scenario discussed in the OP. This has no bearing on Newcomb’s or Counterfactual Mugging or anything else.
This is true, but it doesn’t change how frequently you would give Omega $5. It changes Omega’s success rate, but only in the sense that it won’t play the game if you aren’t willing to give $5.
If A = You pay Omega $5 and O = Omega asks for $5:
p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A) + p(O|~A) * p(~A))
Making Omega a perfect predictor sets p(Omega asks|You don’t pay) to 0, so p(O|~A) = 0.
p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A) + 0 * p(~A))
p(A|O) = p(O|A) * p(A) / (p(O|A) * p(A))
p(A|O) = 1
Therefore, p(You pay Omega $5|Omega asks for $5) is 1. If Omega asks, you will pay. Big whoop. This is a restriction on Omega asking, not on you giving.
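A quick simulation of that point, as a sketch only: the base rate p_willing, the trial count, and the seed are arbitrary assumptions, chosen just to show that perfect prediction restricts when Omega asks rather than how often you are disposed to pay.

```python
import random

def simulate(trials=1_000_000, p_willing=0.3, seed=0):
    """Perfect-predictor Omega only asks agents it predicts will pay."""
    rng = random.Random(seed)
    willing = asked = paid = 0
    for _ in range(trials):
        will_pay = rng.random() < p_willing   # the agent's disposition, A
        willing += will_pay
        if will_pay:                          # perfect predictor: p(O|~A) = 0
            asked += 1
            paid += 1                         # everyone who is asked pays, by construction
    print("p(A)     =", willing / trials)     # base rate of paying is unchanged
    print("p(O)     =", asked / trials)       # Omega simply asks less often
    print("p(A | O) =", paid / asked)         # exactly 1

simulate()
```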
Yes, but consider what happens when you start conditioning on the statement B = “I do not intend to give Omega $5”. If Omega is perfect, this is irrelevant; you will hand over the cash.
If Omega is not perfect, then the situation changes. Use A and O as above; then a relevant question is: how many of Omega’s errors have B (nearly all of them), versus how many of Omega’s successes have B (nearly none of them)? Basically, you’re trying to estimate the relative sizes of (B&A)|O versus (B&~A)|O.
Now A|O is very large while ~A|O is very small, but (B&A)|O is a tiny part of A|O while (B&~A)|O makes up most of ~A|O. So I’d crudely estimate that those two sets are generally of pretty comparable size. If Omega is only wrong one time in a million, I’d estimate I’d have even odds of handing him the $5 even if I didn’t want to.
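The arithmetic behind that “even odds” estimate, with illustrative numbers only: the one-in-a-million error rate comes from the comment above, while the other two conditional probabilities are assumptions standing in for “nearly all” and “nearly none”.

```python
# p(A | O, B): probability you end up paying, given that Omega asked and you don't intend to pay
p_err   = 1e-6   # p(~A | O): Omega asked but you never pay (wrong one time in a million)
p_B_err = 0.99   # p(B | ~A, O): nearly all of Omega's errors involve someone who didn't intend to pay
p_B_ok  = 1e-6   # p(B | A, O): nearly none of the eventual payers started out refusing (assumed)

p_pay = p_B_ok * (1 - p_err) / (p_B_ok * (1 - p_err) + p_B_err * p_err)
print(round(p_pay, 2))   # ~0.5: roughly even odds of handing over the $5 despite not intending to
```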
Right, when Omega is perfect, this isn’t really a useful distinction. The correlation between B and A is irrelevant to p(A|O). It does get more interesting when asking:
p(A|B)
p(~A|B)
p(O|B)
These are still interesting even when Omega is perfect. If, as you suggest, we look at the relationship between A, B, and O when Omega isn’t perfect, your questions are dead on in terms of what matters.