Actually, it’s not analogous, because you don’t have any non-zero-sumness with the doomsday bettor beyond that which is always present when two parties have differing predictions.
Imagine two bettors who each try to maximize U_expected = P(e)U(e) + (1-P(e))U(~e), where e is the event being bet on.
Typically the bettors have the same U(e) and U(~e), and only disagree on P(e).
If you analyze the doomsday bet with e = “the world ends”, then it’s just a standard bet situation, because both bettors set U(the world ends) = 0 (money is worthless to the dead).
If you analyze the doomsday bet with e = “whatever happens in the year 2013”, then it’s seemingly unusual in that both bettors set P(e) to the same value (1), but it’s really not unusual, because each bettor’s probability of doomsday can be factored out of their U(e): U(e) = P(doomsday)·0 + (1-P(doomsday))·U(alive with the winnings), so the disagreement has merely moved from P(e) into U(e).
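Here is a minimal sketch of that factoring in Python. The probability, stake, and payout are made-up numbers for illustration, not anything from the thread; the point is only that the two framings assign the skeptic the same expected utility.

```python
# Two framings of the 2012 doomsday bet, from the skeptic's side:
# pay the doomsayer 100 now, collect 200 in 2013 if the world survives.
p_doom = 0.01                # the skeptic's P(world ends in 2012) -- made up
u_if_world_ends = 0.0        # money is worthless to the dead
u_if_world_survives = 100.0  # net gain if the bet pays off (utility = dollars here)

# Framing 1: e = "the world ends". A standard bet: the parties disagree
# on P(e) but agree that U(e) = 0.
eu_framing_1 = p_doom * u_if_world_ends + (1 - p_doom) * u_if_world_survives

# Framing 2: e = "whatever happens in the year 2013", so P(e) = 1 and the
# doomsday probability is folded into U(e) instead.
u_2013 = p_doom * u_if_world_ends + (1 - p_doom) * u_if_world_survives
eu_framing_2 = 1.0 * u_2013

assert eu_framing_1 == eu_framing_2  # same bet, two bookkeepings
```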
So why isn’t my/WrongBot’s bet analogous? Let’s say Omega offered me $1 today in exchange for getting to kill me if the sun doesn’t rise tomorrow. Let e = “sun doesn’t rise tomorrow”.
My bet with Omega has two properties that a typical zero-sum bet lacks:
1. Since my U(e) is 0, Omega’s U(e) must be positive for it to make that bet. Whenever two parties enter a contract because of differing U(e) values, and the U(e) difference doesn’t factor out into a P(e_subevent) the way it does in the 2012 doomsday bet, the contract is not so much a bet as a non-zero-sum trade.
2. I’d bet on ~e no matter how high my P(e) is, because with U(e) = 0 and U(~e) > 0 there is no P(e) that can make P(e)U(e) + (1-P(e))U(~e) negative for me (a quick check follows this list). That’s a general property of contracts which are guaranteed to make my life better than before, i.e. non-zero-sum trades.
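To make property 2 concrete, a tiny check in Python (the utility values are illustrative assumptions, with u_not_e = 1 standing in for the dollar plus tomorrow’s sunrise): sweeping P(e) over its whole range never produces a negative expected utility once U(e) = 0 and U(~e) > 0.

```python
# Property 2: with U(e) = 0 and U(~e) > 0, no belief about e can make
# Omega's contract a losing deal relative to not signing it.
def expected_utility(p_e: float, u_e: float = 0.0, u_not_e: float = 1.0) -> float:
    """EU = P(e)U(e) + (1 - P(e))U(~e), measured against the status quo."""
    return p_e * u_e + (1 - p_e) * u_not_e

# Sweep P(sun doesn't rise tomorrow) from 0 to 1 in steps of 0.01.
assert all(expected_utility(p / 100) >= 0 for p in range(101))
```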
I have to admit… I’m mostly confused by this comment. Not by the math, but by exactly what you’re getting at/disagreeing with.
If you’re just saying that the doomsday scenario isn’t perfectly analogous to the Omega scenario, I accept this, and never meant to imply that it was. I was only pointing out that the “if I lose I’ll be dead anyway” general type of reasoning could be applied to the other situation (and not necessarily through explicitly betting against the other party). If you’re saying that it couldn’t, then I confess that I still don’t understand why from your comment.
> I was only pointing out that the “if I lose I’ll be dead anyway” general type of reasoning could be applied to the other situation (and not necessarily through explicitly betting against the other party).
My point is that actually, you don’t get any extra expected value from the doomsayer’s “if I lose I’ll be dead anyway” reasoning. You get exactly as much expected value from them as you would get from anyone with any kind of prediction whose accuracy is lower than your own by the same amount.
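One way to see this: my expected profit is a function of the two probabilities (and the stakes) alone; nothing about the event being doomsday, or about the counterparty’s attitude toward losing, enters the calculation. A toy Python sketch, with made-up odds and stakes:

```python
# Toy numbers: my expected profit from betting against the doomsayer is
# exactly what it would be against any bettor whose P(e) misses mine by
# the same margin; "I'll be dead anyway" buys me nothing extra.
def my_expected_profit(my_p_e: float, their_p_e: float, stake: float = 100.0) -> float:
    """I back ~e at odds implied by the midpoint of our two beliefs.
    Note the inputs: two probabilities and a stake. The counterparty's
    utilities -- their attitude toward losing -- never appear."""
    fair_p = (my_p_e + their_p_e) / 2        # implied probability of the odds
    my_win = stake * fair_p / (1 - fair_p)   # what I collect if ~e
    return (1 - my_p_e) * my_win - my_p_e * stake

vs_doomsayer = my_expected_profit(0.10, 0.90)   # e = "doom in 2012"
vs_sports_fan = my_expected_profit(0.10, 0.90)  # e = "the underdog wins"
assert vs_doomsayer == vs_sports_fan            # same gap, same expected value
```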
In contrast, WrongBot did get to capitalize on a special “if I lose I’m dead” property of his bet, and my previous post details the important properties that make WrongBot’s bet atypical (properties that your own bet does not have).
Ah, I see then where we miscommunicated. I meant that I, not he, would be applying that reasoning. I strongly anticipate not being dead, and for the purposes of this bet (and only for this bet) don’t care if I’m wrong about it. He would strongly anticipate being dead, and might therefore neglect the possibility that he’ll have to suffer the consequences of whatever we’re doing. My losing the bet is “protected” (in a rather dreary way), his isn’t.
Obviously, I haven’t worked out the details, and probably won’t actually go around taking advantage of these people, but it occurred to me the other day while I was pondering how one should almost always be able to turn better-calibrated expectations into utility.
> Obviously, I haven’t worked out the details, and probably won’t actually go around taking advantage of these people
Hey, they’d be happy enough to still be alive, and you could donate the proceeds to eradicating polio. But unfortunately you’d also be encouraging people to take existential threats less seriously in general, which may be a bad idea. I can’t decide.
Anyway, good luck finding a believer in any kind of woo who is prepared to make a cash wager on a testable outcome. Think how quickly we would have eradicated homeopathy and astrology by now! :)
I’m looking forward to using this kind of reasoning to profit off end-of-the-worlders in late 2012.
Well, that kind of reasoning and just my run-of-the-mill, “no I don’t think the world is ending” reasoning.