Imagine this bet: If you win, you’ll get to a point that is twice as good as the one you’re at right now: 2000 utils. If you lose, you’ll be at a point that sucked as much as that post-stroke month. What would the probability of winning have to be for you to be indifferent to this bet?
The utility of the awful state is then x in the equation 2000P(w) + x(1 - P(w)) = 1000, where P(w) is the probability of winning.
If x were 100, the bet would be worth taking whenever the probability of winning exceeded 9/19 (set 2000P(w) + 100(1 − P(w)) = 1000 and solve: P(w) = 900/1900 = 9/19). If you wouldn’t take that bet, x is lower. On this scale, I suspect x is very, very far below 0.
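(An aside: that algebra is easy to sanity-check mechanically. Here is a minimal Python sketch of the indifference condition; the function name and numbers are mine, purely for illustration.)

```python
# Indifference: win_u * p + lose_u * (1 - p) = status_quo_u.
# Solving for p gives the break-even probability of winning.

def break_even_p(win_u, lose_u, status_quo_u):
    """Probability of winning at which the bet exactly matches the status quo."""
    return (status_quo_u - lose_u) / (win_u - lose_u)

# With x = 100: (1000 - 100) / (2000 - 100) = 900/1900 = 9/19.
print(break_even_p(win_u=2000, lose_u=100, status_quo_u=1000))  # 0.4736... = 9/19
```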
OK… that question at least makes sense to me. Thank you.
Hm. Would I take, say, a 50% bet along those lines? I flip a coin, heads I have another stroke, tails things suddenly get as much better than they are now as now is better than then. Nope, I don’t take that bet.
A 25% chance? Hm.
No, I don’t take that bet.
A 5% chance? Hmmmmmmmm… that is tempting. And I’ll probably always wonder what-if. But no, I don’t think I take that bet.
A 1% chance? Yeah, OK. I probably take that bet.
So P(w) is somewhere between .01 and .05; probably closer to .01. Call it .015.
I think what I’ve probably just demonstrated is that I’m subject to cognitive biases involving small percentage chances… I think my mind is just rounding 1% to “close enough to zero as makes no difference.”
But, OK, I guess utils can measure biased preferences as well as rational ones. All that matters is that it’s my preference, right, not why it’s my preference.
So, all right. 2000P(w) + x(1 − P(w)) = 1000 ⇒ 2000(.015) + x(1 − .015) = 1000 ⇒ 30 + .985x = 1000 ⇒ x = (1000 − 30)/.985 ≈ 985.
OK, cool. So my current condition is 1000 util, and my stroke condition (which really really sucks) is 985 utils.
What does that tell us?
You got your numbers flipped. P(w) is your chance of winning. You want
2000(.985) + x(1 - .985) = 1000 ⇒ 1970 + .015x = 1000 ⇒ x = (1000-1970)/.015 = −64,666.66...
That tells you that you really don’t want to have another stroke. Which is hopefully unsurprising.
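(To make the flip concrete, here is the same equation solved for x as a short Python sketch; plugging in the two readings of P(w) reproduces both numbers above. The function name is mine, for illustration.)

```python
# Solve 2000 * p_win + x * (1 - p_win) = 1000 for x,
# the utility of the stroke state implied by your indifference point.

def implied_stroke_utility(p_win, win_u=2000, status_quo_u=1000):
    return (status_quo_u - win_u * p_win) / (1 - p_win)

print(implied_stroke_utility(0.015))  # flipped reading: ~984.8, the bogus "985 utils"
print(implied_stroke_utility(0.985))  # correct reading: ~-64,666.67
```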
Ah! That makes far more sense. (That number seemed really implausible, but after triple-checking my math I shrugged my shoulders and went with it.)
OK. So, it no longer seems nearly so plausible that I’d turn down the original bet… it really does help me to have something concrete to attach these numbers to. Thanks.
And, yeah, that is profoundly unsurprising.
No, wait. Thinking about this some more, I realize I’m being goofy.
You offered me a series of bets about “twice as good as the one you’re at right now: 2000 utils” vs “a point that sucked as much as that post-stroke month”. I interpreted that as “I have another stroke” vs. “things suddenly get as much better than they are now as now is better than then” and evaluated those bets based on that interpretation.
But that was a false interpretation, and my results are internally inconsistent. If how-things-were-then is −64.7K, then 2000 is not as much better than things are now as now is better than then… it’s merely a 1/65th improvement. In which case I don’t accept that bet, after all… a 1% chance of another stroke vs. a 99% chance of a 1/65th improvement in my life is not nearly as compelling.
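(Checking that 1/65th figure with the numbers already on the table, as a quick Python sketch:)

```python
# Size of the promised gain, relative to the now-versus-stroke gap.
now, win, stroke = 1000, 2000, -64_666.67

gain = win - now     # 1,000
gap = now - stroke   # 65,666.67
print(gain / gap)    # ~0.0152, i.e. roughly 1/65th of the gap
```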
More generally, I accepted the initial statement that the state we labeled 2000 is “twice as good as” the state we labeled 1000, because that seemed to make sense when we were talking about numbers. But now that I’m trying to actually map those numbers to something, it’s less clear to me that it makes sense.
I mean, it follows that my stroke was “-64 times worse” than how things are now, and… well, what does that even mean?
Sorry… I’m not trying to be a pedant here, I’m just trying to make sure I actually understand what we’re talking about, and it’s pretty clear that I don’t.
Yeah, the notion of “twice as good as things are now” doesn’t actually make sense, because utility is only defined up to positive affine transformations. (That is, if you raised your utility for every outcome by 1000, or doubled all of them, you’d make the same decisions afterward as you did before; what matters is the ratios of the distances between outcomes, not the unit of measurement or the place you call 0. It’s rather like the Fahrenheit and Celsius scales for temperature.)
But anyway, you can figure out the relative distances in the same way; call what you have right now 1000, imagine some particular awesome scenario and call that 2000, and then figure out the utility of having another stroke, relative to that. For any plausible scenario (excluding things that could only happen post-Singularity), you should wind up again with an extremely negative (but not ridiculous) number for a stroke.
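(A minimal Python illustration of that invariance, assuming the transform u′ = a·u + b with a > 0; the lotteries are just the ones from this conversation.)

```python
# A positive affine transform u' = a*u + b never changes which of two
# lotteries has the higher expected utility, so decisions are unchanged.

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

def transform(lottery, a, b):
    return [(p, a * u + b) for p, u in lottery]

bet = [(0.99, 2000), (0.01, -64_666.67)]  # the 1%-chance-of-a-stroke bet
status_quo = [(1.0, 1000)]

for a, b in [(1, 0), (2, 500), (0.001, -3)]:
    better = expected_utility(transform(bet, a, b)) > expected_utility(transform(status_quo, a, b))
    print(a, b, better)  # the verdict is the same for every a > 0
```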
On the other hand, conscious introspection is a very poor tool for figuring out our relative utilities (to the degree that our decisions can be said to flow from a utility function at all!), in particular for signaling reasons.
Certainly. Or, really, much of anything else. Is there a better tool available in this case?
Not that I know of. Just a warning not to be too certain of the results you get from this algorithm: your extrapolations to actual decisions may be far from what you’d actually do.
Maybe, but when it comes to small probabilities I find it easier to fall for the opposite bias, the one known as “There’s still a chance, right?”
(nods) Sadly, my susceptibility to rounding very small probabilities up when I want them to be true is not inversely correlated with my susceptibility to rounding very small probabilities down when I want to ignore them. Ain’t motivated cognition grand?
I do find that I can subvert both of these failure modes by switching scales, though. That is, if I start thinking in “permil” rather than percent, all of a sudden a 1% chance (that is, a 10 permil chance) stops seeming quite so negligible.
Huh, that’s a pretty neat hack!