The utility function takes only the monetary reward as its input in this particular instance. Your idea that risk-avoidance can have utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation (the percentage is not an input to the U() function) - the model falls short because the utility attaches only to the money and nothing else. (Another example of a group of individuals for whom the risk might out-utilize the reward is gambling addicts.) Security is, all other things being equal, preferred over insecurity, and we could probably devise some experimental setup to translate this into a utility money equivalent (i.e. how much is a test subject prepared to pay for security and predictability? That is the margin of insurance companies, btw). :-P
I wanted to suggest that a real-life utility function ought to consider even more: not just the single case, but the strategies used in it—do these strategies or heuristics have better utility in my life than trying to figure out the best possible action for each problem separately? On that view, the optimal strategy may well be suboptimal in some individual cases, but work well over a realistic lifetime filled with probable events, even if you don’t contrive a $24000 life-or-death operation. (Should I spend two years of my life studying more statistics, or work on my father’s farm? The farm might profit me more in the long run, even if I would miss out should somebody make me the 1A/1B offer; since that is very unlikely, the farm strategy is the rational one in the larger context, though it appears irrational in the smaller one.)
Risk-avoidance is captured in the assignment of U($X). If the risk of not getting any money worries you disproportionately, that means that the difference U($24K) - U($0) is higher than 8 times the difference U($27K) - U($24K).
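To make that concrete: with linear utility the two differences are exactly equal (24 = 8 × 3), so any concave assignment satisfies the condition. Here is a quick sketch in Python (the log-shaped utility is purely my illustrative choice, not anything from this thread) checking the condition and the preferences it produces:

```python
import math

# An illustrative concave ("diminishing") utility; log is just one possible choice.
def U(x):
    return math.log(1 + x)

# The "disproportionate worry" condition:
# U($24K) - U($0) > 8 * (U($27K) - U($24K))
lhs = U(24_000) - U(0)
rhs = 8 * (U(27_000) - U(24_000))
print(lhs > rhs)  # True: this U is concave, so the condition holds

# Expected utilities of the four gambles (note U(0) = 0 here):
EU_1A = U(24_000)              # 1A: $24K with certainty
EU_1B = (33 / 34) * U(27_000)  # 1B: 33/34 chance of $27K
EU_2A = 0.34 * U(24_000)       # 2A: 34% chance of $24K
EU_2B = 0.33 * U(27_000)       # 2B: 33% chance of $27K
print(EU_1A > EU_1B, EU_2A > EU_2B)  # True True: 1A and 2A
```

Since EU(2A) = 0.34 × EU(1A) and EU(2B) = 0.34 × EU(1B), any plain U(x) ranks the two pairs consistently; no assignment of U($X) alone can yield the paradoxical 1A/2B pattern.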
That’s a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because your assignment captures risk-avoidance and yet doesn’t lead to that pattern. (It does lead to risk-avoidance in your sense of the term; the resulting preference just isn’t 1A/2B.)
Your assignment looks like “diminishing utility”, i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money must have less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if yes, can you explain why?
I think so, but your question forces me to think about it harder. When I thought about it initially, I did come to that conclusion—for myself, at least.
[I realized that the math I wrote here was wrong. I’m going to try to revise it. In the meantime, another question. Do you think that risk avoidance can be modeled by assigning an additional utility to certainty, and if so, what would that utility depend on?]
Also, thinking about the paradox more, I’ve realized that my intuition about probabilities relies significantly on my experience playing the board game Settlers of Catan. Are you familiar with it?
One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability of getting it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that.
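Written out as code (a toy sketch, all names mine), the patch does indeed do the trick on the four gambles:

```python
# Toy model: the utility takes the probability as a second input and
# doubles the value of a certain outcome.
def U(x, p):
    return 2 * x if p == 1 else x

# Expected utility of "win x with probability p, else nothing":
def EU(p, x):
    return p * U(x, p)

print(EU(1.0, 24_000))      # 1A: 48000.0
print(EU(33 / 34, 27_000))  # 1B: ~26205.9 -> 1A preferred
print(EU(0.34, 24_000))     # 2A: 8160.0
print(EU(0.33, 27_000))     # 2B: 8910.0   -> 2B preferred
```

So a certainty bonus produces exactly the 1A/2B pattern that no plain U(x) can.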
I know Settlers of Catan, and own it. It’s been a while since I last played it, though.
Your point about games made me aware of a crucial difference between real life and games, or other abstract problems of chance: in the latter, chances are always known without error, because we set the game (or problem) up to have certain chances. In real life, we predict events either via causality (100% chance, no guesswork involved, unless things come into play that we forgot to consider), or via experience/statistics, and that involves guesswork and margins of error. If a prediction comes with a 100% chance, there is usually a causal relationship at the bottom of it; with a chance of less than 100%, there is no such causal chain: some factor must be able to thwart the favorable outcome, and there is a chance that this factor has been assessed wrongly, or that other factors were overlooked. Worst case, a 33⁄34 chance might actually only be 30⁄34 or less, and then I’d be worse off taking the chance. Comparing a .33 with a .34 chance makes me think that there’s gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there’s usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense.
[rewritten] Imagine you are a mathematical advisor to a king who asks you to advise him of a course of action and to predict the outcome. In situation 2, you can pretty much advise whatever you like, because you’ll predict a failure; the outcome either confirms your prediction, or is a lucky windfall, so the king will be content with your advice in hindsight. In situation 1, you’ll predict a gain; if you advised A, your prediction will be confirmed, but if you advised B, there’s a chance it won’t be, with the king angry at you because he didn’t make the money you predicted he would. Your career is over. -- Now imagine a collection of autonomous agents, or a bundle of heuristics fighting for Darwinian survival, and you’ll see which strategy survives. [If you like stereotypes, imagine the “king” as “mathematician’s non-mathematical spouse”. ;-)]
One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability of getting it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise.
The problem with this is that dealing with p=1 is iffy. Ideally, our certainty response would be triggered, if not as strongly, when dealing with 99.99% certainty—for one thing, because we can only ever be, say, 99.99% certain that we read p=1 correctly and it wasn’t actually p=.1 or something! Ideally, we’d have a decaying factor of some sort that depends on the probabilities being close to 1 or 0.
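One way to write such a decaying factor down (a pure sketch; the exponential shape and the constants are invented for illustration) is to let the certainty bonus fall off with the distance from the nearest certainty:

```python
import math

def certainty_weight(p, bonus=1.0, decay=100.0):
    # Largest at p = 0 or p = 1; dies off quickly in between.
    d = min(p, 1 - p)  # distance from the nearest certainty
    return 1 + bonus * math.exp(-decay * d)

def U(x, p):
    return x * certainty_weight(p)

def EU(p, x):
    return p * U(x, p)

print(EU(1.0, 24_000))      # 1A: 48000.0 (full doubling at p = 1)
print(EU(0.9999, 24_000))   # ~47756: the response barely weakens at 99.99%
print(EU(33 / 34, 27_000))  # 1B: ~27590 -> 1A still preferred
print(EU(0.34, 24_000))     # 2A: ~8160
print(EU(0.33, 27_000))     # 2B: ~8910  -> 2B still preferred
```

With these made-up constants the 1A/2B pattern survives the smoothing, and nothing hangs on p being exactly 1 anymore.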
The reason I asked is that it’s very possible that a correct model of “attaching a utility to certainty” would be equivalent to a model with diminishing utility of money. If that were the case, we would be arguing over nothing. If not, we’d at least stand a chance of formulating gambles that clarify our intuitions, if we knew what the alternatives were.
Comparing a .33 with a .34 chance makes me think that there’s gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there’s usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense.
If the 33% and 34% chances are in the middle of their error margins, which they should be, our uncertainty about the chances cancels out and the expected utility is still the same. Going for the higher expected value makes sense.
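That cancellation is just linearity of expectation, and a small simulation shows it (the symmetric noise model is my own illustration, not anything from the thread):

```python
import random

random.seed(0)
N = 200_000

# The stated chances sit in the middle of symmetric error bars of +/- 5 points:
def noisy(p):
    return p + random.uniform(-0.05, 0.05)

ev_2A = sum(noisy(0.34) * 24_000 for _ in range(N)) / N
ev_2B = sum(noisy(0.33) * 27_000 for _ in range(N)) / N
print(ev_2A, ev_2B)  # ~8160 vs ~8910, same comparison as the point estimates
```

Only an asymmetric error, e.g. chances that are more likely overstated than understated, would change the comparison.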
I brought up Settlers of Catan because, if I imagine a tile on the board with $24K and 34 dots under it, and another tile with $27K and 33 dots, suddenly I feel a lot better about comparing the probabilities. :) Does this help you, or am I atypical in this way?
Imagine you are a mathematical advisor to a king who asks you to advise him of a course of action and to predict the outcome.
Obviously with the advisor situation, you have to take your advisee’s biases into account. The one most relevant to risk avoidance is, I think, the status quo bias: rather than taking into account the utility of the outcomes in general, the king might be angry at you if the utility becomes worse, and not as picky if the utility becomes better (than it is now). You have to take your own utility into account, which depends not on the outcome but on your king’s satisfaction with it.