Re-read the first several paragraphs of the post, please. I disagree with your point, but it doesn’t matter, as it is irrelevant to this post.
Alternatively, explain what you mean by moral behavior, or rational behavior, since you don’t believe in predictive models of behavior, nor that humans have any preference or way of ranking different possible behaviors (since any predictive model or ranking model can be phrased as a utility function).
since any predictive model or ranking model can be phrased as a utility function
This is just nonsense. Expected utility cannot even model straightforward risk aversion. Even the simplest algorithms, like “minimize possible loss”, are impossible to express in terms of utility functions.
Crude approximations are sometimes useful; confusing them with the real thing never is.
Expected utility cannot even model straightforward risk aversion.
Either I’m missing something, or this seems dead wrong. Doesn’t risk aversion fall right out of expected utility, plus the diminishing marginal utility in whatever is getting you the utils?
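For reference, here is the textbook mechanism this question appeals to, as a minimal sketch. The utility function u(w) = sqrt(w) and the numbers are illustrative choices of mine, not anything proposed in the thread:

```python
import math

# Minimal sketch of "risk aversion from diminishing marginal utility":
# with a concave utility of wealth, a fair 50:50 bet is rejected.
u = math.sqrt          # illustrative concave utility of wealth
wealth = 100.0

eu_bet = 0.5 * u(wealth + 50) + 0.5 * u(wealth - 50)   # fair 50:50 win/lose $50
print(eu_bet < u(wealth))   # True: the expected utility of the fair bet is lower
```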
Here’s a fundamental impossibility result for modeling risk aversion in the expected utility framework, and this is for the ridiculously oversimplified and horribly wrong version of “risk aversion” invented specifically for the expected utility framework.

Actual risk aversion looks nothing like a concave utility curve: you can get one bet right, but when you change the bet size, bet probability, wealth level, or anything else, you start getting zeroes or infinities where you shouldn’t, always.
Prospect theory provides simple models that don’t immediately collapse and are of some use.
I don’t have the time to examine the paper in depth just now (I certainly will later, it looks interesting) but it appears our proximate disagreement is over what you meant when you said “risk aversion”—I was taking it to mean a broader “demanding a premium to accept risk”, whereas you seem to have meant a deeper “the magnitudes of risk aversion we actually observe in people for various scenarios.” Assuming the paper supports you (and I see no reason to think otherwise), then my original objection does not apply to what you were saying.
I am still not sure I agree with you, however. It has been shown that, hedonically, people react much more strongly to loss than to gain. If taking a loss feels worse than making a gain feels good, then I might be maximizing my expected utility by avoiding situations where I have a memory of taking a loss over and above what might be anticipated looking only at a “dollars-to-utility” approximation of my actual utility function.
I was taking it to mean a broader “demanding a premium to accept risk”
The only reason the expected utility framework seems to “work” for single two-outcome bets is that it has more parameters to tweak than data points we want to simulate, and we throw away the utility curve immediately except for three points: no bet, bet fail, bet win.
If you try to reuse this utility curve for any other bet or bet with more than two outcomes, you’ll start seeing the same person accepting infinite, near-zero, or even negative risk premia.

Could you provide a simple (or at least, near minimally complex) example?
Suppose that, from any initial wealth level, a person turns down gambles where she loses $100 or gains $110, each with 50% probability. Then she will turn down 50-50 bets of losing $1,000 or gaining any sum of money.
A person who would always turn down 50-50 lose $1,000/gain $1,050 bets would always turn down 50-50 bets of losing $20,000 or gaining any sum. These are implausible degrees of risk aversion.
Suppose we knew a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than $350,000, but knew nothing about the degree of her risk aversion for wealth levels above $350,000. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670.
The examples in the paper are very simple (but explaining them with math and proving why expected utility fails so miserably takes much of the paper).
The intuition for such examples, and for the theorem itself, is that within the expected-utility framework turning down a modest-stakes gamble means that the marginal utility of money must diminish very quickly for small changes in wealth.
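A minimal numerical sketch of the calibration effect described in these quotes, assuming one concrete concave utility of lifetime wealth, u(w) = −exp(−a·w) with a = 0.001 (the functional form and coefficient are my illustrative choices, not taken from the paper): this agent turns down the 50-50 lose $100/gain $110 bet at every wealth level, and, as the quoted theorem predicts, it then also turns down a 50-50 bet of losing $1,000 against a gain of $10,000,000, or indeed against any finite gain.

```python
import math

a = 0.001                              # assumed risk-aversion coefficient (illustrative)

def u(w):
    # Concave utility of lifetime wealth (constant absolute risk aversion).
    return -math.exp(-a * w)

def accepts(wealth, loss, gain):
    """True if the 50-50 lose-`loss` / gain-`gain` bet beats staying at `wealth`."""
    return 0.5 * u(wealth + gain) + 0.5 * u(wealth - loss) > u(wealth)

for wealth in (0, 10_000, 500_000):
    print(accepts(wealth, 100, 110))            # False at every wealth level
    print(accepts(wealth, 1_000, 10_000_000))   # also False: no finite gain is enough
```

The quoted theorem's point is that this is not a quirk of this particular curve: any concave utility-of-wealth function that turns down the small bet at every wealth level is forced into this kind of extreme large-scale risk aversion.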
You are being frustrating.

Your citations here are talking about trying to model human behavior by fitting concave net-worth-to-utility functions to realistic numbers. The bit you quoted here was from a passage wherein I was ceding this precise point.

I was explaining that I had previously thought you to be making a broader theoretical point, about any sort of risk premia, not just those that actually model real human behavior. Your quoting of that passage led me to believe that was the case, but your response here leads me to wonder whether there is still confusion.
If you try to reuse this utility curve for any other bet or bet with more than two outcomes, you’ll start seeing the same person accepting infinite, near-zero, or even negative risk premia.
Do you mean this to apply to any theoretical dollars-to-utility function, even those that do not model people well?

If so, can you please give an example of infinite or negative risk premia for an agent (an AI, say) whose dollars-to-utility function is U(x) = x / log(x + 10)?
This utility function has near-zero risk aversion in the relevant range. Assuming our AI has a wealth level of $10,000, it will happily take a 50:50 bet of gaining $100.10 vs. losing $100.00.
Yes, it is weak risk aversion—but is it not still risk aversion, as I had initially meant (and initially thought you to mean)?
It also gets to infinities if there’s a risk of dollar worth below -$10.
Yes, of course. I’d considered this irrelevant for reasons I can’t quite recall, but it is trivially fixed; is there a problem with U(x) = x/log(x+10)?
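For what it's worth, the 50:50 bet discussed above is easy to check numerically. A minimal sketch, assuming the log in U(x) = x/log(x+10) is the natural log (any other base only rescales U by a positive constant, so which bets are accepted is unchanged):

```python
import math

def U(x):
    # The dollars-to-utility function from the exchange above; natural log assumed.
    # Note U is undefined for x <= -10 (log of a non-positive number) and blows up
    # as x approaches -9, the domain problem raised in the exchange above.
    return x / math.log(x + 10)

wealth = 10_000.0
eu_bet = 0.5 * U(wealth + 100.10) + 0.5 * U(wealth - 100.00)  # 50:50 gain $100.10 / lose $100.00
eu_pass = U(wealth)

print(eu_bet > eu_pass)    # True: the bet is accepted
print(eu_bet - eu_pass)    # but only by a tiny margin, i.e. near-zero risk aversion
```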
Also, prospect theory is a utility theory. You compute a utility for each possible outcome associated with each action, add them up weighted by their probabilities to compute the utility for each action, then choose the action with the highest utility. This is using a utility function. It is a special kind of utility function, where the utility for each possible outcome is calculated relative to some reference level of utility. But it’s still a utility function.
Every time someone thinks they have a knockdown argument against the use of utility functions, I find they have a knockdown argument against some special simple subclass of utility functions.
I’m just skimming the beginning of the paper, but it says
Suppose that, from any initial wealth level, a person turns down gambles where she loses $100 or gains $110, each with 50% probability. Then she will turn down 50-50 bets of losing $1,000 or gaining any sum of money.
This is shown by observing that this bet refusal means the person values the 110th dollar above her current wealth by at most 10/11 as much as the 100th-to-last dollar of her current wealth. You then consider that she would also turn down the same bet if she were $210 wealthier, and see that she values the 320th dollar above her current wealth at most 10/11 × 10/11 as much as the 100th-to-last dollar of her current wealth. Etcetera. It then says,
Indeed, the theorem is really just an algebraic articulation of how implausible it is that the consumption value of a dollar changes significantly as a function of whether your lifetime wealth is $10, $100, or even $1,000 higher or lower.
There are 2 problems with this argument already. The first is that it’s not clear that people have a positive-expected-value bet that they would refuse regardless of how much money they already have. But the larger problem is that it assumes “utility” is some simple function of net worth. We already know this isn’t so, from the much simpler observation that people feel much worse about losing $10 than about not winning $10, even if their net worth is so much larger than $10 that the concavity of a utility function can’t explain it.
A person’s utility is not based on an accountant-like evaluation of their net worth. Utility measures feelings, not dollars. Feelings are context-dependent, and the amount someone already has in the bank is not as salient as it ought to be if net worth were the only consideration. We have all heard stories of misers who had a childhood of poverty and were irrationally cheap even after getting rich; and no one thinks this shatters utility theory.
So this paper is not a knockdown argument against utility functions. It’s an argument against the notion that human utility is based solely on dollars.
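As a minimal sketch of the kind of reference-dependent utility function described above (each outcome is valued relative to a reference point, losses are weighted more heavily than equal-sized gains, and the action with the highest probability-weighted total is chosen), here is a toy version. The exponent and loss-aversion factor are round illustrative numbers, and prospect theory's probability weighting is omitted for brevity:

```python
def value(outcome, reference):
    # Value relative to a reference point: diminishing sensitivity in both
    # directions, with losses weighted twice as heavily as gains (illustrative numbers).
    x = outcome - reference
    return x ** 0.9 if x >= 0 else -2.0 * (-x) ** 0.9

def action_value(lottery, reference):
    # Still one number per action: a probability-weighted sum of outcome values.
    return sum(p * value(outcome, reference) for p, outcome in lottery)

status_quo = 0
keep = [(1.0, 0)]                      # do nothing
coin_flip = [(0.5, 10), (0.5, -10)]    # fair 50:50 win/lose $10

print(action_value(keep, status_quo))       # 0.0
print(action_value(coin_flip, status_quo))  # negative: losing $10 outweighs winning $10
```

Whether or not one calls this a "utility function", it still assigns a single number to each action and picks the maximum, which is the point being made above.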
To quote from that paper
My reaction was essentially: yeah, right.
“Minimize possible loss” can be modelled by a utility function −exp(cL), where L is the loss, in the limit of very large c.
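A quick check of this limit, as a sketch (the two lotteries and the values of c are made-up illustrations): for small c the −exp(cL) agent ranks lotteries roughly by expected loss, while for large c the worst-case loss dominates, which reproduces "minimize possible loss".

```python
import math

# Two toy lotteries over dollar losses:
#   A: lose $10 or $0 with equal probability (worst case $10, expected loss $5)
#   B: lose $9 for sure                      (worst case  $9, expected loss $9)
lottery_A = [(0.5, 10.0), (0.5, 0.0)]
lottery_B = [(1.0, 9.0)]

def expected_utility(lottery, c):
    # Utility of a loss L is -exp(c * L); larger c punishes big losses more heavily.
    return -sum(p * math.exp(c * loss) for p, loss in lottery)

for c in (0.01, 0.1, 1.0, 5.0):
    prefer = "A" if expected_utility(lottery_A, c) > expected_utility(lottery_B, c) else "B"
    print(f"c = {c}: prefer {prefer}")
# Small c: A (lower expected loss). Large c: B (lower worst-case loss), i.e. minimax.
```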