I was taking it to mean a broader “demanding a premium to accept risk”.
The only reason the expected utility framework seems to “work” for single two-outcome bets is that it has more parameters to tweak than data points we want to simulate, and we immediately throw away the utility curve except at three points: no bet, bet fails, bet wins.
If you try to reuse this utility curve for any other bet, or for a bet with more than two outcomes, you’ll start seeing the same person accepting infinite, near-zero, or even negative risk premia.
Suppose that, from any initial wealth level, a person turns down gambles where she loses $100 or gains $110, each with 50% probability. Then she will turn down 50-50 bets of losing $1,000 or gaining any sum of money.
A person who would always turn down 50-50 lose $1,000/gain $1,050 bets would always turn down 50-50 bets of losing $20,000 or gaining any sum. These are implausible degrees of risk aversion.
Suppose we knew a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than $350,000, but knew nothing about the degree of her risk aversion for wealth levels above $350,000. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670.
The examples in the paper are very simple (but explaining them with math and proving why expected utility fails so miserably takes much of the paper).
The intuition for such examples, and for the theorem itself, is that within the expected-utility framework turning down a modest-stakes gamble means that the marginal utility of money must diminish very quickly for small changes in wealth.
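To make that intuition concrete, here is a small numeric sketch (my own arithmetic, not the paper’s proof). Assuming only that U is increasing and concave, and that the agent rejects the 50-50 lose-$100/gain-$110 bet at every wealth level, each successive $110 block of gains is worth at most 10/11 of the block before it, while each $110 block of losses is worth at least 11/10 of the one before it:

```python
import math

# Sketch of the calibration bound. Work in units of G0 = U(w + 110) - U(w),
# the utility of the first $110 block of gains above current wealth w.
BLOCK = 110
R = 10 / 11  # rejection + concavity: each gain block is worth <= R times the last

def gain_upper_bound(gain_dollars):
    # Counting the final partial block as a full one keeps this an upper bound.
    blocks = math.ceil(gain_dollars / BLOCK)
    return sum(R ** k for k in range(blocks))

def loss_lower_bound(loss_dollars):
    # Dropping the final partial block keeps this a lower bound.
    blocks = loss_dollars // BLOCK
    return sum((1 / R) ** (j + 1) for j in range(blocks))

print(gain_upper_bound(10 ** 9))  # ~11.0: bounded even for a $1bn prize
print(loss_lower_bound(1_000))    # ~14.94: already exceeds any possible gain
# 0.5 * gain - 0.5 * loss < 0 for every prize, so the bet is always rejected.
```

The gain side is a convergent geometric series, so no prize, however large, can outweigh the $1,000 loss.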
Your citations here are talking about trying to model human behavior by fitting concave net-worth-to-utility functions to realistic numbers. The bit you quoted here was from a passage wherein I was conceding this precise point.
I was explaining that I had previously thought you to be making a broader theoretical point, about any sort of risk premia, not just those that actually model real human behavior. Your quoting of that passage led me to believe that was the case, but your response here leads me to wonder whether there is still confusion.
If you try to reuse this utility curve for any other bet, or for a bet with more than two outcomes, you’ll start seeing the same person accepting infinite, near-zero, or even negative risk premia.
Do you mean this to apply to any theoretical dollars-to-utility function, even those that do not model people well?
If so, can you please give an example of infinite or negative risk premia for an agent (an AI, say) whose dollars-to-utility function is U(x) = x / log(x + 10)?
This utility function has near-zero risk aversion in the relevant range.
Assuming our AI has a wealth level of $10,000, it will happily take a 50:50 bet of gaining $100.10 vs. losing $100.00.
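A quick check of that claim (assuming natural log; the choice of base only rescales U, so it can’t change any accept/reject decision):

```python
import math

def U(x):
    return x / math.log(x + 10)

w = 10_000
eu_bet = 0.5 * U(w + 100.10) + 0.5 * U(w - 100.00)
print(eu_bet - U(w))  # ~ +0.0002: positive, so the bet is (barely) accepted
```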
Yes, it is weak risk aversion—but is it not still risk aversion, as I had initially meant (and initially thought you to mean)?
It also goes to infinity if there’s a risk of wealth below -$10.
Yes, of course. I’d considered this irrelevant for reasons I can’t quite recall, but it is trivially fixed; is there a problem with U(x) = x/log(x+10)?
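For completeness, a sketch of where U(x) = x/log(x+10) actually breaks down, plus one version of the trivial fix; the floor value here is my own illustrative choice, not something proposed in the thread:

```python
import math

def U(x):
    return x / math.log(x + 10)

# The breakdown sits at the bottom of the domain: log(x + 10) hits zero
# at x = -9, so U blows up there, and log is undefined for x <= -10.
print(U(-9 + 1e-9))  # ~ -9e9: diverges to -infinity as x -> -9 from above
# U(-11) would raise a math domain error.

# One trivial fix: only evaluate U above a floor clear of the singularity.
FLOOR = 0.0  # illustrative assumption; any floor safely above -9 works

def U_safe(x):
    return U(max(x, FLOOR))
```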
The only reason the expected utility framework seems to “work” for single two-outcome bets is that it has more parameters to tweak than data points we want to simulate, and we immediately throw away the utility curve except at three points: no bet, bet fails, bet wins.
If you try to reuse this utility curve for any other bet, or for a bet with more than two outcomes, you’ll start seeing the same person accepting infinite, near-zero, or even negative risk premia.
Could you provide a simple (or at least, near minimally complex) example?
The examples in the paper are very simple (but explaining them with math and proving why expected utility fails so miserably takes much of the paper).
You are being frustrating.
Your citations here are talking about trying to model human behavior by fitting concave net-worth-to-utility functions to realistic numbers. The bit you quoted here was from a passage wherein I was conceding this precise point.
I was explaining that I had previously thought you to be making a broader theoretical point, about any sort of risk premia, not just those that actually model real human behavior. Your quoting of that passage led me to believe that was the case, but your response here leads me to wonder whether there is still confusion.
Do you mean this to apply to any theoretical dollars-to-utility function, even those that do not model people well?
If so, can you please give an example of infinite or negative risk premia for an agent (an AI, say) whose dollars-to-utility function is U(x) = x / log(x + 10)?
This utility function has near-zero risk aversion in the relevant range.
Assuming our AI has a wealth level of $10,000, it will happily take a 50:50 bet of gaining $100.10 vs. losing $100.00.
It also goes to infinity if there’s a risk of wealth below -$10.
Yes, it is weak risk aversion—but is it not still risk aversion, as I had initially meant (and initially thought you to mean)?
Yes, of course. I’d considered this irrelevant for reasons I can’t quite recall, but it is trivially fixed; is there a problem with U(x) = x/log(x+10)?