I’m looking for an approach that’s made-up numbers all the way down.
You may want to rephrase that :-)
Once you have that, there’s an exact answer to the optimal risk/reward tradeoff
No, I don’t think so. For example, let’s say your utility = log(wealth). That’s a monotonic transformation, so if you want to maximize utility you just maximize your wealth. That doesn’t answer the question of what is the appropriate risk/reward trade-off because you haven’t even started talking about risk yet. And if you just want to maximize expected wealth you are open to being Pascal-mugged.
Maximizing expected log(wealth) is very different from maximizing expected wealth. A log utility function is much more risk-averse.
The Wikipedia article on VNM Utility Theory explains the relationship between the utility function and risk aversion (in the Consequences section).
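To make the difference concrete, here’s a minimal sketch in Python (the gamble’s numbers are made up for illustration): a coin flip that doubles your wealth or cuts it to 40% has positive expected wealth but negative expected log-wealth, so the two objectives give opposite advice.

```python
import math

# Hypothetical 50/50 gamble: wealth is multiplied by 2.0 (win) or 0.4 (lose).
p_win, up, down = 0.5, 2.0, 0.4
wealth = 100_000.0

expected_wealth = p_win * (wealth * up) + (1 - p_win) * (wealth * down)
expected_log = p_win * math.log(wealth * up) + (1 - p_win) * math.log(wealth * down)

print(expected_wealth, wealth)         # 120000.0 > 100000: wealth-maximizer bets
print(expected_log, math.log(wealth))  # ~11.40 < ~11.51: log-maximizer declines
```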
Yes, you are right. However even a log utility function does not let you escape a Pascal mugging (you just need bigger numbers).
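A back-of-the-envelope version of “you just need bigger numbers” (all figures hypothetical): under u = log(wealth), a mugger whose story you assign probability 1e-20 can still make paying $100 a positive-expected-utility move, but only by promising a wealth whose logarithm is around 1e17.

```python
import math

wealth = 100_000.0  # current wealth (hypothetical)
cost = 100.0        # what the mugger demands
p = 1e-20           # probability you assign to the mugger delivering

# Utility cost of paying up, under u = log(wealth):
du_pay = math.log(wealth) - math.log(wealth - cost)

# Break-even promise W satisfies: p * (log(W) - log(wealth - cost)) >= du_pay
log_W = math.log(wealth - cost) + du_pay / p
print(log_W)  # ~1.0e17, i.e. W ~ e**(1e17) dollars: absurd, but finite
```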
Risk aversion (in reality) does not boil down to a concave utility function. So the OP’s claim that a well-defined utility function will fully determine the optimal risk-reward tradeoff is still false.
See, e.g., this paper: there are theorems saying that if your utility function is concave enough to make you turn down a bet where you win $110 or lose $100 with equal probability, it must also be concave enough to make you turn down a bet where you win a trillion dollars or lose $1k with equal probability.
...at any wealth level, which should be surprising. If Bill Gates thinks that gamble is an expected utility loss, we predict he’ll be opposed to basically any gamble, but why would we believe the premise that Bill Gates thinks that gamble is an expected utility loss?
The “concave utility function” theory of risk aversion predicts that, all else being equal, richer people will be less risk-averse about any given sum of money. And I would in fact expect Bill Gates to accept positive-dollar-expectation bets of size ~$100 without a moment’s thought.
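The flavor of these calibration results is easy to check numerically. Here’s a minimal sketch using exponential (CARA) utility as a stand-in, because its accept/reject decisions don’t depend on current wealth, which matches the “at any wealth level” premise; the published theorems cover general concave utility, and the dollar figures below are just the ones quoted above.

```python
import math

def rejects(a, gain, loss, p=0.5):
    # True if a CARA agent with u(w) = -exp(-a*w) turns down the 50/50 bet.
    # CARA decisions are wealth-independent, so one check stands in for
    # "rejects at every wealth level". (Current wealth normalized to 0.)
    return p * -math.exp(-a * gain) + (1 - p) * -math.exp(a * loss) < -1.0

# Bisect for the smallest risk-aversion coefficient that rejects win-110/lose-100.
lo, hi = 1e-9, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if rejects(mid, 110, 100) else (mid, hi)

print(hi)                        # ~9.2e-4
print(rejects(hi, 1e12, 1_000))  # True: win $1 trillion / lose $1k, still rejected
```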
Why would maximizing expectation on a concave utility function lead to losing your shirt? It seems like any course of action that predictably leads to losing your shirt is self-evidently not maximizing expected concave utility, unless it’s a Pascal-mugging-type scenario. I don’t think there are credible Pascal muggings in the world of personal finance, and if there are I’d be willing to accept an ad hoc axiom that limits our theory to more conventional investments.
Now, I’ll admit it’s possible we should have a loss-averse utility function, but we can do that without abandoning the mathematical approach: just add a time derivative of wealth, or something.
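For what it’s worth, here’s one hypothetical way to write that down (the functional form and the penalty weight are invented purely to show the idea stays inside the framework): log-wealth plus a term that kicks in only on recent losses.

```python
import math

def utility(wealth, prev_wealth, loss_penalty=2.0):
    # Hypothetical loss-averse utility: log-wealth plus an extra penalty
    # proportional to any recent drop in wealth (a crude discrete
    # "time derivative of wealth" term).
    change = wealth - prev_wealth
    return math.log(wealth) + loss_penalty * min(change, 0.0) / prev_wealth

print(utility(110_000, 100_000))  # gain: plain log(wealth), ~11.61
print(utility(90_000, 100_000))   # loss: log(wealth) minus 0.2 extra, ~11.21
```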
Because you’re ignoring risk.
The expectation is a central measure of a distribution. If that’s the only thing you look at, you have no idea about the width of your distribution. How long and thick is that left tail which is curling around preparing to bite you in the ass? Um, you don’t know.
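A quick numeric illustration (both distributions invented for the example): two return streams with the same expectation, one of which hides a nasty left tail that the mean alone never reveals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two hypothetical yearly-return streams, both with mean ~ +5%:
calm = rng.normal(0.05, 0.02, n)                        # tight around the mean
crashy = np.where(rng.random(n) < 0.06, -0.60, 0.0915)  # 6% chance of a -60% wipeout

for name, r in [("calm", calm), ("crashy", crashy)]:
    print(name, round(r.mean(), 3), round(np.percentile(r, 5), 3))
# calm    0.05  0.017  -> worst 5% of outcomes still positive
# crashy  0.05 -0.6    -> same mean; the left tail is where the teeth are
```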
Is that a critique of expected utility maximization in general, or are you saying that concave functions of wealth aren’t risk-averse enough?
Or is it an observation that expected utility maximization does not include risk management for free, just because it’s “utility”?
I’m still not sure which line you’re taking on this:
A) Disputing the VNM formulation of rational behavior that a rational agent should maximize expected utility (https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem), or
B) Disputing that we can write down an approximate utility function accurate enough to sufficiently capture our risk preferences.
Both.
VNM doesn’t offer any “formulation of rational behavior”. VNM says that a function with a particular set of properties must exist and relies on assumptions that do not necessarily hold in real life.
I also don’t think that a utility function that can condense the risk preferences into a single scalar is likely to be accurate enough for practical purposes.
Can you by chance pin down your disagreement to a particular axiom? You’re modus tollensing where I expected you would modus ponens.
You are looking at the wrong meta level.
When I say “VNM doesn’t offer any formulation of rational behavior” I’m not disagreeing with any particular axiom. It’s like I’m saying that an orange is not an apple and you respond by asking me what kind of apples I dislike.
Which (possibly all) of the VNM axioms do you think are not appropriate as part of a formulation of rational behavior?
I think the Peano natural numbers are a reasonable model for the number of steins I own (with the possible exception that if my steins fill up the universe, a successor number of steins might not exist). But I don’t think the Peano axioms are a good model for how much beer I drink. It is not the case that all quantities of beer can be expressed as successors to 0 beer, so beer does not satisfy the axiom of induction.
I think the ZFC axioms are a poor model of impressionist paintings. For example, it is not the case that for all impressionist paintings x and y, there exists an impressionist painting that contains both x and y. Therefore impressionist paintings violate the axiom of pairing.
I don’t think that rational behaviour as understood on LW (basically, instrumental rationality) has anything to do with the VNM axioms. In particular, I do not think that the VNM model is an adequate model of human decision-making once you go beyond toy examples.
By “risk aversion in reality,” do you mean “the descriptive thing that people actually do when it comes to risk,” or “the prescriptive thing that people should do when it comes to risk”?
Because, sure, it looks like most people do some sort of prospect theory reasoning where they don’t use probabilities correctly / have a strong reliance on cached answers and avoiding planning. (This is one of the reasons to think loss aversion is helpful, for example; if you get a windfall you don’t need to replan things, but if you suffer a loss you may have to replan things.) But it’s not at all obvious that they’re making the right call.
Both. I primarily have in mind risk management in finance, where what people actually do is much more than compensate for the curvature of the utility function, and where people should do what they are doing or they will lose their shirts pretty quickly.
The OP is interested in the prescriptive mode, so the simple answer is that dealing with the risk-return tradeoff solely on the basis of the concavity of the utility function is inadequate (see finance, which has to and does deal with risk all day long).