The context is: Dawes is explaining von Neumann and Morgenstern’s axioms.
Aside: I don’t know how familiar you are with the VNM utility theorem, but just in case, here’s a brief primer.
The VNM utility theorem presents a set of axioms, and then says that if an agent’s preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as “the expected value of x”.) That is to say, the agent’s preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility).
In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent’s preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)
(Dawes presents the axioms in terms of alternatives or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)
N.B.: “Alternatives” in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternatives X and Y, where in X, with p = 0.3 you get consequence A and with p = 0.7 you get consequence B, and in Y, with p = 0.4 you get consequence A and with p = 0.6 you get consequence B.) Alternatives, by the way, can also be thought of as actions; if you take action X, the probability distribution over the outcomes is so-and-so; but if you take action Y, the probability distribution over the outcomes is different.
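To make the comparison concrete, here is a minimal sketch in Python, with made-up utility numbers for consequences A and B (nothing here is from Dawes; it just illustrates the expected-utility calculation for the example above):

```python
# Minimal sketch: comparing two alternatives by expected utility.
# The utility numbers for outcomes A and B are made up for illustration.

utility = {"A": 10.0, "B": 2.0}  # hypothetical utilities of the two consequences

# Each alternative is a probability distribution over outcomes.
X = {"A": 0.3, "B": 0.7}
Y = {"A": 0.4, "B": 0.6}

def expected_utility(alternative, utility):
    """E(U(alternative)) = sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * utility[outcome] for outcome, p in alternative.items())

# A VNM-satisfying agent prefers whichever alternative has the higher expected
# utility; here that is Y, since Y puts more probability on the higher-utility
# outcome A.
print(expected_utility(X, utility))  # 0.3*10 + 0.7*2 = 4.4
print(expected_utility(Y, utility))  # 0.4*10 + 0.6*2 = 5.2
```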
(If all of this is old hat to you, apologies; I didn’t want to assume.)
The question is: do our preferences satisfy VNM? And: should our preferences satisfy VNM?
It is commonly said (although this is in no way entailed by the theorem!) that if your preferences don’t adhere to the axioms, then they are irrational. Dawes examines each axiom, with an eye toward determining whether it’s mandatory for a rational agent to satisfy that axiom.
Dawes presents seven axioms (which, as I understand it, are equivalent to the set of four listed in the Wikipedia article, just with a difference in emphasis), of which the fifth is Independence.
The independence axiom says that A ≥ B (i.e., A is preferred to B) if and only if ApC ≥ BpC. In other words, if you prefer receiving cake to receiving pie, you also prefer receiving (cake with probability p and death with probability 1–p) to receiving (pie with probability p and death with probability 1–p).
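To see why an agent who maximizes expected utility automatically conforms to this: for any common outcome C and any p > 0, E(U(ApC)) − E(U(BpC)) = [pU(A) + (1–p)U(C)] − [pU(B) + (1–p)U(C)] = p[U(A) − U(B)], which has the same sign as U(A) − U(B). The shared outcome C, and its probability 1–p, simply cancel out of the comparison.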
Dawes examines one possible justification for violating this axiom — framing effects, or pseudocertainty — and concludes that it is irrational. (Framing is the usual explanation given for why the expressed or revealed preferences of actual humans often violate the independence axiom.) Dawes then suggests another possibility:
Is such irrationality the only reason for violating the independence axiom? I believe there is another reason. Axiom 5 [Independence] implies that the decision maker cannot be affected by the skewness of the consequences, which can be conceptualized as a probability distribution over personal values. Figure 8.1 shows [Note: This is my reproduction of the figure. I’ve tried to make it as exact as possible.] the skewed distributions of two different alternatives. Both distributions have the same average, hence the same expected personal value, which is a criterion of choice implied by the axioms. These distributions also have the same variance.
If the distributions in Figure 8.1 were those of wealth in a society, I have a definite preference for distribution a; its positive skewness means that income can be increased from any point — an incentive for productive work. Moreover, those people lowest in the distribution are not as distant from the average as in distribution b. In contrast, in distribution b, a large number of people are already earning a maximal amount of money, and there is a “tail” of people in the negatively skewed part of this distribution who are quite distant from the average income.[5] If I have such concerns about the distribution of outcomes in society, why not of the consequences for choosing alternatives in my own life? In fact, I believe that I do. Counter to the implications of prospect theory, I do not like alternatives with large negative skews, especially when the consequences in the negatively skewed part of the distribution have negative personal value.
[5] This is Dawes’ footnote; it talks about an objection to “Reaganomics” on similar grounds.
Essentially, Dawes is asking us to imagine two possible actions. Both have the same expected utility; that is, the “degree of goal satisfaction” which will result from each action, averaged appropriately across all possible outcomes of that action (weighted by probability of each outcome), is exactly equal.
But the actual probability distribution over outcomes (the form of the distribution) is different. If you do action A, then you’re quite likely to do alright, there’s a reasonable chance of doing pretty well, and a small chance of doing really great. If you do action B, then you’re quite likely to do pretty well, there’s a reasonable chance of doing ok, and a small chance of doing disastrously, ruinously badly. On average, you’ll do equally well either way.
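Here is a minimal Python sketch of such a pair of actions, with made-up utility numbers (Dawes’ figure also equates the variances; this toy version only equates the means):

```python
# Minimal sketch with made-up numbers (not Dawes' actual Figure 8.1): two
# actions whose outcome distributions have the same mean utility but
# opposite skew. Keys are utilities of outcomes; values are probabilities.

# Action A: usually alright, a fair chance of pretty well, a small chance
# of really great (positively skewed).
A = {3: 0.70, 6: 0.25, 20: 0.05}

# Action B: usually pretty well, a fair chance of merely ok, a small chance
# of disaster (negatively skewed).
B = {6: 0.70, 4: 0.25, -12: 0.05}

def mean(dist):
    return sum(u * p for u, p in dist.items())

# Both come out to 4.6: equal expected utility, so a VNM-satisfying agent
# must be indifferent between them, skew notwithstanding.
print(mean(A), mean(B))
```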
The Independence axiom dictates that we have no preference between those two actions. To prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom. Is this irrational? Dawes says no. I agree with him. Why shouldn’t I prefer to avoid the chance of disaster and ruin? Consider what happens when the choice is repeated, over the course of a lifetime. Should I really not care whether I occasionally suffer horrible tragedy or not, as long as it all averages out?
But if it’s really a preference — if I’m not totally indifferent — then I should also prefer less “risky” (i.e. less negatively skewed) distributions even when the expectation is lower than that of distributions with more risk (i.e. more negative skew) — so long as the difference in expectation is not too large, of course. And indeed we see such a preference not only expressed and revealed in actual humans, but enshrined in our society: it’s called insurance. Purchasing insurance is an expression of exactly the preference to reduce the negative skew in the probability distribution over outcomes (and thus in the distributions of outcomes over your lifetime), at the cost of a lower expectation.
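A minimal sketch of the structure of that trade, in money terms and with made-up numbers:

```python
# Minimal sketch (made-up numbers): insurance as paying to remove negative
# skew from the distribution over outcomes, at the cost of a lower expectation.

# Uninsured: a small probability of a large loss.
p_loss, loss = 0.01, 100_000
expected_loss_uninsured = p_loss * loss  # 1,000

# Insured: a certain premium, priced above the expected loss (the insurer's
# margin), so the expectation is strictly worse...
premium = 1_200

# ...but the catastrophic left tail is gone: the worst case is now -1,200
# rather than -100,000. Preferring this trade is preferring less negative
# skew at the cost of a lower expectation.
print(expected_loss_uninsured, premium)
```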
This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function. How do you show that there’s an actual violation of the independence axiom from this example? Note that the axioms require that there exist a utility function u :: outcome -> real such that you maximise expected utility, not that some particular function (such as the two graphs you’ve drawn) actually represents your utility.
In other words, you haven’t really shown that “to prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom” since the two distributions don’t have the form ApC, BpC with A≅B. Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
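(A minimal sketch of that kind of model, with made-up numbers: an expected-utility maximizer whose utility over money is concave, e.g. logarithmic, turns down fair gambles in money terms without violating any axiom.)

```python
# Minimal sketch (made-up numbers): "regular risk aversion" from a concave
# utility over money, with no VNM violation anywhere.
import math

def u(wealth):
    return math.log(wealth)  # a concave utility function over money

wealth = 10_000
# A fair gamble in money terms: +/- 5,000 with equal probability.
gamble_eu = 0.5 * u(wealth + 5_000) + 0.5 * u(wealth - 5_000)
certain_eu = u(wealth)  # keep the 10,000 for sure

# The sure thing wins (log is concave), even though both options have the same
# expected *money*: risk aversion with respect to money, not with respect to utility.
print(certain_eu > gamble_eu)  # True
```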
This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function.
Assuming you privilege some reference point as your x-axis origin, sure. But there’s no good reason to do that. It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person. This is clearly irrational; this kind of “regular risk aversion” is what Dawes refers to when he talks about independence axiom violation due to framing effects, or “pseudocertainty”.
Note that the axioms require that there exist a utility function u :: outcome → real such that you maximise expected utility, not that some particular function (such as the two graphs you’ve drawn) actually represents your utility.
The graphs are not graphs of utility functions. See the first paragraph of my post here.
How do you show that there’s an actual violation of the independence axiom from this example? … the two distributions don’t have the form ApC, BpC with A≅B.
Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative; i.e., alternatives may be constructed as probability mixtures of other alternatives, which may themselves be… etc. If it’s the apparent continuity of the graphs that bothers you, then you have but to zoom in on the image, and you may pretend that the pixelation you see represents discreteness of the distribution. The point stands unchanged.
Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of “personal value”) will lead to this sort of preference between two such distributions. In fact, it is difficult to imagine any utility function that both corresponds to a person’s preferences and doesn’t lead to preferring less negatively skewed probability distributions over outcomes to more negatively skewed ones.
It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person.
Couldn’t this still be rational in general if the fact that a particular reference point is presented provides information under normal circumstances (though perhaps not rational in a laboratory setting)?
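I think you’ll have to give an example of such a scenario before I could comment on whether it’s plausible.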
Assuming you privilege some reference point as your x-axis origin, sure.
What? This has nothing to do with “privileged reference points”. If I am [VNM-]rational, with utility function U, and you consider an alternative function $ = exp(U) (or an affine transformation thereof), I will appear to be risk averse with respect to $. This doesn’t mean I am irrational, it means you don’t have the correct utility function. And in this case, you can turn the wrong utility function into the right one by taking log($).
That is what I mean by “regular risk aversion”.
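(A minimal sketch of this, with made-up numbers: measured on the wrong scale $ = exp(U), two gambles with equal expected $ look as though they should be equally good, yet an agent who simply maximizes E(U) = E(log $) strictly prefers the narrower one.)

```python
# Minimal sketch (made-up numbers): apparent risk aversion with respect to a
# wrongly chosen scale $. The agent's true utility is U = log($); it maximizes
# expected U and never violates the axioms.
import math

# Gambles described on the (wrong) $ scale: each maps a $-value to its probability.
narrow = {90: 0.5, 110: 0.5}   # expected $ = 100
wide = {10: 0.5, 190: 0.5}     # expected $ = 100

def expected(dist, f=lambda x: x):
    return sum(p * f(x) for x, p in dist.items())

print(expected(narrow), expected(wide))  # 100.0 100.0 -- equal in $ terms
# In true utility terms (U = log $), the narrow gamble is strictly better:
print(expected(narrow, math.log) > expected(wide, math.log))  # True
```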
The graphs are not graphs of utility functions.
I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.
Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative;
Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?
The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of “personal value”) will lead to this sort of preference between two such distributions.
And I say that is assuming the conclusion. And, if only established for some set of utility functions that “more or less track an intuitive notion of ‘personal value’”, it fails to imply the conclusion that the independence axiom is violated for a rational human.
What? This has nothing to do with “privileged reference points”. If I am [VNM-]rational, with utility function U, and you consider an alternative function $ = exp(U) (or an affine transformation thereof), I will appear to be risk averse with respect to $. This doesn’t mean I am irrational, it means you don’t have the correct utility function. And in this case, you can turn the wrong utility function into the right one by taking log($).
That is what I mean by “regular risk aversion”.
It actually doesn’t matter what the values are, because we know from prospect theory that people’s preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can’t have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.
I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.
True enough. I rounded your objection to the nearest misunderstanding, I think.
And I say that is assuming the conclusion.
Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!
The core of Dawes’ argument is not a mathematical one, to be sure (and it would be difficult to make it into a mathematical argument, without some sort of rigorous account of what sorts of outcome distribution shapes humans prefer, which in turn would presumably require substantial field data, at the very least). It’s an argument from intuition: Dawes is saying, “Look, I prefer this sort of distribution of outcomes. [Implied: ‘And so do other people.’] However, such a preference is irrational, according to the VNM axioms...” Your objection seems to be: “No, in fact, you have no such preference. You only think you do, because you are envisioning your utility function incorrectly.” Is that a fair characterization?
Your talk of the utility function possibly being wrong makes me vaguely suspect a misunderstanding. It’s likely I’m just misunderstanding you, however, so if you already know this, I apologize, but just in case:
If you have some set of preferences, then (assuming your preferences satisfy the axioms), we can construct a utility function (up to positive affine transformation). But having constructed this function — which is the only function you could possibly construct from that set of preferences (up to positive affine transformation) — you are not then free to say “oh, well, maybe this is the wrong utility function; maybe the right function is something else”.
Of course you might instead be saying “well, we haven’t actually constructed any actual utility function from any actual set of preferences; we’re only imagining some vague, hypothetical utility function, and a vague hypothetical utility function certainly can be the wrong function”. Fair enough, if so. However, I once again invite you to exhibit a utility function — or even a preference ordering — which does not give rise to a preference for less-negatively-skewed distributions.
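(For concreteness, a minimal sketch of that construction, with a made-up set of outcomes and stipulated indifference points standing in for actual preference elicitation:)

```python
# Minimal sketch of the standard construction: fix a best and a worst outcome,
# then define U(x) as the probability p at which the agent is indifferent
# between "x for certain" and the gamble "best with probability p, worst with
# probability 1-p". `indifference_point` is a hypothetical stand-in for
# actually eliciting the agent's preferences.

def construct_utility(outcomes, indifference_point):
    # By construction U(worst) = 0 and U(best) = 1; any positive affine
    # transformation a*U + b (with a > 0) represents the same preferences.
    return {x: indifference_point(x) for x in outcomes}

# Example with a made-up agent whose indifference points are simply stipulated:
elicited = {"ruin": 0.0, "so-so day": 0.6, "great day": 0.9, "best day": 1.0}
U = construct_utility(elicited.keys(), lambda x: elicited[x])
print(U)
```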
Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?
I’m afraid an answer to this part will have to wait until I have some free time to do some math.
It actually doesn’t matter what the values are, because we know from prospect theory that people’s preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can’t have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.
Yes, framing effects are irrational, I agree. I’m saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).
“No, in fact, you have no such preference. You only think you do, because you are envisioning your utility function incorrectly.”
That would be one way of describing my objection. The argument Dawes is making is simply not valid. He says “Suppose my utility function is X. Then my intuition says that I prefer certain distributions over X that have the same expected value. Therefore my utility function is not X, and in fact I have no utility function.” There are two complementary ways this argument may break:
If you take as a premise that the function X is actually your utility function (i.e., “assuming I have a utility function, let X be that function”) then you have no license to apply your intuition to derive preferences over various distributions over the values of X. Your intuition has no facilities for judging meaningless numbers that have only abstract mathematical reasoning tying them to your actual preferences. If you try to shoehorn the abstract constructed utility function X into your intuition by imagining that X represents “money” or “lives saved” or “amount of something nice”, you are making a logical error.
On the other hand, if you start by applying your intuition to something it understands (such as “money” or “amount of nice things”) you can certainly say “I am risk averse with respect to X”, but you have not shown that X is your utility function, so there’s no license to conclude “I (it is rational for me to) violate the VNM axioms”.
Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!
No, but that doesn’t mean such a thing does not exist!
Yes, framing effects are irrational, I agree. I’m saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).
Well, now, hold on. Dawes is not actually saying that (and neither am I)! The claim is not “risk aversion demonstrates that there’s a framing effect going on (which is clearly irrational, and not just in the ‘violates VNM axioms’ sense)”. The point is that risk aversion (at least, risk aversion construed as “preferring less negatively skewed distributions”) constitutes departure from the VNM axioms. The independence axiom strictly precludes such risk aversion.
Whether risk aversion is actually irrational upon consideration — rather than merely irrational by technical definition, i.e. irrational by virtue of VNM axiom violation — is what Dawes is questioning.
The argument Dawes is making is simply not valid. He says …
That is not a good way to characterize Dawes’ argument.
I don’t know if you’ve read Rational Choice in an Uncertain World. Earlier in the same chapter, Dawes, introducing von Neumann and Morgenstern’s work, comments that utilities are intended to represent personal values. This makes sense, as utilities by definition have to track personal values, at least insofar as something with more utility is going to be preferred (by a VNM-satisfying agent) to something with less utility. Given that our notion of personal value is so vague, there’s little else we can expect from a measure that purports to represent personal value (it’s not like we’ve got some intuitive notion of what mathematical operations are appropriate to perform on estimates of personal value, which utilities then might or might not satisfy...). So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.
So the only real assumption behind those graphs is that this agent’s utility function tracks, in some vague sense, an intuitive notion of personal value — meaning what? Nothing more than that this person places greater value on things he prefers, than on things he doesn’t prefer (relatively speaking). And that (by definition!) will be true of the utility function derived from his preferences.
It seems impossible that we can have a utility function that doesn’t give rise to such preferences over distributions. Whatever your utility function is, we can construct a pair of graphs exactly like the ones pictured (the x-axis is not numerically labeled, after all). But such a preference constitutes independence axiom violation, as mentioned...
The point is that risk aversion (at least, risk aversion construed as “preferring less negatively skewed distributions”) constitutes departure from the VNM axioms.
No, it doesn’t. Not unless it’s literally risk aversion with respect to utility.
So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.
That seems to me a completely unfounded assumption.
Whatever your utility function is, we can construct a pair of graphs exactly like the ones pictured (the x-axis is not numerically labeled, after all).
The fact that the x-axis is not labeled is exactly why it’s unreasonable to think that just asking your intuition which graph “looks better” is a good way of determining whether you have an actual preference between the graphs. The shape of the graph is meaningless.
Thanks very much for taking the time to explain this.
It seems like the argument (very crudely) is that, “if I lose this game, that’s it, I won’t get a chance to play again, which makes this game a bad option.” If so, again, I wonder if our measure of utility has been properly calibrated.
It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.
Nonetheless, those exponential distributions make a very interesting argument.
I’m not entirely sure, I need to mull it over a bit more.
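Thanks again, I appreciate it.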
Just a brief comment: the argument is not predicated on being “kicked out” of the game. We’re not assuming that even the lowest-utility outcomes cause you to no longer be able to continue “playing”. We’re merely saying that they are significantly worse than average.
Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.
One generalization might be something like, “losing makes it harder to continue playing competitively.” But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I’ll continue to ponder.
The problem feels related to Pascal’s wager—how to deal with the low-probability disaster.
I really do want to emphasize that if you assume that “losing” (i.e. encountering an outcome with a utility value on the low end of the scale) has some additional effects, whether that be “losing takes you out of the game”, or “losing makes it harder to keep playing”, or whatever, then you are modifying the scenario, in a critical way. You are, in effect, stipulating that that outcome actually has a lower utility than it’s stated to have.
I want to urge you to take those graphs literally, with the x-axis being Utility, not money, or “utility but without taking into account secondary effects”, or anything like that. Whatever the actual utility of an outcome is, after everything is accounted for — that’s what determines that outcome’s position on the graph’s x-axis. (Edit: And it’s crucial that the expectation of the two distributions is the same. If you find yourself concluding that the expectations are actually different, then you are misinterpreting the graphs, and should re-examine your assumptions; or else suitably modify the graphs to match your assumptions, such that the expectations are the same, and then re-evaluate.)
This is not a Pascal’s Wager argument. The low-utility outcomes aren’t assumed to be “infinitely” bad, or somehow massively, disproportionately, unrealistically bad; they’re just… bad. (I don’t want to get into the realm of offering up examples of bad things, because people’s lives are different and personal value scales are not absolute, but I hope that I’ve been able to clarify things at least a bit.)
If you assume… [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it’s stated to have.
Thanks, that focuses the argument for me a bit.
So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn’t been correctly drawn. If B is worse than A, how can their average payoffs be the same?
To put it the other way around, maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? What if A had a slightly larger decay constant, at what point would A cease to be better?
I’m not saying I’m sure Dawes’ argument is wrong, I just have no intuition at the moment for how it could be right.
A point of terminology: “utility function” usually refers to a function that maps things (in our case, outcomes) to utilities. (Some dimension, or else some set, of things on the x-axis; utility on the y-axis.) Here, we instead are mapping utility to frequency, or more precisely, outcomes (arranged — ranked and grouped — along the x-axis by their utility) to the frequency (or, equivalently, probability) of the outcomes’ occurrence. (Utility on the x-axis, frequency on the y-axis.) The term for this sort of graph is “distribution” (or more fully, “frequency [or probability] distribution over utility of outcomes”).
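(A minimal sketch of the distinction, with made-up numbers: a utility function maps outcomes to utilities, and combining it with the outcomes’ probabilities induces the kind of distribution over utility that the graphs depict.)

```python
# Minimal sketch (made-up numbers): a utility function versus the distribution
# over utility that it induces.
from collections import defaultdict

utility_fn = {"flat tire": -5, "ordinary day": 1, "promotion": 8}   # outcome -> utility
prob = {"flat tire": 0.1, "ordinary day": 0.8, "promotion": 0.1}    # outcome -> probability

# Induced distribution over utility: utility value -> total probability.
# This (utility on the x-axis, probability on the y-axis) is what the graphs show.
dist = defaultdict(float)
for outcome, u in utility_fn.items():
    dist[u] += prob[outcome]

print(dict(dist))  # {-5: 0.1, 1: 0.8, 8: 0.1}
```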
To the rest of your comment, I’m afraid I will have to postpone my full reply; but off the top of my head, I suspect the conceptual mismatch here stems from saying that the curves are meant to “quantify betterness”. It seems to me (again, from only brief consideration) that this is a confused notion. I think your best bet would be to try taking the curves as literally as possible, attempting no reformulation on any basis of what you think they are “supposed” to say, and proceed from there.
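I will reply more fully when I have time.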