Cognitive biases were developed for survival and evolutionary fitness, and these things correlate more strongly with personal well-being than with the well-being of others.
I think this needs to be differentiated further, or partly corrected:
Cognitive biases which improve individual fitness by needing fewer resources, i.e. heuristics which arrive at the same or an almost equally good result with less time and energy. Reducing the time and energy spent thus benefits the individual.
Cognitive biases which improve individual fitness by avoiding dangerous parts of the life space. Examples: risk aversion, status-quo bias (in a way this is a more abstract form of the basic fears, like fear of heights or of spiders, which also steer you away from dangerous situations or help you get out of them quickly).
Cognitive biases which improve individual fitness by increasing the likelihood of reproductive success. These are probably the most complex and the most intricately connected to emotions. In a way emotions are comparable to biases, or at least trigger specific biases. For example, infatuation activates powerful biases regarding the object of the infatuation and the situation at large: positive thinking, confirmation bias, …
Cognitive biases which improve collective fitness (i.e. benefit other carriers of the same gene). My first examples are not really biases but emotions: love toward children (your own, but also others’), initial friendliness toward strangers (a tit-for-tat strategy), altruism in general. An example of a real bias is the positive thinking related to children: disregard of their faults, confirmation bias. But these are, I think, mostly used to rationalize one’s behavior in the absence of the real explanation: you love your children and expend significant energy that will never be paid back, because those who do so have more successful offspring.
In general I wonder how to disentangle biases from emotions. You wouldn’t want to rationalize against your emotions. That will not work. And if emotions trigger/strengthen biases, then suppressing biases essentially means suppressing emotions.
I think the expression of the relationship between emotions and biases is at least partly learned. It could be possible to unlearn the triggering effect of the emotions, a kind of hacking of your terminal goals. The question is: if you tricked your emotions into no longer having any grip, what would it mean to have them at all, except as a source of internal sensation?
Thanks for the thoughts. These points all strike me as reasonable.
Why not? Rationalizing against (unreasonable) fear seems fine to me. Rationalizing against anger looks useful. Etc., etc.
Yes. I didn’t think this through to all its consequences.
It is a well-known psychological fact that humans have a quite diverse set of basic fears that appear, develop, and are normally overcome (understood, limited, suppressed, ...) during childhood. Dealing with your fears, coming to terms with them, is indeed a normal process.
Quite a good read about this is Helping Children Overcome Fears.
Indeed, having them initially is in most cases adaptive (I wonder whether it would be a global net positive if we could remove the fear of spiders, weighing the time and energy lost to spider fear against the remaining genuinely dangerous encounters).
The key point is that a very unspecific fear, like fear of darkness, is moderated into a form where it doesn’t control you and where it only applies to cases you didn’t adapt to earlier (many people still freak out when put into extremely unusual situations that combine (add? multiply?) several such fears). Whether having the fears in those cases is positive, I can at best speculate.
Nonetheless, this argument that many fears are less adaptive than they used to be (because civilization has largely removed the dangers behind them) is independent of the other emotions, especially the ‘positive’ ones like love, empathy, happiness and curiosity, which it appears also put you into a biased state. Would you want to get rid of these too? Which ones?
Humans exist in a permanent “biased state”. The unbiased state is the province of Mr. Spock and Mr. Data, Vulcans and androids.
I think that rationality does not get rid of biases, but rather allows you to recognize them and compensate for them. Just like with e.g. fear—you rarely lose a particular fear altogether, you just learn to control and manage it.
You seem to mean that biases are the brain’s way of perceiving the world in a manner that focuses on the ‘important’ parts. Besides terminal goals, which just evaluate the perception with respect to utility, this acts as a filter, but it thereby also implies goals (namely, reducing the importance of the filtered-out parts).
Yes, but note that a lot of biases are universal to all humans. This means they are biological (as opposed to cultural) in nature. And this implies that the goals they developed to further are biological in nature as well. Which means that you are stuck with these goals whether your conscious mind likes it or not.
Yes. That’s what I meant when I said: “You wouldn’t want to rationalize against your emotions. That will not work.”
If your conscious mind has goals incompatible with the effects of these bioneuropsychological processes, then frustration seems the least of the likely results.
I still don’t know about that. A collection of such “incompatible goals” has been described as civilization :-)
For example, things like “kill or drive away those-not-like-us” look like biologically hardwired goals to me. Having a conscious mind have its own goals incompatible with that one is probably a good thing.
Sure, we have to deal with some of these inconsistencies. And for some of us this is a continuous source of frustration. But we do not have to add more of them than absolutely necessary, do we?
risk aversion is not a bias.
It might or might not be. If it is coming from your utility function, it’s not. If it is “extra” to the utility function it can be a bias.
I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but a higher expectation. In which case, I would call it a bias.
It’s not a bias, it’s a preference. Insofar as we reserve the term bias for irrational “preferences” or tendencies or behaviors, risk aversion does not qualify.
I would call it a bias because it is irrational.
It (as I described it—my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one’s goals being fulfilled (this is the definition of ‘payoff’, right?).
Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.
Problems with your position:
1. “goals being fulfilled” is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous.
Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both cases. The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.
2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant.
Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it’s not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn’t seem to come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a lower (Edit: of course I meant “higher”, whoops) expectation.
[1] pp. 159-161 in the 1988 edition, if anyone’s curious enough to look this up. Extra bonus: This section of the book (chapter 8, “Subjective Expected Utility Theory”, where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.
Point 1:
If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (ie 51% fulfillment) but option 1 doesn’t, but not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.
The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.
But risk is integral to the calculation of utility. ‘Risk avoidance’ and ‘value’ are synonyms.
Point 2:
Thanks for the reference.
But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better.
If it is really safer (ie better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility?
Perhaps you could outline Dawes’ argument? I’m open to the possibility that I’m missing something.
Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs., a 95% probability of your goals being 50% fulfilled. (Numbers not significant; only intended for illustrative purposes.)
“Risk avoidance” and “value” are not synonyms. I don’t know why you would say that. I suspect one or both of us is seriously misunderstanding the other.
Re: point #2: I don’t have the time right now, but sometime over the next couple of days I should have some time and then I’ll gladly outline Dawes’ argument for you. (I’ll post a sibling comment.)
The question is not one of your goals being 50% fulfilled
If I’m talking about a goal actually being 50% fulfilled, then it is.
“Risk avoidance” and “value” are not synonyms.
Really?
I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don’t know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?
If I’m terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.
That would be very kind :) No need to hurry.
Dawes’ argument, as promised.
The context is: Dawes is explaining von Neumann and Morgenstern’s axioms.
Aside: I don’t know how familiar you are with the VNM utility theorem, but just in case, here’s a brief primer.
The VNM utility theorem presents a set of axioms, and then says that if an agent’s preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as “the expected value of x”.) That is to say, the agent’s preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility).
In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent’s preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)
(Dawes presents the axioms in terms of alternatives or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)
N.B.: “Alternatives” in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternatives X and Y, where in X, with p = 0.3 you get consequence A and with p = 0.7 you get consequence B, and in Y, with p = 0.4 you get consequence A and with p = 0.6 you get consequence B.) Alternatives, by the way, can also be thought of as actions; if you take action X, the probability distribution over the outcomes is so-and-so; but if you take action Y, the probability distribution over the outcomes is different.
(If all of this is old hat to you, apologies; I didn’t want to assume.)
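To make the primer concrete, here is a minimal Python sketch; the outcomes and utility numbers are invented for illustration and are not from Dawes, but the structure is exactly the gamble notation just described:

```python
# Minimal sketch of VNM-style "alternatives" as gambles. The outcomes and
# utility numbers are invented for illustration; only the structure matters.

def expected_utility(gamble, utility):
    """gamble: list of (probability, outcome) pairs whose probabilities sum to 1."""
    return sum(p * utility[outcome] for p, outcome in gamble)

# Hypothetical utilities over three outcomes.
utility = {"cake": 10.0, "pie": 8.0, "nothing": 0.0}

# X: cake with p = 0.3, otherwise nothing.  Y: pie with p = 0.4, otherwise nothing.
X = [(0.3, "cake"), (0.7, "nothing")]
Y = [(0.4, "pie"), (0.6, "nothing")]

# A VNM-satisfying agent prefers X to Y iff E(U(X)) > E(U(Y)).
print(expected_utility(X, utility))  # 3.0
print(expected_utility(Y, utility))  # 3.2 -> such an agent would choose Y
```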
The question is: do our preferences satisfy VNM? And: should our preferences satisfy VNM?
It is commonly said (although this is in no way entailed by the theorem!) that if your preferences don’t adhere to the axioms, then they are irrational. Dawes examines each axiom, with an eye toward determining whether it’s mandatory for a rational agent to satisfy that axiom.
Dawes presents seven axioms (which, as I understand it, are equivalent to the set of four listed in the wikipedia article, just with a difference in emphasis), of which the fifth is Independence.
The independence axiom says that A ≥ B (i.e., A is preferred to B) if and only if ApC ≥ BpC. In other words, if you prefer receiving cake to receiving pie, you also prefer receiving (cake with probability p and death with probability 1–p) to receiving (pie with probability p and death with probability 1–p).
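As an illustration of what the axiom demands, here is a small sketch (again with invented outcomes and numbers) of the fact that an agent who simply maximizes expected utility satisfies independence automatically, because mixing both options with the same third outcome shifts both expected utilities by the same amount:

```python
# Sketch: an expected-utility maximizer automatically satisfies independence.
# Mixing both options with the same outcome C at the same probability adds the
# same (1 - p) * U(C) to both expected utilities, so the ordering is preserved.
# Numbers are invented for illustration.

def eu(gamble, U):
    return sum(p * U[o] for p, o in gamble)

U = {"cake": 10.0, "pie": 8.0, "death": -1000.0}
p = 0.01

cake_p_death = [(p, "cake"), (1 - p, "death")]
pie_p_death = [(p, "pie"), (1 - p, "death")]

assert U["cake"] > U["pie"]                      # cake preferred to pie
assert eu(cake_p_death, U) > eu(pie_p_death, U)  # mixed gambles keep that order
# The difference is exactly p * (U["cake"] - U["pie"]), which has a fixed sign.
```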
Dawes examines one possible justification for violating this axiom — framing effects, or pseudocertainty — and concludes that it is irrational. (Framing is the usual explanation given for why the expressed or revealed preferences of actual humans often violate the independence axiom.) Dawes then suggests another possibility:
Is such irrationality the only reason for violating the independence axiom? I believe there is another reason. Axiom 5 [Independence] implies that the decision maker cannot be affected by the skewness of the consequences, which can be conceptualized as a probability distribution over personal values. Figure 8.1 shows (Note: This is my reproduction of the figure. I’ve tried to make it as exact as possible.) the skewed distributions of two different alternatives. Both distributions have the same average, hence the same expected personal value, which is a criterion of choice implied by the axioms. These distributions also have the same variance.
If the distributions in Figure 8.1 were those of wealth in a society, I have a definite preference for distribution a; its positive skewness means that income can be increased from any point — an incentive for productive work. Moreover, those people lowest in the distribution are not as distant from the average as in distribution b. In contrast, in distribution b, a large number of people are already earning a maximal amount of money, and there is a “tail” of people in the negatively skewed part of this distribution who are quite distant from the average income.[5] If I have such concerns about the distribution of outcomes in society, why not of the consequences for choosing alternatives in my own life? In fact, I believe that I do. Counter to the implications of prospect theory, I do not like alternatives with large negative skews, especially when the consequences in the negatively skewed part of the distribution have negative personal value.
[5] This is Dawes’ footnote; it talks about an objection to “Reaganomics” on similar grounds.
Essentially, Dawes is asking us to imagine two possible actions. Both have the same expected utility; that is, the “degree of goal satisfaction” which will result from each action, averaged appropriately across all possible outcomes of that action (weighted by probability of each outcome), is exactly equal.
But the actual probability distribution over outcomes (the form of the distribution) is different. If you do action A, then you’re quite likely to do alright, there’s a reasonable chance of doing pretty well, and a small chance of doing really great. If you do action B, then you’re quite likely to do pretty well, there’s a reasonable chance of doing OK, and a small chance of doing disastrously, ruinously badly. On average, you’ll do equally well either way.
The Independence axiom dictates that we have no preference between those two actions. To prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom. Is this irrational? Dawes says no. I agree with him. Why shouldn’t I prefer to avoid the chance of disaster and ruin? Consider what happens when the choice is repeated, over the course of a lifetime. Should I really not care whether I occasionally suffer horrible tragedy or not, as long as it all averages out?
But if it’s really a preference — if I’m not totally indifferent — then I should also prefer less “risky” (i.e. less negatively skewed) distributions even when the expectation is lower than that of distributions with more risk (i.e. more negative skew) — so long as the difference in expectation is not too large, of course. And indeed we see such a preference not only expressed and revealed in actual humans, but enshrined in our society: it’s called insurance. Purchasing insurance is an expression of exactly the preference to reduce the negative skew in the probability distribution over outcomes (and thus in the distributions of outcomes over your lifetime), at the cost of a lower expectation.
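For concreteness, here is a rough sketch of the kind of pair of distributions being described. The numbers are invented, not taken from Dawes’ Figure 8.1, and unlike his example only the expectations are matched here (the variances are not); the point is simply equal means with opposite skew:

```python
import numpy as np

# Two invented discrete distributions over utility, standing in for the
# positively skewed action A and negatively skewed action B described above.
# (Not Dawes' Figure 8.1 numbers; only the expectations match here.)

# Columns: utility value, probability.
A = np.array([[  0.0, 0.70],   # most likely: do alright
              [  5.0, 0.25],   # reasonable chance: do pretty well
              [ 20.0, 0.05]])  # small chance: do really great

B = np.array([[  4.0, 0.70],   # most likely: do pretty well
              [  0.0, 0.25],   # reasonable chance: do OK
              [-11.0, 0.05]])  # small chance: do very badly

def mean(d):
    return float(np.sum(d[:, 0] * d[:, 1]))

def skew(d):
    m = mean(d)
    var = np.sum(d[:, 1] * (d[:, 0] - m) ** 2)
    return float(np.sum(d[:, 1] * (d[:, 0] - m) ** 3) / var ** 1.5)

print(mean(A), mean(B))  # 2.25 2.25 -- equal expected utility
print(skew(A), skew(B))  # positive for A, negative for B
```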
This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function. How do you show that there’s an actual violation of the independence axiom from this example? Note that the axioms require that there exist a utility function u :: outcome -> real such that you maximise expected utility, not that some particular function (such as the two graphs you’ve drawn) actually represents your utility.
In other words, you haven’t really shown that “to prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom” since the two distributions don’t have the form ApC, BpC with A≅B. Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function.
Assuming you privilege some reference point as your x-axis origin, sure. But there’s no good reason to do that. It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person. This is clearly irrational; this kind of “regular risk aversion” is what Dawes refers to when he talks about independence axiom violation due to framing effects, or “pseudocertainty”.
Note that the axioms require that there exist a utility function u :: outcome → real such that you maximise expected utility, not that some particular function (such as the two graphs you’ve drawn) actually represents your utility.
The graphs are not graphs of utility functions. See the first paragraph of my post here.
How do you show that there’s an actual violation of the independence axiom from this example? … the two distributions don’t have the form ApC, BpC with A≅B.
Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative; i.e., alternatives may be constructed as probability mixtures of other alternatives, which may themselves be… etc. If it’s the apparent continuity of the graphs that bothers you, then you have but to zoom in on the image, and you may pretend that the pixelation you see represents discreteness of the distribution. The point stands unchanged.
Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of “personal value”) will lead to this sort of preference between two such distributions. In fact, it is difficult to imagine any utility function that both corresponds to a person’s preferences and doesn’t lead to a preference for less negatively skewed probability distributions over outcomes, over more negatively skewed ones.
It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person.
Couldn’t this still be rational in general if the fact that a particular reference point is presented provides information under normal circumstances (though perhaps not rational in a laboratory setting)?
I think you’ll have to give an example of such a scenario before I could comment on whether it’s plausible.
Assuming you privilege some reference point as your x-axis origin, sure.
What? This has nothing to do with “privileged reference points”. If I am [VNM-]rational, with utility function U, and you consider an alternative function $ = exp(U) (or an affine transformation thereof), I will appear to be risk averse with respect to $. This doesn’t mean I am irrational, it means you don’t have the correct utility function. And in this case, you can turn the wrong utility function into the right one by taking log($).
That is what I mean by “regular risk aversion”.
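Here is a minimal sketch of the point being made in this reply, under the assumption that the agent’s actual utility is the log of the dollar amount (the numbers are invented): such an agent looks risk averse when outcomes are scored in dollars, yet violates no VNM axiom.

```python
import math

# Sketch (invented numbers): an agent maximizing E[U] looks risk averse when
# payoffs are scored in $ = exp(U); log($) recovers the quantity it actually
# maximizes, and no VNM axiom is violated.

def U(dollars):  # the agent's actual utility: U = log($)
    return math.log(dollars)

gamble = [(0.5, 10.0), (0.5, 500.0)]  # 50/50 gamble over dollar outcomes
sure_thing = 100.0                    # a certain dollar amount

expected_dollars = sum(p * d for p, d in gamble)  # 255.0
expected_U = sum(p * U(d) for p, d in gamble)     # ~4.26 vs. U(100) ~ 4.61

print(expected_dollars > sure_thing)  # True:  the gamble "wins" measured in $
print(expected_U > U(sure_thing))     # False: the agent takes the sure $100
```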
The graphs are not graphs of utility functions.
I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.
Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative;
Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?
The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of “personal value”) will lead to this sort of preference between two such distributions.
And I say that is assuming the conclusion. And, if only established for some set of utility functions that “more or less track an intuitive notion of ‘personal value’”, it fails to imply the conclusion that the independence axiom is violated for a rational human.
What? This has nothing to do with “privileged reference points”. If I am [VNM-]rational, with utility function U, and you consider an alternative function $ = exp(U) (or an affine transformation thereof), I will appear to be risk averse with respect to $. This doesn’t mean I am irrational, it means you don’t have the correct utility function. And in this case, you can turn the wrong utility function into the right one by taking log($).
That is what I mean by “regular risk aversion”.
It actually doesn’t matter what the values are, because we know from prospect theory that people’s preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can’t have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.
I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.
True enough. I rounded your objection to the nearest misunderstanding, I think.
And I say that is assuming the conclusion.
Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!
The core of Dawes’ argument is not a mathematical one, to be sure (and it would be difficult to make it into a mathematical argument, without some sort of rigorous account of what sorts of outcome distribution shapes humans prefer, which in turn would presumably require substantial field data, at the very least). It’s an argument from intuition: Dawes is saying, “Look, I prefer this sort of distribution of outcomes. [Implied: ‘And so do other people.’] However, such a preference is irrational, according to the VNM axioms...” Your objection seems to be: “No, in fact, you have no such preference. You only think you do, because you are envisioning your utility function incorrectly.” Is that a fair characterization?
Your talk of the utility function possibly being wrong makes me vaguely suspect a misunderstanding. It’s likely I’m just misunderstanding you, however, so if you already know this, I apologize, but just in case:
If you have some set of preferences, then (assuming your preferences satisfy the axioms), we can construct a utility function (up to positive affine transformation). But having constructed this function — which is the only function you could possibly construct from that set of preferences (up to positive affine transformation) — you are not then free to say “oh, well, maybe this is the wrong utility function; maybe the right function is something else”.
Of course you might instead be saying “well, we haven’t actually constructed any actual utility function from any actual set of preferences; we’re only imagining some vague, hypothetical utility function, and a vague hypothetical utility function certainly can be the wrong function”. Fair enough, if so. However, I once again invite you to exhibit a utility function — or even a preference ordering — which does not give rise to a preference for less-negatively-skewed distributions.
Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?
I’m afraid an answer to this part will have to wait until I have some free time to do some math.
It actually doesn’t matter what the values are, because we know from prospect theory that people’s preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can’t have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.
Yes, framing effects are irrational, I agree. I’m saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).
“No, in fact, you have no such preference. You only think you do, because you are envisioning your utility function incorrectly.”
That would be one way of describing my objection. The argument Dawes is making is simply not valid. He says “Suppose my utility function is X. Then my intuition says that I prefer certain distributions over X that have the same expected value. Therefore my utility function is not X, and in fact I have no utility function.” There are two complementary ways this argument may break:
If you take as a premise that the function X is actually your utility function (ie. “assuming I have a utility function, let X be that function”) then you have no license to apply your intuition to derive preferences over various distributions over the values of X. Your intuition has no facilities for judging meaningless numbers that have only abstract mathematical reasoning tying them to your actual preferences. If you try to shoehorn the abstract constructed utility function X into your intuition by imagining that X represents “money” or “lives saved” or “amount of something nice” you are making a logical error.
On the other hand, if you start by applying your intuition to something it understands (such as “money” or “amount of nice things”) you can certainly say “I am risk averse with respect to X”, but you have not shown that X is your utility function, so there’s no license to conclude “I (it is rational for me to) violate the VNM axioms”.
Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!
No, but that doesn’t mean such a thing does not exist!
Yes, framing effects are irrational, I agree. I’m saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).
Well, now, hold on. Dawes is not actually saying that (and neither am I)! The claim is not “risk aversion demonstrates that there’s a framing effect going on (which is clearly irrational, and not just in the ‘violates VNM axioms’ sense)”. The point is that risk aversion (at least, risk aversion construed as “preferring less negatively skewed distributions”) constitutes departure from the VNM axioms. The independence axiom strictly precludes such risk aversion.
Whether risk aversion is actually irrational upon consideration — rather than merely irrational by technical definition, i.e. irrational by virtue of VNM axiom violation — is what Dawes is questioning.
The argument Dawes is making is simply not valid. He says …
That is not a good way to characterize Dawes’ argument.
I don’t know if you’ve read Rational Choice in an Uncertain World. Earlier in the same chapter, Dawes, introducing von Neumann and Morgenstern’s work, comments that utilities are intended to represent personal values. This makes sense, as utilities by definition have to track personal values, at least insofar as something with more utility is going to be preferred (by a VNM-satisfying agent) to something with less utility. Given that our notion of personal value is so vague, there’s little else we can expect from a measure that purports to represent personal value (it’s not like we’ve got some intuitive notion of what mathematical operations are appropriate to perform on estimates of personal value, which utilities then might or might not satisfy...). So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.
So the only real assumption behind those graphs is that this agent’s utility function tracks, in some vague sense, an intuitive notion of personal value — meaning what? Nothing more than that this person places greater value on things he prefers, than on things he doesn’t prefer (relatively speaking). And that (by definition!) will be true of the utility function derived from his preferences.
It seems impossible that we can have a utility function that doesn’t give rise to such preferences over distributions. Whatever your utility function is, we can construct a pair of graphs exactly like the ones pictured (the x-axis is not numerically labeled, after all). But such a preference constitutes independence axiom violation, as mentioned...
The point is that risk aversion (at least, risk aversion construed as “preferring less negatively skewed distributions”) constitutes departure from the VNM axioms.
No, it doesn’t. Not unless it’s literally risk aversion with respect to utility.
So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.
That seems to me a completely unfounded assumption.
Whatever your utility function is, we can construct a pair of graphs exactly like the ones pictured (the x-axis is not numerically labeled, after all).
The fact that the x-axis is not labeled is exactly why it’s unreasonable to think that just asking your intuition which graph “looks better” is a good way of determining whether you have an actual preference between the graphs. The shape of the graph is meaningless.
Thanks very much for the taking the time to explain this.
It seems like the argument (very crudely) is that, “if I lose this game, that’s it, I won’t get a chance to play again, which makes this game a bad option.” If so, again, I wonder if our measure of utility has been properly calibrated.
It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.
Nonetheless, those exponential distributions make a very interesting argument.
I’m not entirely sure, I need to mull it over a bit more.
Just a brief comment: the argument is not predicated on being “kicked out” of the game. We’re not assuming that even the lowest-utility outcomes cause you to no longer be able to continue “playing”. We’re merely saying that they are significantly worse than average.
Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.
One generalization might be something like, “losing makes it harder to continue playing competitively.” But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I’ll continue to ponder.
The problem feels related to Pascal’s wager—how to deal with the low-probability disaster.
I really do want to emphasize that if you assume that “losing” (i.e. encountering an outcome with a utility value on the low end of the scale) has some additional effects, whether that be “losing takes you out of the game”, or “losing makes it harder to keep playing”, or whatever, then you are modifying the scenario, in a critical way. You are, in effect, stipulating that that outcome actually has a lower utility than it’s stated to have.
I want to urge you to take those graphs literally, with the x-axis being Utility, not money, or “utility but without taking into account secondary effects”, or anything like that. Whatever the actual utility of an outcome is, after everything is accounted for — that’s what determines that outcome’s position on the graph’s x-axis. (Edit: And it’s crucial that the expectation of the two distributions is the same. If you find yourself concluding that the expectations are actually different, then you are misinterpreting the graphs, and should re-examine your assumptions; or else suitably modify the graphs to match your assumptions, such that the expectations are the same, and then re-evaluate.)
This is not a Pascal’s Wager argument. The low-utility outcomes aren’t assumed to be “infinitely” bad, or somehow massively, disproportionately, unrealistically bad; they’re just… bad. (I don’t want to get into the realm of offering up examples of bad things, because people’s lives are different and personal value scales are not absolute, but I hope that I’ve been able to clarify things at least a bit.)
If you assume.… [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it’s stated to have.
Thanks, that focuses the argument for me a bit.
So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn’t been correctly drawn. If B is worse than A, how can their average payoffs be the same?
To put it the other way around, maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? What if A had a slightly larger decay constant, at what point would A cease to be better?
I’m not saying I’m sure Dawes’ argument is wrong, I just have no intuition at the moment for how it could be right.
A point of terminology: “utility function” usually refers to a function that maps things (in our case, outcomes) to utilities. (Some dimension, or else some set, of things on the x-axis; utility on the y-axis.) Here, we instead are mapping utility to frequency, or more precisely, outcomes (arranged — ranked and grouped — along the x-axis by their utility) to the frequency (or, equivalently, probability) of the outcomes’ occurrence. (Utility on the x-axis, frequency on the y-axis.) The term for this sort of graph is “distribution” (or more fully, “frequency [or probability] distribution over utility of outcomes”).
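A tiny sketch of this terminological distinction, with invented outcomes and numbers: the utility function maps outcomes to utilities, while the curve under discussion is the probability distribution over those utility values induced by some action.

```python
# Terminology sketch (invented outcomes and numbers): a utility function maps
# outcomes to utilities; the curves under discussion are the probability
# distribution over those utility values induced by some action.

utility = {"promotion": 10.0, "status quo": 5.0, "layoff": -20.0}  # U: outcome -> utility

action = {"promotion": 0.10, "status quo": 0.85, "layoff": 0.05}   # P(outcome | action)

# Distribution over utility induced by the action: P(U = u | action).
dist_over_utility = {}
for outcome, prob in action.items():
    u = utility[outcome]
    dist_over_utility[u] = dist_over_utility.get(u, 0.0) + prob

print(dist_over_utility)  # {10.0: 0.1, 5.0: 0.85, -20.0: 0.05}
```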
To the rest of your comment, I’m afraid I will have to postpone my full reply; but off the top of my head, I suspect the conceptual mismatch here stems from saying that the curves are meant to “quantify betterness”. It seems to me (again, from only brief consideration) that this is a confused notion. I think your best bet would be to try taking the curves as literally as possible, attempting no reformulation on any basis of what you think they are “supposed” to say, and proceed from there.
I think this needs to differentiated further or partly corrected:
Cognitive biases which improve individual fitness by needing less resources, i.e. heuristics which arrive at the same or almost equally good result but without less resources. Reducing time and energy thus benefits the individual. Example:
Cognitive biases which improve individual fitness by avoiding dangerous parts of life space. Examples: Risk aversion, status-quo bias (in a way this is a more abstract for of the basic fears like fear of heigh or spiders which also avoid dangerous situations (or help getting out of them quickly)).
Cognitive biases which improve individual fitness by increasing likelihood of reproductive success. These are probably the most complex and intricately connected to emotions. In a way emotions are comparable to biases or at least trigger specific biases. For example infatuation does activate powerful biases regarding the object of the infatuation and the situation at large: Positive thinking, confirmation bias, …
Cognitive biases that developed which improve collective fitness (i.e. benefitting other carriers of the same gene). My first examples are all not really biases but emotions: Love toward children (your own, but also others), initial friendliness toward strangers (tit-for-tat strategy), altruism in general. An example of a real bias is the positive thinking related to children. Disregard of their faults, confirmation bias. But these are I think mostly used to rationalize ones behavior in the absence of the real explanation: You love your children and expend significant energy never to be payed back because those who do have more successful offspring.
In general I wonder how to disentangle biases from emotions. You wouldn’t want to rationalize against your emotions. That will not work. And if emotions trigger/streangthen biases then suppressing biases essentially means suppressing emotion.
I think the expression of the relationship between emotions and biases is at least partly learned. It could be possible to unlearn the triggering effect of the emotions. Kind of hacking your terminal goals. The question is: If you tricked your emotions to no longer grip what it means to have them expect providing internal sensation.
Thanks for the thoughts. These points all strike me as reasonable.
Why not? Rationalizing against (unreasonable) fear seems fine to me. Rationalizing against anger looks useful. Etc., etc.
Yes. I didn’t think this through to all its consequences.
It is a well-know psychological fact that humans have a quite diverse set of basic fears that appear, develop and are normally overcome (understood, limited, suppressed,...) during childhood. Dealing with your fear, comming to terms with them is indeed a normal process.
Quite a good read about this is Helping Children Overcome Fears.
Indeed, having them initially is in the most cases adaptive (I wonder whether it would be a globally net positive if we could remove fear of spiders weighing up the cost of lost time and energy due to spider fear versus the remaining dangerous cases).
The key point is that a very unspecific fear like fear of darkness is moderated into a form where it doesn’t control you and where it only applies to cases that you didn’t adapt to earlier (many people still freak out if put into extremely unusual situations which add (multiply?) multiple such fears). And whether having them in these cases is positive I can as best speculate on.
Nonetheless this argument that many fears are less adaptive then they used to (because civilization weeded them out) is independent of the other emotions esp. the ‘positive’ ones like love, empathy, happiness and curiosity which it appears also do put you into a biased state. Whould you want to get rid of these too? Which?
Humans exist in permanent “biased state”. The unbiased state is the province of Mr.Spock and Mr.Data, Vulcans and androids.
I think that rationality does not get rid of biases, but rather allows you to recognize them and compensate for them. Just like with e.g. fear—you rarely lose a particular fear altogether, you just learn to control and manage it.
You seem to mean that biases are the brains way to perceive the world in a way that focusses on the ‘important’ parts. Beside terminal goals which just evaluate the perception with respect to utility this acts acts as a filter but thereby also implies goals (namely the reduction of the importance of the filtered out parts).
Yes, but note that a lot of biases are universal to all humans. This means they are biological (as opposed to cultural) in nature. And this implies that the goals they developed to further are biological in nature as well. Which means that you are stuck with these goals whether you conscious mind likes it or not.
Yes. That’s what I meant when I said: “You wouldn’t want to rationalize against your emotions. That will not work.”
If your conscious mind has goals incompatible with the effects of bioneuropsychological processes then frustrations seems the least result.
I still don’t know about that. A collection of such “incompatible goals” has been described as civilization :-)
For example, things like “kill or drive away those-not-like-us” look like biologically hardwired goals to me. Having a conscious mind have its own goals incompatible with that one is probably a good thing.
Sure we have to deal with some of these inconsistencies. And for some of us this is an continuous source of frustration. But we do not have to add more to these than absolutely necessary, or?
risk aversion is not a bias.
It might or might not be. If it is coming from your utility function, it’s not. If it is “extra” to the utility function it can be a bias.
I understood risk aversion to be a tendency to prefer a relatively certain payoff, to one that comes with a wider probability distribution, but has higher expectation. In which case, I would call it a bias.
It’s not a bias, it’s a preference. Insofar as we reserve the term bias for irrational “preferences” or tendencies or behaviors, risk aversion does not qualify.
I would call it a bias because it is irrational.
It (as I described it—my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one’s goals being fulfilled (this is the definition of ‘payoff’, right?).
Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.
Problems with your position:
1. “goals being fulfilled” is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous.
Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both case. The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.
2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant.
Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it’s not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn’t seem to come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a lower (Edit: of course I meant “higher”, whoops) expectation.
[1] pp. 159-161 in the 1988 edition, if anyone’s curious enough to look this up. Extra bonus: This section of the book (chapter 8, “Subjective Expected Utility Theory”, where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.
Point 1:
If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (ie 51% fulfillment) but option 1 doesn’t, but not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.
But risk is integral to the calculation of utility. ‘Risk avoidance’ and ‘value’ are synonyms.
Point 2:
Thanks for the reference.
But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that it less likely to provide the payoff can be better.
If it is really safer (ie better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility?
Perhaps you could outline Dawes’ argument? I’m open to the possibility that I’m missing something.
Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs., a 95% probability of your goals being 50% fulfilled. (Numbers not significant; only intended for illustrative purposes.)
“Risk avoidance” and “value” are not synonyms. I don’t know why you would say that. I suspect one or both of us is seriously misunderstanding the other.
Re: point #2: I don’t have the time right now, but sometime over the next couple of days I should have some time and then I’ll gladly outline Dawes’ argument for you. (I’ll post a sibling comment.)
If I’m talking about a goal actually being 50% fulfilled, then it is.
Really?
I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don’t know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?
If I’m terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.
That would be very kind :) No need to hurry.
Dawes’ argument, as promised.
The context is: Dawes is explaining von Neumann and Morgenstern’s axioms.
Aside: I don’t know how familiar you are with the VNM utility theorem, but just in case, here’s a brief primer.
The VNM utility theorem presents a set of axioms, and then says that if an agent’s preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as “the expected value of x”.) That is to say, the agent’s preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility).
In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent’s preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)
(Dawes presents the axioms in terms alternatives or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)
N.B.: “Alternatives” in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternatives X and Y, where in X, with p = 0.3 you get consequence A and with p = 0.7 you get consequence B, and in Y, with p = 0.4 you get consequence A and with p = 0.6 you get consequence B.) Alternatives, by the way, can also be thought of as actions; if you take action X, the probability distribution over the outcomes is so-and-so; but if you take action Y, the probability distribution over the outcomes is different.
(If all of this is old hat to you, apologies; I didn’t want to assume.)
The question is: do our preferences satisfy VNM? And: should our preferences satisfy VNM?
It is commonly said (although this is in no way entailed by the theorem!) that if your preferences don’t adhere to the axioms, then they are irrational. Dawes examines each axiom, with an eye toward determining whether it’s mandatory for a rational agent to satisfy that axiom.
Dawes presents seven axioms (which, as I understand it, are equivalent to the set of four listed in the wikipedia article, just with a difference in emphasis), of which the fifth is Independence.
The independence axiom says that A ≥ B (i.e., A is preferred to B) if and only if ApC ≥ BpC. In other words, if you prefer receiving cake to receiving pie, you also prefer receiving (cake with probability p and death with probability 1–p) to receiving (pie with probability p and death with probability 1–p).
Dawes examines one possible justification for violating this axiom — framing effects, or pseudocertainty — and concludes that it is irrational. (Framing is the usual explanation given for why the expressed or revealed preferences of actual humans often violate the independence axiom.) Dawes then suggests another possibility:
[5] This is Dawes’ footnote; it talks about an objection to “Reaganomics” on similar grounds.
Essentially, Dawes is asking us to imagine two possible actions. Both have the same expected utility; that is, the “degree of goal satisfaction” which will result from each action, averaged appropriately across all possible outcomes of that action (weighted by probability of each outcome), is exactly equal.
But the actual probability distribution over outcomes (the form of the distrbution) is different. If you do action A, then you’re quite likely to do alright, there’s a reasonable chance of doing pretty well, and a small chance of doing really great. If you do action B, then you’re quite likely to do pretty well, there’s a reasonable chance to do ok, and a small chance of doing disastrously, ruinously badly. On average, you’ll do equally well either way.
The Independence axiom dictates that we have no preference between those two actions. To prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom. Is this irrational? Dawes says no. I agree with him. Why shouldn’t I prefer to avoid the chance of disaster and ruin? Consider what happens when the choice is repeated, over the course of a lifetime. Should I really not care whether I occasionally suffer horrible tragedy or not, as long as it all averages out?
But if it’s really a preference — if I’m not totally indifferent — then I should also prefer less “risky” (i.e. less negatively skewed) distributions even when the expectation is lower than that of distributions with more risk (i.e. more negative skew) — so long as the difference in expectation is not too large, of course. And indeed we see such a preference not only expressed and revealed in actual humans, but enshrined in our society: it’s called insurance. Purchasing insurance is an expression of exactly the preference to reduce the negative skew in the probability distribution over outcomes (and thus in the distributions of outcomes over your lifetime), at the cost of a lower expectation.
This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function. How do you show that there’s an actual violation of the independence axiom from this example? Note that the axioms require that there exist a utility function
u :: outcome -> real
such that you maximise expected utility, not that some particular function (such as the two graphs you’ve drawn) actually represents your utility.In other words, you haven’t really shown that “to prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom” since the two distributions don’t have the form ApC, BpC with A≅B. Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
Assuming you privilege some reference point as your x-axis origin, sure. But there’s no good reason to do that. It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person. This is clearly irrational; this kind of “regular risk aversion” is what Dawes refers to when he talks about independence axiom violation due to framing effects, or “pseudocertainty”.
The graphs are not graphs of utility functions. See the first paragraph of my post here.
Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative; i.e, alternatives may be constructed as probability mixtures of other alternatives, which may themselves be… etc. If it’s the apparent continuity of the graphs that bothers you, then you have but to zoom in on the image, and you may pretend that the pixelation you see represents discreteness of the distribution. The point stands unchanged.
The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of “personal value”) will lead to this sort of preference between two such distributions. In fact, it is difficult to imagine any utility function that both corresponds to a person’s preferences and doesn’t lead to preferences for less negatively skewed probably distributions over outcomes, over more negatively skewed ones.
Couldn’t this still be rational in general if the fact that a particular reference point is presented provides information under normal circumstances (though perhaps not rational in a laboratory setting)?
I think you’ll have to give an example of such a scenario before I could comment on whether it’s plausible.
What? This has nothing to do with “privileged reference points”. If I am [VNM-]rational, with utility function
U
, and you consider an alternative function$ = exp(U)
(or an affine transformation thereof), I will appear to be risk averse with respect to$
. This doesn’t mean I am irrational, it means you don’t have the correct utility function. And in this case, you can turn the wrong utility function into the right one by takinglog($)
.That is what I mean by “regular risk aversion”.
I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.

Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?
And I say that is assuming the conclusion. And, if it is only established for some set of utility functions that “more or less track an intuitive notion of ‘personal value’”, it fails to imply the conclusion that the independence axiom is violated for a rational human.
It actually doesn’t matter what the values are, because we know from prospect theory that people’s preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can’t have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.
True enough. I rounded your objection to the nearest misunderstanding, I think.
Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!
The core of Dawes’ argument is not a mathematical one, to be sure (and it would be difficult to make it into a mathematical argument, without some sort of rigorous account of what sorts of outcome distribution shapes humans prefer, which in turn would presumably require substantial field data, at the very least). It’s an argument from intuition: Dawes is saying, “Look, I prefer this sort of distribution of outcomes. [Implied: ‘And so do other people.’] However, such a preference is irrational, according to the VNM axioms...” Your objection seems to be: “No, in fact, you have no such preference. You only think you do, because you are envisioning your utility function incorrectly.” Is that a fair characterization?
Your talk of the utility function possibly being wrong makes me vaguely suspect a misunderstanding. It’s likely I’m just misunderstanding you, however, so if you already know this, I apologize, but just in case:
If you have some set of preferences, then (assuming your preferences satisfy the axioms), we can construct a utility function (up to positive affine transformation). But having constructed this function — which is the only function you could possibly construct from that set of preferences (up to positive affine transformation) — you are not then free to say “oh, well, maybe this is the wrong utility function; maybe the right function is something else”.
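In symbols (this is just the uniqueness clause of the VNM theorem): if $u$ represents your preferences over lotteries, then $u'$ represents the same preferences if and only if
$$u' = a\,u + b \quad \text{for some } a > 0,\; b \in \mathbb{R}.$$
In particular, a non-affine rescaling such as $\exp(u)$ is not the same utility function in different clothes; maximizing its expectation describes different preferences over lotteries.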
Of course you might instead be saying “well, we haven’t actually constructed any actual utility function from any actual set of preferences; we’re only imagining some vague, hypothetical utility function, and a vague hypothetical utility function certainly can be the wrong function”. Fair enough, if so. However, I once again invite you to exhibit a utility function — or even a preference ordering — which does not give rise to a preference for less-negatively-skewed distributions.
I’m afraid an answer to this part will have to wait until I have some free time to do some math.
Yes, framing effects are irrational, I agree. I’m saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).
That would be one way of describing my objection. The argument Dawes is making is simply not valid. He says “Suppose my utility function is X. Then my intuition says that I prefer certain distributions over X that have the same expected value. Therefore my utility function is not X, and in fact I have no utility function.” There are two complementary ways this argument may break:
If you take as a premise that the function X is actually your utility function (i.e. “assuming I have a utility function, let X be that function”) then you have no license to apply your intuition to derive preferences over various distributions over the values of X. Your intuition has no facilities for judging meaningless numbers that have only abstract mathematical reasoning tying them to your actual preferences. If you try to shoehorn the abstract constructed utility function X into your intuition by imagining that X represents “money” or “lives saved” or “amount of something nice”, you are making a logical error.
On the other hand, if you start by applying your intuition to something it understands (such as “money” or “amount of nice things”) you can certainly say “I am risk averse with respect to X”, but you have not shown that X is your utility function, so there’s no license to conclude “I (it is rational for me to) violate the VNM axioms”.
No, but that doesn’t mean such a thing does not exist!
Well, now, hold on. Dawes is not actually saying that (and neither am I)! The claim is not “risk aversion demonstrates that there’s a framing effect going on (which is clearly irrational, and not just in the ‘violates VNM axioms’ sense)”. The point is that risk aversion (at least, risk aversion construed as “preferring less negatively skewed distributions”) constitutes departure from the VNM axioms. The independence axiom strictly precludes such risk aversion.
Whether risk aversion is actually irrational upon consideration — rather than merely irrational by technical definition, i.e. irrational by virtue of VNM axiom violation — is what Dawes is questioning.
That is not a good way to characterize Dawes’ argument.
I don’t know if you’ve read Rational Choice in an Uncertain World. Earlier in the same chapter, Dawes, introducing von Neumann and Morgenstern’s work, comments that utilities are intended to represent personal values. This makes sense, as utilities by definition have to track personal values, at least insofar as something with more utility is going to be preferred (by a VNM-satisfying agent) to something with less utility. Given that our notion of personal value is so vague, there’s little else we can expect from a measure that purports to represent personal value (it’s not like we’ve got some intuitive notion of what mathematical operations are appropriate to perform on estimates of personal value, which utilities then might or might not satisfy...). So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.
So the only real assumption behind those graphs is that this agent’s utility function tracks, in some vague sense, an intuitive notion of personal value — meaning what? Nothing more than that this person places greater value on things he prefers, than on things he doesn’t prefer (relatively speaking). And that (by definition!) will be true of the utility function derived from his preferences.
It seems impossible that we can have a utility function that doesn’t give rise to such preferences over distributions. Whatever your utility function is, we can construct a pair of graphs exactly like the ones pictured (the x-axis is not numerically labeled, after all). But such a preference constitutes independence axiom violation, as mentioned...
No, it doesn’t. Not unless it’s literally risk aversion with respect to utility.
That seems to me a completely unfounded assumption.
The fact that the x-axis is not labeled is exactly why it’s unreasonable to think that just asking your intuition which graph “looks better” is a good way of determining whether you have an actual preference between the graphs. The shape of the graph is meaningless.
Thanks very much for taking the time to explain this.
It seems like the argument (very crudely) is that, “if I lose this game, that’s it, I won’t get a chance to play again, which makes this game a bad option.” If so, again, I wonder if our measure of utility has been properly calibrated.
It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.
Nonetheless, those exponential distributions make a very interesting argument.
I’m not entirely sure, I need to mull it over a bit more.
Thanks again, I appreciate it.
Just a brief comment: the argument is not predicated on being “kicked out” of the game. We’re not assuming that even the lowest-utility outcomes cause you to no longer be able to continue “playing”. We’re merely saying that they are significantly worse than average.
Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.
One generalization might be something like, “losing makes it harder to continue playing competitively.” But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I’ll continue to ponder.
The problem feels related to Pascal’s wager—how to deal with the low-probability disaster.
I really do want to emphasize that if you assume that “losing” (i.e. encountering an outcome with a utility value on the low end of the scale) has some additional effects, whether that be “losing takes you out of the game”, or “losing makes it harder to keep playing”, or whatever, then you are modifying the scenario, in a critical way. You are, in effect, stipulating that that outcome actually has a lower utility than it’s stated to have.
I want to urge you to take those graphs literally, with the x-axis being Utility, not money, or “utility but without taking into account secondary effects”, or anything like that. Whatever the actual utility of an outcome is, after everything is accounted for — that’s what determines that outcome’s position on the graph’s x-axis. (Edit: And it’s crucial that the expectation of the two distributions is the same. If you find yourself concluding that the expectations are actually different, then you are misinterpreting the graphs, and should re-examine your assumptions; or else suitably modify the graphs to match your assumptions, such that the expectations are the same, and then re-evaluate.)
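If it helps to have a concrete pair of distributions in mind (mine, not Dawes’; the graphs in the book are not numerically labeled), here are two over utility itself with identical expectation and opposite skew:

def moments(utils, probs):
    mu = sum(u * p for u, p in zip(utils, probs))
    var = sum(p * (u - mu) ** 2 for u, p in zip(utils, probs))
    skew = sum(p * (u - mu) ** 3 for u, p in zip(utils, probs)) / var ** 1.5
    return mu, skew

probs   = [0.5, 0.25, 0.125, 0.0625, 0.0625]
utils_A = [0.0, 1.0, 2.0, 3.0, 4.0]          # bounded below, long tail of good outcomes
mu_A, _ = moments(utils_A, probs)
utils_B = [2 * mu_A - u for u in utils_A]    # mirror image: long tail of bad outcomes

print(moments(utils_A, probs))   # same mean, positive skew
print(moments(utils_B, probs))   # same mean, negative skew
# A VNM expected-utility maximizer must be indifferent between the two; the question
# in this thread is whether a reasonable person nonetheless prefers A.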
This is not a Pascal’s Wager argument. The low-utility outcomes aren’t assumed to be “infinitely” bad, or somehow massively, disproportionately, unrealistically bad; they’re just… bad. (I don’t want to get into the realm of offering up examples of bad things, because people’s lives are different and personal value scales are not absolute, but I hope that I’ve been able to clarify things at least a bit.)
Thanks, that focuses the argument for me a bit.
So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn’t been correctly drawn. If B is worse than A, how can their average payoffs be the same?
To put it the other way around, maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? What if A had a slightly larger decay constant, at what point would A cease to be better?
I’m not saying I’m sure Dawes’ argument is wrong, I just have no intuition at the moment for how it could be right.
A point of terminology: “utility function” usually refers to a function that maps things (in our case, outcomes) to utilities. (Some dimension, or else some set, of things on the x-axis; utility on the y-axis.) Here, we instead are mapping utility to frequency, or more precisely, outcomes (arranged — ranked and grouped — along the x-axis by their utility) to the frequency (or, equivalently, probability) of the outcomes’ occurrence. (Utility on the x-axis, frequency on the y-axis.) The term for this sort of graph is “distribution” (or more fully, “frequency [or probability] distribution over utility of outcomes”).
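In symbols, the distinction is between
$$u : \text{Outcomes} \to \mathbb{R} \qquad \text{and} \qquad P : \mathbb{R} \to [0,1],\quad P(t) = \Pr\big[\,u(\text{outcome}) = t\,\big];$$
the graphs under discussion plot the second object, with the first only implicit in how the outcomes were arranged along the x-axis.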
To the rest of your comment, I’m afraid I will have to postpone my full reply; but off the top of my head, I suspect the conceptual mismatch here stems from saying that the curves are meant to “quantify betterness”. It seems to me (again, from only brief consideration) that this is a confused notion. I think your best bet would be to try taking the curves as literally as possible, attempting no reformulation on any basis of what you think they are “supposed” to say, and proceed from there.
I will reply more fully when I have time.