It’s more than a metaphor; a utility function is the structure any consistent preference ordering that respects probability must have. It may or may not be a useful conceptual tool for practical human ethical reasoning, but “just a metaphor” is too strong a judgment.
a utility function is the structure any consistent preference ordering that respects probability must have.
This is the sort of thing I mean when I say that people take utility functions too seriously. I think the von Neumann-Morgenstern theorem is much weaker than it initially appears. It’s full of hidden assumptions that are constantly violated in practice, e.g. that an agent can know probabilities to arbitrary precision, can know utilities to arbitrary precision, can compute utilities in time to make decisions, makes a single plan at the beginning of time about how they’ll behave for eternity (or else you need to take into account factors like how the agent should behave in order to acquire more information in the future and that just isn’t modeled by the setup of vNM at all), etc.
The biggest problematic unstated assumption behind applying VNM-rationality to humans, I think, is the assumption that we’re actually trying to maximize something.
To elaborate, the VNM theorem defines preferences by the axiom of completeness, which states that for any two lotteries A and B, one of the following holds: A is preferred to B, B is preferred to A, or one is indifferent between them.
So basically, a “preference” as defined by the axioms is a function that (given the state of the agent and the state of the world in general) outputs an agent’s decision between two or more choices. Now suppose that the agent’s preferences violate the Von Neumann-Morgenstern axioms, so that in one situation it prefers to make a deal that causes it to end up with an apple rather than an orange, and in another situation it prefers to make a deal that causes it to end up with an orange rather than an apple. Is that an argument against having circular preferences?
By itself, it’s not. It simply establishes that the function that outputs the agent’s actions behaves differently in different situations. Now the normal way to establish that this is bad is to assume that all choices are between monetary payouts, and that an agent with inconsistent preferences can be Dutch Booked and made to lose money. An alternative way, which doesn’t require us to assume that all the choices are between monetary payouts, is to construct a series of trades between resources that leaves us with less resources than when we started.
Stated that way, this sounds kinda bad. But then there are things that kind of fit that description, but which we would intuitively think of as good. For instance, some time back I asked:
Suppose someone has a preference to have sex each evening, and is in a relationship with someone with a similar level of sexual desire. So each evening they get into bed, undress, make love, get dressed again, get out of bed. Repeat the next evening.
How is this different from having exploitable circular preferences? After all, the people involved clearly have cycles in their preferences—first they prefer getting undressed to not having sex, after which they prefer getting dressed to having (more) sex. And they’re “clearly” being the victims of a Dutch Book, too—they keep repeating this set of trades every evening, and losing lots of time because of that.
In response, I was told that
The circular preferences that go against the axioms of utility theory, and which are Dutch book exploitable, are not of the kind “I prefer A to B at time t1 and B to A at time t2”, like the ones of your example. They are more like “I prefer A to B and B to C and C to A, all at the same time”.
The couple, if they had to pay a third party a cent to get undressed and then a cent to get dressed, would probably do it and consider it worth it—they end up two cents short but having had an enjoyable experience. Nothing irrational about that. To someone with the other “bad” kind of circular preferences, we can offer a sequence of trades (first A for B and a cent, then C for A and a cent, then B for C and a cent) after which they end up three cents short but otherwise exactly as they started (they didn’t actually obtain enjoyable experiences, they made all the trades before anything happened). It is difficult to consider this rational.
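The three-cent pump described above can be sketched as a toy simulation (the goods, prices, and the `prefers` table are all hypothetical, purely for illustration):

```python
# Toy money-pump: an agent holds the simultaneous cyclic preferences
# "B over A, C over B, A over C", and so will pay one cent to trade
# each good for the one it "prefers". After a full cycle it holds
# exactly what it started with, minus the cents it paid.

# prefers[x] is the good the agent will pay one cent to swap x for.
prefers = {"A": "B", "B": "C", "C": "A"}

def run_pump(start_good, cents, n_trades):
    """Run n_trades one-cent trades and return (final good, remaining cents)."""
    good = start_good
    for _ in range(n_trades):
        good = prefers[good]  # agent "upgrades" to the preferred good...
        cents -= 1            # ...paying a cent each time
    return good, cents

print(run_pump("A", 10, 3))  # ('A', 7): back where it started, three cents poorer
```

The key feature, as the quoted reply notes, is that nothing else happens between the trades: unlike the couple, the agent buys no experience along the way, only a round trip back to its starting point.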
But then I asked: if we accept this, what real-life situation does count as an actual circular preference in the VNM sense, given that just about every potential circularity I can think of is of the kind “I prefer A to B at time t1 and B to A at time t2”? And I didn’t get very satisfactory replies.
Intuitively, there are a lot of real-life situations that feel kind of like losing out due to inconsistent preferences, like someone who wants to get into a relationship when he’s single and then wants to be single when he gets into a relationship, but there our actual problem is that the person spends a lot of time being unhappy, rather than with the fact that he makes different choices in different situations. Whereas with the couple, we think that’s fine because they get enjoyment from the “trades”.
The general problem that I’m trying to get at is that in order to hold up VNM rationality as a normative standard, we would need to have a meta-preference: a preference over preferences, stating that it would be better to have preferences that lead to some particular outcomes. The standard Dutch Book example kind of smuggles in that assumption by the way that it talks about money, and thus makes us think that we are in a situation where we are only trying to maximize money and care about nothing else. And if you really are trying to only maximize a single concrete variable or resource and care about nothing else, then you really should try to make sure that your choices follow the VNM axioms. If you run a betting office, then do make sure that nobody can Dutch Book you.
But we don’t have such a clear normative standard for life in general. It would be reasonable to try to construct an argument for why the couple having sex were rational but the person who kept vacillating about being in a relationship was irrational, by suggesting that the couple got happiness whereas the other person was unhappy… but we also care about other things than just happiness (or pleasure), and thus aren’t optimizing only for pleasure. And unless you’re a hedonistic utilitarian, you’re unlikely to say that we should optimize only for pleasure either.
So basically, if you want to say that people should be VNM-rational, then you need to have some specific set of values or goals that you think people should strive towards. If you don’t have that, then VNM-rationality is basically irrelevant aside from the small set of special cases where people really do have a clear explicit goal that’s valued above other things.
Now suppose that the agent’s preferences violate the Von Neumann-Morgenstern axioms, so that in one situation it prefers to make a deal that causes it to end up with an apple rather than an orange, and in another situation it prefers to make a deal that causes it to end up with an orange rather than an apple. Is that an argument against having circular preferences?
I’m not sure I follow in what sense this is a violation of the vNM axioms. A vNM agent has preferences over world-histories; in general one can’t isolate the effect of having an apple vs. having an orange without looking at how that affects the entire future history of the world.
Right, I was trying to say “it prefers an apple to an orange and an orange to an apple in such a way that does violate the axioms”. But I was unsure of what example to actually give of that, since I’m unsure of what real-life situations really would violate the axioms.
The example that comes to mind to show how the sex thing isn’t a problem is that of a robot car with a goal to drive as many miles as possible. Every day it will burn through all its fuel and fuel up. Right after it fuels up, it will have no desire for further fuel—more fuel simply does not help it go further at this point, and forcing more in can be detrimental. Clearly not contradictory.
You could have a similar situation with a couple wanting sex iff they haven’t had sex in a day, or with someone wanting an orange if they’ve just eaten an apple but an apple if they’ve just eaten an orange.
To strictly show that something violates vNM axioms, you’d have to show that this behavior (in context) can’t be fulfilling any preferences better than other options that the agent is aware of—or at least be able to argue that the revealed utility function is contrived and unlikely to hold up in other situations (not what the agent “really wants”).
Constantly wanting what one doesn’t have can have this defect. If I keep paying you to switch my apple for your orange and back (without actually eating either), then you have a decent case, if you’re pretty confident I’m not actually fulfilling my desire to troll you ;)
The “wants a relationship when single” and “wants to be single when not” thing does look like such a violation to me. If you let him flip-flop as often as he desires, he’s not going to end up happily endorsing his past actions. If you offered him a pill that would prevent him from flip-flopping, he very well may take it. So there’s a contradiction there.
To bring human-specific psychology into it, it’s not that his inherent desires are contradictory, but that he wants something like “freedom”, which he doesn’t know how to get in a relationship, and something like “intimacy”, which he doesn’t know how to get while single. It’s not that he wants intimacy when single and freedom when not; it’s that he wants both always, but the unfulfilled need is the salient one.
Picture me standing on your left foot. “Oww! Get off my left foot!”. Then I switch to the right “Ahh! Get off my right foot!”. If you’re not very quick and/or the pain is overwhelming, it might take you a few iterations to realize the situation you’re in and to put the pain aside while you think of a way to get me off both feet (intimacy when single/freedom in a relationship). Or if you can’t have that, it’s another challenge to figure out what you want to do about it.
I wouldn’t model you as “just VNM-irrational”, even if your external behaviors are ineffective for everything you might want. I’d model you as “not knowing how to be VNM-rational in presence of strong pain(s)”, and would expect you to start behaving more effectively when shown how.
(and that is what I find, although showing someone how to be more rational is not trivial and “here’s a proof of the inconsistency of your actions now pick a side and stop feeling the desire for the other side” is almost never sufficient. You have to be able to model the specific way that they’re stuck and meet them there)
tl;dr: We’re not VNM-rational because we don’t know how to be, not because it’s not something we’re trying to do.
How do you distinguish his preferences being irrationally inconsistent (he is worse off from entering and leaving relationships repeatedly) from him truly wanting to be in relationships periodically (like how it’s rational to alternate between sleeping and waking rather than always doing one or the other)?
If there’s a pill that can make him stop switching (but doesn’t change his preferences), one of two things will happen: either he’ll never be in a relationship (prevented from entering), or he’ll stay in his current relationship forever (prevented from leaving). I wouldn’t be surprised if he dislikes both of the outcomes and decides not to take the pill.
The pill could instead change his preferences so that he no longer wants to flip-flop, but this argument seems too general—why not just give him a pill that makes him like everything much more than he does now? If my behavior is irrational, I should be able to make myself better off simply by changing my behavior, without having to modify my preferences.
How do you distinguish his preferences being irrationally inconsistent [...] from him truly wanting to be in relationships periodically[...]?
By talking to him. If it’s the latter, he’ll be able to say he prefers flip flopping like it’s just a matter of fact and if you probe into why he likes flip flopping, he’ll either have an answer that makes sense or he’ll talk about it in a way that shows that he is comfortable with not knowing. If it’s the former, he’ll probably say that he doesn’t like flip flopping, and if he doesn’t, it’ll leak signs of bullshit. It’ll come off like he’s trying to convince you of something because he is. And if you probe his answers for inconsistencies he’ll get hostile because he doesn’t want you to.
I’m not sure where you’re going with the “magic pill” hypotheticals, but I agree. The only thing I can think to add is that a lot of times the “winning behaviors” are largely mental and aren’t really available until you understand the situation better.
For example, if you break your foot and can’t get it x-rayed for a day, the right answer might be to just get some writing done—but if you try to force that behavior while you’re suffering, it’s not gonna go well. You have to actually be able to dismiss the pain signal before you have a mental space to write in.
I’m not sure where you’re going with the “magic pill” hypotheticals, but I agree.
I meant that if someone is behaving irrationally, forcing them to stop that behavior should make them better off. But it seems unlikely to me that forcing him to stay in his current relationship forever, or preventing him from ever entering a relationship (these are the two ways he can be stopped from flip-flopping) actually benefit him.
Forcing anyone to stay in their current relationship forever or forever preventing them from entering a relationship would be quite bad. In order to help him, he’d have to be doing worse than that.
The way to help him would be a bit trickier than that: let him have “good” relationships but not bad. Let him leave “bad” relationships but not good. And then control his mental behaviors so that he’s not allowed to spend time being miserable about his lack of options… (it’s hard to force rationality)
Controlling his mental behaviors would either be changing his preferences or giving him another option. For judging whether he is behaving irrationally, shouldn’t his preferences and set of choices be held fixed?
Relevant question: what does the cognitive science literature on choice-making, preference, and valuation have to say about all this? What mathematical structure actually does model human preferences?
Given that we run on top of neural networks and seem to use some Bayesian algorithms for certain forms of learning (citations available), I currently expect that our choice-making mechanisms might involve conditioning on features or states of our environment at some fundamental level.
My first guess would be that evolution has selected us for circular preferences that our genes money-pump so that we will propagate them. You can’t get off this ride while you’re human.
:-) I mean that if you embody human value, you’ll probably be a money-pumpable entity. Very few humans actually achieve an end to desire while still alive and mentally active.
I’ll take the challenge, then. I was already walking around thinking that the Four Noble Truths of the Buddha are a bunch of depressing bullshit that need to be fixed.
I’ve seen a bunch of different theories backed with varying amounts of experimental data—for instance, this, this and this—but I haven’t looked at them enough to tell which ones seem most correct.
That said, I still don’t remember running into any thorough discussion of what human preferences are, other than just “something that makes us make some choice in some situations”. I mention here that
some of our preferences are implicit in our automatic habits (the things that we show we value with our daily routines), some in the preprocessing of sensory data that our brains carry out (the things and ideas that are “painted with” positive associations or feelings), and some in the configuration of our executive processes (the actions we actually end up doing in response to novel or conflicting situations).
And I’m a little skeptical of any theory of human preferences that doesn’t attempt to make any such breakdown and only takes a “black box” approach of looking at the outputs of our choice mechanism.
I think your original post would have been better if it included any arguments against utility functions, such as those you mention under “e.g.” here.
Besides being a more meaningful post, we would also be able to discuss your comments. For example, without more detail, I can’t tell whether your last comment is addressed sufficiently by the standard equivalence of normal-form and extensive-form games.
Essentially every post would have been better if it had included some additional thing. Based on various recent comments I was under the impression that people want more posts in Discussion so I’ve been experimenting with that, and I’m keeping the burden of quality deliberately low so that I’ll post at all.
I appreciate you writing this way—speaking for myself, I’m perfectly happy with a short opening claim and then the subtleties and evidence emerges in the following comments. A dialogue can be a better way to illuminate a topic than a long comprehensive essay.
Let me rephrase: would you like to describe your arguments against utility functions in more detail?
For example, as I mentioned, there’s an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive-form to normal-form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the setup of vNM.
The standard response to the discussion of knowing probabilities exactly, and to concerns about computational complexity, is (in essence) that we may want to set aside epistemic concerns and simply learn what we can from a theory that is not troubled by them (à la ignoring air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges? And if so, is the objection to the normal-form assumption essentially the same?
For example, as I mentioned, there’s an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive-form to normal-form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the setup of vNM.
Can you give more details here? I’m not familiar with extensive-form vs. normal-form games.
The standard response to the discussion of knowing probabilities exactly, and to concerns about computational complexity, is (in essence) that we may want to set aside epistemic concerns and simply learn what we can from a theory that is not troubled by them (à la ignoring air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges?
Something like that. It seems like the computational concerns are extremely important: after all, a theory of morality should ultimately output actions, and to output actions in the context of a utility function-based model you need to be able to actually calculate probabilities and utilities.
Sure. Say you have to make some decision now, and you will be asked to make a decision later about something else. Your decision later may depend on your decision now as well as part of the world that you don’t control, and you may learn new information from the world in the meantime. Then the usual way of rolling all of that up into a single decision now is that you make your current decision as well as a decision about how you would act in the future for all possible changes in the world and possible information gained.
This is vaguely analogous to how you can curry a function of multiple arguments. Taking one argument X and returning (a function of one argument Y that returns Z) is equivalent to taking two arguments X and Y and returning Z.
There’s potentially a huge computational complexity blowup here, which is why I stressed mathematical equivalence in my posts.
Then the usual way of rolling all of that up into a single decision now is that you make your current decision as well as a decision about how you would act in the future for all possible changes in the world and possible information gained.
It’s full of hidden assumptions that are constantly violated in practice, e.g. that an agent can know probabilities to arbitrary precision, can know utilities to arbitrary precision, can compute utilities in time to make decisions, makes a single plan at the beginning of time about how they’ll behave for eternity (or else you need to take into account factors like how the agent should behave in order to acquire more information in the future and that just isn’t modeled by the setup of vNM at all), etc.
Those are not assumptions of the von Neumann-Morgenstern theorem, nor of the concept of utility functions itself. Those are assumptions of an intelligent agent implemented by measuring its potential actions against an explicitly constructed representation of its utility function.
I get the impression that you’re conflating the mathematical structure that is a utility function on the one hand, and representations thereof as a technique for ethical reasoning on the other hand. The former can be valid even if the latter is misleading.
the mathematical structure that is a utility function
Can you describe this “mathematical structure” in terms of mathematics? In particular, the argument(s) to this function, what do they look like mathematically?
Certainly, though I should note that there is no original work in the following; I’m just rephrasing standard stuff. I particularly like Eliezer’s explanation about it.
Assume that there is a set of things-that-could-happen, “outcomes”, say “you win $10” and “you win $100”. Assume that you have a preference over those outcomes; say, you prefer winning $100 over winning $10. What’s more, assume that you have a preference over probability distributions over outcomes: say, you prefer a 90% chance of winning $100 and a 10% chance of winning $10 over an 80% chance of winning $100 and a 20% chance of winning $10, which in turn you prefer over 70%/30% chances, etc.
A utility function is a function f from outcomes to the real numbers; for an outcome O, f(O) is called the utility of O. A utility function induces a preference ordering in which probability-distribution-over-outcomes A is preferred over B if and only if the sum of the utilities of the outcomes in A, scaled by their respective probabilities, is larger than the same for B.
Now assume that you have a preference ordering over probability distributions over outcomes that is “consistent”, that is, such that it satisfies a collection of axioms that we generally like reasonable such orderings to have, such as transitivity (details here). Then the von Neumann-Morgenstern theorem says that there exists a utility function f such that the induced preference ordering of f equals your preference ordering.
Thus, if some agent has a set of preferences that is consistent—which, basically, means the preferences scale with probability in the way one would expect—we know that those preferences must be induced by some utility function. And that is a strong claim, because a priori, preference orderings over probability distributions over outcomes have a great many more degrees of freedom than utility functions do. The fact that a given preference ordering is induced by a utility function disallows a great many possible forms that ordering might have, allowing you to infer particular preferences from other preferences in a way that would not be possible with preference orderings in general. (Compare this LW article for another example of the degrees-of-freedom thing.) This is the mathematical structure I referred to above.
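A minimal sketch of that induced ordering (the outcome utilities below are made-up numbers, not anything implied by the theorem itself):

```python
# A utility function assigns a real number to each outcome; the numbers
# here are hypothetical, chosen only to illustrate the induced ordering.
utility = {"win $100": 1.0, "win $10": 0.2}

def expected_utility(lottery):
    """Probability-weighted sum of outcome utilities for a lottery."""
    return sum(p * utility[outcome] for outcome, p in lottery.items())

A = {"win $100": 0.9, "win $10": 0.1}  # the 90%/10% lottery
B = {"win $100": 0.8, "win $10": 0.2}  # the 80%/20% lottery

# The induced ordering: A is preferred to B iff A's expected utility is higher.
print(expected_utility(A) > expected_utility(B))  # True
```

The degrees-of-freedom point is visible here: two numbers (the outcome utilities) fix the agent's preference between every possible pair of lotteries over these outcomes.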
So, keeping in mind that the issue is separating the pure mathematical structure from the messy world of humans, tell me what outcomes are, mathematically. What properties do they have? Where can we find them outside of the argument list to the utility function?
“statement x is not currently the case and is probably unfeasible” does in fact mean we shouldn’t try to act on it. Maybe we can try to act to make statement x true, but we shouldn’t act as if it already is. For a more concrete example, imagine this: “I’ve never done a backflip. It’s not even clear I can do one”. We know backflips are possible, and with training you’re probably going to be able to do one. But at the time you’re making that statement, saying “doesn’t mean you shouldn’t try” is TERRIBLE advice that could get you a broken neck.
Firstly, that’s kind of an uncharitable reading. If I said “I’m going to try and pass an exam” you’d naturally understand me as planning to do the requisite work first. “Backflip” just pattern-matches to ‘the sort of thing silly people try to do without training’.
However, that said, I’m being disingenuous. What I really truly meant at the time I typed that was moral-should, not practical-should, which come apart if one isn’t a perfect consequentialist. Which I ain’t, which is at least partly the point.
It may well do. Yvain has pointed out on his blog (I recall the post, though I couldn’t find it just now) that in daily life we do actually use something like utilitarianism quite a bit, which carries a presumption of something like a utility function at least in that case. But what works in normal ranges does not necessarily extrapolate: utilitarianism is observably brittle, and routinely reaches conclusions that humans consider absurd.
There’s occasionally LW posts showing that utilitarianism gives some apparently-absurd result or other, and too often the poster seems to be saying “look, absurd result, but the numbers work out so this is important!” rather than “oh, I hit an absurdity, perhaps I’m stretching this way further than it goes.” It’s entirely unclear to me that pretending you’re an agent with a utility function is actually a good idea; it seems to me to be setting yourself up to fall into absurdities.
Below, you claim this is a moral choice; I would suggest that trying to achieve an actually impossible moral code, let alone advocating it, is basically unhealthy.
Firstly, I thought we were just appealing to consequentialism, not utilitarianism?
So I think I agree with you that believing you have a utility function if you in fact don’t might suck, and that baseline humans in fact don’t. I was trying to distinguish that from:
a) believing one ought to have a utility function, in which case I might seek to self-modify appropriately if it became possible; so something a bit stronger than the “pretending” you suggested. b) believing one should strive to act as if one did, while knowing that I’ll fall short because I don’t.
The second you addressed by saying
I would suggest that trying to achieve an actually impossible moral code, let alone advocating it, is basically unhealthy.
I have one group of intuitions here that claim impossibility in a moral code is a feature, not a bug, because it helps avoid deluding youself that you’ve finished the job and are now perfect; and why would I expect the right action to be healthy anyway?
But this seems like a line of thinking that is specific to coping with being an inconsistent human, in the absence of an engineering fix for that.
...too often the poster seems to be saying “look, absurd result, but the numbers work out so this is important!” rather than “oh, I hit an absurdity, perhaps I’m stretching this way further than it goes.”
Yes, I don’t understand this at all. For example, even Yudkowsky writes that he would sooner question his grasp of “rationality” than give five dollars to a Pascal’s Mugger because he thought it was “rational”. Now as far as I can tell, they still use this framework to make decisions, a framework that implies absurd decisions, rather than concentrating on examining the framework itself, and looking for better alternatives.
What I am having problems with is that they seem to teach people to “shut up and multiply”, and approximate EU maximization, yet arbitrarily ignore low probabilities. I say “arbitrarily” because nobody ever told me at what point it is rational to step out of this framework and ignore a calculation.
You could argue that our current grasp of rationality is less wrong. But why then worry about something like Dutch booking, when any stranger can make you give them all your money simply by conjecturing vast utilities if you don’t? Seems more wrong to me.
Lots of frameworks imply different absurd decisions (especially when viewed from other frameworks) but it’s hard to go about your life without using some sort of framework.
If rationality is on average less wrong but you think your intuition is better in a certain scenario, a mixed strategy makes sense.
If rationality is on average less wrong but you think your intuition is better in a certain scenario, a mixed strategy makes sense.
No, it means your intuition is better than your rationality, and you should fix that. If your rational model is not as good as your intuition at making decisions, then it is flawed and you need to move on.
Let’s say I have 300 situations where I recorded my decision-making process. I tried to use rationality to make the right decision in all of them, and kept track of whether I regretted the outcome. In 100 of these situations, my intuitions disagreed with my rational model, and I followed my rational model. If I only regret the outcome in 1 of these 100 situations, in what way does it make sense to throw out my model? You can RATIONALLY decide that certain situations are not amenable to your rational framework without deciding the framework is without value.
Let’s say we do 100 physics experiments, and 99% of the results agree with our model. Do we get to ignore / throw out that one “erroneous” result? No, that result if verified shows a flaw in our model.
If afterwards you regretted a choice and wish you had made a better one even with the information available to you at the time, then this realization should have you bolt upright in your chair. If verified, your decision-making process needs updating.
it’s still a pretty damn good model. Why can’t you get that point? Newtonian mechanics was still a very useful model and would’ve been ridiculous to replace with intuition just because it gave absurd answers in relativistic situations.
I never contradicted that point. Newtonian physics works quite fine in many situations. It is still wrong.
Edit: to expand on that point: when we use physics, we know that there are certain circumstances in which we use classical physics, because it is easier and faster and the results are good enough for the precision we need. Other times we use quantum physics or relativity. The decision of which model to use is itself part of the decision-making framework, and is what I’m talking about. If you choose to use the wrong framework and get incorrect results, then your metamodel of which framework to use needs to be updated.
I don’t think I have much to add to this discussion that you guys aren’t already going to have covered, except to note that Qiaochu definitely understands what a utility function is and all of the standard arguments for why they “should” exist, so his beliefs are not a function of not having heard these arguments (just noting this because this thread and some of the siblings seem to be trying to explain basic concepts to Qiaochu that I’m confident he already knows, and I’m hoping that pointing this out will speed up the discussion).
It’s more than a metaphor; a utility function is the structure any consistent preference ordering that respects probability must have. It may or may not be a useful conceptual tool for practical human ethical reasoning, but “just a metaphor” is too strong a judgment.
This is the sort of thing I mean when I say that people take utility functions too seriously. I think the von Neumann-Morgenstern theorem is much weaker than it initially appears. It’s full of hidden assumptions that are constantly violated in practice, e.g. that an agent can know probabilities to arbitrary precision, can know utilities to arbitrary precision, can compute utilities in time to make decisions, makes a single plan at the beginning of time about how they’ll behave for eternity (or else you need to take into account factors like how the agent should behave in order to acquire more information in the future and that just isn’t modeled by the setup of vNM at all), etc.
The biggest problematic unstated assumption behind applying VNM-rationality to humans, I think, is the assumption that we’re actually trying to maximize something.
To elaborate, the VNM theorem defines preferences by the axiom of completeness, which states that for any two lotteries A and B, one of the following holds: A is preferred to B, B is preferred to A, or one is indifferent between them.
So basically, a “preference” as defined by the axioms is a function that (given the state of the agent and the state of the world in general) outputs an agent’s decision between two or more choices. Now suppose that the agent’s preferences violate the Von Neumann-Morgenstern axioms, so that in one situation it prefers to make a deal that causes it to end up with an apple rather than an orange, and in another situation it prefers to make a deal that causes it to end up with an orange rather than an apple. Is that an argument against having circular preferences?
By itself, it’s not. It simply establishes that the function that outputs the agent’s actions behaves differently in different situations. Now the normal way to establish that this is bad is to assume that all choices are between monetary payouts, and that an agent with inconsistent preferences can be Dutch Booked and made to lose money. An alternative way, which doesn’t require us to assume that all the choices are between monetary payouts, is to construct a series of trades between resources that leaves us with fewer resources than when we started.
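The trade-cycle version can be sketched in a few lines; the fruits, fee, and amounts below are made up purely for illustration:

```python
# Minimal money-pump sketch: an agent with circular preferences
# (prefers orange to apple, pear to orange, apple to pear) will pay
# a small fee to trade "up" at every step, and after a full cycle
# holds the same fruit with strictly less money.

# prefers the value over the key, i.e. will trade key -> value for a fee
CIRCULAR_PREFS = {"apple": "orange", "orange": "pear", "pear": "apple"}

def run_pump(start_fruit, money, fee, rounds):
    fruit = start_fruit
    for _ in range(rounds):
        # The agent prefers the next fruit in the cycle, so it accepts
        # the trade and pays the fee each time.
        fruit = CIRCULAR_PREFS[fruit]
        money -= fee
    return fruit, money

fruit, money = run_pump("apple", money=10.0, fee=0.5, rounds=6)
# Two full cycles later the agent holds the same fruit but has paid
# 3.0 in fees: same resources, less money.
```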
Stated that way, this sounds kinda bad. But then there are things that kind of fit that description, but which we would intuitively think of as good. For instance, some time back I asked:
In response, I was told that
But then I asked: if we accept this, what real-life situation does count as an actual circular preference in the VNM sense, given that just about every potential circularity I can think of is of the kind “I prefer A to B at time t1 and B to A at time t2”? And I didn’t get very satisfactory replies.
Intuitively, there are a lot of real-life situations that feel kind of like losing out due to inconsistent preferences, like someone who wants to get into a relationship when he’s single and then wants to be single when he gets into a relationship, but there our actual problem is that the person spends a lot of time being unhappy, rather than with the fact that he makes different choices in different situations. Whereas with the couple, we think that’s fine because they get enjoyment from the “trades”.
The general problem that I’m trying to get at is that in order to hold up VNM rationality as a normative standard, we would need to have a meta-preference: a preference over preferences, stating that it would be better to have preferences that lead to some particular outcomes. The standard Dutch Book example kind of smuggles in that assumption by the way that it talks about money, and thus makes us think that we are in a situation where we are only trying to maximize money and care about nothing else. And if you really are trying to only maximize a single concrete variable or resource and care about nothing else, then you really should try to make sure that your choices follow the VNM axioms. If you run a betting office, then do make sure that nobody can Dutch Book you.
But we don’t have such a clear normative standard for life in general. It would be reasonable to try to construct an argument for why the couple having sex were rational but the person who kept vacillating about being in a relationship was irrational by suggesting that the couple got happiness whereas the other person was unhappy… but we also care about other things than just happiness (or pleasure) and thus aren’t optimizing just for pleasure either. And unless you’re a hedonistic utilitarian, you’re unlikely to say that we should optimize only for pleasure either.
So basically, if you want to say that people should be VNM-rational, then you need to have some specific set of values or goals that you think people should strive towards. If you don’t have that, then VNM-rationality is basically irrelevant aside for the small set of special cases where people really do have a clear explicit goal that’s valued above other things.
I’m not sure I follow in what sense this is a violation of the vNM axioms. A vNM agent has preferences over world-histories; in general one can’t isolate the effect of having an apple vs. having an orange without looking at how that affects the entire future history of the world.
Right, I was trying to say “it prefers an apple to an orange and an orange to an apple in such a way that does violate the axioms”. But I was unsure of what example to actually give of that, since I’m unsure of what real-life situations really would violate the axioms.
The example that comes to mind to show how the sex thing isn’t a problem is that of a robot car with a goal to drive as many miles as possible. Every day it will burn through all its fuel and fuel up. Right after it fuels up, it will have no desire for further fuel—more fuel simply does not help it go further at this point, and forcing it can be detrimental. Clearly not contradictory.
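In code, the car’s situation might look like the following (the tank capacity is an arbitrary illustrative number):

```python
# Sketch: the car's "flip-flopping" desire for fuel is consistent with
# one fixed objective (maximize miles driven). Its *choice* depends on
# state, but the underlying preference never changes.

TANK_CAPACITY = 50  # illustrative units

def wants_fuel(tank_level):
    # Extra fuel only serves the objective if the tank has room;
    # with a full tank, more fuel does nothing (or is detrimental).
    return tank_level < TANK_CAPACITY

assert wants_fuel(0)                  # just ran dry: wants fuel
assert not wants_fuel(TANK_CAPACITY)  # just fueled up: wants none
```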
You could have a similar situation with a couple wanting sex iff they haven’t had sex in a day, or wanting an orange if you’ve just eaten an apple but wanting an apple if you’ve just eaten an orange.
To strictly show that something violates vNM axioms, you’d have to show that this behavior (in context) can’t be fulfilling any preferences better than other options that the agent is aware of—or at least be able to argue that the revealed utility function is contrived and unlikely to hold up in other situations (not what the agent “really wants”).
Constantly wanting what one doesn’t have can have this defect. If I keep paying you to switch my apple for your orange and back (without actually eating either), then you have a decent case, if you’re pretty confident I’m not actually fulfilling my desire to troll you ;)
The “wants a relationship when single” and “wants to be single when not” thing does look like such a violation to me. If you let him flip flop as often as he desires, he’s not going to end up happily endorsing his past actions. If you offered him a pill that would prevent him from flip flopping, he very well may take it. So there’s a contradiction there.
To bring human-specific psychology into it, it’s not that his inherent desires are contradictory, but that he wants something like “freedom”, which he doesn’t know how to get in a relationship, and something like “intimacy”, which he doesn’t know how to get while single. It’s not that he wants intimacy when single and freedom when not, it’s that he wants both always, but the unfulfilled need is the salient one.
Picture me standing on your left foot. “Oww! Get off my left foot!”. Then I switch to the right “Ahh! Get off my right foot!”. If you’re not very quick and/or the pain is overwhelming, it might take you a few iterations to realize the situation you’re in and to put the pain aside while you think of a way to get me off both feet (intimacy when single/freedom in a relationship). Or if you can’t have that, it’s another challenge to figure out what you want to do about it.
I wouldn’t model you as “just VNM-irrational”, even if your external behaviors are ineffective for everything you might want. I’d model you as “not knowing how to be VNM-rational in presence of strong pain(s)”, and would expect you to start behaving more effectively when shown how.
(and that is what I find, although showing someone how to be more rational is not trivial and “here’s a proof of the inconsistency of your actions now pick a side and stop feeling the desire for the other side” is almost never sufficient. You have to be able to model the specific way that they’re stuck and meet them there)
tl;dr: We’re not VNM-rational because we don’t know how to be, not because it’s not something we’re trying to do.
How do you distinguish his preferences being irrationally inconsistent (he is worse off from entering and leaving relationships repeatedly) from him truly wanting to be in relationships periodically (like how it’s rational to alternate between sleeping and waking rather than always doing one or the other)?
If there’s a pill that can make him stop switching (but doesn’t change his preferences), one of two things will happen: either he’ll never be in a relationship (prevented from entering), or he’ll stay in his current relationship forever (prevented from leaving). I wouldn’t be surprised if he dislikes both of the outcomes and decides not to take the pill.
The pill could instead change his preferences so that he no longer wants to flip-flop, but this argument seems too general—why not just give him a pill that makes him like everything much more than he does now? If my behavior is irrational, I should be able to make myself better off simply by changing my behavior, without having to modify my preferences.
By talking to him. If it’s the latter, he’ll be able to say he prefers flip flopping like it’s just a matter of fact and if you probe into why he likes flip flopping, he’ll either have an answer that makes sense or he’ll talk about it in a way that shows that he is comfortable with not knowing. If it’s the former, he’ll probably say that he doesn’t like flip flopping, and if he doesn’t, it’ll leak signs of bullshit. It’ll come off like he’s trying to convince you of something because he is. And if you probe his answers for inconsistencies he’ll get hostile because he doesn’t want you to.
I’m not sure where you’re going with the “magic pill” hypotheticals, but I agree. The only thing I can think to add is that a lot of times the “winning behaviors” are largely mental and aren’t really available until you understand the situation better.
For example, if you break your foot and can’t get it x-rayed for a day, the right answer might be to just get some writing done—but if you try to force that behavior while you’re suffering, it’s not gonna go well. You have to actually be able to dismiss the pain signal before you have a mental space to write in.
I meant that if someone is behaving irrationally, forcing them to stop that behavior should make them better off. But it seems unlikely to me that forcing him to stay in his current relationship forever, or preventing him from ever entering a relationship (these are the two ways he can be stopped from flip-flopping) actually benefit him.
Forcing anyone to stay in their current relationship forever or forever preventing them from entering a relationship would be quite bad. In order to help him, he’d have to be doing worse than that.
The way to help him would be a bit trickier than that: let him have “good” relationships but not bad. Let him leave “bad” relationships but not good. And then control his mental behaviors so that he’s not allowed to spend time being miserable about his lack of options… (it’s hard to force rationality)
Controlling his mental behaviors would either be changing his preferences or giving him another option. For judging whether he is behaving irrationally, shouldn’t his preferences and set of choices be held fixed?
Relevant question: what does the cognitive science literature on choice-making, preference, and valuation have to say about all this? What mathematical structure actually does model human preferences?
Given that we run on top of neural networks and seem to use some Bayesian algorithms for certain forms of learning (citations available), I currently expect that our choice-making mechanisms might involve conditioning on features or states of our environment at some fundamental level.
My first guess would be that evolution has selected us for circular preferences that our genes money-pump so that we will propagate them. You can’t get off this ride while you’re human.
Is that a challenge?
:-) I mean that if you embody human value, you’ll probably be a money-pumpable entity. Very few humans actually achieve an end to desire while still alive and mentally active.
I’ll take the challenge, then. I was already walking around thinking that the Four Noble Truths of the Buddha are a bunch of depressing bullshit that need to be fixed.
I’ve seen a bunch of different theories backed with varying amounts of experimental data—for instance, this, this and this—but I haven’t looked at them enough to tell which ones seem most correct.
That said, I still don’t remember running into any thorough discussion of what human preferences are, other than just “something that makes us make some choice in some situations”. I mention here that
And I’m a little skeptical of any theory of human preferences that doesn’t attempt to make any such breakdown and only takes a “black box” approach of looking at the outputs of our choice mechanism.
Looks like the relevant textbook came out with an updated edition this year.
I think your original post would have been better if it included any arguments against utility functions, such as those you mention under “e.g.” here.
Besides being a more meaningful post, we would also be able to discuss your comments. For example, without more detail, I can’t tell whether your last comment is addressed sufficiently by the standard equivalence of normal-form and extensive-form games.
Essentially every post would have been better if it had included some additional thing. Based on various recent comments I was under the impression that people want more posts in Discussion so I’ve been experimenting with that, and I’m keeping the burden of quality deliberately low so that I’ll post at all.
I appreciate you writing this way—speaking for myself, I’m perfectly happy with a short opening claim and then the subtleties and evidence emerges in the following comments. A dialogue can be a better way to illuminate a topic than a long comprehensive essay.
Let me rephrase: would you like to describe your arguments against utility functions in more detail?
For example, as I mentioned, there’s an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive-form to normal-form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the setup of vNM.
The standard response to the discussion of knowing probabilities exactly, and to concerns about computational complexity, is (in essence) that we may want to set aside epistemic concerns and simply learn what we can from a theory that is not troubled by them (à la air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges? And if so, is the objection to the normal-form assumption essentially the same?
Can you give more details here? I’m not familiar with extensive-form vs. normal-form games.
Something like that. It seems like the computational concerns are extremely important: after all, a theory of morality should ultimately output actions, and to output actions in the context of a utility function-based model you need to be able to actually calculate probabilities and utilities.
Sure. Say you have to make some decision now, and you will be asked to make a decision later about something else. Your decision later may depend on your decision now as well as part of the world that you don’t control, and you may learn new information from the world in the meantime. Then the usual way of rolling all of that up into a single decision now is that you make your current decision as well as a decision about how you would act in the future for all possible changes in the world and possible information gained.
This is vaguely analogous to how you can curry a function of multiple arguments. Taking one argument X and returning (a function of one argument Y that returns Z) is equivalent to taking two arguments X and Y and returning Z.
There’s potentially a huge computational complexity blowup here, which is why I stressed mathematical equivalence in my posts.
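To make the conversion concrete, here’s a toy sketch (the actions and observations are made up for illustration). Planning “as you go” picks the later action after seeing the observation; the normal-form version instead picks, up front, a complete policy mapping every possible observation to a later action:

```python
from itertools import product

# Stage 1: pick an action now. The world then reveals an observation.
# Stage 2: pick a later action. The normal-form ("plan at the beginning
# of time") version chooses a pair (action_now, policy), where a policy
# assigns a later action to every possible observation.

ACTIONS_NOW = ["a1", "a2"]
OBSERVATIONS = ["o1", "o2", "o3"]
ACTIONS_LATER = ["b1", "b2"]

# Enumerate every policy: one later action per possible observation.
policies = [dict(zip(OBSERVATIONS, choice))
            for choice in product(ACTIONS_LATER, repeat=len(OBSERVATIONS))]

up_front_plans = [(a, p) for a in ACTIONS_NOW for p in policies]

# The equivalence is exact, but the number of complete plans blows up
# exponentially in the number of possible observations:
print(len(policies))        # 2**3 = 8 policies
print(len(up_front_plans))  # 2 * 8 = 16 complete plans
```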
Thanks for the explanation! It seems pretty clear to me that humans don’t even approximately do this, though.
Sounds not very feasible...
Those are not assumptions of the von Neumann-Morgenstern theorem, nor of the concept of utility functions itself. Those are assumptions of an intelligent agent implemented by measuring its potential actions against an explicitly constructed representation of its utility function.
I get the impression that you’re conflating the mathematical structure that is a utility function on the one hand, and representations thereof as a technique for ethical reasoning on the other hand. The former can be valid even if the latter is misleading.
Can you describe this “mathematical structure” in terms of mathematics? In particular, the argument(s) to this function, what do they look like mathematically?
Certainly, though I should note that there is no original work in the following; I’m just rephrasing standard stuff. I particularly like Eliezer’s explanation about it.
Assume that there is a set of things-that-could-happen, “outcomes”, say “you win $10” and “you win $100”. Assume that you have a preference over those outcomes; say, you prefer winning $100 over winning $10. What’s more, assume that you have a preference over probability distributions over outcomes: say, you prefer a 90% chance of winning $100 and a 10% chance of winning $10 over an 80% chance of winning $100 and a 20% chance of winning $10, which in turn you prefer over 70%/30% chances, etc.
A utility function is a function f from outcomes to the real numbers; for an outcome O, f(O) is called the utility of O. A utility function induces a preference ordering in which probability-distribution-over-outcomes A is preferred over B if and only if the sum of the utilities of the outcomes in A, scaled by their respective probabilities, is larger than the same for B.
Now assume that you have a preference ordering over probability distributions over outcomes that is “consistent”, that is, such that it satisfies a collection of axioms that we generally like reasonable such orderings to have, such as transitivity (details here). Then the von Neumann-Morgenstern theorem says that there exists a utility function f such that the induced preference ordering of f equals your preference ordering.
Thus, if some agent has a set of preferences that is consistent—which, basically, means the preferences scale with probability in the way one would expect—we know that those preferences must be induced by some utility function. And that is a strong claim, because a priori, preference orderings over probability distributions over outcomes have a great many more degrees of freedom than utility functions do. The fact that a given preference ordering is induced by a utility function disallows a great many possible forms that ordering might have, allowing you to infer particular preferences from other preferences in a way that would not be possible with preference orderings in general. (Compare this LW article for another example of the degrees-of-freedom thing.) This is the mathematical structure I referred to above.
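A minimal sketch of the induced ordering, using the $10/$100 outcomes from the example above (the utility numbers themselves are arbitrary illustrations):

```python
# A utility function over outcomes induces a preference ordering over
# lotteries (probability distributions over outcomes) via expected
# utility. Utility values here are made up for illustration.

utility = {"win $10": 1.0, "win $100": 10.0}

def expected_utility(lottery):
    # lottery: dict mapping outcome -> probability (probabilities sum to 1)
    return sum(p * utility[outcome] for outcome, p in lottery.items())

A = {"win $100": 0.9, "win $10": 0.1}  # the 90%/10% lottery
B = {"win $100": 0.8, "win $10": 0.2}  # the 80%/20% lottery

# A is preferred to B iff its expected utility is higher, matching the
# intuitive ordering in the example.
assert expected_utility(A) > expected_utility(B)
```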
Right.
So, keeping in mind that the issue is separating the pure mathematical structure from the messy world of humans, tell me what outcomes are, mathematically. What properties do they have? Where can we find them outside of the argument list to the utility function?
“a utility function is the structure any consistent preference ordering that respects probability must have.”
Yes, but humans still don’t have one. It’s not even clear they can make themselves have one.
Doesn’t mean we shouldn’t try.
“statement x is not currently the case and is probably unfeasible” does in fact mean we shouldn’t try to act on it. Maybe we can try to act to make statement x true, but we shouldn’t act as if it already is. For a more concrete example, imagine this: “I’ve never done a backflip. It’s not even clear I can do one”. We know backflips are possible, and with training you’re probably going to be able to do one. But at the time you’re making that statement, saying “doesn’t mean you shouldn’t try” is TERRIBLE advice that could get you a broken neck.
Firstly, that’s kind of an uncharitable reading. If I said “I’m going to try and pass an exam” you’d naturally understand me as planning to do the requisite work first. “Backflip” just pattern-matches to ‘the sort of thing silly people try to do without training’.
However, that said, I’m being disingenuous. What I really truly meant at the time I typed that was moral-should, not practical-should, which come apart if one isn’t a perfect consequentialist. Which I ain’t, which is at least partly the point.
It may well do. Yvain has pointed out on his blog (I recall the post, though I couldn’t find it just now) that in daily life we do actually use something like utilitarianism quite a bit, which carries a presumption of something like a utility function at least in that case. But what works in normal ranges does not necessarily extrapolate: utilitarianism is observably brittle, and routinely reaches conclusions that humans consider absurd.
There’s occasionally LW posts showing that utilitarianism gives some apparently-absurd result or other, and too often the poster seems to be saying “look, absurd result, but the numbers work out so this is important!” rather than “oh, I hit an absurdity, perhaps I’m stretching this way further than it goes.” It’s entirely unclear to me that pretending you’re an agent with a utility function is actually a good idea; it seems to me to be setting yourself up to fall into absurdities.
Below, you claim this is a moral choice; I would suggest that trying to achieve an actually impossible moral code, let alone advocating it, is basically unhealthy.
Firstly, I thought we were just appealing to consequentialism, not utilitarianism?
So I think I agree with you that believing you have a utility function if you in fact don’t might suck, and that baseline humans in fact don’t. I was trying to distinguish that from:
a) believing one ought to have a utility function, in which case I might seek to self-modify appropriately if it became possible; so something a bit stronger than the “pretending” you suggested.
b) believing one should strive to act as if one did, while knowing that I’ll fall short because I don’t.
The second you addressed by saying
Did you have the same position re. Trying to Try?
I have one group of intuitions here that claim impossibility in a moral code is a feature, not a bug, because it helps avoid deluding youself that you’ve finished the job and are now perfect; and why would I expect the right action to be healthy anyway? But this seems like a line of thinking that is specific to coping with being an inconsistent human, in the absence of an engineering fix for that.
Yes, I don’t understand this at all. For example, even Yudkowsky writes that he would sooner question his grasp of “rationality” than give five dollars to a Pascal’s Mugger because he thought it was “rational”. Now as far as I can tell, they still use this framework to make decisions, a framework that implies absurd decisions, rather than concentrating on examining the framework itself, and looking for better alternatives.
What I am having problems with is that they seem to teach people to “shut up and multiply”, and approximate EU maximization, yet arbitrarily ignore low probabilities. I say “arbitrarily” because nobody ever told me at what point it is rational to step out of this framework and ignore a calculation.
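To make the worry concrete with made-up numbers: naive expected-utility comparison has no built-in threshold below which a probability gets ignored, so a large enough claimed payoff always dominates.

```python
# Sketch of the Pascal's Mugging problem for naive EU maximization.
# All numbers are illustrative; the point is only that the framework
# itself supplies no cutoff for "absurdly small" probabilities.

def expected_value(p, payoff):
    return p * payoff

keep_five_dollars = expected_value(1.0, 5)    # certain, modest
mugger_offer = expected_value(1e-15, 1e20)    # absurdly unlikely, vast claim

# The naive calculation says to pay the mugger; nothing in the
# calculation says at which probability to stop taking such offers
# seriously.
assert mugger_offer > keep_five_dollars
```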
You could argue that our current grasp of rationality is less wrong. But why then worry about something like Dutch booking when any stranger can make you give them all your money simply by conjecturing vast utilities if you don’t? Seems more wrong to me.
Lots of frameworks imply different absurd decisions (especially when viewed from other frameworks) but it’s hard to go about your life without using some sort of framework.
If rationality is on average less wrong but you think your intuition is better in a certain scenario, a mixed strategy makes sense.
No, it means your intuition is better than your rationality, and you should fix that. If your rational model is not as good as your intuition at making decisions, then it is flawed and you need to move on.
You seem to have completely missed my point.
Let’s say I have 300 situations where I recorded my decision making process. I tried to use rationality to make the right decision in all of them, and kept track of whether I regretted the outcome. In 100 of these situations, my intuitions disagreed with my rational model, and I followed my rational model. If I only regret the outcome in 1 of these 100 situations, in what way does it make sense to throw out my model? You can RATIONALLY decide that certain situations are not amenable to your rational framework without deciding the framework is without value.
Let’s say we do 100 physics experiments, and 99% of the results agree with our model. Do we get to ignore / throw out that one “erroneous” result? No, that result if verified shows a flaw in our model.
If afterwards you regretted a choice and wish you had made a better choice even with the information available to you at the time, then this realization should have you bolt upright in your chair. If verified, your decision making process needs updating.
it’s still a pretty damn good model. Why can’t you get that point? Newtonian mechanics was still a very useful model and would’ve been ridiculous to replace with intuition just because it gave absurd answers in relativistic situations.
I never contradicted that point. Newtonian physics works quite fine in many situations. It is still wrong.
Edit: to expand on that point, when we use physics we know that there are certain circumstances in which we use classical physics because it is easier and faster and the results are good enough for the precision we need. Other times we use quantum physics or relativity. The decision of which model to use is itself part of the decision-making framework and is what I’m talking about. If you choose to use the wrong framework and get incorrect results, then your metamodel of which framework to use needs to be updated.
I don’t think I have much to add to this discussion that you guys aren’t already going to have covered, except to note that Qiaochu definitely understands what a utility function is and all of the standard arguments for why they “should” exist, so his beliefs are not a function of not having heard these arguments (just noting this because this thread and some of the siblings seem to be trying to explain basic concepts to Qiaochu that I’m confident he already knows, and I’m hoping that pointing this out will speed up the discussion).