It is rather strange to say that utility is for choosing the best outcome, given that a utility function can only be constructed in the first place if it’s already true that we can impose a total ordering on outcomes.
If what you have in mind when you say “utility” is VNM-utility (which is what it sounds like), then as you know, only agents whose preferences satisfy the axioms have a utility function (i.e., a utility function can be constructed for an agent if and only if the agent’s preferences satisfy the axioms).
Whether an agent’s preferences do, or do not, satisfy the VNM axioms, is an empirical question. We can ask it about a particular human, for instance. The answer will be “yes” or “no”, and it will, again, be an empirical fact.
Suppose we investigate some person’s preferences, and find that they do not satisfy the VNM axioms. Well, that’s it, then; that person has no utility function. That is a fact. No normative discussions (such as discussions about what a utility function is “for”) can change it.
I read moridinamael’s commentary to be aimed at just such an empirical question. He is asking: what can we say about human preferences? Is our understanding of the facts on the ground mistaken in a particular way? Are human preferences in fact like this, and not like that? —and so on.
Given that, comments about what “the point of” having a utility function, or what a utility function is “for”, or any other such normative concerns, seem inapplicable and somewhat strange. Asking “what benefit having more dimensions adds” seems like entirely the wrong sort of question to ask—a confusion underlies it, about what sort of thing we’re talking about. The additional dimensions either are present in the data, or they’re not. (Would you ask “what benefit” is derived from using three dimensions to measure space—why not define points in space using a one-dimensional scalar, isn’t that enough…? etc.)
Flagging that (I think) utilitarianism and VNM-utility are different things. They are closely related, but I think Bentham invented utilitarianism before VNM utility was formalized. They are named similar things for similar reasons, but the formalisms don’t necessarily transfer.
It is separately the case that:
a) humans (specific or general) might be VNM agents, and if they are not, they might aspire to become such, so that they don’t spend all their time driving from San Francisco to San Diego to New York
b) even if they are not, if you care about global welfare (either altruistically or via self-interested Rawlsian veil-of-ignorance-style thinking), you may want to approximate whether given decisions help or harm people, and this eventually needs to cash out into some kind of ability to decide whether a decision is net-positive.
tldr: “utilitarianism” (the term the OP used) does not formally imply VNM utility, although it does waggle its eyebrows suggestively.
Addressing your points separately, just as you made them:
I.
I do not think that is the mistake zulupineapple is making (getting VNM utility, and utilitarianism, mixed up somehow).
(Though it is a common mistake, and I have commented on it many times myself. I just think it is not the problem here. I think utility, in the sense of VNM utility (or something approximately like it) is in fact what zulupineapple had in mind. Of course, he should correct me if I misunderstood.)
II.
re: a): Someone we know once said: “the utility function is not up for grabs”. Well, indeed; and neither is my lack of utility function (i.e., my preferences) up for grabs. It seems very strange indeed, to say “in order to be rational, change your preferences”; when the whole point of (instrumental) rationality is to satisfy my preferences.
And I can’t help but notice that actual humans, in real life, do not spend all their time driving from SF to SD to NY and so on. Why is that? Now, perhaps you meant that scenario figuratively—yes? What you had in mind was some other, more subtle (apparent) preference reversal, and the driving between cities was a metaphor. Very well; but I suspect that, if you described the actual (apparent) preference reversal(s) you had in mind, their status as irrational would be rather more controversial, and harder to establish to everyone’s satisfaction.
III.
re: b): Making decisions does not require having a total ordering on outcomes—not even if we are consequentialists (as I certainly am) and care about helping vs. harming people (which is certainly one of the things I care about).
Furthermore, notice specifically that even if your requirement is that we have an “ability to decide whether a decision is net-positive”, even that does not require having a total ordering on outcomes. (Example 1: I can prefer situation B to A, and C to A, and D to A, while having a preference cycle between B, C, and D (this is the trivial case). Example 2: I can have a preference cycle between A, B, and C, and believe that any decision to go from one of those states to the next one in the cycle is net positive (this is the substantive case).)
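Both examples can be checked mechanically. Here is a minimal sketch (states named A–D as in the examples above; the relation itself is of course just an illustration) representing a preference relation as a set of (better, worse) pairs:

```python
# A preference relation as a set of (better, worse) pairs.
# Example 1: B, C, D each preferred to A, with a cycle among B, C, D.
prefers = {("B", "A"), ("C", "A"), ("D", "A"),
           ("B", "C"), ("C", "D"), ("D", "B")}

def is_transitive(rel):
    # transitivity: x > y and y > z must imply x > z
    return all((x, z) in rel
               for (x, y) in rel for (y2, z) in rel if y == y2)

print(is_transitive(prefers))  # False: the B -> C -> D cycle breaks it

# Example 2: every step around the cycle is judged an improvement
# (a "net-positive" move), even though no total ordering exists.
print(all(step in prefers
          for step in [("B", "C"), ("C", "D"), ("D", "B")]))  # True
```

So the relation supports “every decision in sight is net-positive” judgments without admitting any total ordering.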
IV.
By the way, violation of transitivity is not the most interesting form of VNM axiom violation—because it’s relatively easy to make the case that it constitutes irrationality. Far more interesting is violation of continuity; and you will, I suspect, have a more difficult time convincingly showing it to be irrational. (Correspondingly, it’s also—in my experience—more common among humans.)
Robyn Dawes describes one class of such violations in his Rational Choice in an Uncertain World. (Edit: And he makes the case—quite convincingly, IMO—that such violations are not irrational.) You can search my old LessWrong comments and find some threads where I explain this. If you also search my comments for keywords “grandmother” and “chicken”, you’ll find some more examples.
If you can’t find this stuff, I’ll take some time to find it myself at some point, but not right now, sorry.
Let us say I prefer the nonextinction of chickens to their extinction (that is, I would choose not to murder all chickens, or any chickens, all else being equal). I also prefer my grandmother remaining alive to my grandmother dying. Finally, I prefer the deaths of arbitrary numbers of chickens, taking place with any probability, to any probability of my grandmother dying.
Would you also prefer losing an arbitrary amount of money to any probability of your grandmother dying? I think chickens can be converted into money, so you should prefer this as well. I’m hoping that you’ll find this preference equivalent, but then find that your actions don’t actually follow it.
a) Chickens certainly can’t be converted into money (in the sense you mean)
b) Even if they could be, the comparison is nonsensical, because in the money case, we’re talking about my money, whereas in the chickens case we’re talking about chickens existing in the world (none of which I own)
c) That aside, I do not, in fact, prefer losing an arbitrary amount of money to any probability of my grandmother dying (but I do prefer losing quite substantial amounts of money to relatively small probabilities of my grandmother coming to any harm, and my actions certainly do follow this)
Chickens are real wealth owned by real people. Pressing a magical button that destroys all chickens would do massive damage to the well-being of many people. So, you’re not willing to sacrifice your own wealth for tiny reductions in the probability of a dead grandma, but you’d gladly sacrifice the wealth of other people? That would make you a bad person. And the economic damage would end up affecting you eventually anyway.
I rather think you’ve missed most, if not all, of the point of that hypothetical (and you also don’t seem to have fully read the grandparent comment to this one, judging by your question).
Perhaps we should set the grandmother/chickens example aside for now, as we’re approaching the limit of how much explaining I’m willing to do (given that the threads where I originally discussed this are quite long and answer all these questions).
and you also don’t seem to have fully read the grandparent comment to this one, judging by your question
Do you mean the a), b), c) comment? Which section did I miss?
Either way, I ask: do you prefer destroying an arbitrary amount of wealth (not yours) to any probability of your grandma dying? At least give a Yes/No.
Take a look at the other example I cited.
From some book? You know, it would be great if your arguments were contained in your comments.
As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!)
Edited to add:
Either way, I ask: do you prefer destroying an arbitrary amount of wealth (not yours) to any probability of your grandma dying? At least give a Yes/No.
What difference does it make?
If I say “yes”, then we can have the same conversation as the one about the chickens. It’s just another example of the same thing.
If I say “no”, then it’s not a relevant example at all and there’s no reason to discuss it further.
This is a totally pointless line of inquiry; this is the last I’ll say about it.
As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!)
Where? I didn’t see any such things in the LW comments I found. Are there more threads? Are you going to link to them? You’ve made a big claim, and I haven’t seen nearly enough defense for it.
What difference does it make?
Of course, the question is such that I get to feel right either way. If you say “no”, then I can deduce that you don’t understand what “wealth” is. If you say “yes”, then I can deduce that you’re a sociopath with poor understanding of cause and effect. Charitably, I could imagine that you were talking about destroying chickens in some parallel universe, where their destruction could 100% certainly not have consequences for you, but that’s a silly scenario too.
Regarding the grandma-chicken argument, having given it some thought, I think I understand it better now. I’d explain it like this. There is a utility function u, such that all of my actions maximize Eu. Suppose that u(A) = u(B) for some two choices A, B. Then I can claim that A > B, and exhibit this preference in my choices, i.e. given a choice between A and B I would always choose A. However, for every B+ such that u(B+) > u(B), I would also claim B < A < B+. This does violate continuity; however, because I’m still maximizing Eu, my actions can’t be called irrational, and the function u is hardly any less useful than it would be without the violation.
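A concrete sketch of this construction (the utility values are illustrative, and the tie-break list stands in for the claimed A-over-B preference):

```python
# Choice = maximize Eu, with a strict tie-break (A over B) when Eu ties.
u = {"A": 1.0, "B": 1.0, "B+": 1.1}   # u(A) = u(B), u(B+) > u(B)
tie_break = ["A", "B", "B+"]          # earlier in the list = preferred on ties

def choose(options):
    best_u = max(u[o] for o in options)
    # among Eu-maximal options, pick the most tie-break-preferred one
    return min((o for o in options if u[o] == best_u), key=tie_break.index)

print(choose(["A", "B"]))    # "A": B < A even though u(A) = u(B)
print(choose(["A", "B+"]))   # "B+": A < B+, since u(B+) > u(A)
```

Every choice here still maximizes Eu, which is the point: the tie-break breaks continuity without ever producing an Eu-suboptimal action.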
Finally I read your link. So the main argument is that there is a preference between different probability distributions over utility, even if expected utility is the same. This is intuitively understandable, but I find it lacking specificity.
I propose the following three step experiment. First a human chooses a distribution X from two choices (X=A or X=B). Then we randomly draw a number P from the selected distribution X, then we try to win 1$ with probability P (and 0$ otherwise, which I’ll ignore by setting u(0$)=0, because I can). Here you can plot X as a distribution over expected utility, which equals P times u(1$). The claim is that some distributions X are more preferable to others, despite what pure utility calculations say. I.e. Eu(A) > Eu(B), but a human would choose B over A and would not be irrational. Do you agree that this experiment accurately represents Dawes’s claim?
Naturally, I find the argument bad. The double lottery can be easily collapsed into a single lottery, the final probabilities can be easily computed (which is what Eu does). If P(win 1$ | A) = P(win 1$ | B) then you’re free to make either choice, but if P(win 1$ | A) > P(win 1$ | B) even by a hair, and you choose B, you’re being irrational. Note that the choices of 0$ and 1$ as the prizes are completely arbitrary.
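The collapse can be shown with a toy computation; the two distributions below are made-up placeholders, one risky and one safe:

```python
# Two-stage lottery: draw P from X, then win $1 with probability P.
# Collapsed into one stage, P(win | X) is just the mean of X.
A = [0.2, 0.8]   # 50/50 between P = 0.2 and P = 0.8 (risky)
B = [0.5, 0.5]   # always P = 0.5 (safe)

def p_win(dist):
    # each listed value equally likely, so P(win) = E[P]
    return sum(dist) / len(dist)

print(p_win(A), p_win(B))  # 0.5 0.5: the collapsed lotteries coincide
```

Since both collapse to the same single-stage lottery, an expected-utility maximizer is indifferent between them even though A is the riskier draw.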
I’m afraid that the moderation policy on this site does not permit me to do so effectively
Are you referring to that one moderation note? I think you’re overreacting.
This seems like a weird preference to have. This de facto means that you would never pay any attention whatsoever to the lives of chickens, since any infinitesimally small change to the probability of your grandmother dying will outweigh any potential moral relevance. For all practical purposes in our world (which is interconnected to a degree that almost all actions will have some potential consequences for your grandmother), an agent following this preference would be indistinguishable from someone who does not care at all about chickens.
This de facto means that you would never pay any attention whatsoever to the lives of chickens
Only if that agent has a grandmother.
Suppose my grandmother (may she live to be 120) were to die. My preferences about the survival of chickens would now come into play. This is hardly an exotic scenario! There are many parallel constructions we can imagine. (Or do you propose that we decline to have preferences that bear only on possible future situations, not currently possible ones?)
Edited to add:
This is called “lexicographic preferences”, and it too is hardly exotic or unprecedented.
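Lexicographic preference is, incidentally, exactly how tuple comparison works in a language like Python; a minimal illustration (the values are hypothetical):

```python
# Lexicographic preference as tuple comparison: grandmother's survival is
# compared first; chicken welfare only breaks ties.
def value(grandma_alive, chickens_alive):
    return (grandma_alive, chickens_alive)  # compared left to right

# Any grandmother-alive outcome beats any grandmother-dead one...
print(value(True, 0) > value(False, 10**9))   # True
# ...but once she is out of the picture, chicken preferences come into play:
print(value(False, 100) > value(False, 5))    # True
```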
(end edit)
+++
Of course, even that is moot if we reject the proposition that “our world … is interconnected to a degree that almost all actions will have some potential consequences to your grandmother”.
And there are good reasons to reject it. If nothing else, it’s a fact that given sufficiently small probabilities, we humans are not capable of considering numbers of such precision, and so it seems strange to speak of basing our choices on them! There is also noise in measurement, errors in calculation, inaccuracies in the model, uncertainty, and a host of other factors that add up to the fact that in practice, “almost all actions” will, in fact, have no (foreseeable) consequences for my grandmother.
The value of information of finding out the consequences that any action has on the life of your grandmother is infinitely larger than the value you would assign to any number of chickens. De-facto this means that even if your grandmother is dead, as long as you are not literally 100% certain that she is dead and forever gone and could not possibly be brought back, you completely ignore the plight of chickens.
The fact that he is not willing to kill his grandmother to save the chickens doesn’t imply that chickens have 0 value or that his grandmother has infinite value.
Consider the problem from an egocentric point of view: being responsible for one’s grandmother’s death would feel awful, but dedicating your life to a very unlikely possibility of saving someone who has been declared dead would feel awful too.
Would you ask “what benefit” is derived from using three dimensions to measure space
That’s an easy question. The benefit is increased model accuracy. You could also ask, “is there any benefit to using even more space dimensions” and this is also a good question, and a topic of modern physics.
Whether an agent’s preferences do, or do not, satisfy the VNM axioms, is an empirical question.
Yes. And if OP wanted to show that something is wrong with the usual utility, they should show how the axioms are broken (transitivity in particular, the rest I could see hand-waving). I don’t think they did that.
It is rather strange to say that utility is for choosing the best outcome, given that a utility function can only be constructed in the first place if it’s already true that we can impose a total ordering on outcomes.
A car is for moving around, regardless of whether you actually have a car. I don’t really understand your criticism. Ask yourself, why does the VNM utility theorem exist, why do we care about it, why do we care whether humans satisfy its axioms. The answer will presumably involve choosing good outcomes.
That’s an easy question. The benefit is increased model accuracy.
Well, there you go, then. Same thing.
A car is for moving around, regardless of whether you actually have a car. I don’t really understand your criticism.
If I say that I don’t (contrary to popular belief) have a car, and you reply that I’m confused about what cars are for, then something has gone wrong with your reasoning.
Ask yourself, why does the VNM utility theorem exist
What does “exist” mean, here?
why do we care about it
Who’s “we”? I, personally, find it to be of abstract mathematical interest, no more. (This was also more or less the view of Oskar Morgenstern himself.)
why do we care whether humans satisfy its axioms.
Ditto. In any case, we can want to choose good outcomes all we want, and that still won’t affect the facts about whether or not humans have utility functions. Our purposes or our reasons for caring about some mathematical results or any such thing, doesn’t change the facts.
Edited to add:
And if OP wanted to show that something is wrong with the usual utility, they should show how the axioms are broken (transitivity in particular, the rest I could see hand-waving). I don’t think they did that.
Perhaps, perhaps not, but that still leaves us with your response being a non sequitur! If you think moridinamael is wrong about the facts—if you think that in fact, despite any alleged multidimensionality of experience, a total preference ordering over [lotteries over] outcomes is possible—that’s all well and good; but what does “what is utility for” have to do with it?
(Incidentally, it’s not even correct to say that “utility is for choosing the best outcome”. After all, we can only construct your utility function after we already know what you think the best outcome is! Before we have the total ordering over outcomes, we can’t construct the utility function…)
I didn’t see OP explaining how preference model accuracy is increased by having more dimensions. Rather, I don’t think OP is even modeling the same thing that I’m modeling.
If I say that I don’t (contrary to popular belief) have a car, and you reply that I’m confused about what cars are for, then something has gone wrong with your reasoning.
OP didn’t say he doesn’t have a car, from my point of view OP says that he doesn’t need a car, because a car can’t cook for him.
What does “exist” mean, here?
It means “has been conjectured, proven, talked about or etc”. Nothing fancy.
despite any alleged multidimensionality of experience <...>
This is a weird thing to say. Multidimensionality of experience is not being questioned. The proposition that the entirety of human mental state can be meaningfully compressed to one number is stupid to the extent that I doubt anyone has ever seriously suggested it in the entire human history. The problem is that OP argues against this trivially false claim, and treats it as some problem of utility. My response is that utility fails to express the entire human experience, because it is not for expressing the entire human experience. The same way that a car fails at cooking because it is not for cooking.
After all, we can only construct your utility function after we already know what you think the best outcome is!
No, we can construct a utility function after we have verified the axioms (or just convinced ourselves that they should work). This is easier than actually ranking every possible outcome.
<earlier> only agents whose preferences satisfy the axioms have a utility function
This is actually a flawed perspective. I guess it’s indicative of your belief that utility has no practical applications. If my preferences don’t satisfy the axioms, that only means that no utility function will describe my preferences perfectly. But some functions might approximate them and there could still be practical benefit to using them.
Who’s “we”? I, personally, find it to be of abstract mathematical interest, no more. (This was also more or less the view of Oskar Morgenstern himself.)
Well, I guess that explains something. I guess we should expand on this, but I struggle to understand why you think this, or what you think that I think.
OP didn’t say he doesn’t have a car, from my point of view OP says that he doesn’t need a car, because a car can’t cook for him.
“I don’t have a car” is exactly how I read the OP. Sibling comment seems to confirm this reading.
we can construct a utility function after we have verified the axioms (or just convinced ourselves that they should work). This is easier than actually ranking every possible outcome.
This is a novel claim! How do we do this? It seems manifestly false!
If my preferences don’t satisfy the axioms, that only means that no utility function will describe my preferences perfectly. But some functions might approximate them and there could still be practical benefit to using them.
What does “approximate” mean, here? Let’s recall that according to the VNM theorem, if an agent’s preferences satisfy the axioms, then
there exists a real-valued function u defined by possible outcomes such that every preference of the agent is characterized by maximizing the expected value of u
In other words, for a VNM-compliant agent attempting to decide between outcomes A and B, A will be preferred to B if and only if u(A) > u(B).
If, however, the agent is VNM-noncompliant, then for any utility function u, there will exist at least one pair of outcomes A, B such that A is preferred to B, but u(A) < u(B).
This means that using the utility function as a guide to decision-making is guaranteed to violate the agent’s preferences in at least some case.
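The guaranteed-misordering claim can be verified by brute force for the simplest noncompliant case, a three-outcome preference cycle (a sketch; the outcome names and utility values are arbitrary):

```python
from itertools import permutations

# Brute force: with the cyclic preference A > B, B > C, C > A, every
# assignment of distinct utilities misorders at least one pair.
prefs = [("A", "B"), ("B", "C"), ("C", "A")]

def violations(utility):
    return sum(1 for better, worse in prefs
               if utility[better] <= utility[worse])

best = min(violations(dict(zip(perm, (3, 2, 1))))
           for perm in permutations("ABC"))
print(best)  # 1: no utility function gets fewer than one pair wrong
```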
Such an agent then has two choices:
a) He can ignore his own preferences, and use the utility function as a means of decision-making; or
b) He can evaluate the utility function’s output by comparing it to his own preferences, deferring to the latter when the two conflict.
Choosing (a) seems completely unmotivated. And if the agent chooses (b), well, then what’s the point of the utility function to begin with? Just do what you prefer.
In fact, I struggle to see how “just do what you prefer” isn’t a superior strategy in any case, compared to constructing, and then following, a utility function, given that we have to elicit all of an agent’s preferences in order to construct the utility function to begin with!
And what does that leave us with, in terms of uses for a utility function? It’s not like we can do interpersonal utility comparisons (such operations are completely meaningless under VNM); which means we can’t aggregate VNM-utility across persons. So what practical benefit is there? How do we “use” a utility function, even assuming a VNM-compliant agent (quite an assumption) and assuming we can elicit all the agent’s preferences and construct the thing (another big assumption)? What do we do with it?
I didn’t mean immediately. I meant that assuming the axioms allows you to compute or at least bound the expected utility of some lotteries with respect to Eu of other lotteries.
If, however, the agent is VNM-noncompliant, then for any utility function u, there will exist at least one pair of outcomes A, B such that A is preferred to B, but u(A) < u(B).
Yes, that’s what “approximate” means, especially if B is preferred to most other possible outcomes C.
In fact, I struggle to see how “just do what you prefer” isn’t a superior strategy in any case
“Just do what you prefer” is awful. Preference checking is in many cases not cheap or easy. The space of possible outcomes is vast. The chances to get dutch booked are many. Your strategy can hardly be reasoned about.
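The standard money-pump construction behind the Dutch-book worry can be sketched as follows (the fee and the particular cycle are made up):

```python
# Money pump against a preference cycle A < B < C < A: the agent pays a
# small fee for each "upgrade" it prefers, and after one full trip around
# the cycle holds what it started with, minus the fees.
upgrade = {"A": "B", "B": "C", "C": "A"}  # each trade is preferred by the agent
fee = 1.0

holding, money = "A", 100.0
for _ in range(3):  # one full trip around the cycle
    holding = upgrade[holding]
    money -= fee

print(holding, money)  # A 97.0: same holding, strictly poorer
```

The trick, of course, only bites an agent whose revealed choices actually contain such a cycle, which is the crux of the disagreement below.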
It’s not like we can do interpersonal utility comparisons
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
I meant that assuming the axioms allows you to compute or at least bound the expected utility of some lotteries with respect to Eu of other lotteries.
How? Demonstrate, please.
“Just do what you prefer” is awful. Preference checking is in many cases not cheap or easy. The space of possible outcomes is vast.
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with. You have yet to say or show anything that even approaches a rebuttal to this basic point.
The chances to get dutch booked are many.
Again: demonstrate. I tell you I follow the “do what I prefer” strategy. Dutch book me! I offer real money (up to $100 USD). I promise to consider any bet you offer (less those that are illegal where I live).
Your strategy can hardly be reasoned about.
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
Edited to add:
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
I don’t think you understand how fundamental the difficulty is. Interpersonal comparison, and aggregation, of VNM-utility is not hard. It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs. You can’t “approximate” it, or do a “not-exactly-correct” computation, or anything like that. There’s nothing to approximate in the first place!
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with.
I think you’re confusing outcomes with lotteries. To build a utility function I only need to make comparisons between unique outcomes. E.g. if I know that A < B, then I no longer need to think about whether 0.5A + 0.5C < 0.5B + 0.5C (as per the axiom of independence). You, on the other hand, need to separately evaluate every possible lottery.
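The saving that independence buys can be made concrete: once the outcome utilities are fixed, mixture comparisons follow from expected utility with no further checking (the utility values below are arbitrary):

```python
# With outcome utilities fixed, the independence axiom makes mixture
# comparisons free: they follow from expected utility.
u = {"A": 1.0, "B": 2.0, "C": 5.0}

def eu(lottery):
    # lottery: {outcome: probability}
    return sum(p * u[o] for o, p in lottery.items())

# Having checked A < B on pure outcomes...
print(eu({"A": 1.0}) < eu({"B": 1.0}))                      # True
# ...0.5A + 0.5C < 0.5B + 0.5C needs no separate check:
print(eu({"A": 0.5, "C": 0.5}) < eu({"B": 0.5, "C": 0.5}))  # True
```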
I tell you I follow the “do what I prefer” strategy. Dutch book me!
I also need you to explain to me in what ways your “do what I prefer” violates the axioms, and how that works. I’m waiting for that in our other thread. To be clear, I might be unable to dutch book you, for example, if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs.
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities. Then observe what kind of strategy you generated. Most likely you’ll find that the strategy is quite fair and reasonable to everyone. Voila, you have a decision procedure for a group of people. It’s not perfect, but it’s not terrible either. All other criticisms are pointless. The day that I find some usefulness in comparing ohms to inches, I will start comparing ohms to inches.
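A minimal sketch of the three steps, using range normalization (one arbitrary pick among the “many ways to do it”; the people, outcomes, and numbers are all made up):

```python
# Step 1: per-person utility functions over a shared set of outcomes.
people = {
    "p1": {"picnic": 4.0, "movie": 2.0, "hike": 0.0},
    "p2": {"picnic": 10.0, "movie": 30.0, "hike": 20.0},
}

# Step 2: normalize each person's utilities to [0, 1] (range normalization).
def normalize(u):
    lo, hi = min(u.values()), max(u.values())
    return {o: (v - lo) / (hi - lo) for o, v in u.items()}

normed = {name: normalize(u) for name, u in people.items()}

# Step 3: pick the outcome maximizing the sum of normalized utilities.
outcomes = ["picnic", "movie", "hike"]
best = max(outcomes, key=lambda o: sum(n[o] for n in normed.values()))
print(best)  # movie: normalized sums are picnic 1.0, movie 1.5, hike 0.5
```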
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
For example, I get the automatic guarantee that I can’t be dutch booked. Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
if I know that A < B then I no longer need to think if 0.5A + 0.5C < 0.5B + 0.5C (as per axiom of independence).
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
To be clear, I might be unable to dutch book you, for example, if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
What’s this?! I thought you said “The chances to get dutch booked [if one does just does what one prefers] are many”! Are they many and commonplace, or are they few, esoteric, and possibly nonexistent? Why not at least present some hypotheticals to back up your claim? Where are these chances to get Dutch booked? If they’re many, then name three!
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities.
So, in other words:
… Step 2: Do something that is completely unmotivated, baseless, and nonsensical mathematically, and, to boot, extremely questionable (to put it very mildly) intuitively and practically even if it weren’t mathematical nonsense. …
Like I said: impossible.
It’s not perfect, but it’s not terrible either.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality, and is based wholly on a totally arbitrary normalization procedure which you made up from whole cloth and which was motivated by nothing but wanting there to be such a procedure.
Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
You said my strategy “can hardly be reasoned about”. What difficulties in reasoning about it do you see? “No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
“No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer. By the way, if I find B < C, I no longer need to check if A < C, which is another saving.
I thought you said “The chances to get dutch booked [if one does just does what one prefers] are many”!
No, it was supposed to be “The chances to get dutch booked [if one frequently exhibits preferences that violate the axioms] are many”. I have a suspicion that all of your preferences that violate the axioms happen to be ones that never influence your real choices, though I haven’t given up yet. You’re right that I should try to actually dutch book you with what I have, I’ll take some time to read your link from the other thread and maybe give it a try.
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
I can’t imagine what you could possibly mean by “works”, here. What does it mean to say that your procedure “works”? That it generates answers? So does pulling numbers out of a hat, or astrology. That “works”, too.
Your procedure generates answers to questions of interpersonal utility comparison. This, according to you, means that it “works”. But those questions don’t make the slightest bit of sense in the first place! And so the answers are just as meaningless.
If I have a black box that can give me yes/no answers to questions of the form “is X meters more than Y kilograms”, can I say that this box “works”? Absurd! Suppose I ask it whether 5 meters is more than 10 kilograms, and it says “yes”. What do I do with that information? What does it mean? Suppose I use the box’s output to try to maximize “total number”. What the heck am I maximizing?? It’s not a quantity that has any meaning or significance!
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
How is it? Why would it be? What practical problems does it present? What practical problems does it present even hypothetically (in any even remotely plausible scenario)?
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer.
Please avoid condescending language like “X is not a word you want to use”.
That aside, no, I definitely meant “irrelevant”. You said we can construct a utility function without having to rank outcomes. You’re now apparently retreating from that claim. This leaves the VNM theorem as useless in practice as I said at the start. Again, this was my contention:
(Incidentally, it’s not even correct to say that “utility is for choosing the best outcome”. After all, we can only construct your utility function after we already know what you think the best outcome is! Before we have the total ordering over outcomes, we can’t construct the utility function…)
And you have yet to make any sensible argument against this.
As for attempting to Dutch-book me, please, by all means, proceed!
Your procedure generates answers to questions of interpersonal utility comparison.
No, my procedure is a decision procedure that answers the question “what should our group do”. It’s a very sensible question. What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone. I’ll be condescending again—it’s very bad that you can’t figure out what sort of questions we’re trying to answer here.
You said we can construct a utility function without having to rank outcomes.
Let me recap what our discussion on this topic looks like from my point of view. I said that “we can construct a utility function after we have verified the axioms”. You asked how. I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set by itself. I immediately explain that I didn’t mean zero comparisons, I just meant fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well). You asked how. I then give a trivial example of a comparison that I don’t need to make if I used the axioms. Then you said that this is irrelevant.
Well, it’s not irrelevant, it’s a direct answer to your question and a trivial proof of my earlier claim. “Irrelevant” is not a reply I could have predicted, it took me completely by surprise. It is important to me to figure out what happened here. Presumably one (or both) of us struggles with the English language, or with basic logic, or just isn’t paying any attention. If we failed to communicate this badly on this topic, are we failing equally badly on all other topics? If we are, is there any point in continuing the discussion, or can it be fixed somehow?
No, my procedure is a decision procedure that answers the question “what should our group do”.
By the standards you seem to be applying, a random number generator also answers that question. Here’s a procedure: for any binary decision, flip a coin. Heads yes, tails no. Does it “work”? Sure. It “works” just as well as using your VNM utility “normalization” scheme.
What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone.
Your procedure doesn’t. It can’t (except by coincidence). This is because it contains a step which is purely arbitrary, and not causally linked with anyone’s preferences, sense of fairness, etc.
This is, of course, without getting into the weeds of just what on earth it means for decisions to “generally” seem “fair” to “everyone”. (Each of those scare-quoted words conceals a black morass of details, sets of potential—and potentially contradictory—operationalizations, nigh-unsolvable methodological questions, etc., etc.) But let’s bracket that.
The fact is, what you’ve done is come up with a procedure for generating answers to a certain class of difficult questions. (A procedure, note, that does not actually work for at least two reasons, but even assuming its prerequisites are satisfied…) The problem is that those answers are basically arbitrary. They don’t reflect anything like the “real” answers (i.e., they’re not consistent with our pre-existing understanding of what the answers are or should be). Your method works [well, it doesn’t actually work, but if it did work, it would do so] only because it’s useless.
I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set by itself. I immediately explain that I didn’t mean zero comparisons, I just meant fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well).
If that is indeed what you meant, then your claim has been completely trivial all along, and I dearly wish you’d been clear to begin with. Fewer comparisons?! What good is that?? How much fewer? Is it still an infinite number? (yes)
I am disappointed that this discussion has turned out to be yet another instance of:
You keep repeating that, but it remains unconvincing. What I need is a specific example of a situation where my procedure would generate outcomes that we could all agree are bad.
flip a coin. Heads yes, tails no. Does it “work”? Sure.
Let’s use this for an example of what kind of argument I’m waiting for from you. Suppose you (and your group) run into lions every day. You have to compare your preferences for “run away” and “get eaten”. A coin flip is eventually going to select option 2. Everyone in your group ends up dead, even though every single one of them individually preferred to live. Every outside observer would agree that they don’t want to use this sort of decision procedure for their own group. Therefore I propose that the procedure “doesn’t work” or is “bad”.
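The arithmetic behind this stylized example is a one-liner (the 50/50 coin is of course part of the hypothetical):

```python
# Stylized lion example: each encounter is settled by a fair coin, so the
# probability of surviving d consecutive encounters halves each time.
def survival_probability(days):
    return 0.5 ** days

# geometric distribution with p = 0.5: expected encounters until "get eaten"
expected_encounters = 1 / 0.5

print(survival_probability(10), expected_encounters)  # 0.0009765625 2.0
```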
Fewer comparisons?! What good is that?? How much fewer? Is it still an infinite number? (yes)
Technically there is an infinite number of comparisons left, and also an infinite number of comparisons saved. I believe that in a practical setting this difference is not insignificant, but I don’t see an easy way to exhibit that. In part that’s because I suspect that you already save those comparisons in your practical reasoning, despite denying the axioms which permit it.
your claim has been completely trivial all along
Yes, it has, so your resistance to it did seem pretty weird to me. I personally believe that my other claims are quite trivial as well, but it’s really hard to tell misunderstandings from true disagreement. What I want to do is figure out whether this particular misunderstanding came from my failure at writing or from your failure at reading.
For starters, after reading my first post, did you think, that I think, that the utility function poofed into existence with U(“eat pancakes”)=3.91 already set by itself, after performing zero comparisons? This isn’t a charitable interpretation, but I can understand it. How did you interpret my two attempts to clarify my point in the further comments?
I’d love to continue this discussion, but I’m afraid that the moderation policy on this site does not permit me to do so effectively, as you see. I’d be happy to take this to another forum (email, IRC, the comments section of my blog—whatever you prefer). If you’re interested, feel free to email me at myfirstname@myfullname.net (you could also PM me via LW’s PM system, but last time I tried using it, I couldn’t figure out how to make it work, so caveat emptor). If not, that’s fine too; in that case, I’ll have to bow out of the discussion.
In school I learned about utility in the context of constructing decision problems. You rank the possible outcomes of a scenario in a preference ordering. You assign utilities to the possible outcomes, using an explicitly mushy, introspective process—unless money is involved, in which case the “mushy” step came in when you calibrated your nonlinear value-of-money function. You estimate probabilities where appropriate. You chug through the calculations of the decision tree and conclude that the best choice is the one with the greatest probability-weighted utility, which serves as a proxy for the probabilistically best outcome.
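That procedure can be sketched in a few lines; the choices, probabilities, and utilities below are made up purely for illustration:

```python
# Hypothetical decision tree: each choice is a lottery of
# (probability, utility) pairs; pick the greatest expected utility.
choices = {
    "drill well":  [(0.25, 100.0), (0.75, -20.0)],
    "don't drill": [(1.0, 0.0)],
}

def expected_utility(lottery):
    return sum(p * u for p, u in lottery)

best = max(choices, key=lambda c: expected_utility(choices[c]))
print(best, expected_utility(choices[best]))  # drill well 10.0
```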
That’s all good. Assuming you can actually do all of the above steps, I see no problem at all with using utility in that way. Very useful for deciding whether to drill a particular oil well or invest in a more expensive kind of ball bearing for your engine design.
But if you’ve ever actually tried to do that for, say, an important life decision, I would bet money that you ran up against problems. (My very first post on lesswrong.com concerned a software tool that I built to do exactly this. So I’ve been struggling with these issues for many years.) If you’re having trouble making a choice, it’s very likely that your certainty about your preferences is poor. Perhaps you’re able to construct the decision tree, and find that the computed “best choice” is actually highly sensitive to small changes in the utility values of the outcomes. In that case, the whole exercise was pointless, aside from explicating why this was a hard decision; but on some level you already knew that, which is why you were building a decision tree in the first place.
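The sensitivity problem can be made concrete with a toy example (all names and numbers hypothetical): perturb one choice’s utilities slightly and watch the “best choice” flip.

```python
choices = {
    "job A": [(1.0, 10.0)],                # (probability, utility) pairs
    "job B": [(0.5, 21.0), (0.5, 0.0)],    # EU = 10.5, barely ahead
}

def best_choice(bump=0.0, target=None):
    # expected utility, with every utility of `target` shifted by `bump`
    def eu(name):
        return sum(p * (u + (bump if name == target else 0.0))
                   for p, u in choices[name])
    return max(choices, key=eu)

flips = {name for name in choices
         if best_choice(bump=-1.0, target=name) != best_choice()}
print(flips)  # {'job B'}: a small utility change reverses the decision
```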
---
Another property of 3D space is that there is, in fact, a natural and useful definition of a norm, the 3D vector magnitude, which gives us the intuitive quantity “total distance”. I daresay physics would look very different if this weren’t the case.
“Total distance” (or vector magnitude or whatever) is both real and useful. “Real” in the sense that physics stops making sense without it. “Useful” in the sense that engineering becomes impossible without it.
My contention is that “utility” is not real and only narrowly useful.
It’s not real because, again, there’s no neurological correlate for utility, there’s no introspective sense of utility, utility is a purely abstract mathematical quantity.
It’s only narrowly useful because, at best, it helps you make the “best choice” in decision problems in a sort of rigorously systematic way, such that you can show your work to a third party and have them agree that that was indeed the best choice by some pseudo-objective metric.
All of the above is uncontroversial, as far as I can tell, which makes it all the weirder when rationalists talk about “giving utility”, “standing on top of a pile of utility”, “trading utilons”, and “human utility functions”. None of those phrases make any sense, unless the speaker is using “utility” in some kind of folk terminology sense, and departing completely from the actual definition of the concept.
At the risk of repeating myself, this community takes certain problems very seriously, problems which are only actually problems if utility is the right abstraction for systematizing human wellbeing. I don’t see that it is, unless you find yourself in a situation where you can converge on a clear preference ordering with relatively good certainty.
Perhaps you’re able to construct the decision tree, and find that the computed “best choice” is actually highly sensitive to small changes in the utility values of the outcomes, in which case, the whole exercise was pointless, aside from the fact that it explicated why this was a hard decision
Are you sure that optimizing oil wells and ball bearings causes no such problems? These sound like generic problems you’d find with any sufficiently complex system, not something unique to the human condition and experience.
I could argue that the abstract concept of utility is both quite real/natural and a useful abstraction, but there is nothing too disagreeable in your above comment. What bothers me is, I don’t see how adding more dimensions to utility solves any of the problems you just talked about.
If these are indeed problems that crop up with any sufficiently complex system, that’s even worse news for the idea that we can/should be using utility as the Ur-abstraction for quantifying value.
Perhaps adding more dimensions doesn’t solve anything. Perhaps all I’ve accomplished is suggesting a specific, semi-novel critique of utilitarianism. I remain unconvinced that I should push past my intuitive reservations and just swallow the Torture pill or the Repugnant Conclusion pill because the numbers say so.
That being said, every formulation of Utilitarianism that I can find depends on some sense of the “most good” and utility is a mathematical formalization of that idea. My quibble is less with the idea of doing the “most good” and more with the idea that the “most good” precisely corresponds to VNM utility.
Ur- is a prefix which strictly means “original” but which I was using here intending more of a connotation of “fundamental”. Also I probably shouldn’t have capitalized it.
My point is that you can accept that “most good” does in fact correspond to VNM utility but reject that we want to add up this “most good” for all people and maximize the sum.
Hm. Yeah, you can accept that. You can choose to. I’m not arguing that you can’t — if you accept the axioms, then you must accept the conclusions of the axioms. I just don’t see why you would feel compelled to accept the axioms.
I feel a very strong urge to accept transitivity, others I care somewhat less about, but they seem reasonable too.
then you must accept the conclusions of the axioms
Which conclusions? To reiterate, my point is that “the Torture pill or the Repugnant Conclusion” don’t follow immediately from the existence of individual utility. They also require a demand to increase the total sum of utilities for a category of agents, which does sound vaguely good, but isn’t the only option.
It is rather strange to say that utility is for choosing the best outcome, given that a utility function can only be constructed in the first place if it’s already true that we can impose a total ordering on outcomes.
If what you have in mind when you say “utility” is VNM-utility (which is what it sounds like), then as you know, only agents whose preferences satisfy the axioms have a utility function (i.e., a utility function can be constructed for an agent if and only if the agent’s preferences satisfy the axioms).
Whether an agent’s preferences do, or do not, satisfy the VNM axioms, is an empirical question. We can ask it about a particular human, for instance. The answer will be “yes” or “no”, and it will, again, be an empirical fact.
Suppose we investigate some person’s preferences, and find that they do not satisfy the VNM axioms? Well, that’s it, then; that person has no utility function. That is a fact. No normative discussions (such as discussions about what a utility function is “for”) can change it.
I read moridinamael’s commentary to be aimed at just such an empirical question. He is asking: what can we say about human preferences? Is our understanding of the facts on the ground mistaken in a particular way? Are human preferences in fact like this, and not like that? —and so on.
Given that, comments about what “the point of” having a utility function, or what a utility function is “for”, or any other such normative concerns, seem inapplicable and somewhat strange. Asking “what benefit having more dimensions adds” seems like entirely the wrong sort of question to ask—a confusion underlies it, about what sort of thing we’re talking about. The additional dimensions either are present in the data, or they’re not. (Would you ask “what benefit” is derived from using three dimensions to measure space—why not define points in space using a one-dimensional scalar, isn’t that enough…? etc.)
Flagging that (I think) utilitarianism and VNM-utility are different things. They are closely related, but I think Bentham invented utilitarianism before VNM utility was formalized. They are named similar things for similar reasons but the formalisms don’t necessarily transfer.
It is separately the case that:
a) humans (specific or general) might be VNM agents, and that if they are not, they might aspire to try to be so that they don’t spend all their time driving from San Francisco to San Diego to New York
b) even if they are not, if you care about global welfare (either altruistically or for self-interested, Rawlsian veil-of-ignorance-style thinking), you may want to approximate whether given decisions help or harm people, and this eventually needs to cash out into some kind of ability to decide whether a decision is net-positive.
tldr: “utilitarianism” (the term the OP used) does not formally imply VNM utility, although it does waggle its eyebrows suggestively.
Addressing your points separately, just as you made them:
I.
I do not think that is the mistake zulupineapple is making (getting VNM utility, and utilitarianism, mixed up somehow).
(Though it is a common mistake, and I have commented on it many times myself. I just think it is not the problem here. I think utility, in the sense of VNM utility (or something approximately like it) is in fact what zulupineapple had in mind. Of course, he should correct me if I misunderstood.)
II.
re: a): Someone we know once said: “the utility function is not up for grabs”. Well, indeed; and neither is my lack of utility function (i.e., my preferences) up for grabs. It seems very strange indeed, to say “in order to be rational, change your preferences”; when the whole point of (instrumental) rationality is to satisfy my preferences.
And I can’t help but notice that actual humans, in real life, do not spend all their time driving from SF to SD to NY and so on. Why is that? Now, perhaps you meant that scenario figuratively—yes? What you had in mind was some other, more subtle (apparent) preference reversal, and the driving between cities was a metaphor. Very well; but I suspect that, if you described the actual (apparent) preference reversal(s) you had in mind, their status as irrational would be rather more controversial, and harder to establish to everyone’s satisfaction.
III.
re: b): Making decisions does not require having a total ordering on outcomes—not even if we are consequentialists (as I certainly am) and care about helping vs. harming people (which is certainly one of the things I care about).
Furthermore, notice specifically that even if your requirement is that we have an “ability to decide whether a decision is net-positive”, even that does not require having a total ordering on outcomes. (Example 1: I can prefer situation B to A, and C to A, and D to A, while having a preference cycle between B, C, and D (this is the trivial case). Example 2: I can have a preference cycle between A, B, and C, and believe that any decision to go from one of those states to the next one in the cycle is net positive (this is the substantive case).)
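Both examples can be checked mechanically. Treating “strictly prefers” as a directed graph (the encoding here is mine, for illustration), Example 2’s cycle is easy to detect, and the cycle is exactly what rules out any real-valued utility representation:

```python
# Example 2 above: B, C, D each preferred to A, plus a cycle B > C > D > B.
# No utility function can represent this, since u(B) > u(C) > u(D) > u(B)
# is unsatisfiable.
prefers = {("B", "A"), ("C", "A"), ("D", "A"),
           ("B", "C"), ("C", "D"), ("D", "B")}

def has_cycle(relation):
    # depth-first search for a directed cycle in the "strictly prefers" graph
    graph = {}
    for a, b in relation:
        graph.setdefault(a, []).append(b)
    def visit(node, seen):
        if node in seen:
            return True
        return any(visit(n, seen | {node}) for n in graph.get(node, []))
    return any(visit(n, frozenset()) for n in graph)

print(has_cycle(prefers))  # True
```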
IV.
By the way, violation of transitivity is not the most interesting form of VNM axiom violation—because it’s relatively easy to make the case that it constitutes irrationality. Far more interesting is violation of continuity; and you will, I suspect, have a more difficult time convincingly showing it to be irrational. (Correspondingly, it’s also—in my experience—more common among humans.)
(Edit: corrected redundant phrasing)
Can you describe the violation of continuity you observe in humans?
Robyn Dawes describes one class of such violations in his Rational Choice in an Uncertain World. (Edit: And he makes the case—quite convincingly, IMO—that such violations are not irrational.) You can search my old LessWrong comments and find some threads where I explain this. If you also search my comments for keywords “grandmother” and “chicken”, you’ll find some more examples.
If you can’t find this stuff, I’ll take some time to find it myself at some point, but not right now, sorry.
Found it here
Would you also prefer losing an arbitrary amount of money to any probability of your grandmother dying? I think chicken can be converted into money, so you should prefer this as well. I’m hoping that you’ll find this preference equivalent, but then find that your actions don’t actually follow it.
a) Chickens certainly can’t be converted into money (in the sense you mean)
b) Even if they could be, the comparison is nonsensical, because in the money case, we’re talking about my money, whereas in the chickens case we’re talking about chickens existing in the world (none of which I own)
c) That aside, I do not, in fact, prefer losing an arbitrary amount of money to any probability of my grandmother dying (but I do prefer losing quite substantial amounts of money to relatively small probabilities of my grandmother coming to any harm, and my actions certainly do follow this)
Chickens are real wealth owned by real people. Pressing a magical button that destroys all chickens would do massive damage to the well-being of many people. So, you’re not willing to sacrifice your own wealth for tiny reductions in the probability of dead grandma, but you’d gladly sacrifice the wealth of other people? That would make you a bad person. And the economic damage would end up affecting you eventually anyway.
I rather think you’ve missed most, if not all, of the point of that hypothetical (and you also don’t seem to have fully read the grandparent comment to this one, judging by your question).
Perhaps we should set the grandmother/chickens example aside for now, as we’re approaching the limit of how much explaining I’m willing to do (given that the threads where I originally discussed this are quite long and answer all these questions).
Take a look at the other example I cited.
Do you mean the a), b), c) comment? Which section did I miss?
Either way, I ask: do you prefer destroying an arbitrary amount of wealth (not yours) to any probability of your grandma dying? At least give a Yes/No.
From some book? You know, it would be great if your arguments were contained in your comments.
As I said, there are old LW comments of mine where I explain said argument in some detail (though not quite as much detail as the source). (I even included diagrams!)
Edited to add:
What difference does it make?
If I say “yes”, then we can have the same conversation as the one about the chickens. It’s just another example of the same thing.
If I say “no”, then it’s not a relevant example at all and there’s no reason to discuss it further.
This is a totally pointless line of inquiry; this is the last I’ll say about it.
Where? I didn’t see any such things in the LW comments I found. Are there more threads? Are you going to link to them? You’ve made a big claim, and I haven’t seen nearly enough defense for it.
Of course, the question is such that I get to feel right either way. If you say “no”, then I can deduce that you don’t understand what “wealth” is. If you say “yes”, then I can deduce that you’re a sociopath with poor understanding of cause and effect. Charitably, I could imagine that you were talking about destroying chickens in some parallel universe, where their destruction could 100% certainly not have consequences for you, but that’s a silly scenario too.
http://www.greaterwrong.com/posts/g9msGr7DDoPAwHF6D/to-what-extent-does-improved-rationality-lead-to-effective#kpMS4usW5rvyGkFgM
Regarding the grandma-chicken argument, having given it some thought, I think I understand it better now. I’d explain it like this. There is a utility function u, such that all of my actions maximize Eu. Suppose that u(A) = u(B) for some two choices A, B. Then I can claim that A > B, and exhibit this preference in my choices, i.e. given a choice between A and B I would always choose A. However, for every B+ such that u(B+) > u(B), I would also claim B < A < B+. This does violate continuity; however, because I’m still maximizing Eu, my actions can’t be called irrational, and the function u is hardly any less useful than it would be without the violation.
Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/wpT7LwqLnzJYFMveS.
Finally I read your link. So the main argument is that there is a preference between different probability distributions over utility, even if expected utility is the same. This is intuitively understandable, but I find it lacking specificity.
I propose the following three-step experiment. First a human chooses a distribution X from two choices (X=A or X=B). Then we randomly draw a number P from the selected distribution X, then we try to win 1$ with probability P (and 0$ otherwise, which I’ll ignore by setting u(0$)=0, because I can). Here you can plot X as a distribution over expected utility, which equals P times u(1$). The claim is that some distributions X are more preferable to others, despite what pure utility calculations say. I.e. Eu(A) > Eu(B), but a human would choose B over A and would not be irrational. Do you agree that this experiment accurately represents Dawes’s claim?
Naturally, I find the argument bad. The double lottery can be easily collapsed into a single lottery, the final probabilities can be easily computed (which is what Eu does). If P(win 1$ | A) = P(win 1$ | B) then you’re free to make either choice, but if P(win 1$ | A) > P(win 1$ | B) even by a hair, and you choose B, you’re being irrational. Note that the choices of 0$ and 1$ as the prizes are completely arbitrary.
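The collapsing-the-lottery point is just linearity of expectation; a sketch with two hypothetical distributions sharing the same mean:

```python
# Two-stage lottery from the experiment above: draw a win-probability P
# from X, then win $1 with probability P. The compound lottery collapses:
# P(win | X) is simply the mean of X.
def win_probability(distribution):
    # distribution: list of (p_value, weight) pairs, weights summing to 1
    return sum(p * w for p, w in distribution)

A = [(0.5, 1.0)]                   # degenerate: P is always 0.5
B = [(0.25, 0.5), (0.75, 0.5)]     # spread out, but the same mean
print(win_probability(A), win_probability(B))  # 0.5 0.5
```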
Are you referring to that one moderation note? I think you’re overreacting.
I would love to respond to your comment, and will certainly do so, but not here. Let me know what other venue you prefer.
I’m afraid not.
I think that he set the thought experiment in the Least Convenient Possible World. So your last hypothesis is right.
This seems like a weird preference to have. It de facto means that you would never pay any attention whatsoever to the lives of chickens, since any infinitesimally small change to the probability of your grandmother dying will outweigh any potential moral relevance. For all practical purposes in our world (which is interconnected to a degree that almost all actions will have some potential consequences for your grandmother), an agent following this preference would be indistinguishable from someone who does not care at all about chickens.
Only if that agent has a grandmother.
Suppose my grandmother (may she live to be 120) were to die. My preferences about the survival of chickens would now come into play. This is hardly an exotic scenario! There are many parallel constructions we can imagine. (Or do you propose that we decline to have preferences that bear only on possible future situations, not currently possible ones?)
Edited to add:
This is called “lexicographic preferences”, and it too is hardly exotic or unprecedented.
(end edit)
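A lexicographic preference is also trivial to operationalize; the outcome encoding here is my own illustration:

```python
# Outcome encoded as (grandmother_alive, chickens_saved). Python tuples
# compare lexicographically, so the first coordinate strictly dominates.
def prefers(a, b):
    return a > b

print(prefers((1, 0), (0, 10**9)))  # True: grandma outweighs any chickens
print(prefers((0, 100), (0, 5)))    # True: with grandma gone, chickens matter
```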
+++
Of course, even that is moot if we reject the proposition that “our world … is interconnected to a degree that almost all actions will have some potential consequences to your grandmother”.
And there are good reasons to reject it. If nothing else, it’s a fact that given sufficiently small probabilities, we humans are not capable of considering numbers of such precision, and so it seems strange to speak of basing our choices on them! There is also noise in measurement, errors in calculation, inaccuracies in the model, uncertainty, and a host of other factors that add up to the fact that in practice, “almost all actions” will, in fact, have no (foreseeable) consequences for my grandmother.
The value of information of finding out the consequences that any action has on the life of your grandmother is infinitely larger than the value you would assign to any number of chickens. De facto, this means that even if your grandmother is dead, as long as you are not literally 100% certain that she is dead and forever gone and could not possibly be brought back, you completely ignore the plight of chickens.
The fact that he is not willing to kill his grandmother to save the chickens doesn’t imply that chickens have 0 value or that his grandmother has infinite value.
Consider the problem from an egocentric point of view: to be responsible for one’s grandmother’s death feels awful, but so does dedicating your life to the very unlikely possibility of saving someone who has been declared dead.
Stuart wrote a post about this a while ago, though it’s not the most understandable.
That’s an easy question. The benefit is increased model accuracy. You could also ask, “is there any benefit to using even more space dimensions” and this is also a good question, and a topic of modern physics.
Yes. And if OP wanted to show that something is wrong with the usual utility, they should show how the axioms are broken (transitivity in particular, the rest I could see hand-waving). I don’t think they did that.
A car is for moving around, regardless of whether you actually have a car. I don’t really understand your criticism. Ask yourself, why does the VNM utility theorem exist, why do we care about it, why do we care whether humans satisfy its axioms. The answer will presumably involve choosing good outcomes.
Well, there you go, then. Same thing.
If I say that I don’t (contrary to popular belief) have a car, and you reply that I’m confused about what cars are for, then something has gone wrong with your reasoning.
What does “exist” mean, here?
Who’s “we”? I, personally, find it to be of abstract mathematical interest, no more. (This was also more or less the view of Oskar Morgenstern himself.)
Ditto. In any case, we can want to choose good outcomes all we want, and that still won’t affect the facts about whether or not humans have utility functions. Our purposes or our reasons for caring about some mathematical results or any such thing, doesn’t change the facts.
Edited to add:
Perhaps, perhaps not, but that still leaves us with your response being a non sequitur! If you think moridinamael is wrong about the facts—if you think that in fact, despite any alleged multidimensionality of experience, a total preference ordering over [lotteries over] outcomes is possible—that’s all well and good; but what does “what is utility for” have to do with it?
(Incidentally, it’s not even correct to say that “utility is for choosing the best outcome”. After all, we can only construct your utility function after we already know what you think the best outcome is! Before we have the total ordering over outcomes, we can’t construct the utility function…)
I didn’t see OP explaining how preference model accuracy is increased by having more dimensions. Rather, I don’t think OP is even modeling the same thing that I’m modeling.
OP didn’t say he doesn’t have a car, from my point of view OP says that he doesn’t need a car, because a car can’t cook for him.
It means “has been conjectured, proven, talked about or etc”. Nothing fancy.
This is a weird thing to say. Multidimensionality of experience is not being questioned. The proposition that the entirety of human mental state can be meaningfully compressed to one number is stupid to the extent that I doubt anyone has ever seriously suggested it in all of human history. The problem is that OP argues against this trivially false claim, and treats it as some problem of utility. My response is that utility fails to express the entire human experience because it is not for expressing the entire human experience. The same way that a car fails at cooking because it is not for cooking.
No, we can construct a utility function after we have verified the axioms (or just convinced ourselves that they should work). This is easier than actually ranking every possible outcome.
This is actually a flawed perspective. I guess it’s indicative of your belief that utility has no practical applications. If my preferences don’t satisfy the axioms, that only means that no utility function will describe my preferences perfectly. But some functions might approximate them and there could still be practical benefit to using them.
Well, I guess that explains something. We should probably expand on this, but I struggle to understand why you think this, or what you think that I think.
“I don’t have a car” is exactly how I read the OP. Sibling comment seems to confirm this reading.
This is a novel claim! How do we do this? It seems manifestly false!
What does “approximate” mean, here? Let’s recall that according to the VNM theorem, if an agent’s preferences satisfy the axioms, then there exists a utility function u such that one lottery is preferred to another if and only if it has the greater expected value of u.
In other words, for a VNM-compliant agent attempting to decide between outcomes A and B, A will be preferred to B if and only if u(A) > u(B).
If, however, the agent is VNM-noncompliant, then for any utility function u, there will exist at least one pair of outcomes A, B such that A is preferred to B, but u(A) ≤ u(B).
This means that using the utility function as a guide to decision-making is guaranteed to violate the agent’s preferences in at least some case.
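A minimal sketch (Python, with a hypothetical three-outcome preference cycle) of the claim that no utility function can represent noncompliant preferences. The outcome names and the cycle itself are invented for illustration:

```python
from itertools import permutations

# Hypothetical strict preferences forming a cycle: A > B, B > C, C > A.
preferences = [("A", "B"), ("B", "C"), ("C", "A")]

def violations(u, prefs):
    """Count preference pairs that the utility assignment u contradicts."""
    return sum(1 for better, worse in prefs if not (u[better] > u[worse]))

# Try every strict ranking of the three outcomes as a candidate utility function.
for ranking in permutations([3, 2, 1]):
    u = dict(zip(["A", "B", "C"], ranking))
    assert violations(u, preferences) >= 1  # every candidate fails somewhere
```

Whatever numbers are assigned, u(A) > u(B) > u(C) > u(A) is impossible, so at least one stated preference is always violated.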
Such an agent then has two choices:
a) He can ignore his own preferences, and use the utility function as a means of decision-making; or
b) He can evaluate the utility function’s output by comparing it to his own preferences, deferring to the latter when the two conflict.
Choosing (a) seems completely unmotivated. And if the agent chooses (b), well, then what’s the point of the utility function to begin with? Just do what you prefer.
In fact, I struggle to see how “just do what you prefer” isn’t a superior strategy in any case, compared to constructing, and then following, a utility function, given that we have to elicit all of an agent’s preferences in order to construct the utility function to begin with!
And what does that leave us with, in terms of uses for a utility function? It’s not like we can do interpersonal utility comparisons (such operations are completely meaningless under VNM); which means we can’t aggregate VNM-utility across persons. So what practical benefit is there? How do we “use” a utility function, even assuming a VNM-compliant agent (quite an assumption) and assuming we can elicit all the agent’s preferences and construct the thing (another big assumption)? What do we do with it?
I didn’t mean immediately. I meant that assuming the axioms allows you to compute, or at least bound, the expected utility of some lotteries relative to the expected utility of other lotteries.
Yes, that’s what “approximate” means, especially if B is preferred to most other possible outcomes C.
“Just do what you prefer” is awful. Preference checking is in many cases neither cheap nor easy. The space of possible outcomes is vast. The chances of getting Dutch booked are many. Your strategy can hardly be reasoned about.
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
How? Demonstrate, please.
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with. You have yet to say or show anything that even approaches a rebuttal to this basic point.
Again: demonstrate. I tell you I follow the “do what I prefer” strategy. Dutch book me! I offer real money (up to $100 USD). I promise to consider any bet you offer (less those that are illegal where I live).
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
Edited to add:
I don’t think you understand how fundamental the difficulty is. Interpersonal comparison, and aggregation, of VNM-utility is not hard. It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs. You can’t “approximate” it, or do a “not-exactly-correct” computation, or anything like that. There’s nothing to approximate in the first place!
I think you’re confusing outcomes with lotteries. To build the utility function I need to make comparisons between individual outcomes. E.g. if I know that A < B, then I no longer need to think about whether 0.5A + 0.5C < 0.5B + 0.5C (as per the axiom of independence). You, on the other hand, need to separately evaluate every possible lottery.
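A quick numerical check (Python, with made-up utility values) of the saving being claimed here: once u(A) < u(B) is known, the independence axiom settles the mixed-lottery comparison without a separate evaluation.

```python
# Made-up utilities for three outcomes; only the ordering u(A) < u(B) matters.
u = {"A": 1.0, "B": 2.0, "C": 5.0}

def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * u[o] for p, o in lottery)

# Direct comparison of the two 50/50 mixtures with C...
left  = expected_utility([(0.5, "A"), (0.5, "C")], u)
right = expected_utility([(0.5, "B"), (0.5, "C")], u)

# ...agrees with what independence already told us from u(A) < u(B) alone.
assert (left < right) == (u["A"] < u["B"])
```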
I also need you to explain to me in what ways your “do what I prefer” strategy violates the axioms, and how that works. I’m waiting for that in our other thread. To be clear, I might be unable to Dutch book you, for example, if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities. Then observe what kind of strategy you generated. Most likely you’ll find that the strategy is quite fair and reasonable to everyone. Voila, you have a decision procedure for a group of people. It’s not perfect, but it’s not terrible either. All other criticisms are pointless. The day that I find some usefulness in comparing ohms to inches, I will start comparing ohms to inches.
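The three steps above can be sketched in a few lines of Python. The people, options, utility values, and the particular normalization (range normalization to [0, 1]) are all invented here for illustration; range normalization is just one of the “many ways to do it” mentioned:

```python
# Step 1: hypothetical utility values elicited for each person, per option.
utilities = {
    "alice": {"picnic": 10.0, "movie": 4.0, "hike": 0.0},
    "bob":   {"picnic": 1.0,  "movie": 3.0, "hike": 2.0},
}

# Step 2: normalize each person's utilities to [0, 1] (range normalization).
# Assumes no one is exactly indifferent between all options.
def normalize(u):
    lo, hi = min(u.values()), max(u.values())
    return {option: (value - lo) / (hi - lo) for option, value in u.items()}

normalized = {person: normalize(u) for person, u in utilities.items()}

# Step 3: maximize the sum of normalized utilities across the group.
options = next(iter(utilities.values())).keys()
best = max(options, key=lambda o: sum(normalized[p][o] for p in normalized))
```

With these numbers the compromise option wins: Alice’s favorite (picnic) scores 1.0 + 0.0, while the movie scores 0.4 + 1.0, so the group picks the movie.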
For example, I get the automatic guarantee that I can’t be Dutch booked. Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
What’s this?! I thought you said “The chances to get Dutch booked [if one just does what one prefers] are many”! Are they many and commonplace, or are they few, esoteric, and possibly nonexistent? Why not at least present some hypotheticals to back up your claim? Where are these chances to get Dutch booked? If they’re many, then name three!
So, in other words:
… Step 2: Do something that is completely unmotivated, baseless, and nonsensical mathematically, and, to boot, extremely questionable (to put it very mildly) intuitively and practically even if it weren’t mathematical nonsense. …
Like I said: impossible.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality, and is based wholly on a totally arbitrary normalization procedure which you made up from whole cloth and which was motivated by nothing but wanting there to be such a procedure.
You said my strategy “can hardly be reasoned about”. What difficulties in reasoning about it do you see? “No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer. By the way, if I find B < C, I no longer need to check if A < C, which is another saving.
No, it was supposed to be “The chances to get Dutch booked [if one frequently exhibits preferences that violate the axioms] are many”. I have a suspicion that all of your preferences that violate the axioms happen to be ones that never influence your real choices, though I haven’t given up yet. You’re right that I should try to actually Dutch book you with what I have; I’ll take some time to read your link from the other thread and maybe give it a try.
I can’t imagine what you could possibly mean by “works”, here. What does it mean to say that your procedure “works”? That it generates answers? So does pulling numbers out of a hat, or astrology. That “works”, too.
Your procedure generates answers to questions of interpersonal utility comparison. This, according to you, means that it “works”. But those questions don’t make the slightest bit of sense in the first place! And so the answers are just as meaningless.
If I have a black box that can give me yes/no answers to questions of the form “is X meters more than Y kilograms”, can I say that this box “works”? Absurd! Suppose I ask it whether 5 meters is more than 10 kilograms, and it says “yes”. What do I do with that information? What does it mean? Suppose I use the box’s output to try to maximize “total number”. What the heck am I maximizing?? It’s not a quantity that has any meaning or significance!
How is it? Why would it be? What practical problems does it present? What practical problems does it present even hypothetically (in any even remotely plausible scenario)?
Please avoid condescending language like “X is not a word you want to use”.
That aside, no, I definitely meant “irrelevant”. You said we can construct a utility function without having to rank outcomes. You’re now apparently retreating from that claim. This leaves the VNM theorem as useless in practice as I said at the start. Again, this was my contention:
And you have yet to make any sensible argument against this.
As for attempting to Dutch-book me, please, by all means, proceed!
No, my procedure is a decision procedure that answers the question “what should our group do”. It’s a very sensible question. What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone. I’ll be condescending again—it’s very bad that you can’t figure out what sort of questions we’re trying to answer here.
Let me recap what our discussion on this topic looks like from my point of view. I said that “we can construct a utility function after we have verified the axioms”. You asked how. I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set, all by itself. I immediately explained that I didn’t mean zero comparisons, just fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well). You asked how. I then gave a trivial example of a comparison that I don’t need to make if I use the axioms. Then you said that this is irrelevant.
Well, it’s not irrelevant, it’s a direct answer to your question and a trivial proof of my earlier claim. “Irrelevant” is not a reply I could have predicted, it took me completely by surprise. It is important to me to figure out what happened here. Presumably one (or both) of us struggles with the English language, or with basic logic, or just isn’t paying any attention. If we failed to communicate this badly on this topic, are we failing equally badly on all other topics? If we are, is there any point in continuing the discussion, or can it be fixed somehow?
By the standards you seem to be applying, a random number generator also answers that question. Here’s a procedure: for any binary decision, flip a coin. Heads yes, tails no. Does it “work”? Sure. It “works” just as well as using your VNM utility “normalization” scheme.
Your procedure doesn’t. It can’t (except by coincidence). This is because it contains a step which is purely arbitrary, and not causally linked with anyone’s preferences, sense of fairness, etc.
This is, of course, without getting into the weeds of just what on earth it means for decisions to “generally” seem “fair” to “everyone”. (Each of those scare-quoted words conceals a black morass of details, sets of potential—and potentially contradictory—operationalizations, nigh-unsolvable methodological questions, etc., etc.) But let’s bracket that.
The fact is, what you’ve done is come up with a procedure for generating answers to a certain class of difficult questions. (A procedure, note, that does not actually work for at least two reasons, but even assuming its prerequisites are satisfied…) The problem is that those answers are basically arbitrary. They don’t reflect anything like the “real” answers (i.e., they’re not consistent with our pre-existing understanding of what the answers are or should be). Your method works [well, it doesn’t actually work, but if it did work, it would do so] only because it’s useless.
If that is indeed what you meant, then your claim has been completely trivial all along, and I dearly wish you’d been clear to begin with. Fewer comparisons?! What good is that?? How many fewer? Is it still an infinite number? (Yes.)
I am disappointed that this discussion has turned out to be yet another instance of:
Alice: <Extraordinary, novel, truly stunning claim>!
Bob: What?! Impossible! Shocking, if true! Explain!
long discussion/argument ensues
Alice: Of course I actually meant <a version of the original claim so much weaker as to be trivial>, duh.
Bob: Damnit.
You keep repeating that, but it remains unconvincing. What I need is a specific example of a situation where my procedure would generate outcomes that we could all agree are bad.
Let’s use this for an example of what kind of argument I’m waiting for from you. Suppose you (and your group) run into lions every day. You have to compare your preferences for “run away” and “get eaten”. A coin flip is eventually going to select option 2. Everyone in your group ends up dead, even though every single one of them individually preferred to live. Every outside observer would agree that they don’t want to use this sort of decision procedure for their own group. Therefore I propose that the procedure “doesn’t work” or is “bad”.
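The arithmetic behind the lion example is simple enough to spell out: with a fair coin deciding each encounter, the group’s survival probability decays geometrically.

```python
# Probability that the coin-flip procedure selects "run away" on every one
# of n daily lion encounters (each flip is independent, 50/50).
def survival_probability(n_days):
    return 0.5 ** n_days

# After a month, the group is almost certainly dead.
p = survival_probability(30)  # on the order of 1e-9
```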
Technically there is an infinite number of comparisons left, and also an infinite number of comparisons saved. I believe that in a practical setting this difference is not insignificant, but I don’t see an easy way to exhibit that. In part that’s because I suspect that you already save those comparisons in your practical reasoning, despite denying the axioms which permit it.
Yes, it has, so your resistance to it did seem pretty weird to me. I personally believe that my other claims are quite trivial as well, but it’s really hard to tell misunderstandings from true disagreement. What I want to do, is figure out whether this particular misunderstanding came from my failure at writing or from your failure at reading.
For starters, after reading my first post, did you think that I think the utility function poofed into existence with U(“eat pancakes”)=3.91 already set by itself, after performing zero comparisons? This isn’t a charitable interpretation, but I can understand it. How did you interpret my two attempts to clarify my point in the further comments?
Hi zulupineapple,
I’d love to continue this discussion, but I’m afraid that the moderation policy on this site does not permit me to do so effectively, as you see. I’d be happy to take this to another forum (email, IRC, the comments section of my blog—whatever you prefer). If you’re interested, feel free to email me at myfirstname@myfullname.net (you could also PM me via LW’s PM system, but last time I tried using it, I couldn’t figure out how to make it work, so caveat emptor). If not, that’s fine too; in that case, I’ll have to bow out of the discussion.
Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/4vD2B3aG87EGJb7L5
Perhaps the following context will be useful.
In school I learned about utility in context of constructing decision problems. You rank the possible outcomes of a scenario in a preference ordering. You assign utilities to the possible outcomes, using an explicitly mushy, introspective process—unless money is involved, in which case the “mushy” step came in when you calibrated your nonlinear value-of-money function. You estimate probabilities where appropriate. You chug through the calculations of the decision tree and conclude that the best choice is the one with the greatest probability-weighted utility, which serves as a proxy for the best outcome.
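The textbook procedure described above can be sketched in a few lines. The choices, probabilities, and utility values here are made up for illustration:

```python
# Each choice is a lottery: a list of (probability, elicited utility) pairs.
decision = {
    "drill_well":   [(0.3, 100.0), (0.7, -20.0)],
    "do_not_drill": [(1.0, 0.0)],
}

def expected_utility(lottery):
    """Probability-weighted utility of a lottery."""
    return sum(p * u for p, u in lottery)

# Pick the choice with the greatest probability-weighted utility.
best = max(decision, key=lambda choice: expected_utility(decision[choice]))
# Here drilling scores 0.3*100 + 0.7*(-20) = 16 versus 0 for not drilling.
# Note that modest changes to the elicited utilities can flip the answer,
# which is the sensitivity problem discussed below.
```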
That’s all good. Assuming you can actually do all of the above steps, I see no problem at all with using utility in that way. Very useful for deciding whether to drill a particular oil well or invest in a more expensive kind of ball bearing for your engine design.
But if you’ve ever actually tried to do that for, say, an important life decision, I would bet money that you ran up against problems. (My very first post on lesswrong.com concerned a software tool that I built to do exactly this. So I’ve been struggling with these issues for many years.) If you’re having trouble making a choice, it’s very likely that your certainty about your preferences is poor. Perhaps you’re able to construct the decision tree, and find that the computed “best choice” is actually highly sensitive to small changes in the utility values of the outcomes. In that case, the whole exercise was pointless, aside from explicating why this was a hard decision—but on some level you already knew that; after all, that’s why you were building a decision tree in the first place.
---
Another property of 3D space is that there is, in fact, a natural and useful definition of a norm, the 3D vector magnitude, which gives us the intuitive quantity “total distance”. I daresay physics would look very different if this weren’t the case.
“Total distance” (or vector magnitude or whatever) is both real and useful. “Real” in the sense that physics stops making sense without it. “Useful” in the sense that engineering becomes impossible without it.
My contention is that “utility” is not real and is only narrowly useful.
It’s not real because, again, there’s no neurological correlate for utility, there’s no introspective sense of utility, utility is a purely abstract mathematical quantity.
It’s only narrowly useful because, at best, it helps you make the “best choice” in decision problems in a sort of rigorously systematic way, such that you can show your work to a third party and have them agree that that was indeed the best choice by some pseudo-objective metric.
All of the above is uncontroversial, as far as I can tell, which makes it all the weirder when rationalists talk about “giving utility”, “standing on top of a pile of utility”, “trading utilons”, and “human utility functions”. None of those phrases make any sense, unless the speaker is using “utility” in some kind of folk terminology sense, and departing completely from the actual definition of the concept.
At the risk of repeating myself, this community takes certain problems very seriously, problems which are only actually problems if utility is the right abstraction for systematizing human wellbeing. I don’t see that it is, unless you find yourself in a situation where you can converge on a clear preference ordering with relatively good certainty.
Are you sure that optimizing oil wells and ball bearings causes no such problems? These sound like generic problems you’d find with any sufficiently complex system, not something unique to human condition and experience.
I could argue that the abstract concept of utility is both quite real/natural and a useful abstraction, but there is nothing too disagreeable in your above comment. What bothers me is, I don’t see how adding more dimensions to utility solves any of the problems you just talked about.
If these are indeed problems that crop up with any sufficiently complex system, that’s even worse news for the idea that we can/should be using utility as the Ur-abstraction for quantifying value.
Perhaps adding more dimensions doesn’t solve anything. Perhaps all I’ve accomplished is suggesting a specific, semi-novel critique of utilitarianism. I remain unconvinced that I should push past my intuitive reservations and just swallow the Torture pill or the Repugnant Conclusion pill because the numbers say so.
Maybe you’re confusing utility with utilitarianism? The two are not identical.
I’m going to be using utility until you propose something better. What’s “Ur”, by the way?
Not confused, just being lazy with language.
That being said, every formulation of Utilitarianism that I can find depends on some sense of the “most good” and utility is a mathematical formalization of that idea. My quibble is less with the idea of doing the “most good” and more with the idea that the “most good” precisely corresponds to VNM utility.
Ur- is a prefix which strictly means “original” but which I was using here intending more of a connotation of “fundamental”. Also I probably shouldn’t have capitalized it.
My point is that you can accept that “most good” does in fact correspond to VNM utility but reject that we want to add up this “most good” for all people and maximize the sum.
Hm. Yeah, you can accept that. You can choose to. I’m not arguing that you can’t — if you accept the axioms, then you must accept the conclusions of the axioms. I just don’t see why you would feel compelled to accept the axioms.
I feel a very strong urge to accept transitivity, others I care somewhat less about, but they seem reasonable too.
Which conclusions? To reiterate, my point is that “the Torture pill or the Repugnant Conclusion” don’t follow immediately from the existence of individual utility. They also require a demand to increase the total sum of utilities for a category of agents, which does sound vaguely good, but isn’t the only option.