I didn’t mean immediately. I meant that assuming the axioms allows you to compute or at least bound the expected utility of some lotteries with respect to the expected utility of other lotteries.
If, however, the agent is VNM-noncompliant, then for any utility function u, there will exist at least one pair of outcomes A, B such that A is preferred to B, but u(A) < u(B).
Yes, that’s what “approximate” means, especially if B is preferred to most other possible outcomes C.
In fact, I struggle to see how “just do what you prefer” isn’t a superior strategy in any case
“Just do what you prefer” is awful. Preference checking is in many cases not cheap or easy. The space of possible outcomes is vast. The chances to get dutch booked are many. Your strategy can hardly be reasoned about.
It’s not like we can do interpersonal utility comparisons
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
I meant that assuming the axioms allows you to compute or at least bound the expected utility of some lotteries with respect to the expected utility of other lotteries.
How? Demonstrate, please.
“Just do what you prefer” is awful. Preference checking is in many cases not cheap or easy. The space of possible outcomes is vast.
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with. You have yet to say or show anything that even approaches a rebuttal to this basic point.
The chances to get dutch booked are many.
Again: demonstrate. I tell you I follow the “do what I prefer” strategy. Dutch book me! I offer real money (up to $100 USD). I promise to consider any bet you offer (less those that are illegal where I live).
Your strategy can hardly be reasoned about.
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
Edited to add:
Adding up people’s utilities doesn’t particularly interest me, so I don’t want to say too much, but the arguments would be pretty similar to the above. The common theme is that you fail to appreciate the value of exchanging exact correctness for computability.
I don’t think you understand how fundamental the difficulty is. Interpersonal comparison, and aggregation, of VNM-utility is not hard. It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs. You can’t “approximate” it, or do a “not-exactly-correct” computation, or anything like that. There’s nothing to approximate in the first place!
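To see the trouble concretely, here is a minimal sketch (the outcome names and numbers are made up; Python is just a convenient notation): a VNM utility function is determined only up to a positive affine transformation, so rescaling one person's function changes nothing about that person's preferences, yet it reverses the "group sum" decision.

```python
# Minimal sketch (made-up outcomes and numbers). A VNM utility function is
# only determined up to a positive affine transformation u' = a*u + b (a > 0),
# so both versions of Bob below encode *identical* preferences -- yet the
# "sum of utilities" decision flips when we swap one for the other.

alice = {"opera": 1.0, "football": 0.0}
bob = {"opera": 0.4, "football": 0.6}

def group_choice(u_alice, u_bob):
    # Pick the outcome that maximizes the sum of the two utilities.
    return max(u_alice, key=lambda o: u_alice[o] + u_bob[o])

bob_rescaled = {o: 10 * u for o, u in bob.items()}  # exactly the same preferences

print(group_choice(alice, bob))           # -> opera
print(group_choice(alice, bob_rescaled))  # -> football
```

There is no fact of the matter about which scaling is the "right" one, which is exactly what it means for the sum to be undefined.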
If I can’t check my preferences over a pair of outcomes, then I can’t construct my utility function to begin with.
I think you’re confusing outcomes with lotteries. To build a utility function, I need to make comparisons only between individual outcomes. E.g. if I know that A < B then I no longer need to think if 0.5A + 0.5C < 0.5B + 0.5C (as per axiom of independence). You, on the other hand, need to separately evaluate every possible lottery.
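To make the saving concrete, here is a minimal sketch (the utility numbers are made up): once the outcomes themselves have been compared and assigned utilities, every lottery is ranked by a mechanical expected-utility computation, with no further preference checks.

```python
# Minimal sketch (made-up numbers): utilities built from outcome comparisons
# alone suffice to rank any lottery, via expected utility.

u = {"A": 0.0, "B": 1.0, "C": 0.3}

def expected_utility(lottery):
    # lottery: mapping from outcome to its probability
    return sum(p * u[o] for o, p in lottery.items())

lot1 = {"A": 0.5, "C": 0.5}  # 0.5A + 0.5C
lot2 = {"B": 0.5, "C": 0.5}  # 0.5B + 0.5C
assert expected_utility(lot1) < expected_utility(lot2)  # follows from A < B
```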
I tell you I follow the “do what I prefer” strategy. Dutch book me!
I also need you to explain to me in what ways your “do what I prefer” violates the axioms, and how that works. I’m waiting for that in our other thread. To be clear, I might be unable to dutch book you, for example, if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
It is not even uncomputable. It is undefined. It does not mean anything to speak of comparing it between agents. It is, literally, mathematical nonsense, like comparing ohms to inches, or adding kilograms to QALYs.
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities. Then observe what kind of strategy you generated. Most likely you’ll find that the strategy is quite fair and reasonable to everyone. Voila, you have a decision procedure for a group of people. It’s not perfect, but it’s not terrible either. All other criticisms are pointless. The day that I find some usefulness in comparing ohms to inches, I will start comparing ohms to inches.
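For concreteness, a rough sketch of the three steps (the people, the raw numbers, and the particular normalization, rescaling everyone to [0, 1], are just one illustrative choice among the many I mentioned):

```python
# Rough sketch of steps 1-3. The raw utilities and the [0, 1] range
# normalization are illustrative; many other normalization schemes would do.

people = {
    "ann": {"picnic": 2.0, "movie": 5.0, "hike": 8.0},   # step 1 (per person)
    "ben": {"picnic": -1.0, "movie": 0.5, "hike": 0.0},
}

def normalize(u):
    lo, hi = min(u.values()), max(u.values())
    return {o: (v - lo) / (hi - lo) for o, v in u.items()}

normed = {name: normalize(u) for name, u in people.items()}   # step 2
best = max(["picnic", "movie", "hike"],
           key=lambda o: sum(n[o] for n in normed.values()))  # step 3
print(best)  # -> hike
```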
What difficulties do you see (that are absent if we instead construct and then use a utility function)?
For example, I get the automatic guarantee that I can’t be dutch booked. Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
if I know that A < B then I no longer need to think if 0.5A + 0.5C < 0.5B + 0.5C (as per axiom of independence).
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
To be clear, I might be unable to dutch book you, for example, if you follow the axioms in all cases except some extreme or impossible scenarios that I couldn’t possibly reproduce.
What’s this?! I thought you said “The chances to get dutch booked [if one just does what one prefers] are many”! Are they many and commonplace, or are they few, esoteric, and possibly nonexistent? Why not at least present some hypotheticals to back up your claim? Where are these chances to get Dutch booked? If they’re many, then name three!
Step 1: build utility functions for several people in a group. Step 2: normalize the utilities based on the assumption that people are mostly the same (there are many ways to do it though). Step 3: maximize the sum of expected utilities.
So, in other words:
… Step 2: Do something that is completely unmotivated, baseless, and nonsensical mathematically, and, to boot, extremely questionable (to put it very mildly) intuitively and practically even if it weren’t mathematical nonsense. …
Like I said: impossible.
It’s not perfect, but it’s not terrible either.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality, and is based wholly on a totally arbitrary normalization procedure which you made up from whole cloth and which was motivated by nothing but wanting there to be such a procedure.
Not only do you not have this guarantee, you can’t have any formal guarantees at all. Anything is possible.
You said my strategy “can hardly be reasoned about”. What difficulties in reasoning about it do you see? “No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
Of course it’s terrible. In fact it’s maximally terrible, insofar as it has no basis whatsoever in any reality
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
“No formal guarantee of not being Dutch booked” does not even begin to qualify as such a difficulty.
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
Irrelevant, because that doesn’t save you from having to check whether you prefer A to C, or B to C.
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer. By the way, having found A < B, if I then find B < C, I no longer need to check whether A < C (by transitivity), which is another saving.
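To put a rough number on the savings (a toy illustration, with preference over integers standing in for real preference checks): with transitivity, ranking n outcomes is just a comparison sort, about n*log2(n) checks, instead of all n(n-1)/2 pairs.

```python
# Toy illustration: with transitivity, ranking n outcomes needs roughly
# n*log2(n) preference checks (any comparison sort) rather than all
# n*(n-1)/2 pairwise checks. The counter tallies the checks actually used.

import functools
import random

n = 1000
outcomes = list(range(n))
random.shuffle(outcomes)
checks = 0

def prefer(a, b):
    global checks
    checks += 1
    return -1 if a < b else 1  # stand-in for a genuine preference check

sorted(outcomes, key=functools.cmp_to_key(prefer))
print(checks)            # roughly 9,000 for n = 1000
print(n * (n - 1) // 2)  # 499,500 pairwise checks without transitivity
```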
I thought you said “The chances to get dutch booked [if one just does what one prefers] are many”!
No, it was supposed to be “The chances to get dutch booked [if one frequently exhibits preferences that violate the axioms] are many”. I have a suspicion that all of your preferences that violate the axioms happen to be ones that never influence your real choices, though I haven’t given up yet. You’re right that I should try to actually dutch book you with what I have; I’ll take some time to read your link from the other thread and maybe give it a try.
A procedure is terrible if it has terrible outcomes. But you didn’t mention outcomes at all. Saying that it’s “nonsensical” doesn’t convince me of anything. If it’s nonsensical but works, then it is good. Are you, by any chance, not a consequentialist?
I can’t imagine what you could possibly mean by “works”, here. What does it mean to say that your procedure “works”? That it generates answers? So does pulling numbers out of a hat, or astrology. That “works”, too.
Your procedure generates answers to questions of interpersonal utility comparison. This, according to you, means that it “works”. But those questions don’t make the slightest bit of sense in the first place! And so the answers are just as meaningless.
If I have a black box that can give me yes/no answers to questions of the form “is X meters more than Y kilograms”, can I say that this box “works”? Absurd! Suppose I ask it whether 5 meters is more than 10 kilograms, and it says “yes”. What do I do with that information? What does it mean? Suppose I use the box’s output to try to maximize “total number”. What the heck am I maximizing?? It’s not a quantity that has any meaning or significance!
I’m somewhat confused why this doesn’t qualify. Not even when phrased as “does not permit proving formal guarantees”? Is that not a difficulty in reasoning?
How is it? Why would it be? What practical problems does it present? What practical problems does it present even hypothetically (in any even remotely plausible scenario)?
“Irrelevant” is definitely not a word you want to use here. Maybe “insufficient”? I never claimed that you would need zero comparisons, only that you’d need way fewer.
Please avoid condescending language like “X is not a word you want to use”.
That aside, no, I definitely meant “irrelevant”. You said we can construct a utility function without having to rank outcomes. You’re now apparently retreating from that claim. This leaves the VNM theorem as useless in practice as I said at the start. Again, this was my contention:
(Incidentally, it’s not even correct to say that “utility is for choosing the best outcome”. After all, we can only construct your utility function after we already know what you think the best outcome is! Before we have the total ordering over outcomes, we can’t construct the utility function…)
And you have yet to make any sensible argument against this.
As for attempting to Dutch-book me, please, by all means, proceed!
Your procedure generates answers to questions of interpersonal utility comparison.
No, my procedure is a decision procedure that answers the question “what should our group do”. It’s a very sensible question. What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone. I’ll be condescending again—it’s very bad that you can’t figure out what sort of questions we’re trying to answer here.
You said we can construct a utility function without having to rank outcomes.
Let me recap what our discussion on this topic looks like from my point of view. I said that “we can construct a utility function after we have verified the axioms”. You asked how. I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set by itself. I immediately explained that I didn’t mean zero comparisons, I just meant fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well). You asked how. I then gave a trivial example of a comparison that I don’t need to make if I use the axioms. Then you said that this is irrelevant.
Well, it’s not irrelevant; it’s a direct answer to your question and a trivial proof of my earlier claim. “Irrelevant” is not a reply I could have predicted; it took me completely by surprise. It is important to me to figure out what happened here. Presumably one (or both) of us struggles with the English language, or with basic logic, or just isn’t paying any attention. If we failed to communicate this badly on this topic, are we failing equally badly on all other topics? If we are, is there any point in continuing the discussion, or can it be fixed somehow?
No, my procedure is a decision procedure that answers the question “what should our group do”.
By the standards you seem to be applying, a random number generator also answers that question. Here’s a procedure: for any binary decision, flip a coin. Heads yes, tails no. Does it “work”? Sure. It “works” just as well as using your VNM utility “normalization” scheme.
What it means for it to “work” is debatable, but I meant that the procedure would generate decisions that generally seem fair to everyone.
Your procedure doesn’t. It can’t (except by coincidence). This is because it contains a step which is purely arbitrary, and not causally linked with anyone’s preferences, sense of fairness, etc.
This is, of course, without getting into the weeds of just what on earth it means for decisions to “generally” seem “fair” to “everyone”. (Each of those scare-quoted words conceals a black morass of details, sets of potential—and potentially contradictory—operationalizations, nigh-unsolvable methodological questions, etc., etc.) But let’s bracket that.
The fact is, what you’ve done is come up with a procedure for generating answers to a certain class of difficult questions. (A procedure, note, that does not actually work for at least two reasons, but even assuming its prerequisites are satisfied…) The problem is that those answers are basically arbitrary. They don’t reflect anything like the “real” answers (i.e. they’re not consistent with our pre-existing understanding of what the answers are or should be). Your method works [well, it doesn’t actually work, but if it did work, it would do so] only because it’s useless.
I understand why my first claim might have been misleading, as if the function poofed into existence with U(“eat pancakes”)=3.91 already set by itself. I immediately explained that I didn’t mean zero comparisons, I just meant fewer comparisons than you would need without the axioms (I wonder if that was misunderstood as well).
If that is indeed what you meant, then your claim has been completely trivial all along, and I dearly wish you’d been clear to begin with. Fewer comparisons?! What good is that?? How many fewer? Is it still an infinite number? (yes)
I am disappointed that this discussion has turned out to be yet another instance of:

Alice: <Extraordinary, novel, truly stunning claim>!
Bob: What?! Impossible! Shocking, if true! Explain!
[long discussion/argument ensues]
Alice: Of course I actually meant <a version of the original claim so much weaker as to be trivial>, duh.
Bob: Damnit.
You keep repeating that, but it remains unconvincing. What I need is a specific example of a situation where my procedure would generate outcomes that we could all agree are bad.
flip a coin. Heads yes, tails no. Does it “work”? Sure.
Let’s use this for an example of what kind of argument I’m waiting for from you. Suppose you (and your group) run into lions every day. You have to compare your preferences for “run away” and “get eaten”. A coin flip is eventually going to select option 2. Everyone in your group ends up dead, even though every single one of them individually preferred to live. Every outside observer would agree that they don’t want to use this sort of decision procedure for their own group. Therefore I propose that the procedure “doesn’t work” or is “bad”.
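(To put a number on it: if each encounter is decided by an independent fair flip, the chance that the group survives n encounters is (1/2)^n. After a month of daily lions that is 2^(-30), about one chance in a billion.)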
Fewer comparisons?! What good is that?? How many fewer? Is it still an infinite number? (yes)
Technically there is an infinite number of comparisons left, and also an infinite number of comparisons saved. I believe that in a practical setting this difference is not insignificant, but I don’t see an easy way to exhibit that. In part that’s because I suspect that you already save those comparisons in your practical reasoning, despite denying the axioms which permit it.
your claim has been completely trivial all along
Yes, it has, so your resistance to it did seem pretty weird to me. I personally believe that my other claims are quite trivial as well, but it’s really hard to tell misunderstandings from true disagreement. What I want to do is figure out whether this particular misunderstanding came from my failure at writing or from your failure at reading.
For starters, after reading my first post, did you think that I thought the utility function poofed into existence with U(“eat pancakes”)=3.91 already set by itself, after performing zero comparisons? This isn’t a charitable interpretation, but I can understand it. How did you interpret my two attempts to clarify my point in the further comments?
Hi zulupineapple,

I’d love to continue this discussion, but I’m afraid that the moderation policy on this site does not permit me to do so effectively, as you see. I’d be happy to take this to another forum (email, IRC, the comments section of my blog—whatever you prefer). If you’re interested, feel free to email me at myfirstname@myfullname.net (you could also PM me via LW’s PM system, but last time I tried using it, I couldn’t figure out how to make it work, so caveat emptor). If not, that’s fine too; in that case, I’ll have to bow out of the discussion.

Please see https://www.lesserwrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible/4vD2B3aG87EGJb7L5