I encourage you to make a full post on this topic. I don’t think I’ve seen one about this before. You could explain what assumptions we’re making, why they’re unwarranted, what assumptions you make, what exactly coherence is, etc., in full and proper arguments. Because leaving comments on random posts that mention utility is not productive.
Perhaps. Frankly, I find it hard to see what more there is to say. What I said in the grandparent seems perfectly straightforward to me; I was aiming simply to point it out, to bring it to the attention of readers. There’s just not much to disagree with, in what I said; do you think otherwise? What did I say that wasn’t simply true?
(Note that I said nothing about the assumptions in question being unwarranted; it’s just that they’re unstated—which, if one is claiming to be reasoning in a straightforward way from basic principles, rather undermines the whole endeavor. As for what assumptions I would make—well, I wouldn’t! Why should I? I am not the one trying to demonstrate that beliefs X inevitably lead to outcome Y…)
(Re: “what exactly coherence is”, I was using the term in the usual way, not in any specific technical sense. Feel free to substitute “only that this scenario could take place”, or some similar phrasing, if the word “coherent” bothers you.)
I meant a post not just on this, but on all of your problems with preferences and utilities and VNM axioms. It seems to me that you have many beliefs about those, and you could at least put them all in one place.
Now, your current disagreement seems less about utility and more about the usefulness of the preference model itself. But I’m not sure what you’re saying exactly. The case where Alice would choose X over Y, but wouldn’t pay a penny to trade her Y for Bob’s X, is indeed possible, and there are a few ways to model that in preferences. But maybe you’re saying that there are agents where the entire preference model breaks down? And that these are “intelligent” and “sane” agents that we could actually care about?
Note that I said nothing about the assumptions in question being unwarranted
But surely you believe they are unwarranted? Because if the only problem with those assumptions is that they are unstated, then you’re just being pedantic.
I meant a post not just on this, but on all of your problems with preferences and utilities and VNM axioms. It seems to me that you have many beliefs about those, and you could at least put them all in one place.
Hmm… an analogy:
Suppose you frequented some forum where, on occasion, other people said various things like:
“2 + 2 equals 3.7.”
“Adding negative numbers is impossible.”
“64 is a prime number.”
“Any integer is divisible by 3.”
And so on. Whenever you encountered any such strange, mistaken statement about numbers/arithmetic/etc., you replied with a correction. But one day, another commenter said to you: “Why don’t you make a post about all of your problems with numbers and arithmetic etc.? It seems to me that you have many beliefs about those, and you could at least put them all in one place.”
What might you say, to such a commenter? Perhaps something like:
“Textbooks of arithmetic, number theory, and so on are easy to find. It would be silly and absurd for me to recapitulate their contents from scratch in a post. I simply correct mistakes where I see them, which is all that may reasonably be asked.”
Now, your current disagreement seems less about utility and more about the usefulness of the preference model itself. But I’m not sure what you’re saying exactly. The case where Alice would choose X over Y, but wouldn’t pay a penny to trade her Y for Bob’s X, is indeed possible, and there are a few ways to model that in preferences. But maybe you’re saying that there are agents where the entire preference model breaks down? And that these are “intelligent” and “sane” agents that we could actually care about?
What I’m saying is nothing more than what I said. I don’t see what’s confusing about it. If someone prefers X to Y, that doesn’t mean that they’ll pay to upgrade from Y to X. Without this assumption, it is, at least, a good deal harder to construct Dutch Book arguments. (This is, among other reasons, why it’s actually very difficult—indeed, usually impossible—to money-pump actual people in the real world, despite the extreme ubiquity of “irrational” (i.e., VNM-noncompliant) preferences.)
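The point about Dutch Books can be made concrete with a toy simulation. This is purely illustrative: the items, fee, and round count are my own invented example, not anything from the discussion above. An agent has cyclic (VNM-noncompliant) preferences A > B > C > A; the would-be money pump offers the preferred item each round for a small fee. If preference implies willingness to pay, money is extracted indefinitely; if it doesn’t, the pump stalls immediately.

```python
# Toy money-pump attempt against an agent with cyclic (VNM-noncompliant)
# preferences A > B > C > A. Illustrative only; all names/numbers made up.

# prefers[(x, y)] == True means the agent strictly prefers x to y.
prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}

def accepts_trade(held, offered, fee, pays_to_upgrade):
    """Would the agent swap `held` for `offered` at cost `fee`?"""
    better = prefers.get((offered, held), False)
    if not better:
        return False
    # The key assumption under discussion: a preference need not
    # translate into willingness to pay to act on it.
    return fee == 0 or pays_to_upgrade

def run_pump(pays_to_upgrade, rounds=10, fee=1):
    held, extracted = "A", 0
    for _ in range(rounds):
        # Offer whichever item the agent prefers to its current holding.
        offered = next(x for x in "ABC" if prefers.get((x, held), False))
        if accepts_trade(held, offered, fee, pays_to_upgrade):
            held, extracted = offered, extracted + fee
        else:
            break  # the pump stalls
    return extracted

print(run_pump(pays_to_upgrade=True))   # money extracted every round
print(run_pump(pays_to_upgrade=False))  # pump stalls at once: 0
```

The only difference between the two runs is the bridging assumption "prefers X to Y ⇒ will pay to trade Y for X"; remove it and the cyclic preferences alone extract nothing.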
But surely you believe they are unwarranted? Because if the only problem with those assumptions is that they are unstated, then you’re just being pedantic.
I disagree wholeheartedly. Unstated assumptions are poison to “from first principles” arguments. Whether they’re warranted is of entirely secondary importance to the question of whether they are out in the open, so that they may be examined in order to determine whether they’re warranted.
Why don’t you make a post about all of your problems with numbers and arithmetic etc.? It seems to me that you have many beliefs about those, and you could at least put them all in one place.
Yes, even in your analogy this makes sense. There are several benefits.
- you would then be able to link to this post instead of repeating those same corrections over and over again.
- you would be able to measure to what extent the other users on this site actually disagree with you. You may find out that you have been strawmanning them all along.
- other users would be able to try to build constructive arguments why you are wrong (hopefully, the possibility of being wrong has occurred to you).
If someone prefers X to Y, that doesn’t mean that they’ll pay to upgrade from Y to X.
Yes, the statement that there exists an agent who would choose X over Y but would not pay to upgrade from Y to X is not controversial. I’ve already agreed to that. And I don’t see that the OP disagrees with it either. It is, however, true that most people would upgrade, for many instances of X and Y. It is normal to make simplifying assumptions in such cases, and you’re supposed to be able to parse them.
Yes, even in your analogy this makes sense. There are several benefits. …
It doesn’t make any sense whatsoever in the analogy (and, analogically, in the actual case). If my analogy (and subsequent commentary) has failed to convince you of this, I’m not sure what more there is to say.
It is, however, true that most people would upgrade, for many instances of X and Y.
Citation needed. (To forestall the obvious follow-up question: yes, I actually don’t think the quoted claim is true, on any non-trivial reading—I’m not merely asking for a citation out of sheer pedantry.)
It is normal to make simplifying assumptions in such cases
The “simplifying” assumptions, in this case, are far too strong, and far too simplifying, to bear being assumed without comment.
If my analogy (and subsequent commentary) has failed to convince you of this, I’m not sure what more there is to say.
Well, you could, for example, address my bullet points. Honestly, I haven’t yet seen any reasons from you against making a post. I’d only count the analogy as a reason if it’s meant to imply that everyone on LW is insane, which you hopefully do not believe. Also, I think you’re overestimating the time required for a proper post with proper arguments, compared to the time you put into these comments.
Citation needed.
Really? Take X=“a cake” and Y=“a turd”. Would you really not pay to upgrade? Or did you make some unwarranted assumptions about X and Y? Yes, when X and Y are very similar, people will sometimes not trade, because trading is a pain in the ass.
I’d only count the analogy as a reason if it’s meant to imply that everyone in LW is insane, which you hopefully do not believe.
Why insane? Even in the analogy, no one needs to be insane, only wildly mistaken (which I do indeed believe that many, maybe most, people on LW [of those who have an opinion on the subject at all] are, where utility functions and related topics are concerned).
That said, I will take your suggestion to write a post about this under advisement.
Because sane people can be reasoned with. If a sane person is wildly mistaken, and you correct them, in a way that’s not insulting and in a way that’s useful to them (as opposed to pedantry), they can be quite grateful for that, at least sometimes.
Really? Take X=“a cake” and Y=“a turd”. Would you really not pay to upgrade? Or did you make some unwarranted assumptions about X and Y? Yes, when X and Y are very similar, people will sometimes not trade, because trading is a pain in the ass.
Fair point. I was, indeed, making some unwarranted assumptions; your example is, of course, correct and illustrative.
However, this leaves us with the problem that, when those assumptions (which involve, e.g., X and Y both being preferred to some third alternative which might be described as “neutral”, or X and Y both being describable as “positive value” on some non-preference-ordering-based view of value, or something along such lines) are relaxed, we find that this…
most people would upgrade, for many instances of X and Y
… while now clearly true, is no longer a useful claim to make. Yes, perhaps most people would upgrade, for many instances of X and Y, but the claim in the OP can only be read as a universal claim—or else it’s vacuous. (Note also that transitive preferences are quite implausible in the absence of the aforesaid assumptions.)
those assumptions (which involve, e.g., X and Y both being preferred to some third alternative which might be described as “neutral”, or X and Y both being describable as “positive value” on some non-preference-ordering-based view of value, or something along such lines)
X being “good” and Y being “bad” has nothing to do with it (although those are the most obvious examples). E.g. if X=$200 and Y=$100, then anyone would also pay to upgrade, when clearly both X and Y are “good” things. Or if X=“flu” and Y=“cancer”, anyone would upgrade, when both are “bad”.
The only case where people really wouldn’t upgrade is when X and Y are in some sense very close, e.g. if we have Y < X < Y + “1 penny” + “5 minutes of my time”.
But I agree, it is indeed reasonable that if someone has intransitive preferences, those preferences are actually very close in this sense and money pumping doesn’t work.
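The "very close" condition can be made concrete with a small sketch. The item names, rough values, and the overhead figure below are my own illustrative choices: model each option with an approximate value, let the stated pairwise preferences form an intransitive cycle, and charge a fixed overhead per trade (the penny plus the five minutes). If every pairwise gap in the cycle is smaller than that overhead, no single step of the pump is worth taking.

```python
# Illustrative sketch: intransitive preferences confined to nearly-equal
# options can't be money-pumped once each trade carries a fixed overhead.
# All values below are invented for the example.

values = {"mug_red": 10.00, "mug_blue": 10.01, "mug_green": 10.02}

# Stated (intransitive) preference cycle: red < blue < green < red.
cycle = [("mug_red", "mug_blue"),
         ("mug_blue", "mug_green"),
         ("mug_green", "mug_red")]

TRADE_OVERHEAD = 0.01 + 0.50  # "1 penny" + "5 minutes of my time"

def worthwhile(worse, better):
    """Is upgrading from `worse` to `better` worth the trade overhead?"""
    return values[better] - values[worse] > TRADE_OVERHEAD

# No step in the cycle clears the overhead, so the pump never starts.
print(any(worthwhile(w, b) for w, b in cycle))  # False
```

This is just the condition Y < X < Y + “1 penny” + “5 minutes of my time” written out: the cycle exists on paper, but every link in it is too weak to pay for.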