This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.
VNM is used to show why you need to have utility functions if you don’t want to get Dutch-booked. It’s not something the OP invented; it’s the whole point of VNM. One wonders what you thought VNM was about.
Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.
That we face trade-offs in the real world is a claim under dispute?
Ditto.
Another way of phrasing it is that we can model “ignore” as a choice, and derive the VNM theorem just as usual.
Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.
Ditto.
Once again, please provide some real-world examples of when this applies.
OP said it: every time we make a decision under uncertainty. Every decision under uncertainty can be modeled as a bet, and Dutch book theorems are derived as usual.
VNM is used to show why you need to have utility functions if you don’t want to get Dutch-booked. It’s not something the OP invented; it’s the whole point of VNM. One wonders what you thought VNM was about.
This is a confused and inaccurate comment.
The von Neumann-Morgenstern utility theorem states that if an agent’s preferences conform to the given axioms, then there exists a “utility function” that will correspond to the agent’s preferences (and so that agent can be said to behave as if maximizing a “utility function”).
We may then ask whether there is any normative reason for our preferences to conform to the given axioms (or, in other words, whether the axioms are justified by anything).
If the answer to this latter question turned out to be “no”, the VNM theorem would continue to hold. The theorem is entirely agnostic about whether any agent “should” hold the given axioms; it only tells us a certain mathematical fact about agents that do hold said axioms.
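(For reference, here is a sketch of the standard formal statement, in the generic textbook notation rather than anything specific to the OP:)

```latex
% Sketch of the standard VNM statement (textbook notation):
% a preference relation \preceq over lotteries on an outcome set X
% satisfies completeness, transitivity, continuity, and independence
% if and only if there exists a utility function u : X -> R with
\[
  L \preceq M \iff \mathbb{E}_L[u] \le \mathbb{E}_M[u],
\]
% where u is unique up to positive affine transformation
% (u' = a u + b, with a > 0).
```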
It so happens to be the case that for at least some[1] of the axioms, an agent that violates that axiom will agree to a Dutch book. Note, however, that the truth of this fact is independent of the truth of the VNM theorem.
Once again: if the VNM theorem were false, it could still be the case that an agent that violated one or more of the given axioms would agree to a Dutch book; and, conversely, if the latter were not the case, the VNM theorem would remain as true as ever.
[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. … The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.
Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.
That we face trade-offs in the real world is a claim under dispute?
Your questions give the impression that you’re being deliberately dense.
Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.
“Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous.
Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.
Another way of phrasing it is that we can model “ignore” as a choice, and derive the VNM theorem just as usual.
As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.
More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.
[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
How can an agent not conform to the completeness axiom? It literally just says “either the agent prefers A to B, or B to A, or doesn’t prefer anything”. Offer me an example of an agent that doesn’t conform to the completeness axiom.
Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.
The entire rest of the section is a straightforward application of the theorem. The objection is that X doesn’t happen in real life, and the counter-objection is that something like X does happen in real life, meaning the theorem does apply.
As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.
Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement makes sense if you replace “VNM” with “Dutch book”? Your behavior does not really send the vibe of someone who wants to approach this complicated issue honestly, and more sends the vibe of someone looking for Internet debate points.
More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.
If I cross the street, I make a bet about whether a car will run over me.
If I eat a pizza, I make a bet about whether the pizza will taste good.
If I’m posting this comment, I make a bet about whether it will convince anyone.
etc.
(Note: I ask that you not take this as an invitation to continue arguing the primary topic of this thread; however, one of the points you made is interesting enough on its own, and tangential enough from the main dispute, that I wanted to address it for the benefit of anyone reading this.)
[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
How can an agent not conform to the completeness axiom? It literally just says “either the agent prefers A to B, or B to A, or doesn’t prefer anything”. Offer me an example of an agent that doesn’t conform to the completeness axiom.
This turns out to be an interesting question.
One obvious counterexample is simply an agent whose preferences are not totally deterministic; suppose that when choosing between A and B (though not necessarily in other cases involving other choices), the agent flips a coin, preferring A if heads, B otherwise (and thenceforth behaves according to this coin flip). However, until they actually have to make the choice, they have no preference. How do you propose to construct a Dutch book for this agent? Remember, the agent will only determine their preference after being provided with your offered bets.
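(Here is a minimal sketch of such an agent, with made-up names throughout; the point is just that the preference does not exist until the choice is actually posed:)

```python
import random

class CoinFlipAgent:
    """An agent with no standing preference between two options: the
    preference is created by a coin flip at the moment of first choice,
    and is stable thereafter."""
    def __init__(self):
        self._settled = {}  # pair -> chosen option, fixed on first query

    def choose(self, a, b):
        key = frozenset([a, b])
        if key not in self._settled:
            # Before this line runs, there is simply no fact of the
            # matter about which option the agent prefers.
            self._settled[key] = random.choice([a, b])
        return self._settled[key]

agent = CoinFlipAgent()
# A bookie must commit to a book before knowing which way the coin
# will land; the agent's preference does not yet exist to exploit.
first = agent.choose("A", "B")
assert agent.choose("A", "B") == first  # stable from now on
```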
A less trivial example is the case of bounded rationality. Suppose you want to know if I prefer A to B. However, either or both of A/B are outcomes that I have not considered yet. Suppose also (as is often the case in reality) that whenever I do encounter this choice, I will at once perceive that to fully evaluate it would be computationally (or otherwise cognitively) intractable given the limitations of time and other resources that I am willing to spend on making this decision. I will therefore rely on certain heuristics (which I have inherited from evolution, from my life experiences, or from god knows where else), I will consider certain previously known data, I will perhaps spend some small amount of time/effort on acquiring information to improve my understanding of A and B, and then form a preference.
My preference will thus depend on various contingent factors (what heuristics I can readily call to mind, what information is easily available for me to use in deciding, what has taken place in my life up to the point when I have to decide, etc.). Many, if not most, of these contingent factors, are not known to you; and even were they known to you, their effects on my preference are likely to be intractable to determine. You therefore are not able to model me as an agent whose preferences are complete. (We might, at most, be able to say something like “Omega, who can see the entire manifold of existence in all dimensions and time directions, can model me as an agent with complete preferences”, but certainly not that you, nor any other realistic agent, can do so.)
Finally, “Expected Utility Theory without the Completeness Axiom” (Dubra et al., 2001) is a fascinating paper that explores some of the implications of completeness axiom violation in some detail. Key quote:
Before stating more carefully our goal and the contribution thereof, let us note that there are several economic reasons why one would like to study incomplete preference relations. First of all, as advanced by several authors in the literature, it is not evident if completeness is a fundamental rationality tenet the way the transitivity property is. Aumann (1962), Bewley (1986) and Mandler (1999), among others, defend this position very strongly from both the normative and positive viewpoints. Indeed, if one takes the psychological preference approach (which derives choices from preferences), and not the revealed preference approach, it seems natural to define a preference relation as a potentially incomplete preorder, thereby allowing for the occasional “indecisiveness” of the agents. Secondly, there are economic instances in which a decision maker is in fact composed of several agents each with a possibly distinct objective function. For instance, in coalitional bargaining games, it is in the nature of things to specify the preferences of each coalition by means of a vector of utility functions (one for each member of the coalition), and this requires one to view the preference relation of each coalition as an incomplete preference relation. The same reasoning applies to social choice problems; after all, the most commonly used social welfare ordering in economics, the Pareto dominance, is an incomplete preorder. Finally, we note that incomplete preferences allow one to enrich the decision making process of the agents by providing room for introducing to the model important behavioral traits like status quo bias, loss aversion, procedural decision making, etc.
I encourage you to read the whole thing (it’s a mere 13 pages long).
P.S. Here’s the aforementioned “Aumann (1962)” (yes, that very same Robert J. Aumann)—a paper called “Utility Theory without the Completeness Axiom”. Aumann writes in plain language wherever possible, and the paper is very readable. It includes this line:
Of all the axioms of utility theory, the completeness axiom is perhaps the most questionable.[8] Like others of the axioms, it is inaccurate as a description of real life; but unlike them, we find it hard to accept even from the normative viewpoint.
The full elaboration for this (perhaps quite shocking) comment is too long to quote; I encourage anyone who’s at all interested in utility theory to read the paper.
I happened upon this old thread, and found the discussion intriguing. Thanks for posting these references! Unless I’m mistaken, it sounds like you’ve discussed this topic a lot on LW but have never made a big post detailing your whole perspective. Maybe that would be useful! At least I personally find discussions of applicability/generalizability of VNM and other rationality axioms quite interesting.
Indeed, I think I recently ran into another old comment of yours in which you made a remark about how Dutch Books only hold for repeated games? I don’t recall the details now.
I have some comments on the preceding discussion. You said:
It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!
For me, it seems that transitivity and completeness are on an equally justified footing, based on the classic money-pump argument.
Just to keep things clear, here is how I think about the details. There are outcomes. Then there are gambles, which we will define recursively. An outcome counts as a gamble for the sake of the base case of our recursion. For gambles A and B, pA+(1-p)B also counts as a gamble, where p is a real number in the range [0,1].
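(A quick sketch of this recursive definition in code, with illustrative names:)

```python
from dataclasses import dataclass
from typing import Union

# Base case: an outcome (here just a label) counts as a gamble.
Outcome = str

@dataclass(frozen=True)
class Mix:
    """The recursive case: pA + (1-p)B for gambles A and B."""
    p: float      # probability of gamble a; must lie in [0, 1]
    a: "Gamble"
    b: "Gamble"

Gamble = Union[Outcome, Mix]

# Example: a 50/50 mix of outcome A with a 30/70 mix of B and C.
g: Gamble = Mix(0.5, "A", Mix(0.3, "B", "C"))
```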
Now we have a preference relation > on our gambles. I understand its negation to be ≤; saying ¬(A>B) is the same thing as A≤B. The indifference relation A∼B is just the same thing as (A≤B)&(B≤A).
This is different from the development on Wikipedia, where ~ is defined separately. But I think it makes more sense to define > and then define ~ from that. A>B can be understood as “definitely choose A when given the choice between A and B”. ~ then represents indifference as well as uncertainty like the kind you describe when you discuss bounded rationality.
From this starting point, it’s clear that either A<B, or B<A, or A~B. This is just a way of saying “either A<B or B<A or neither”. What’s important about the completeness axiom is the assumption that exactly one of these holds; this tells us that we cannot have both A<B and B<A.
But this is practically the same as circular preferences A<B<C<A, which transitivity outlaws. It’s just a circle of length 2.
The classic money-pump against circularity is that if we have circular preferences, someone can charge us for making a round trip around the circle, swapping A for B for C for A again. They leave us in the same position we started, less some money. They can then do this again and again, “pumping” all the money out of us.
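(A toy simulation of the pump, under the stated assumptions that the pumper can freely offer swaps and charge a small fee per trade:)

```python
# Circular preferences: A < B < C < A, encoded as (better, worse) pairs.
prefers = {("B", "A"), ("C", "B"), ("A", "C")}

def accepts_swap(have, offered):
    # The agent trades whenever the offered item is (locally) preferred.
    return (offered, have) in prefers

holding, money, fee = "A", 10.0, 0.01
for offered in ["B", "C", "A"] * 100:   # 100 round trips of the circle
    if accepts_swap(holding, offered):
        holding, money = offered, money - fee

print(holding, round(money, 2))  # "A 7.0": same position, less money
```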
Personally I find this argument extremely metaphysically weird, for several reasons.
The money-pumper must be God, to be able to swap arbitrary A for B, and B for C, etc.
But furthermore, the agent must not understand the true nature of the money-pumper. When God asks about swapping A for B, the agent thinks it’ll get B in the end, and makes the decision accordingly. Yet, God proceeds to then ask a new question, offering to swap B for C. So God doesn’t actually put the agent in universe B; rather, God puts the agent in “B+God”, a universe with the possibility of B, but also a new offer from God, namely, to move on to C. So God is actually fooling the agent, making an offer of B but really giving the agent something different from B. Bad decision-making should not count against the agent if the agent was misled in such a manner!
It’s also pretty weird that we can end up “in the same situation, but with less money”. If the outcomes A,B,C were capturing everything about the situation, they’d include how much money we had!
I have similar (but less severe) objections to Dutch-book arguments.
However, I also find the argument extremely practically applicable, so much so that I can excuse the metaphysical weirdness. I have come to think of Dutch-book and money-pump arguments as illustrative of important types of (in)consistency rather than literal arguments.
OK, why do I find money-pumps practical?
Simply put, if I have a loop in my preferences, then I will waste a lot of time deliberating. The real money-pump isn’t someone taking advantage of me, but rather, time itself passing.
What I find is that I get stuck deliberating until I can find a way to get rid of the loop. Or, if I “just choose randomly”, I’m stuck with a yucky dissatisfied feeling (I have regret, because I see another option as better than the one I chose).
This is equally true of three-choice loops and two-choice loops. So, transitivity and completeness seem equally well-justified to me.
Stuart Armstrong argues that there is a weak money pump for the independence axiom. I made a very technical post (not all of which seems to render correctly on LessWrong :/) justifying as much as I could with money-pump/Dutch-book arguments, and similarly got everything except continuity.
I regard continuity as not very theoretically important, but highly applicable in practice. IE, I think the pure theory of rationality should exclude continuity, but a realistic agent will usually have continuous values. The reason for this is, again, deliberation time.
If we drop continuity, we get a version of utility theory with infinite and infinitesimal values. This is perfectly fine, has the advantage of being more general, and is in some sense more elegant. To reference the OP, continuity is definitely just boilerplate; we get a nice generalization if we want to drop it.
However, a real agent will ignore its own infinitesimal preferences, because it’s not worth spending time thinking about that. Indeed, it will almost always just think about the largest infinity in its preferences. This is especially true if we assume that the agent places positive probability on a really broad class of things, which again seems true of capable agents in practice. (IE, if you have infinities in your values, and a broad probability distribution, you’ll be Pascal-mugged—you’ll only think of the infinite payoffs, neglecting finite payoffs.)
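(A toy illustration of those infinite/infinitesimal values: lexicographic preferences, which violate continuity and cannot be represented by any single real-valued utility, but are captured by tuples compared lexicographically. The attribute names here are made up for the example:)

```python
# Lexicographic utility: the first coordinate dominates absolutely;
# the second is "infinitesimal" relative to it.
def utility(survival, comfort):
    return (survival, comfort)  # Python compares tuples lexicographically

alive_uncomfortable = utility(1, 0.0)
dead_blissful = utility(0, 10**9)

# No finite amount of comfort trades off against survival:
print(alive_uncomfortable > dead_blissful)  # True
```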
So all of the axioms except independence have what appear to me to be rather practical justifications, and independence has a weak money-pump justification (which may or may not translate to anything practical).
Correction: I now see that my formulation turns the question of completeness into a question of transitivity of indifference. An “incomplete” preference relation should not be understood as one which allows strict preferences to go in both directions (which is how I interpreted it above), but rather as a preference relation in which the ≤ relation (and hence the ∼ relation) is not transitive.
In this case, we can distinguish between ~ and “gaps”, IE, incomparable A and B. ~ might be transitive, but this doesn’t bridge across the gaps. So we might have a preference chain A>B>C and a chain X>Y>Z, but not have any way to compare between the two chains.
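(In code, the two-chain picture looks like this; a three-valued compare distinguishes preference, reverse preference, and a genuine gap:)

```python
# Two chains, A > B > C and X > Y > Z, with no comparisons across them.
better = {("A", "B"), ("B", "C"), ("A", "C"),
          ("X", "Y"), ("Y", "Z"), ("X", "Z")}

def compare(u, v):
    if (u, v) in better: return ">"
    if (v, u) in better: return "<"
    return None  # a gap: incomparable, which is not the same as indifferent

print(compare("A", "C"))  # ">"  (within a chain)
print(compare("A", "Y"))  # None (across the gap)
```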
In my formulation, which lumps together indifference and gaps, we can’t have this two-chain situation. If A~X, then we must have A>Y, since X>Y, by transitivity of ≥.
So what would be a completeness violation in the Wikipedia formulation becomes a transitivity violation in mine.
But notice that I never argued for the transitivity of ~ or ≥ in my comment; I only argued for the transitivity of >.
I don’t think a money-pump argument can be offered for transitivity here.
However, I took a look at the paper by Aumann which you cited, and I’m fairly happy with the generalization of VNM therein! Dropping uniqueness does not seem like a big cost. This seems like more of an example of John Wentworth’s “boilerplate” point, rather than a counterexample.
Though there’s a great deal more I could say here, I think that when accusations of “looking for Internet debate points” start to fly, that’s the point at which it’s best to bow out of the conversation.