The argument in the Aumann paper in favor of dropping the completeness axiom is that it makes for a better theory of human/business/existent reasoning, not that it makes for a better theory of ideal reasoning.
The words you’ve written here might seem coherent on a superficial reading, but the more you think about them, the less sense they make.
“Ideal reasoning”, as you are using it, is a red herring. In the kind of agentic set-up in front of us, reasoning is a means to an end, namely the fulfillment of preferences, as viewed through the (purportedly sensible) maximization of a utility function. It does not constrain what those preferences must be, except insofar as (in a real-world setting, for instance) they allow the agent to attempt self-modification to avoid losing resources to scenarios like money-pumping brought on by violations of transitivity. Moreover, an agent is not required to self-modify in order to avoid circular inconsistencies; if it determines it will not be faced with money-pumping scenarios in real life, it can very well decide not to waste resources on an ultimately useless self-modification. That is to say, these kinds of inefficiencies in the preference ranking are necessary but not sufficient conditions for problems to appear. As such, incomplete preferences are no more a violation of “ideal reasoning” than preferring black cats to orange cats is; the matter is (close to) orthogonal to reasoning (i.e., optimization) processes.
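To make the money-pump point concrete, here is a minimal sketch (my own toy Python example, with invented option names and a per-trade fee) of the difference between cyclic preferences, which can be exploited this way, and merely incomplete preferences, which cannot:

```python
# Minimal toy example (invented option names and fee): an agent whose strict
# preferences run in a cycle (B over A, C over B, A over C) will pay a small
# fee for each "upgrade" and can be led around the cycle indefinitely, losing
# money on every lap. An agent whose preferences are merely incomplete can
# simply decline every trade at no cost.

CYCLIC_PREFS = {("B", "A"), ("C", "B"), ("A", "C")}  # (x, y) means "x strictly preferred to y"
INCOMPLETE_PREFS = set()                             # no pair is ranked at all

def accepts_trade(prefs, offered, held):
    """The agent trades only if it strictly prefers the offered item to the held one."""
    return (offered, held) in prefs

def run_money_pump(prefs, start_item, offers, fee=1):
    """Offer a sequence of swaps, charging `fee` for each accepted swap."""
    held, money = start_item, 0
    for offered in offers:
        if accepts_trade(prefs, offered, held):
            held, money = offered, money - fee
    return held, money

# Three laps around the cycle: the cyclic agent ends up holding exactly what it
# started with, but nine fees poorer; the incomplete agent never trades at all.
print(run_money_pump(CYCLIC_PREFS, "A", ["B", "C", "A"] * 3))      # ('A', -9)
print(run_money_pump(INCOMPLETE_PREFS, "A", ["B", "C", "A"] * 3))  # ('A', 0)
```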
Now, let’s consider what Aumann actually wrote in his 1962 paper:
Of all the axioms of utility theory, the completeness axiom is perhaps the most questionable. Like others of the axioms, it is inaccurate as a description of real life; but unlike them, we find it hard to accept even from the normative viewpoint. Does “rationality” demand that an individual make definite preference comparisons between all possible lotteries (even on a limited set of basic alternatives)? For example, certain decisions that our individual is asked to make might involve highly hypothetical situations, which he will never face in real life; he might feel that he cannot reach an “honest” decision in such cases. Other decision problems might be extremely complex, too complex for intuitive “insight,” and our individual might prefer to make no decision at all in these problems. Or he might be willing to make rough preference statements such as, “I prefer a cup of cocoa to a 75-25 lottery of coffee and tea, but reverse my preference if the ratio is 25-75”; but he might be unwilling to fix the break-even point between coffee-tea lotteries and cocoa any more precisely. Is it “rational” to force decisions in such cases?
Yes, this makes reference to humans, but that is for illustrative purposes only; as Aumann notes, humans do not satisfy completeness, just as they don’t satisfy the other axioms of VNM theory. The relevant question is whether there is any fundamental rule of rationality that says they ought to, and as described above, there is not.
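One way to see how Aumann’s cocoa example can be made precise without forcing a break-even point (a toy sketch of my own, not taken from the paper): represent the agent’s valuation of cocoa by a range of candidate utilities rather than a single number, and count a coffee-tea lottery as preferred only when it beats every candidate. Outside the range, the comparisons Aumann describes go through; inside it, the agent simply has no preference, and there is no break-even point to report.

```python
# Toy version of the cocoa vs. coffee-tea-lottery example (my own numbers).
# Coffee is worth 1, tea is worth 0, and the agent's valuation of cocoa is
# pinned down only to the range [0.4, 0.6]. A lottery is preferred to cocoa
# only if it beats every candidate valuation, and vice versa; in between, the
# agent has no preference, so no unique break-even point exists.

U_COFFEE, U_TEA = 1.0, 0.0
COCOA_RANGE = (0.4, 0.6)   # set of candidate utilities for cocoa

def lottery_value(p_coffee):
    """Expected utility of a p : (1 - p) lottery of coffee and tea."""
    return p_coffee * U_COFFEE + (1 - p_coffee) * U_TEA

def compare_to_cocoa(p_coffee):
    lo, hi = COCOA_RANGE
    v = lottery_value(p_coffee)
    if v > hi:
        return "lottery preferred"
    if v < lo:
        return "cocoa preferred"
    return "no preference"

print(compare_to_cocoa(0.75))  # lottery preferred   (the 75-25 case)
print(compare_to_cocoa(0.25))  # cocoa preferred     (the 25-75 case)
print(compare_to_cocoa(0.50))  # no preference: the break-even point is not fixed
```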
The paper seems to prove that any partial preference ordering which obeys the other axioms must be representable by a utility function, but that there will be multiple such representatives.
This is true but misleading as written, because it does not explain what “utility function” means in this new context. It is not the same object as the utility function described in the original question post; it has a different type. Quoting from Aumann again:
Fortunately, it turns out that much of utility theory stays intact even when the completeness axiom is dropped. However, there is a price to pay. We still get a utility function u that satisfies the expected utility hypothesis (item (b) above); and u still “represents” the preference order (item (a) above), but now in a weaker sense: as before, if x is preferred to y then u(x) > u(y), but the opposite implication is no longer true. Indeed, since the real numbers are completely ordered and our lottery space is only partially ordered, the opposite implication could not possibly be true. Furthermore, we no longer have uniqueness of the utility.
The fact that the utilities are no longer unique is only a small and unimportant part of the modification (actually a trivial one; see footnote 13 on page 448 of Aumann). What is far more important is that there is no longer a unique maximum of the utility function; the partial ordering yields only a (potentially infinite) set of maximal elements (world-states, universe-histories, etc.) that are incomparable with one another. This breaks most of the intuitions carried over from working with ordinary utility functions (VNM-style, or those coming from the Complete Class Theorem), because optimization is now a far broader process that can converge on any of those end states. Since the territory is constrained to a much lesser extent, this analysis gives us much less information about the agent.
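Here is a small illustration of both points (my own construction, with made-up outcomes): order outcomes by Pareto dominance on two value dimensions; any strictly increasing aggregate then represents the order in Aumann’s weak sense, different aggregates disagree about the incomparable outcomes, and the maximal set is a whole frontier rather than a single optimum.

```python
# Toy construction (not from the paper): outcomes scored on two value
# dimensions and strictly ordered by Pareto dominance. Any strictly increasing
# aggregate of the coordinates represents the order in the weak sense
# (x dominates y  =>  u(x) > u(y)), but the converse fails, the representative
# is far from unique, and the maximal set is a frontier of incomparable options.

outcomes = {
    "w1": (3, 1),
    "w2": (1, 3),
    "w3": (2, 2),
    "w4": (1, 1),  # Pareto-dominated by all of the above
}

def strictly_preferred(x, y):
    """Pareto dominance: at least as good in every dimension and not identical."""
    return all(a >= b for a, b in zip(x, y)) and x != y

def maximal_elements(outs):
    return [name for name, v in outs.items()
            if not any(strictly_preferred(w, v) for w in outs.values())]

# Two of infinitely many utility representatives of the same partial order:
u_sum  = lambda v: v[0] + v[1]
u_tilt = lambda v: 10 * v[0] + v[1]

# Both respect the ordering wherever it speaks...
for x in outcomes.values():
    for y in outcomes.values():
        if strictly_preferred(x, y):
            assert u_sum(x) > u_sum(y) and u_tilt(x) > u_tilt(y)

# ...but they disagree about the incomparable outcomes, and there is no single
# maximum: the maximal set contains several mutually incomparable "optima".
print(maximal_elements(outcomes))      # ['w1', 'w2', 'w3']
print(u_tilt((3, 1)), u_tilt((1, 3)))  # 31 vs 13, yet w1 and w2 are incomparable
```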
The concrete examples of non-complete agents in the above either seem like they will act according to one of those representatives, or like they are easily Dutch-bookable.
As explained above, “acting according to a representative” now becomes only a very modest constraint on the behavior of such an agent.
But demonstrating this is hard, as it is unclear what actions correspond to the fact that A is incomparable to B.

I’ll try illustrating it again, on a different tack this time. The relevant part is Section 5 of “The Shutdown Problem: Incomplete Preferences as a Solution”, incidentally also by EJT. What is of particular importance here are the “preferential gaps” he explains as follows:
An agent has a preferential gap between lottery X and lottery Y iff
(1) it lacks a preference between X and Y, and
(2) this lack of preference is insensitive to some sweetening or souring.
Here clause (2) means that the agent also lacks a preference between X and some sweetening or souring of Y, or lacks a preference between Y and some sweetening or souring of X.
Consider an example. You likely have a preferential gap between some career as an accountant and some career as a clown.[8] There is some pair of salaries $m and $n you could be offered for those careers such that you lack a preference between the two careers, and you’d also lack a preference between those careers if the offers were instead $m+1 and $n, or $m−1 and $n, or $m and $n+1, or $m and $n−1. Since your lack of preference is insensitive to at least one of these sweetenings and sourings, you have a preferential gap between those careers at salaries $m and $n.[9]
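The definition is easy to operationalize. Below is a hypothetical sketch (invented salary figures and preference predicates) that checks both clauses: an incomplete relation that only compares careers when one salary wins by a wide margin yields a preferential gap, while a complete relation yields only knife-edge indifference that any sweetening destroys.

```python
# Hypothetical sketch of the accountant/clown example above (invented salary
# numbers and preference predicates, used purely to operationalize the quoted
# definition). Clause (1): no preference between the two offers. Clause (2):
# that lack of preference survives some small sweetening or souring.

def no_preference(prefers, a, b):
    return not prefers(a, b) and not prefers(b, a)

def has_preferential_gap(prefers, a, b, delta=1):
    """Check clauses (1) and (2) from the quoted definition."""
    if not no_preference(prefers, a, b):
        return False
    (ca, sa), (cb, sb) = a, b
    perturbations = [((ca, sa + delta), b), ((ca, sa - delta), b),
                     (a, (cb, sb + delta)), (a, (cb, sb - delta))]
    return any(no_preference(prefers, x, y) for x, y in perturbations)

# An *incomplete* relation: careers are compared only when one salary wins by a
# wide margin, so nearby offers are simply incomparable.
def incomplete_prefers(a, b):
    return a[1] > b[1] + 10_000

# A *complete* relation: higher salary always wins, so "no preference" can only
# mean exact indifference, which any 1-unit sweetening destroys.
def complete_prefers(a, b):
    return a[1] > b[1]

accountant = ("accountant", 60_000)
clown      = ("clown",      58_000)
clown_same = ("clown",      60_000)

print(has_preferential_gap(incomplete_prefers, accountant, clown))      # True: a gap
print(has_preferential_gap(complete_prefers,   accountant, clown_same)) # False: mere indifference
```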
Sami Petersen has explained in full, formal detail how it is not true that agents built with preferential gaps play dominated strategies. In a sense, this is a formalization of some of what Said explained in the comment I linked earlier.