I don’t understand how you are using incompleteness.
For example, to me the sentence
“agents can make themselves immune to all possible money-pumps for completeness by acting in accordance with the following policy: ‘if I previously turned down some option X, I will not choose any option that I strictly disprefer to X.’”
Sounds like “agents can avoid all money pumps for completeness by completing their preferences in a random way.” Which is true but doesn’t seem like much of a challenge to completeness.
Can you explain what behavior is allowed under the first but isn’t possible under my rephrasing?
Similarly can we make explicit what behavior counts as two options being incomparable?
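One way to make the question concrete is a toy simulation of EJT’s policy. Everything below is assumed for illustration (the options A, its soured version A-, the incomparable B, and the greedy acceptance rule are mine, not EJT’s); the point is just that the policy blocks the single-souring money pump while still letting choices depend on trading history.

```python
# Options: "A", its strictly worse souring "A-", and "B", which is
# incomparable to both. All assumptions for illustration only.
STRICT = {("A", "A-")}  # (x, y) means x is strictly preferred to y

def strictly_prefers(x, y):
    return (x, y) in STRICT

class PolicyAgent:
    """Follows: 'if I previously turned down some option X, I will not
    choose any option that I strictly disprefer to X.'"""
    def __init__(self, holding):
        self.holding = holding
        self.turned_down = set()

    def offer(self, new):
        """Offered a trade of self.holding for `new`; returns True if taken."""
        if strictly_prefers(self.holding, new):
            return False  # never trade straight down
        if any(strictly_prefers(x, new) for x in self.turned_down):
            return False  # the policy blocks the pump's final step
        # Preferred or incomparable: accept (one permissible resolution).
        self.turned_down.add(self.holding)
        self.holding = new
        return True

pumped = PolicyAgent("A")
pumped.offer("B")    # incomparable, so accepted; "A" is now turned down
pumped.offer("A-")   # refused: A- is strictly dispreferred to turned-down A
assert pumped.holding == "B"  # the agent never ends up below where it started
```

Note the history-dependence: a fresh agent already holding B would accept A- (they are incomparable), while the agent above refuses it. That pattern of choices is arguably not what acting on a single preference ordering completed in advance would look like, which seems to be the behavior the policy allows and the “complete randomly” rephrasing does not obviously capture.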
In particular, the most direct answer to your question is the following:
Paperclip Minimizer: How can an agent not conform to the completeness axiom? It literally just says “either the agent prefers A to B, or B to A, or doesn’t prefer anything”. Offer me an example of an agent that doesn’t conform to the completeness axiom.
Said Achmiz: This turns out to be an interesting question.
One obvious counterexample is simply an agent whose preferences are not totally deterministic; suppose that when choosing between A and B (though not necessarily in other cases involving other choices), the agent flips a coin, preferring A if heads, B otherwise (and thenceforth behaves according to this coin flip). However, until they actually have to make the choice, they have no preference. How do you propose to construct a Dutch book for this agent? Remember, the agent will only determine their preference after being provided with your offered bets.
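A minimal sketch of this agent (the class name and memoization scheme are mine): the preference between a pair simply does not exist until the choice is first posed, and after the flip it is held consistently, so there is no reversal for a Dutch book to exploit.

```python
import random

class LazyAgent:
    """No preference between a pair exists until the choice is first posed;
    a coin flip then fixes it, and it is held consistently thereafter."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.settled = {}  # frozenset({x, y}) -> the preferred option

    def choose(self, x, y):
        key = frozenset((x, y))
        if key not in self.settled:       # no preference yet: flip a coin
            self.settled[key] = self.rng.choice(sorted(key))
        return self.settled[key]

agent = LazyAgent(seed=42)
first = agent.choose("A", "B")
# Every later presentation of the same pair gets the same answer,
# regardless of the order in which the options are offered:
assert all(agent.choose("B", "A") == first for _ in range(100))
```

Before the first call there is simply no fact about the agent’s preference for the bookie to exploit; afterwards the agent behaves consistently on that pair.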
A less trivial example is the case of bounded rationality. Suppose you want to know if I prefer A to B. However, either or both of A/B are outcomes that I have not considered yet. Suppose also (as is often the case in reality) that whenever I do encounter this choice, I will at once perceive that to fully evaluate it would be computationally (or otherwise cognitively) intractable given the limitations of time and other resources that I am willing to spend on making this decision. I will therefore rely on certain heuristics (which I have inherited from evolution, from my life experiences, or from god knows where else), I will consider certain previously known data, I will perhaps spend some small amount of time/effort on acquiring information to improve my understanding of A and B, and then form a preference.
My preference will thus depend on various contingent factors (what heuristics I can readily call to mind, what information is easily available for me to use in deciding, what has taken place in my life up to the point when I have to decide, etc.). Many, if not most, of these contingent factors are not known to you; and even were they known to you, their effects on my preference are likely to be intractable to determine. You therefore are not able to model me as an agent whose preferences are complete. (We might, at most, be able to say something like “Omega, who can see the entire manifold of existence in all dimensions and time directions, can model me as an agent with complete preferences”, but certainly not that you, nor any other realistic agent, can do so.)
“Before stating more carefully our goal and the contribution thereof, let us note that there are several economic reasons why one would like to study incomplete preference relations. First of all, as advanced by several authors in the literature, it is not evident if completeness is a fundamental rationality tenet the way the transitivity property is. Aumann (1962), Bewley (1986) and Mandler (1999), among others, defend this position very strongly from both the normative and positive viewpoints. Indeed, if one takes the psychological preference approach (which derives choices from preferences), and not the revealed preference approach, it seems natural to define a preference relation as a potentially incomplete preorder, thereby allowing for the occasional “indecisiveness” of the agents. Secondly, there are economic instances in which a decision maker is in fact composed of several agents each with a possibly distinct objective function. For instance, in coalitional bargaining games, it is in the nature of things to specify the preferences of each coalition by means of a vector of utility functions (one for each member of the coalition), and this requires one to view the preference relation of each coalition as an incomplete preference relation. The same reasoning applies to social choice problems; after all, the most commonly used social welfare ordering in economics, the Pareto dominance, is an incomplete preorder. Finally, we note that incomplete preferences allow one to enrich the decision making process of the agents by providing room for introducing to the model important behavioral traits like status quo bias, loss aversion, procedural decision making, etc.”
I encourage you to read the whole thing (it’s a mere 13 pages long).
The arguments in the Aumann paper in favor of dropping the completeness axiom are that it makes for a better theory of Human/Business/Existent reasoning, not that it makes for a better theory of ideal reasoning. The paper seems to prove that any partial preference ordering which obeys the other axioms must be representable by a utility function, but that there will be multiple such representatives.
My claim is that either there will be a Dutch book, or your actions will be equivalent to the actions you would have taken by following one of those representative utility functions, in which case, even though the internals don’t seem like following a utility function, they are for the purposes of VNM.
But demonstrating this is hard, as it is unclear what actions correspond to the fact that A is incomparable to B.
The concrete examples of non-complete agents in the above either seem like they will act according to one of those representatives, or seem like they are easily Dutch-bookable.
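For what it’s worth, the “multiple representatives” situation is easy to exhibit in a toy model (the two-objective outcomes and weightings below are my own illustration, not from the paper): with a Pareto-style partial order, every positive weighting of the objectives is a utility representative in Aumann’s weaker sense, and different representatives disagree about the incomparable pairs.

```python
# Outcomes scored on two objectives; the strict partial order is Pareto
# dominance. Every positive weighting yields a "representative" utility
# in the weaker sense: x strictly preferred to y implies u(x) > u(y).
outcomes = {"A": (3, 1), "B": (1, 3), "C": (0, 0)}

def dominates(x, y):
    return all(a >= b for a, b in zip(x, y)) and x != y

def make_utility(w1, w2):
    return lambda name: w1 * outcomes[name][0] + w2 * outcomes[name][1]

u1, u2 = make_utility(0.9, 0.1), make_utility(0.1, 0.9)

# Both respect the partial order wherever it speaks...
for x, vx in outcomes.items():
    for y, vy in outcomes.items():
        if dominates(vx, vy):
            assert u1(x) > u1(y) and u2(x) > u2(y)

# ...but they rank the incomparable pair A, B in opposite ways:
assert u1("A") > u1("B") and u2("B") > u2("A")
```

So “acting according to one of those representatives” leaves open which representative, and the choice matters exactly on the pairs the partial order leaves undecided.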
The arguments in the Aumann paper in favor of dropping the completeness axiom are that it makes for a better theory of Human/Business/Existent reasoning, not that it makes for a better theory of ideal reasoning.
The words you’ve written here might seem coherent on a superficial reading, but the more you think about them, the less sense they make.
“Ideal reasoning”, as you are using it, is a red herring. The process of reasoning we are interested in when we deal with the type of agentic set-up in front of us is one in which reason is a means to an end, namely the fulfillment of preferences, as can be viewed through the (purportedly sensible) maximization of the utility function. It does not act to constrain what those preferences must be, except in so far as (in a real-world setting, for instance) they allow the agent to attempt self-modification to avoid the loss of resources to scenarios like money-pumping due to violations of transitivity. Moreover, an agent is not required to self-modify in order to avoid circular-type inconsistencies; if it determines it will not be faced with money-pumping scenarios in real life, it can very well decide not to waste resources on an ultimately useless self-modification. That is to say, these types of inefficiencies in the preference ranking are necessary but not sufficient conditions for problems to appear. As such, incomplete preferences are no more violations of “ideal reasoning” than preferring black cats to orange cats is; it’s simply something (close to) orthogonal to reasoning (i.e., optimization) processes.
Now, let’s consider what Aumann actually wrote in his 1962 paper:
Of all the axioms of utility theory, the completeness axiom is perhaps the most questionable. Like others of the axioms, it is inaccurate as a description of real life; but unlike them, we find it hard to accept even from the normative viewpoint. Does “rationality” demand that an individual make definite preference comparisons between all possible lotteries (even on a limited set of basic alternatives)? For example, certain decisions that our individual is asked to make might involve highly hypothetical situations, which he will never face in real life; he might feel that he cannot reach an “honest” decision in such cases. Other decision problems might be extremely complex, too complex for intuitive “insight,” and our individual might prefer to make no decision at all in these problems. Or he might be willing to make rough preference statements such as, “I prefer a cup of cocoa to a 75-25 lottery of coffee and tea, but reverse my preference if the ratio is 25-75”; but he might be unwilling to fix the break-even point between coffee-tea lotteries and cocoa any more precisely. Is it “rational” to force decisions in such cases?
Yes, this makes reference to humans, but that is for illustrative purposes only; as Aumann notes, humans do not satisfy completeness, just as they don’t satisfy the other axioms of VNM theory. The relevant question is whether there is any fundamental rule of rationality that says they ought to, and as described above, there is not.
The paper seems to prove that any partial preference ordering which obeys the other axioms must be representable by a utility function, but that there will be multiple such representatives.
This is true but misleading as written, because your writing does not explain what “utility function” means in this new context. It is not the same as the utility function described in the original question post, because it has a different type. Quoting from Aumann again:
Fortunately, it turns out that much of utility theory stays intact even when the completeness axiom is dropped. However, there is a price to pay. We still get a utility function u that satisfies the expected utility hypothesis (item (b) above); and u still “represents” the preference order (item (a) above), but now in a weaker sense: as before, if x is preferred to y then u(x) > u(y), but the opposite implication is no longer true. Indeed, since the real numbers are completely ordered and our lottery space is only partially ordered, the opposite implication could not possibly be true. Furthermore, we no longer have uniqueness of the utility.
The fact that the utilities are no longer unique is only a small and unimportant (actually trivial; see footnote 13 on page 448 of Aumann) part of the modification. What is far more important is that there is no longer a unique maximum of the utility function; indeed, the partial ordering gives only a potentially infinite set of maximal elements (world-states, universe-histories, etc.) that are incomparable with one another. This breaks most of the intuitions that come from working with regular (VNM-style, or coming from the Complete Class Theorem) utility functions, as optimization is now a much broader (indeed, infinitely broader) process that can converge on any of those end states. Since the territory is constrained to a much lesser extent, we have much less information about the agent from this analysis.
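To illustrate the “set of maximal elements” point with a made-up example (the states and their scores are assumptions of mine): under a Pareto-style partial order, maximization picks out a whole frontier of mutually incomparable maximal elements rather than a single best state.

```python
# Made-up world-states scored on two incomparable objectives.
states = {"w1": (5, 0), "w2": (4, 2), "w3": (2, 4), "w4": (0, 5), "w5": (1, 1)}

def dominates(x, y):
    return all(a >= b for a, b in zip(x, y)) and x != y

# Maximal elements: states that no other state strictly dominates.
maximal = {name for name, v in states.items()
           if not any(dominates(u, v) for u in states.values())}

# Only w5 (dominated by w2 and w3) is ruled out; the other four are all
# "optimal" and mutually incomparable, so knowing that the agent
# maximizes tells us comparatively little about where it ends up.
assert maximal == {"w1", "w2", "w3", "w4"}
```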
The concrete examples of non-complete agents in the above either seem like they will act according to one of those representatives, or seem like they are easily Dutch-bookable.
As explained above, “acting according to a representative” now becomes only a very modest constraint on the behavior of such an agent.
But demonstrating this is hard, as it is unclear what actions correspond to the fact that A is incomparable to B.
An agent has a preferential gap between lottery X and lottery Y iff
(1) it lacks a preference between X and Y, and
(2) this lack of preference is insensitive to some sweetening or souring.
Here clause (2) means that the agent also lacks a preference between X and some sweetening or souring of Y, or lacks a preference between Y and some sweetening or souring of X.
Consider an example. You likely have a preferential gap between some career as an accountant and some career as a clown.[8] There is some pair of salaries $m and $n you could be offered for those careers such that you lack a preference between the two careers, and you’d also lack a preference between those careers if the offers were instead $m+1 and $n, or $m−1 and $n, or $m and $n+1, or $m and $n−1. Since your lack of preference is insensitive to at least one of these sweetenings and sourings, you have a preferential gap between those careers at salaries $m and $n.[9]
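The two-clause test is mechanical enough to write down. Here is a toy version of the careers example (the preference rule, the 10,000 threshold, and the one-unit sweetenings are my assumptions, used only to separate a preferential gap from mere indifference):

```python
# Toy preferences: within a career, more money is better; across careers
# there is no preference unless one salary beats the other by > 10,000.
def prefers(x, y):
    (cx, sx), (cy, sy) = x, y
    if cx == cy:
        return sx > sy
    return sx > sy + 10_000

def lacks_preference(x, y):
    return not prefers(x, y) and not prefers(y, x)

def has_preferential_gap(x, y):
    """Clause (1): no preference between x and y. Clause (2): that lack of
    preference survives some one-unit sweetening or souring of either side."""
    if not lacks_preference(x, y):
        return False
    (cx, sx), (cy, sy) = x, y
    tweaked = [((cx, sx + 1), y), ((cx, sx - 1), y),
               (x, (cy, sy + 1)), (x, (cy, sy - 1))]
    return any(lacks_preference(a, b) for a, b in tweaked)

accountant, clown = ("accountant", 60_000), ("clown", 55_000)
assert has_preferential_gap(accountant, clown)   # a genuine gap
assert not has_preferential_gap(clown, clown)    # mere indifference is not one
```

The second assertion is the point of clause (2): exact indifference is destroyed by every one-unit sweetening or souring, while a genuine gap survives at least one of them.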
Sami Petersen has explained in full, formal detail how it is not true that agents built with preferential gaps play dominated strategies. In a sense, this is a formalization of some of what Said explained in the comment I linked earlier.
The following comments by @Said Achmiz are relevant here: 1, 2, 3; as well as @johnswentworth’s post from 5 years ago on “Why Subagents?”
If you’re going to link Why Subagents?, you should probably also link Why Not Subagents?.
It’s linked in the edit at the top of my post.
I’ll try illustrating it again, from a different tack this time. The relevant part is Section 5 of “The Shutdown Problem: Incomplete Preferences as a Solution”, incidentally also by EJT. Of particular importance here are the “preferential gaps”, which he defines as quoted above.