As the Theorem treats them, voters are already utility-maximizing agents who have a clear preference set which they act on in rational ways. The question: how to aggregate these?
It turns out that if you want certain superficially reasonable things out of a voting process from such agents—nothing gets chosen at random, it doesn’t matter how you cut up choices or whatever, &c.—you’re in for disappointment. There isn’t actually a way to have a group that is itself rationally agentic in the precise way the Theorem postulates.
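A minimal illustration of the breakdown (a sketch of my own, not part of the original argument): with the classic Condorcet profile, pairwise majority voting turns three perfectly transitive individual rankings into a cyclic “group” ranking.

```python
from itertools import combinations

# The classic Condorcet profile: three voters, each with a perfectly
# transitive ranking over the options A, B, C.
rankings = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes_for_x = sum(r.index(x) < r.index(y) for r in rankings)
    return votes_for_x > len(rankings) / 2

for x, y in combinations(["A", "B", "C"], 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} to {loser}")

# Prints: A over B, C over A, B over C -- a cycle, so the "group agent"
# built by pairwise majority vote has no transitive preference ordering.
```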
One bullet you could bite is having a dictator. Then none of the inconsistencies arise from having all these extra preference sets lying around because there’s only one and it’s perfectly coherent. This is very easily comparable to reducing all of your own preferences into a single coherent utility function.
Both involve taking a mathematical result about the only way to do something in a way that satisfies certain intuitively appealing properties, and using it to argue that we therefore should do it that way.
Not really, because the argument isn’t that you should do anything differently at all. It says that there’s some utility function that represents your preferences, some expected-utility-maximizing genie that makes the same choices as you, but it doesn’t tell you to have different preferences, or make different decisions under any circumstances.
In fact, I don’t really know why this post is called “Why you must maximize expected utility” instead of “Why you already maximize expected utility.” It seems that even if I have some algorithm that is on the surface not maximizing expected utility, such as being risk-averse in some way when dealing with money, then I’m really just maximizing the expected value of a non-obvious utility function.
No. Most humans do not maximize expected utility with respect to any utility function whatsoever because they have preferences which violate the hypotheses of the VNM theorem. For example, framing effects show that humans do not even consistently have the same preferences regarding fixed probability distributions over outcomes (but that their preferences change depending on whether the outcomes are described in terms of gains or losses).
Edit: in other words, the VNM theorem shows that “you must maximize expected utility” is equivalent to “your preferences should satisfy the hypotheses of the VNM theorem” (and not all of these hypotheses are encapsulated in the VNM axioms), and this is a statement with nontrivial content.
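To make the framing point concrete (my own illustration, borrowing the well-known Tversky and Kahneman disease-problem numbers rather than anything from the comment above): the gain framing and the loss framing below are exactly the same pair of lotteries over outcomes, yet choices famously flip between them.

```python
from fractions import Fraction

# 600 people at risk of a disease; each programme is a probability
# distribution over "number of people who survive".
# Gain framing:
program_A = {200: Fraction(1)}                        # "200 people will be saved"
program_B = {600: Fraction(1, 3), 0: Fraction(2, 3)}  # "1/3 chance all 600 saved, 2/3 chance none"
# Loss framing of the very same options:
program_C = {600 - 400: Fraction(1)}                  # "400 people will die"
program_D = {600 - 0: Fraction(1, 3),                 # "1/3 chance nobody dies,
             600 - 600: Fraction(2, 3)}               #  2/3 chance all 600 die"

assert program_A == program_C  # identical distribution over outcomes
assert program_B == program_D  # identical distribution over outcomes
print("Same lotteries, different descriptions.")
```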
No. Most humans do not maximize the expected utility of any utility function whatsoever because they have preferences which violate the hypotheses of the VNM theorem.
Axioms? (“Hypotheses” doesn’t seem to quite fit. One could have a hypothesis that humans have preferences in accord with the VNM axioms and falsify that hypothesis, but the VNM theorem doesn’t make the hypothesis itself.)
In the nomenclature that I think is relatively standard among mathematicians, if a theorem states “if P1, P2, … then Q” then P1, P2, … are the hypotheses of the theorem and Q is the conclusion. One of the hypotheses of the VNM theorem, which isn’t strictly speaking one of the von Neumann-Morgenstern axioms, is that you assign consistent preferences at all (that is, that the decision of whether you prefer A to B depends only on what A and B are). I’m not using “consistent” here in the same sense as the Wikipedia article does when talking about transitivity; I mean consistent over time. (Edit: Eliezer uses “incoherent”; maybe that’s a better word.)
Again, among mathematicians, I think “hypotheses” is more common. Exhibit A; Exhibit B. I would guess that “premises” is more common among philosophers...?
I usually say “assumptions”, but I’m neither a mathematician nor a philosopher. I do say “hypotheses” if for some reason I’m wearing mathematician attire.
It seems that even if I have some algorithm that is on the surface not maximizing expected utility, such as being risk-averse in some way when dealing with money, then I’m really just maximizing the expected value of a non-obvious utility function.
Not all decision algorithms are utility-maximising algorithms. If this were not so, the axioms of the VNM theorem would not be necessary. But they are necessary: the conclusion requires the axioms, and when axioms are dropped, decision algorithms violating the conclusion exist.
For example, suppose that a decision algorithm, given a choice between A and B, chooses A; between B and C it chooses B; and between C and A it chooses C. No utility function describes this decision algorithm. Or suppose that, given a choice between A and B, it never makes a choice at all. No utility function describes that decision algorithm either.
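To spell out the first example (a sketch under the obvious encoding, not from the comment itself): enumerate every strict ranking of A, B and C and check that none of them reproduces the cyclic choices.

```python
from itertools import permutations

# The cyclic chooser from above: picks A over B, B over C, and C over A.
choices = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}

def rejected(pair, picked):
    """The option in the pair that was not chosen."""
    return pair[0] if pair[1] == picked else pair[1]

def some_utility_function_fits(choices):
    """Brute force: does any assignment of distinct utilities to A, B, C make
    'choose whichever option has higher utility' reproduce these choices?
    Only the ordering of utilities matters for pairwise choices, so checking
    all six strict rankings is exhaustive."""
    for ordering in permutations(["A", "B", "C"]):
        utility = {option: u for u, option in enumerate(ordering)}
        if all(utility[picked] > utility[rejected(pair, picked)]
               for pair, picked in choices.items()):
            return True
    return False

print(some_utility_function_fits(choices))  # False
```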
Another way that a decision algorithm can fail to have an associated utility function is by lying outside the ontology of the VNM theorem. The VNM theorem treats only of decisions over probability distributions of outcomes. Decisions can be made over many other things. And what is an “outcome”? Can it be anything less than the complete state of the agent’s entire positive light-cone? If not, it is practically impossible to calculate with; but if it can be smaller, what counts as an outcome and what does not?
Here is another decision algorithm. It is the one implemented by a room thermostat. It has two possible actions: turn the heating on, or turn the heating off. It has two sensors: one for the actual temperature and one for the set-point temperature. Its decisions are given by this algorithm: if the temperature falls 0.5 degrees below the set point, turn the heating on; if it rises 0.5 degrees above the set point, turn the heating off. Exercise: what relationship holds between this system, the VNM theorem, and utility functions?
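For concreteness, here is one way to write that thermostat down in code (my own sketch; the exercise itself is left as posed):

```python
class Thermostat:
    """The two-action decision algorithm described above, with 0.5-degree hysteresis."""

    def __init__(self, set_point):
        self.set_point = set_point
        self.heating_on = False  # initial state: heating off

    def decide(self, temperature):
        """Return the action taken given the current temperature reading."""
        if temperature < self.set_point - 0.5:
            self.heating_on = True    # fell below the band: turn heating on
        elif temperature > self.set_point + 0.5:
            self.heating_on = False   # rose above the band: turn heating off
        # Inside the band the previous action is simply kept.
        return "heating on" if self.heating_on else "heating off"

# Example trace around a 20-degree set point:
t = Thermostat(set_point=20.0)
for reading in [20.0, 19.4, 19.8, 20.6, 20.2]:
    print(reading, "->", t.decide(reading))
```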
Analogous in what way?
A dictatorship isn’t the only resolution to Arrow’s theorem. Anyway, this sounds like a rather weak argument against the position.
It’s an outside view argument.
I’d call them premises.