It seems to me that the “continuity/Archimedean” property is the least intuitively necessary of the four axioms of the VNM utility theorem. One way of specifying preferences over lotteries that still obeys the other three axioms is to assign each possible world two real numbers U1 and U2 instead of one, where U1 is a “top priority” and U2 is a “secondary priority”. If two lotteries have different expected values ⟨U1⟩, the one with the greater ⟨U1⟩ is ranked higher; ⟨U2⟩ is used only as a tie-breaker. One possible real-world example (with integer-valued U1 for deterministic outcomes) would be a parent whose top priority is minimizing the number of their children who die within the parent’s lifetime, with the rest of their utility function being secondary.
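For concreteness, here’s a minimal sketch of such a preference in Python (the outcomes and numbers are made up purely for illustration). Because ⟨U1⟩ is compared first, no increase in ⟨U2⟩ can compensate for any nonzero loss in ⟨U1⟩, no matter how much the risky option is probability-diluted; that is exactly where continuity fails:

```python
from typing import Dict, Tuple

# Sketch of the two-priority ("lexicographic") preference described above.
# Outcome names and utility numbers are invented for illustration.
Lottery = Dict[str, float]                   # outcome -> probability
Utilities = Dict[str, Tuple[float, float]]   # outcome -> (U1, U2)

def expected_values(lottery: Lottery, utils: Utilities) -> Tuple[float, float]:
    """Return (<U1>, <U2>) for a lottery."""
    eu1 = sum(p * utils[o][0] for o, p in lottery.items())
    eu2 = sum(p * utils[o][1] for o, p in lottery.items())
    return eu1, eu2

def prefer(a: Lottery, b: Lottery, utils: Utilities) -> int:
    """Lexicographic comparison: <U1> decides, <U2> only breaks ties.
    Returns 1 if a is preferred, -1 if b is preferred, 0 if indifferent."""
    a1, a2 = expected_values(a, utils)
    b1, b2 = expected_values(b, utils)
    if a1 != b1:
        return 1 if a1 > b1 else -1
    if a2 != b2:
        return 1 if a2 > b2 else -1
    return 0

# A sure mediocre outcome vs. a gamble with a tiny chance of a U1 catastrophe:
utils = {"safe": (0.0, 5.0), "great": (0.0, 100.0), "catastrophe": (-1.0, 0.0)}
sure_thing = {"safe": 1.0}
gamble = {"great": 0.999, "catastrophe": 0.001}
print(prefer(sure_thing, gamble, utils))  # 1: no <U2> gain offsets any <U1> loss
```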
I’d be interested in whether there exist any preferences over lotteries that quantify our intuitive understanding of risk aversion while still obeying the other three axioms of the VNM theorem. I spent about an hour trying to construct an example, without success, and suspect it might be impossible.
Yes, many people will have problems with the Archimedean axiom because it implies that everything has a price: any good option can be probability-diluted enough that a mediocre one gets chosen instead. And people don’t take it kindly when you tell them “you absolutely must have a trade-off between value A and value B”, especially if they really don’t have one, but also if they just don’t want to admit it or consciously estimate it.
Thankfully, that VNM property is not that critical for rational decision-making because we can simply use surreal numbers instead.
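For instance (a sketch of the idea, not a full representation theorem): the two-priority preference above can be encoded as a single surreal-valued utility U = ω·U1 + U2, where ω is an infinite surreal. With real-valued probabilities, ⟨U⟩ = ω·⟨U1⟩ + ⟨U2⟩, so maximizing the expectation of U reproduces the lexicographic ordering, because no finite difference in ⟨U2⟩ can outweigh any nonzero difference in ⟨U1⟩.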
One possible real-world example (with integer-valued U1 for deterministic outcomes) would be a parent whose top priority is minimizing the number of their children who die within the parent’s lifetime, with the rest of their utility function being secondary.
Wouldn’t work well, since in the real world outcomes are non-deterministic; given that, minimizing the expected number of such deaths is accomplished by simply having zero children.
What is the precise statement for being able to use surreal numbers when we remove the Archimedean axiom? The surreal version of the VNM representation theorem in “Surreal Decisions” (https://arxiv.org/abs/2111.00862) seems to still have a surreal version of the Archimedean axiom.
Re the parent example, I was imagining that the 2-priority utility function for the parent only applies after they already have children, and that their utility function before having children can trade off between not having children, having some who live, and having some who die. Anecdotally, it seems a lot of new parents experience this kind of diachronic inconsistency in their preferences.
The surreal version of the VNM representation theorem in “Surreal Decisions” (https://arxiv.org/abs/2111.00862) seems to still have a surreal version of the Archimedean axiom.
That’s right! However, it is not really a problem unless we can obtain surreal probabilities from the real world; and if all our priors and evidence are just real numbers, Bayesian updates won’t lead us into the surreal territory. (And it seems non-real-valued probabilities don’t help us in infinite domains, as I’ve written in https://www.lesswrong.com/posts/sZneDLRBaDndHJxa7/open-thread-fall-2024?commentId=LcDJFixRCChZimc7t.)

Re the parent example: a utility function (or its evaluations) changing in a predictable way seems like a problem for rational optimization. If you know you prefer A to B, and know that you will prefer B to A in the future even given only your current context (so no “waiter must run back and forth”), then you don’t reflectively endorse either decision.
So would it be accurate to say that a preference over lotteries (where each lottery involves only real-valued probabilities) satisfies the axioms of the VNM theorem (except for the Archimedean property) if and only if that preference is equivalent to maximizing the expectation value of a surreal-valued utility function?
Re the parent example, I agree that preferences changing in a predictable way are a problem for rational optimization, but I think “what kind of agent am I happy about being?” is a distinct question from “what kinds of agents exist among minds in the world?”.