What is the precise statement for being able to use surreal numbers when we remove the Archimedean axiom? The surreal version of the VNM representation theorem in “Surreal Decisions” (https://arxiv.org/abs/2111.00862) seems to still have a surreal version of the Archimedean axiom.
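For concreteness, the classical continuity/Archimedean axiom being discussed can be stated (my paraphrase, not a quote from the paper) as:

$$A \succ B \succ C \;\implies\; \exists\, p, q \in (0,1) :\; pA + (1-p)C \;\succ\; B \;\succ\; qA + (1-q)C$$

i.e. no outcome is so good (or so bad) that no probability mixture with it can be traded off against an intermediate outcome; this is the axiom that rules out lexicographic/infinitesimal-weight preferences.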
Re the parent example, I was imagining that the 2-priority utility function for the parent only applies after they already have children, and that their utility function before having children is able to trade off between not having children, having some who live, and having some who die. Anecdotally, it seems that a lot of new parents experience diachronic inconsistency in their preferences.
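As a minimal sketch of the 2-priority idea (hypothetical utility numbers; Python's lexicographic tuple comparison stands in for an infinitesimal weight on the second priority):

```python
# 2-priority (lexicographic) utility: outcomes are (priority1, priority2)
# pairs, compared coordinate-wise with the first coordinate dominating.
# All numbers here are hypothetical, for illustration only.

def expected_utility(lottery):
    """Coordinate-wise expectation of a lottery of (prob, (u1, u2)) pairs."""
    eu1 = sum(p * u[0] for p, u in lottery)
    eu2 = sum(p * u[1] for p, u in lottery)
    return (eu1, eu2)

child_safe = (1.0, 0.0)   # top priority satisfied, no bonus on priority 2
child_risk = (0.0, 1.0)   # top priority violated, priority 2 gained

# A sure safe outcome vs. a gamble with a tiny risk to the top priority:
safe = [(1.0, child_safe)]
gamble = [(0.999999, (1.0, 1.0)), (0.000001, child_risk)]

# Tuple comparison implements the lexicographic preference: no matter how
# small the risk to priority 1, the safe option wins, which is exactly a
# violation of the Archimedean/continuity axiom.
print(expected_utility(safe) > expected_utility(gamble))  # True
```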
Re the parent example, a utility function (or its evaluations) changing in an expectable way seems problematic for rational optimizing. If you know you prefer A to B, and know that you will prefer B to A in the future even given only your current context (so no “waiter must run back and forth”), then you don’t reflectively endorse either decision.
So would it be accurate to say that a preference over lotteries (where each lottery involves only real-valued probabilities) satisfies the axioms of the VNM theorem (except for the Archimedean property) if and only if that preference is equivalent to maximizing the expectation value of a surreal-valued utility function?
Re the parent example, I agree that changing in an expectable way is problematic to rational optimizing, but I think “what kind of agent am I happy about being?” is a distinct question from “what kinds of agents exist among minds in the world?”.
That’s right! However, it is not really a problem unless we can obtain surreal probabilities from the real world; and if all our priors and evidence are just real numbers, Bayesian updates won’t lead us into surreal territory. (And it seems non-real-valued probabilities don’t help us in infinite domains, as I’ve written in https://www.lesswrong.com/posts/sZneDLRBaDndHJxa7/open-thread-fall-2024?commentId=LcDJFixRCChZimc7t.)
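To spell out the closure point with a toy example (hypothetical numbers): a Bayesian update is just real arithmetic, so real-valued priors and likelihoods can only produce real-valued posteriors.

```python
# Hypothetical illustration: Bayes' rule uses only multiplication, addition,
# and division of reals, so real inputs can never yield surreal posteriors.

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | E) from a real-valued prior and likelihoods."""
    evidence = prior * likelihood_h + (1 - prior) * likelihood_not_h
    return prior * likelihood_h / evidence

posterior = bayes_update(prior=0.5, likelihood_h=0.8, likelihood_not_h=0.2)
print(posterior)  # an ordinary real number, 0.8
```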