The VNM axioms refer to an “agent” who has “preferences” over lotteries of outcomes. It seems to me this is challenging to interpret if there isn’t a persistent agent, with a persistent mind, who assigns Bayesian subjective probabilities to outcomes (which I’m assuming it has some ability to think about and care about, i.e. my (4)), and who chooses actions based on its preferences between lotteries. That is, it seems to me the axioms rely on there being a mind that is, in certain ways, persistent and unaffected.
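(For concreteness, the standard statement I have in mind: if the agent’s preference relation $\succeq$ over lotteries satisfies completeness, transitivity, continuity, and independence, then there is a utility function $u$ on outcomes such that

$$L \succeq M \iff \mathbb{E}_{o \sim L}[u(o)] \ge \mathbb{E}_{o \sim M}[u(o)].$$

It’s the single, stable $\succeq$ in this statement that looks to me like it presupposes a persistent mind.)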
Do you (Habryka) mean there’s a new “utility function” at any given moment, made of “outcomes” that can include parts of how the agent runs its own internals? Or can you say more about how VNM is compatible with the negations of my (1), (3), and (4), or otherwise give me more traction for figuring out where our disagreement is coming from?
I was reasoning mostly from “what’re the assumptions required for an agent to base its choices on the anticipated external consequences of those choices.”
It seems to me this is challenging to interpret if there isn’t a persistent agent, with a persistent mind, who assigns Bayesian subjective probabilities to outcomes
Right, but if there isn’t a persistent agent with a persistent mind, then we no longer have an entity to which predicates of rationality apply (at least in the sense in which the term “rationality” is usually understood in this community). Talking about it in terms of “it’s no longer VNM-rational” feels like saying “it’s no longer wet” when you change the subject of discussion from physical bodies to abstract mathematical structures.

Or am I misunderstanding you?
I was trying to explain to Habryka why I thought (1), (3) and (4) are parts of the assumptions under which the VNM utility theorem is derived.
I think all of (1), (2), (3), and (4) are part of the context I’ve usually pictured in understanding VNM as having real-world application, at least. And they’re part of this context because I’ve been wanting to think of a mind as having persistence, persistent preferences, and persistent (though rationally updated) beliefs about what lotteries of outcomes can be chosen via particular physical actions. (E.g., in Scott’s example about the couple, one could say “they don’t really violate independence; they just also care about process-fairness” or something, but it seems more natural to attach words to real-world scenarios in such a way as to say the couple does violate independence. And when I try to reason this way, I end up thinking that all of (1)-(4) are part of the most natural way to try to get the VNM utility theorem to apply to the world with sensible, non-grue-like word-to-stuff mappings.)
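(For reference, the independence axiom in its standard form, so it’s clear what the couple would be violating: for all lotteries $A$, $B$, $C$ and any $p \in (0, 1]$,

$$A \succeq B \iff pA + (1-p)C \succeq pB + (1-p)C.$$

The “they just care about process-fairness” reading escapes the violation by redescribing the two compound lotteries as containing different outcomes, and that redescription is exactly the kind of strained word-to-stuff mapping I’d rather avoid.)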
I’m not sure why Habryka disagrees. I feel like lots of us are talking past each other in this subthread, and am not sure how to do better.
I don’t think I follow your (Mateusz’s) remark yet.