[Question] How should a rational agent construct their utility function when faced with existence?
Intuitively, some utility functions should be discarded based on the principle of indifference. For example, my utility function shouldn't change with my angular position relative to the sun (everything else held equal), because I have no reason to prefer one direction over another.
It also feels like my utility function should not change over time.
I'm wondering whether there are symmetry- and invariance-based arguments one can use to restrict the construction of one's own utility function. Is there any literature, or are there posts, I can read on this?
In other words, is there an objectively, rationally better purpose one can give one's own life based on symmetries in the real world?
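As a toy illustration of the indifference constraint (hypothetical names, not from the question itself): a utility function satisfies rotational indifference exactly when it factors through rotation-invariant quantities, such as distance from the sun rather than angular position.

```python
import math

def rotate(x, y, theta):
    """Rotate a point (x, y) about the origin -- here, the sun -- by theta radians."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def utility(x, y):
    """A toy utility that depends only on distance from the sun, a
    rotation-invariant quantity, so it automatically satisfies the
    angular-indifference constraint."""
    return -abs(math.hypot(x, y) - 1.0)  # peak utility at unit distance

# The utility is unchanged under any rotation about the sun.
p = (0.6, 0.8)
for theta in (0.3, 1.0, 2.5):
    q = rotate(*p, theta)
    assert math.isclose(utility(*p), utility(*q), abs_tol=1e-9)
```

Any utility that instead read off the angle directly would violate the constraint, which is the sense in which the symmetry "discards" it.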
"Rational agent" implies that a utility function exists; it does not say much about what that utility function is, or whether such an agent has any ability or duty to change its own utility function. It does mean the utility function is self-consistent and consistent over time (though how it is applied in decisions can change as the agent updates).
It's not actually specified whether an agent "constructs" or "discovers" their utility function. Humans are not rational agents, so it's even less clear which one applies. I tend to model it as a mix of both: some ability to influence and learn to prefer different things, on top of an underlying starting point given by evolution and early environment.
There's nothing obviously wrong with preferring a given chirality of the solar system, though it's unlikely to come up in any actual decision. Nor is there anything wrong with changes over time, as long as they're CONSISTENT changes. In fact, even an inconsistent change can be correct, if not rational: it only means your previous utility function was irrational. It does NOT imply that the new one is.
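One standard way to make the consistent-vs-inconsistent-change distinction concrete (a textbook discounting example, not something from the answer above): exponential discounting preserves the preference order between two dated rewards no matter when you evaluate them, while hyperbolic discounting can reverse it as the sooner reward approaches.

```python
def exp_value(reward, delay, delta=0.9):
    """Exponentially discounted value; the preference order between two
    dated rewards never depends on the evaluation time (time-consistent)."""
    return reward * delta ** delay

def hyp_value(reward, delay, k=1.0):
    """Hyperbolically discounted value; preferences can reverse as the
    sooner reward gets close (time-inconsistent)."""
    return reward / (1 + k * delay)

# Options: $100 on day 100 vs $110 on day 101, evaluated at day t.
for t in (0, 50, 99):
    # Exponential: the $100 option wins at every t, since 0.9 * 110 < 100.
    assert exp_value(100, 100 - t) > exp_value(110, 101 - t)

# Hyperbolic: far away you prefer $110 later; up close you flip to $100 sooner.
assert hyp_value(100, 100) < hyp_value(110, 101)  # evaluated at t = 0
assert hyp_value(100, 1) > hyp_value(110, 2)      # evaluated at t = 99
```

The hyperbolic agent's preferences at t = 99 contradict its own preferences at t = 0, which is the kind of inconsistent change over time that the answer flags as irrational.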