Yeah, I’d say, rather, that a utility function can be a description of an agent at a particular point in time, or across the agent’s entire existence, depending on how you frame it.
Like, for an instant in time where you are evaluating what an agent will do next, there is some mathematical description of what they will do next based on their state of existence and the context they are in.
If you have several moments in time, you could define such a description for each moment. Indeed, as the agent may change over time, and the context almost certainly does, the utility function couldn’t be static (unless you were referring to the outside-of-time all-timepoints-included utility function).
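One way to make this framing concrete is a sketch, in code, of the contrast between a per-timepoint utility function that drifts as the agent changes and a single static function defined over whole histories. All names and the drift rule here are hypothetical, purely for illustration:

```python
# Illustrative sketch (my framing, not a formal claim): a per-timepoint
# utility U_t that can drift as the agent changes, versus a single static
# "outside-of-time" function defined over entire histories.

def utility_at(t: int, outcome: str) -> float:
    # Hypothetical drift: this agent's valuation of "leisure" vs "work"
    # shifts linearly with time. The numbers are made up.
    weights = {"leisure": 1.0 - 0.1 * t, "work": 0.5 + 0.1 * t}
    return weights[outcome]

def utility_over_history(history: tuple) -> float:
    # A static function over whole trajectories: it never changes,
    # because time is already inside its argument.
    return sum(utility_at(t, outcome) for t, outcome in enumerate(history))

print(utility_at(0, "leisure"), utility_at(5, "leisure"))  # 1.0 0.5
print(utility_over_history(("leisure", "work")))           # 1.0 + 0.6 = 1.6
```

The per-timepoint function changes between t=0 and t=5, while `utility_over_history` is one fixed mathematical object, which is the distinction the parenthetical above is gesturing at.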
Does that make sense?
I’m not stating this with much confidence; this doesn’t feel like an idea I fully grok. I’m just trying to share what I think I’ve learned, and to learn from you what you know, since it seems you’ve thought this through more than I have.
My assertion is that all utility functions (i.e., all functions that satisfy the 4 VNM axioms plus perhaps some additional postulates most of us would agree on) are static (do not change over time).
I should try to prove that. I’ve been telling myself for months that I should, but I haven’t mustered the energy, so I’m posting the assertion now without proof, because a weak argument posted now is better than a perfect argument that might never be posted.
I’ve never been tempted to distinguish between “the outside-of-time all-timepoints-included utility function” and other utility functions, such as the one referred to by the definition of expected utility: EU(action) = Σ over all outcomes of U(outcome) · p(outcome | action).
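The expected-utility definition just mentioned can be spelled out as a short computation. This is a minimal sketch over a finite outcome space with known conditional probabilities; the outcome names and numbers are made up for illustration:

```python
# Hedged sketch of EU(action) = sum over outcomes of U(outcome) * p(outcome | action),
# assuming a finite outcome space. All names and probabilities are illustrative.

def expected_utility(action, outcomes, utility, prob):
    """Sum U(o) * p(o | action) over all outcomes o."""
    return sum(utility(o) * prob(o, action) for o in outcomes)

# Toy example: two outcomes of a single action.
outcomes = ["win", "lose"]
utility = {"win": 1.0, "lose": 0.0}.get
prob = lambda o, a: {"win": 0.7, "lose": 0.3}[o]  # p(o | a); same for every a here

print(expected_utility("bet", outcomes, utility, prob))  # 0.7
```

Note that `utility` here is a single fixed function, which is what the “static” claim above is about: the expectation varies with the action and the probabilities, but U itself does not change.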
Ok, the static nature of a utility function for a static agent makes sense.
But in the case of humans, or of ML models with online (ongoing) learning, we aren’t static agents.
The continuity of self is an illusion. Every fraction of a second we become a fundamentally different agent. Usually this is only imperceptibly different. The change isn’t a random walk, however; it’s driven by interactions with the environment and built-in algorithms, plus randomness and (in the case of humans) degradation from aging.
Over the span of seconds, this likely has no meaningful impact on the utility function. Over a longer span, like a year, this has a huge impact. Fundamental values can shift. The different agents at those different timepoints surely have different utility functions, don’t they?
IMHO, no.