Seems to me we’ve got a gen-u-ine semantic misunderstanding on our hands here, Tim :)
My understanding of these ideas is mostly taken from reinforcement learning theory in AI (a la Sutton & Barto 1998). In general, an agent is characterized by a policy pi that gives the probability that the agent will take a particular action in a particular state, P = pi(s,a). In the most general case, pi can also depend on time, and is typically quite complicated, though usually not complex ;). Any computable agent operating over any possible state and action space can be represented by some function pi, though typically folks in this field deal in Markov Decision Processes since they’re computationally tractable. More on that in the book, or in a longer post if folks are interested. It seems to me that when you say “utility function”, you’re thinking of something a lot like pi. If I’m wrong about that, please let me know.
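To make that concrete, here’s a minimal sketch of what a tabular stochastic policy looks like in code. The states, actions, and probabilities are entirely made up for illustration, not taken from anything above:

```python
import random

# Hypothetical toy policy pi(s, a), stored as a table.
# States, actions, and probabilities are made up purely for illustration.
policy = {
    "hungry": {"eat": 0.9, "sleep": 0.1},
    "sleepy": {"eat": 0.2, "sleep": 0.8},
}

def pi(state, action):
    """Probability that the agent takes `action` in `state`."""
    return policy[state][action]

def sample_action(state):
    """Draw an action according to the policy's distribution for this state."""
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs)[0]

print(pi("hungry", "eat"))      # 0.9
print(sample_action("sleepy"))  # usually "sleep"
```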
When folks in the RL field talk about “utility functions”, generally they’ve got something a little different in mind. Some agents, but not all of them, determine their actions entirely using a time-invariant scalar function U(s) over the state space. U takes in future states of the world and outputs the reward that the agent can expect to receive upon reaching that state (loosely “how much the agent likes s”). Since each action in general leads to a range of different future states with different probabilities, you can use U(s) to get an expected utility U’(a,s):
U’(a,s) = sum over s’ of p(s,a,s’) * U(s’),
where s is the state you’re in, a is the action you take, s’ ranges over the possible future states, and p is the probability that action a taken in state s will lead to state s’. Once your agent has a U’, some simple decision rule over it (e.g. picking the action with the highest expected utility) is enough to determine the agent’s policy. There are a bunch of cool things about agents that do this, one of which (not the most important) is that their behavior is much easier to predict. This is because behavior is determined entirely by U, a function over just the state space, whereas pi is a function over the joint state and action space. From a limited sample of behavior, you can get a good estimate of U(s), and use this to predict future behavior, including in regions of state and action space that you’ve never actually observed. If your agent doesn’t use this cool U(s) scheme, the only general way to learn pi is to actually watch the thing behave in every possible region of action and state space. This, I think, is why von Neumann was so interested in specifying exactly when an agent could and could not be treated as a utility-maximizer.
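If the formula is easier to read as code, here’s the same expected-utility calculation plus a greedy decision rule, sketched in Python. The utilities, transition probabilities, states, and actions are all invented for the example:

```python
# Minimal sketch of U'(a, s) and a greedy decision rule.
# U, p, states, and actions are all made up for illustration.

U = {"fed": 1.0, "hungry": -1.0, "asleep": 0.5}  # utility over states

# p[(s, a)] maps each possible next state s' to its probability
p = {
    ("hungry", "eat"):   {"fed": 0.8, "hungry": 0.2},
    ("hungry", "sleep"): {"asleep": 0.6, "hungry": 0.4},
}

def expected_utility(state, action):
    """U'(a, s) = sum over s' of p(s, a, s') * U(s')."""
    return sum(prob * U[s_next] for s_next, prob in p[(state, action)].items())

def greedy_action(state, actions):
    """One simple decision rule: pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(state, a))

print(expected_utility("hungry", "eat"))          # 0.8*1.0 + 0.2*(-1.0) = 0.6
print(greedy_action("hungry", ["eat", "sleep"]))  # "eat"
```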
Hopefully that makes some sense, and doesn’t just look like an incomprehensible jargon-filled snow job. If folks are interested in this stuff I can write a longer article about it that’ll (hopefully) be a lot clearer.
Some agents, but not all of them, determine their actions entirely using a time-invariant scalar function U(s) over the state space.
If we’re talking about ascribing utility functions to humans, then the state space is the universe, right? (That is, the same universe the astronomers talk about.) In that case, the state space contains clocks, so there’s no problem with having a time-dependent utility function, since the time is already present in the domain of the utility function.
Thus, I don’t see the semantic misunderstanding—human behavior is consistent with at least one utility function even in the formalism you have in mind.
(Maybe the state space is the part of the universe outside of the decision-making apparatus of the subject. No matter, that state space contains clocks too.)
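As a toy illustration of the clock point (everything here is made up for the example): if the state already carries the time, then a preference that “changes over time” is just an ordinary time-invariant U(s).

```python
from collections import namedtuple

# Toy illustration: fold the clock into the state, so a "time-dependent"
# preference becomes an ordinary time-invariant U(s). All names are made up.
State = namedtuple("State", ["hour", "has_coffee"])

def U(s):
    """Likes coffee in the morning, is indifferent to it at night."""
    if s.has_coffee and s.hour < 12:
        return 1.0
    return 0.0

print(U(State(hour=8,  has_coffee=True)))   # 1.0
print(U(State(hour=22, has_coffee=True)))   # 0.0
```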
The interesting question here for me is whether any of those alternatives to having a utility function mentioned in the Allais paradox Wikipedia article are actually useful if you’re trying to help the subject get what they want. Can someone give me a clue how to raise the level of discourse enough so it’s possible to talk about that, instead of wading through trivialities? PM’ing me would be fine if you have a suggestion here but don’t want it to generate responses that will be more trivialities to wade through.