This post is all wrong. You can, in fact, closely model the actions of any computable agent using a utility function.
This has been previously explained here—and elsewhere.
Well, one way to weakly model the actions is to assign the trivial utility function, the one which makes the agent indifferent among the outcomes of all actions. Any set of actions is then consistent with this utility function.
If you want the actions to actually be uniquely determined by the utility function, then you can do it in the way you propose—by adding additional payoffs associated with actions, rather than outcomes. I think this is “cheating”, in some sense, even though I recognize that in order to model a rational agent practicing a deontological form of ethics, you need to postulate payoffs attached to actions rather than outcomes.
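To make the construction concrete, here is a minimal Python sketch of these two “tricks” (the policy and action names are invented for illustration, not taken from any text):

```python
# Trick 1: the trivial utility function u(o) = 0 for every outcome o makes
# the agent indifferent, so ANY behavior is consistent with it.
# Trick 2: attach payoffs to actions instead of outcomes, so that an
# arbitrary policy becomes, by construction, utility-maximizing.

def make_action_utility(policy):
    """Given any policy (a function from states to actions), return a
    utility function over (state, action) pairs that the policy maximizes:
    1 for the action the policy actually takes, 0 for everything else."""
    def utility(state, action):
        return 1.0 if action == policy(state) else 0.0
    return utility

def policy(state):
    return "refuse"  # a "deontological" agent: always refuses, whatever the outcome

u = make_action_utility(policy)
print(max(["refuse", "comply"], key=lambda a: u("some state", a)))  # refuse
```

The point being that once payoffs attach to actions themselves, “maximizing utility” places no constraint at all on behavior.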
However, short of using one of these two “tricks”, it is not true that any agent can be modeled by a utility function. Only rational agents can be so modeled, where “rational” agents are characterized as Bayesians who adhere to the axioms of transitivity of preference and the “sure-thing principle”.
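A toy illustration of what goes wrong otherwise: an agent with the intransitive preference cycle A > B > C > A cannot be assigned any utility function, since that would require u(A) > u(B) > u(C) > u(A). A brute-force check, assuming nothing beyond the cycle itself:

```python
from itertools import permutations

# Each pair (x, y) means the agent strictly prefers x to y.
preferences = [("A", "B"), ("B", "C"), ("C", "A")]

def has_utility_representation(preferences):
    """Check whether any ranking of the options agrees with every stated
    strict preference, i.e. whether some utility function represents them."""
    options = {x for pair in preferences for x in pair}
    for ranking in permutations(options):
        u = {opt: -i for i, opt in enumerate(ranking)}  # earlier = higher utility
        if all(u[a] > u[b] for a, b in preferences):
            return True
    return False

print(has_utility_representation(preferences))  # False: the cycle has no representation
```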
Utilities are necessarily associated with actions. That is the point of using them. Agents consider their possible actions and assign utilities to them in order to decide which one to take. It is surely not a “trick”—it is a totally standard practice.
Outcomes are not known at the time of the decision. At best, the agent has a fancy simulation—which is just a type of computer program. If some computer programs are forbidden, while others are allowed, then what is permitted and what is not should be laid down. I am happy not to have to face that unpleasant-looking task.
I’m sorry, Tim. I cannot even begin to take this seriously. Please consult any economics, game theory, or decision theory text. Chapter 1 of Myerson is just one of many possible sources.
You will learn that utilities are derived from preferences over outcomes, portfolios, or market baskets, and that from this data, one constructs expected utilities over actions so as to guide choice of actions.
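In code, that textbook construction looks roughly like this (a minimal sketch; the outcomes, probabilities, and payoffs are invented for illustration): utilities attach to outcomes, and the expected utility of an action a is EU(a) = the sum over outcomes o of P(o | a) * U(o).

```python
# Utilities are defined over OUTCOMES, not actions.
outcome_utility = {"rich": 10.0, "poor": 0.0}

# P(outcome | action): an assumed toy model of how actions lead to outcomes.
outcome_given_action = {
    "invest": {"rich": 0.6, "poor": 0.4},
    "hold":   {"rich": 0.1, "poor": 0.9},
}

def expected_utility(action):
    """EU(a) = sum over outcomes o of P(o | a) * U(o)."""
    return sum(p * outcome_utility[o]
               for o, p in outcome_given_action[action].items())

best_action = max(outcome_given_action, key=expected_utility)
print(best_action, expected_utility(best_action))  # invest 6.0
```

Note that utility here is a function of outcomes only; actions acquire only derived, expected utilities.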
I don’t know whether you are trolling here, or sharing your own original research, or are simply confused by something you may have read about “revealed preferences”. In any case, please provide a respectable reference if you wish to continue this discussion.
Uh, try here:
http://en.wikipedia.org/wiki/Utility
In economics, utility is a measure of relative satisfaction. Given this measure, one may speak meaningfully of increasing or decreasing utility, and thereby explain economic behavior in terms of attempts to increase one’s utility. Utility is often modeled to be affected by consumption of various goods and services, possession of wealth and spending of leisure time.
That is what “utility” means—and not what you mistakenly said. Consequently, if an agent has preferences for its own immediate actions, so be it—that is absolutely permitted.
I meant a respectable reference which supported your position. It sure seems to me that that quotation supported me, rather than you. So it is quite likely that we are not understanding each other. And given the results so far, it is probably not worth our while to try.
My reference looks OK to me. It supports my position just fine—and you don’t seem to disagree with it. However, here is another similar one:
http://www.economist.com/research/economics/alphabetic.cfm?letter=U
Utility: Economist-speak for a good thing; a measure of satisfaction. (See also WELFARE.) Underlying most economic theory is the assumption that people do things because doing so gives them utility. People want as much utility as they can get.
It is true that some economic definitions of utility attach utility to “goods and services”—or to “humans”—e.g.:
http://dictionary.reference.com/browse/utility
However, such definitions are simply inadequate for use in decision theory applications.
Note that utility is NOT defined as being associated with states of the external world, things in the future—or anything like that. It is simply a measure of an agent’s satisfaction.
If you really do think that satisfaction is tied to outcomes, then I think you could benefit from some more exposure to Buddhism—which teaches that satisfaction lies within. The hypothesis that it is solely a function of the state of the external world is just wrong.