It seems accurate to say that it’s treated in a utility-like way within certain incentive systems, but actually calling it a form of utility seems to imply a kind of agency that none of the money-optimizers I can think of have. Except perhaps automated trading systems, and even those can be given whatever utility curve over money their designers feel like setting.
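To make that last point concrete, here is a minimal sketch (the function names and curve choices are hypothetical, not drawn from any real trading system) of how a designer might plug in arbitrarily different utility curves over money:

```python
import math

# Hypothetical utility curves over money. The "optimizer" is the same
# either way; only the designer-chosen mapping from wealth to utility
# changes what it treats as an improvement.

def linear_utility(wealth: float) -> float:
    """Risk-neutral: every extra dollar is worth the same."""
    return wealth

def log_utility(wealth: float) -> float:
    """Risk-averse: diminishing returns to additional wealth."""
    return math.log(wealth) if wealth > 0 else float("-inf")

def capped_utility(wealth: float, cap: float = 1_000_000.0) -> float:
    """Satisficing: indifferent to gains beyond a target."""
    return min(wealth, cap)

for curve in (linear_utility, log_utility, capped_utility):
    print(curve.__name__, curve(10_000.0), curve(2_000_000.0))
```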
You don’t think economic systems have “agency”? Despite being made up of large numbers of humans and optimising computer systems?
Not really, no. They have goals in the sense that aggregating their subunits’ goals gives us something of nonzero magnitude, but their ability to make plans and act intentionally usually seems very limited compared to individual humans’, never mind well-programmed software. Where we find exceptions, it’s usually because of an exceptional human at the helm, which of course implies more humanlike and less money-optimizerlike behavior.
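The “nonzero magnitude” point can be illustrated with a toy model (a sketch under the assumption that subunit goals can be treated as random 2D unit vectors, which is of course a gross simplification):

```python
import math
import random

random.seed(0)

def unit_goal() -> tuple[float, float]:
    """A subunit's goal, modeled as a random unit vector in 2D."""
    angle = random.uniform(0.0, 2.0 * math.pi)
    return (math.cos(angle), math.sin(angle))

# Aggregate 1,000 subunit goals. Each pulls with magnitude 1, but the
# vector sum has magnitude around sqrt(1000) ≈ 32: nonzero, yet far
# weaker than 1,000 aligned agents pulling together.
goals = [unit_goal() for _ in range(1000)]
aggregate = (sum(x for x, _ in goals), sum(y for _, y in goals))
print(f"aggregate magnitude: {math.hypot(*aggregate):.1f} "
      f"(vs. {len(goals)} if perfectly aligned)")
```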
Right. So, to a first approximation, humans make reasonable money-optimizers. Thus the “Homo economicus” model.
I think it is perfectly reasonable to say that companies have “agency”. Companies are powerfully agent-like entities, complete with mission statements, contractual obligations and reputations. Their ability to make plans and act intentionally is often superhuman. Also, in many jurisdictions they are actually classified as legal persons.