The arguments in the posts themselves seem unimpressive to me in this context. If there are strong arguments that human actions cannot, in principle, be modelled well by using a utility function, perhaps they should be made explicit.
Agreed. Now, if it were possible to write a complete utility function for some person, it would be pretty clear that “utility” did not equal happiness, or anything simple like that.
I tend to think that the best candidate in most organisms is “expected fitness”. It’s probably reasonable to expect fairly heavy correlations with reward systems in brains—if the organisms have brains.
Agents which can’t be modelled by a utility-based framework are:
Agents which are infinite;
Agents with uncomputable utility functions.
AFAIK, there’s no good evidence that either kind of agent can actually exist. Counter-arguments are welcome, of course.
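To make the "in principle" point concrete, here is a minimal sketch (Python, my own illustration; the function names and the example choice table are hypothetical) of the trivial direction of the claim: any finite, deterministic choice table can be rationalized by *some* utility function, so the interesting potential counterexamples really are the infinite or uncomputable cases.

```python
# Sketch: any agent with a finite, deterministic policy can be "modelled by a
# utility function" in the weak sense that we can construct a u(state, action)
# whose maximizer reproduces the observed choices exactly.

def utility_from_policy(policy):
    """Build a utility function that rationalizes a finite choice table.

    `policy` maps each state to the action the agent was observed to take.
    The returned u(state, action) is 1 for the observed choice and 0
    otherwise, so a u-maximizer behaves identically to the original agent.
    """
    def u(state, action):
        return 1.0 if policy[state] == action else 0.0
    return u

def utility_maximizer(u, actions):
    """Return an agent that picks the action maximizing u in each state."""
    def act(state):
        return max(actions, key=lambda a: u(state, a))
    return act

# Hypothetical observed behaviour of some agent:
observed = {"hungry": "eat", "tired": "sleep", "bored": "play"}
u = utility_from_policy(observed)
agent = utility_maximizer(u, actions=["eat", "sleep", "play"])

assert all(agent(s) == observed[s] for s in observed)  # behaviour reproduced
```

Of course, this only shows that a utility function *exists* for finite behaviour; it says nothing about whether that function is compact, learnable, or anything like "happiness", which is the point made above.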