Agreed. Now, if it were possible to write a complete utility function for some person, it would be pretty clear that “utility” did not equal happiness, or anything simple like that.
I tend to think that the best candidate in most organisms is “expected fitness”. It’s probably reasonable to expect fairly heavy correlations with reward systems in brains—if the organisms have brains.