I think the point is that any decision algorithm, even one which has intransitive preferences over world-states, can be described as optimization of a utility function. However, the objects to which utility is assigned may be ridiculously complicated constructs rather than the things we think should determine our actions.
To show this is trivially true, take your decision algorithm and consider the utility function "1 for acting in accordance with this algorithm, 0 for not doing so". Tim is giving an example where it doesn't have to be this ridiculous, but it still has to operate at a meta level relative to object-level preferences.
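A minimal sketch of that trivial construction, with illustrative names (`policy`, `trivial_utility`, and the rock-paper-scissors preferences are my own, not from the discussion): even an algorithm with intransitive preferences becomes "utility maximization" once the utility function just rewards agreement with the algorithm.

```python
def policy(state):
    # An arbitrary decision algorithm -- here one with intransitive
    # preferences: it picks the option its current state "beats".
    beats = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
    return beats[state]

def trivial_utility(state, action):
    # "1 for acting in accordance with this algorithm, 0 for not doing so."
    return 1 if action == policy(state) else 0

def maximize(state, actions):
    # The "optimizer": choose the action with the highest trivial utility.
    # By construction this always reproduces the original algorithm.
    return max(actions, key=lambda a: trivial_utility(state, a))

actions = ["rock", "paper", "scissors"]
print(maximize("rock", actions) == policy("rock"))  # True
```

The catch, as above, is that `trivial_utility` is defined over a meta-level object (agreement with the algorithm) rather than over the world-states the agent supposedly cares about.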
Still (I say), if it’s less complicated to describe the full range of human behavior by an algorithm that doesn’t break down into utility function plus optimizer, then we’re better off doing so (as a descriptive strategy).