I’ve been interpreting ‘utility function’ along the lines of ‘coherent extrapolated volition’, i.e. roughly the coherent, consistent utility function that best approximates a person’s actual preferences.
The intuition is that there is, in some sense, a nearby utility function, even if human behavior itself isn’t (perfectly) consistent or coherent.
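One way to make ‘nearby’ concrete, as a rough sketch only (the distance $d$, the candidate set $\mathcal{U}$, and the observed relation $\succ_{\text{obs}}$ are my own placeholders, not anything CEV itself specifies):

$$U^{*} \in \arg\min_{U \in \mathcal{U}} \; d\big(\succ_{U},\ \succ_{\text{obs}}\big)$$

where $\mathcal{U}$ is the set of coherent (say, VNM-consistent) utility functions, $\succ_{U}$ is the preference ordering induced by $U$, $\succ_{\text{obs}}$ is the possibly cyclic relation revealed by actual behavior, and $d$ could be something simple like the number of pairs of options the two relations rank differently.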