My understanding of utility function is that it’s impossible not to have one, even if the function is implicit, subconscious, or something that wouldn’t be endorsed if it could be stated explicitly.
My understanding is that a utility function implies consistent and coherent preferences. Humans definitely don’t have that: our preferences are inconsistent and subject to, for instance, framing effects.
That’s the correct definition, but rationalists have got into the habit of using “utility function” to mean preferences, leading to considerable confusion.
I’ve been interpreting ‘utility function’ along the lines of ‘coherent extrapolated volition’, i.e. something like the coherent, consistent utility function that best approximates one’s actual preferences.
The intuition is that there is, in some sense, an adjacent or nearby utility function, even if human behavior isn’t (perfectly) consistent or coherent.
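The “nearby utility function” intuition can be made concrete with a toy sketch (the item names and the cyclic preferences here are hypothetical, just for illustration): given intransitive pairwise preferences, no utility function represents them exactly, but we can search for the coherent ranking that violates the fewest of them.

```python
from itertools import permutations

# Hypothetical intransitive preferences (e.g. induced by framing effects):
# the agent prefers A over B, B over C, but also C over A.
prefs = [("A", "B"), ("B", "C"), ("C", "A")]

def violations(ranking, prefs):
    """Count how many stated preferences a utility ranking fails to respect."""
    utility = {item: -i for i, item in enumerate(ranking)}  # earlier = higher utility
    return sum(1 for a, b in prefs if utility[a] <= utility[b])

# Search every coherent (complete, transitive) ranking for the one that
# best approximates the incoherent preferences -- the "nearby" utility function.
best = min(permutations("ABC"), key=lambda r: violations(r, prefs))
print(best, violations(best, prefs))  # the cycle forces at least one violation
```

Here every one of the six possible rankings violates at least one stated preference, which is exactly the sense in which humans lack a utility function; the ranking minimizing violations is the “adjacent” one.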