For practical purposes I agree that it does not help much to talk about utility functions. As the We Don’t Have a Utility Function article points out, we simply do not know our utility functions; we only know vague terminal values. However, as you pointed out yourself, that does not mean that we do not “have” a utility function at all.
The soft (and hard) failure seems to be a tempting but unnecessary case of pseudo-rationalization. Still, the concept of an agent “having” a utility function (perhaps in the sense of “acting in a complex way towards optimizing” one) seems very important for defining utilitarian (hence the name, I guess...) ethical systems. In contrast, the notion of terminal values is much vaguer and not sufficient for defining utilitarianism. Something similar (practical uselessness but theoretical importance) applies to evaluating the intelligence of an agent. Therefore, I think the term ‘utility function’ is essential for theoretical debate, even though I agree that it is sometimes used in the wrong place.
The soft (and hard) failure seems to be a tempting but unnecessary case of pseudo-rationalization.
I’d have called it “the danger of falling in love with your model”. The mathematics of having a utility function is far more elegant than what we actually have, a thousand shards of desire that Dutch-book you into working for the propagation of your genes. So people try to act as though they have a utility function, and this leaves them open to ordinary human-level exploits, since assuming you have a utility function still doesn’t work.
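To make the Dutch-booking point concrete, here is a minimal sketch of the classic money pump, assuming a toy agent with cyclic preferences (A preferred to B, B preferred to C, C preferred to A). All of the names and numbers below are illustrative, not taken from the comment: a bookie repeatedly offers the strictly preferred item for a small fee, the agent accepts every trade, ends up holding what it started with, and is strictly poorer.

```python
FEE = 0.01  # price the agent is willing to pay for each preferred swap

# For each item the agent might hold, the item it strictly prefers.
# The cycle A > B > C > A is what makes the pump possible.
UPGRADE = {"B": "A", "A": "C", "C": "B"}


def money_pump(start_item: str, rounds: int) -> float:
    """Run `rounds` trades against the cyclic agent; return total fees extracted."""
    held, extracted = start_item, 0.0
    for _ in range(rounds):
        held = UPGRADE[held]   # the agent happily trades up...
        extracted += FEE       # ...and pays the fee each time
    return extracted


if __name__ == "__main__":
    # After 300 trades (a multiple of the cycle length 3) the agent holds
    # the same item it started with but has paid 300 fees.
    print(f"Fees extracted: {money_pump('B', 300):.2f}")  # -> 3.00
```

A consistent utility function rules this pattern out, which is exactly why the model is so tempting; the exploit only appears once the preferences fail to be transitive.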