I’m completely baffled by your reply. I have no idea what the “technical sense” of the term “utility function” is, but I thought I was using it the normal, LW way: to refer to an agent’s terminal values.
What term should I use instead? I was under the impression that “utility function” was pretty safe, but apparently it carries some pretty heavy baggage. I’ll gladly switch to using whatever term would prevent this sort of reply in the future. Just let me know.
Or perhaps I simply repeated “utility function” way too many times in that response? I probably should have switched it up a lot more and alternated it with “terminal values”, “goal set”, etc. Using it like 6 times in such a short comment may have been careless and brought it undue attention and scrutiny.
Or… is there something you disagree with in my assessment? I understand that it’s controversial to claim that people even have coherent utility functions, or even have terminal values, or whatever, so perhaps my comment takes for granted something that it shouldn’t?
Two more things:
Can you explain how exactly I conflated all those senses into that single word? I thought I used the term to refer to the same exact thing over and over, and I haven’t heard anything to convince me otherwise.
And what exactly does it mean for it to be a “rhetorical pseudomath buzzword”? That sounds like an eloquent attack, but I honestly can’t pinpoint how to interpret it at any higher level of detail than you simply reacting to my usage in a disapproving way.
Anyway, do you disagree that somebody could, from one moment to the next, have a terminal value (or whatever) for avoiding emotional pain at all costs? Or is that wrong or incoherent? Or what?
Your usage was fine. Some people will try to go all ‘deep’ on you and challenge even the use of the term “terminal values” because “humans aren’t that simple”, etc. But that is their baggage, not yours, and it can be safely ignored.