I don’t think that human values are well described by a utility function if, by “utility function”, we mean “a function which an optimizing agent will behave risk-neutrally towards”. If we mean something more general by “utility function”, then I am less confident that human values don’t fit into one.
It seems challenging to understand you. What does it mean to behave risk-neutrally towards a function? Behaving risk-neutrally presupposes an environment with some potential risks in it.
...It seems challenging to understand you, too. Everything that optimizes for a function needs an environment to do it in. Indeed, any utility function extracted from a human’s values would make sense only relative to an environment with risks in it, whether the agent trying to optimize that function is a human or not, risk-neutral or not. So what are you asking?
I have concluded to my satisfaction that it would not be an efficient expenditure of our time to continue attempting to understand each other in this matter.
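An aside to make the opening distinction concrete: below is a minimal sketch, assuming simple lotteries over numeric outcomes, of an agent that is risk-neutral towards a function f (it ranks lotteries purely by the expectation of f) versus one that is not (here, it also penalizes variance in f). The lotteries and the penalty weight are illustrative assumptions, not anything either commenter wrote.

```python
# Sketch: a lottery is a list of (probability, outcome) pairs; f scores outcomes.

def expected(f, lottery):
    """Expectation of f under the lottery."""
    return sum(p * f(x) for p, x in lottery)

def variance(f, lottery):
    """Variance of f under the lottery."""
    mean = expected(f, lottery)
    return sum(p * (f(x) - mean) ** 2 for p, x in lottery)

def risk_neutral_score(f, lottery):
    # Cares only about the expectation of f: risk-neutral towards f.
    return expected(f, lottery)

def risk_averse_score(f, lottery, penalty=0.01):
    # Also dislikes spread in f, so it is not risk-neutral towards f.
    return expected(f, lottery) - penalty * variance(f, lottery)

f = lambda outcome: outcome           # value function: just the outcome itself
sure_thing = [(1.0, 50)]              # 50 for certain
gamble = [(0.5, 0), (0.5, 100)]       # coin flip between 0 and 100

# Same expectation of f, so the first agent is indifferent (50.0 vs 50.0)...
print(risk_neutral_score(f, sure_thing), risk_neutral_score(f, gamble))
# ...while the second prefers the sure thing (50.0 vs 25.0).
print(risk_averse_score(f, sure_thing), risk_averse_score(f, gamble))
```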
I was trying to get you to clarify what you meant.
As far as I can tell, your reply makes no attempt to clarify :-(
“Utility function” does not normally mean “a function which an optimizing agent will behave risk-neutrally towards”. It means the function which, when maximised, explains an agent’s goal-directed actions. Quite apart from the question of why the term would need redefining, the proposed redefinition appears incomprehensible, at least to me.
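As a concrete gloss on that standard sense (a toy sketch of mine, not the commenter’s): over deterministic options, any assignment of numbers that gives whatever the agent actually picks the highest score will, when maximised, reproduce the observed choices; risk attitudes only enter once lotteries do.

```python
# Sketch: a utility function as "the function which, when maximised,
# explains an agent's goal-directed actions" (deterministic choices only).

observed_choices = [
    (("apple", "banana"), "apple"),    # offered apple vs banana, picked apple
    (("banana", "cherry"), "banana"),
    (("apple", "cherry"), "apple"),
]

# One consistent assignment (any order-preserving rescaling works just as well).
utility = {"apple": 3, "banana": 2, "cherry": 1}

def choose(options):
    # Goal-directed action modelled as picking the utility-maximising option.
    return max(options, key=utility.get)

# The fitted function reproduces every observed choice.
assert all(choose(options) == picked for options, picked in observed_choices)
```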
Can you give an example of a non-risk-neutral utility function that can’t be converted into a standard utility function by rescaling?
Bonus points if it doesn’t make you into a money pump.
No, because I don’t have a good handle on what magic can and cannot be done with math; when I have tried to do this in the past, it has gone like this:
Me: But thus and so and thresholds and ambivalence without indifference and stuff.
Mathemagician: POOF! Look, this thing you don’t understand satisfies your every need.
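For readers wondering what “converted into a standard utility function by rescaling” could look like, here is a small illustration (mine, with made-up numbers, not part of the thread): an agent that is risk-averse over money can often be re-described as a risk-neutral maximizer of the expectation of a concave rescaling of money, here the square root.

```python
import math

# A lottery is a list of (probability, money) pairs.
def expected(score, lottery):
    return sum(p * score(m) for p, m in lottery)

sure_thing = [(1.0, 40)]
gamble = [(0.5, 0), (0.5, 100)]

# Measured in raw money, the gamble has the higher expectation (50 vs 40)...
print(expected(lambda m: m, gamble), expected(lambda m: m, sure_thing))

# ...but an expected-sqrt(money) maximizer picks the sure thing (about 6.32 vs 5.0):
# risk aversion over money has been repackaged as risk-neutral maximization
# of a rescaled ("standard") utility function.
print(expected(math.sqrt, gamble), expected(math.sqrt, sure_thing))
```

As I understand it, the preferences that resist any such rescaling are exactly those violating the von Neumann–Morgenstern axioms, and of those, the intransitive ones are the ones that invite money pumps.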