I think there is no values-preserving representation of any human’s approximation of a utility function according to which risk neutrality is unambiguously rational.
Could you clarify this? I think you are saying that human values are not well-described by a utility function (and stressing certain details of the failure), but you seem to explicitly assume a good approximation by a utility function, which makes me uncertain.
Risk neutrality is often used with respect to a resource. But if you just want to say that humans are not risk-neutral about money, there’s no need to mention representations; you can just talk about preferences. So I think you’re talking about risk neutrality with respect to putative utiles. But satisfying the vNM axioms, which is what it takes to be representable by a utility function, just is risk neutrality about utiles: if one satisfies the axioms, the way one reconstructs the utility function is by risk neutrality with respect to a reference utile.
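(To spell out the reconstruction I have in mind, here is a rough sketch of the standard vNM construction; take it as my reading, not as something you said. Fix a best outcome $B$ and a worst outcome $W$ as the reference, and define the utility of an outcome $x$ as the indifference probability against a gamble between them:

$$u(x) = p \quad\text{such that}\quad x \;\sim\; p\,B + (1-p)\,W.$$

The axioms then force the value of any lottery to be linear in the probabilities,

$$U\big(q\,x_1 + (1-q)\,x_2\big) = q\,u(x_1) + (1-q)\,u(x_2),$$

which is exactly risk neutrality in utiles, even if it is not risk neutrality in money; and $u$ is pinned down only up to a positive affine transformation $u \mapsto a\,u + b$ with $a > 0$.)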
I propose:
I think there is no numeric representation of any human’s values according to which risk neutrality is unambiguously rational.

Am I missing the point?
I don’t think that human values are well described by a utility function if, by “utility function”, we mean “a function which an optimizing agent will behave risk-neutrally towards”. If we mean something more general by “utility function”, then I am less confident that human values don’t fit into one.
It seems challenging to understand you. What does it mean to behave risk-neutrally towards a function? To behave risk-neutrally, there has to be an environment with some potential risks in it.
...It seems challenging to understand you, too. Everything that optimizes for a function needs an environment to do it in. Indeed, any utility function extracted from a human’s values would make sense only relative to an environment with risks in it, whether the agent trying to optimize that function is a human or not, risk-neutral or not. So what are you asking?
I was trying to get you to clarify what you meant.
As far as I can tell, your reply makes no attempt to clarify :-(
“Utility function” does not normally mean:
“a function which an optimizing agent will behave risk-neutrally towards”.
It means the function which, when maximised, explains an agent’s goal-directed actions.
Apart from the issue of why one would redefine the term, the proposed redefinition appears incomprehensible, at least to me.
I have concluded to my satisfaction that it would not be an efficient expenditure of our time to continue attempting to understand each other in this matter.
Can you give an example of a non-risk-neutral utility function that can’t be converted to a standard utility function by rescaling?
Bonus points if it doesn’t make you into a money pump.
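(For reference, by “money pump” I mean the standard intransitivity argument, with hypothetical numbers: suppose an agent’s preferences cycle, $A \succ B \succ C \succ A$, and suppose it will pay at least one cent to trade up to anything it prefers. Then a trader can walk it around the cycle,

$$C \;\to\; B \;\to\; A \;\to\; C, \qquad \text{loss per cycle} = 3 \times \$0.01 = \$0.03,$$

leaving it holding exactly what it started with but steadily poorer, for as many cycles as you like.)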
No, because I don’t have a good handle on what magic can and cannot be done with math; when I have tried to do this in the past, it looks like this:
Me: But thus and so and thresholds and ambivalence without indifference and stuff.
Mathemagician: POOF! Look, this thing you don’t understand satisfies your every need.
My guess would be that she meant that there is no physical event that corresponds to a utile that humans want to behave risk-neutrally toward, and/or that if you abstracted human values enough to create such an abstract utile, it would be unrecognizable and unFriendly.
This is at least close, if I understand what you’re saying.