I think there is no values-preserving representation of any human’s approximation of a utility function according to which risk neutrality is unambiguously rational. (70%)
You imply that of the billions of varied human personalities, none have rational goal-seeking that can be described by such a utility function. Had you restricted it to most humans, I would agree. Upvoted.
That’s my other major source of uncertainty.
Is this the same as saying that everyone is either risk averse or risk seeking about something?
No; humans are dumb, and even if there were a risk-seeking or risk-neutral person running around, that wouldn’t mean it would necessarily be rational for them to be so.
Could you clarify this?
I think you are saying that human values are not well-described by a utility function (and stressing certain details of the failure), but you seem to explicitly assume a good approximation by a utility function, which makes me uncertain.
Risk neutrality is often used with respect to a resource. But if you just want to say that humans are not risk-neutral about money, there’s no need to mention representations—you can just talk about preferences.
So I think you’re talking about risk neutrality with respect to putative utiles. But to be a utility function, to satisfy the vNM axioms, is exactly risk neutrality about utiles. If one satisfies the axioms, the way one reconstructs the utility function is by risk neutrality with respect to a reference utile (sketched after this comment).
I propose:
I think there is no numeric representation of any human’s values according to which risk neutrality is unambiguously rational.
Am I missing the point?
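For concreteness, here is the reconstruction referred to above, in the standard textbook form; the normalization to reference outcomes x− and x+ is the usual convention, not anything specific to this thread:

    % Standard vNM construction (a sketch; notation is illustrative).
    % Fix reference outcomes x_- \prec x_+ and normalize:
    u(x_-) = 0, \qquad u(x_+) = 1
    % Continuity gives, for each outcome x, a unique p with
    x \sim p \, x_+ \oplus (1 - p) \, x_- , \qquad \text{and one defines } u(x) := p
    % Independence then forces every lottery to be valued at its expected utility,
    U\Big(\sum_i q_i \, x_i\Big) = \sum_i q_i \, u(x_i)
    % which is exactly risk neutrality with respect to utiles.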
I don’t think that human values are well described by a utility function if, by “utility function”, we mean “a function which an optimizing agent will behave risk-neutrally towards”. If we mean something more general by “utility function”, then I am less confident that human values don’t fit into one.
It seems challenging to understand you. What does it mean to behave risk-neutrally towards a function? To behave risk-neutrally, there has to be an environment with some potential risks in it.
...It seems challenging to understand you, too. Everything that optimizes for a function needs an environment to do it in. Indeed, any utility function extracted from a human’s values would make sense only relative to an environment with risks in it, whether the agent trying to optimize that function is a human or not, risk-neutral or not. So what are you asking?
I was trying to get you to clarify what you meant.
As far as I can tell, your reply makes no attempt to clarify :-(
“Utility function” does not normally mean:
“a function which an optimizing agent will behave risk-neutrally towards”.
It means the function which, when maximised, explains an agent’s goal-directed actions.
Apart from the question of why one would redefine it, the proposed redefinition appears incomprehensible, at least to me.
I have concluded to my satisfaction that it would not be an efficient expenditure of our time to continue attempting to understand each other in this matter.
Can you give an example of a non-risk-neutral utility function that can’t be converted to a standard utility function by rescaling?
Bonus points if it doesn’t make you into a money pump.
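For reference, a minimal sketch of the rescaling being asked about, using a stock square-root utility (an illustration, not a claim about anyone’s actual values): the same agent who is risk-averse over dollars is, in the rescaled units, risk-neutral over utiles.

    # A stock risk-averse preference over money, and the rescaling that makes
    # the same agent risk-neutral in utiles. sqrt is illustrative only.
    from math import sqrt

    def u(dollars):
        return sqrt(dollars)  # concave, hence risk-averse over dollars

    sure_thing = u(100)                  # u($100) = 10.0
    gamble = 0.5 * u(200) + 0.5 * u(0)   # expected utility of 50:50 $200/$0, ~7.07
    print(sure_thing > gamble)           # True: prefers the sure $100
    # Measured in utiles, though, the agent values every lottery at exactly its
    # expected utile payout: risk neutrality with respect to utiles, by construction.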
No, because I don’t have a good handle on what magic can and cannot be done with math; when I have tried to do this in the past, it looks like this:
Me: But thus and so and thresholds and ambivalence without indifference and stuff.
Mathemagician: POOF! Look, this thing you don’t understand satisfies your every need.
My guess would be that she meant that there is no physical event that corresponds to a utile that humans want to behave risk-neutrally toward, and/or that if you abstracted human values enough to create such an abstract utile, it would be unrecognizable and unFriendly.
This is at least close, if I understand what you’re saying.
voted up for underconfidence
It’s as low as 70% because I’m Aumanning a little from people who are better at math than me assuring me very confidently that, with math, one can perform such magic as to make risk-neutrality sensible on a human-values-derived utility function. The fact that it looks like it would have to actually be magic prevents me from entertaining the proposition coherently enough simply to accept their authority on the matter.
There may be some confusion here. I don’t think any serious economist has ever argued that risk neutrality is the only rational stance to take regarding risk. What they have argued is that they can draw up utility functions for people who prefer $100 to a 50:50 gamble for $200 or $0. And they can also draw functions for people who prefer the gamble and for people who are neutral. That is, risk (non)neutrality is a value that can be captured in the personal utility function just like (non)neutrality toward artificial sweeteners.
Now, one thing that these economists do assume is at least a little weird. Say you are completely neutral between a vacation on the beach and a vacation in the mountains. According to the economists, any rational person would then be neutral between the beach and a lottery ticket promising a vacation but making it 50:50 whether it will be beach or mountains. Risk aversion in that sense is indeed considered irrational. But, by their definitions, that ‘weird’ preference is not really “risk aversion”.
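A worked version of both paragraphs, with illustrative numbers and three stock utility curves (none of this is from the thread itself):

    # Lottery comparison under three stock utility curves over dollars.
    from math import sqrt

    curves = {
        "risk-averse": sqrt,               # concave
        "risk-neutral": lambda x: x,       # linear
        "risk-seeking": lambda x: x ** 2,  # convex
    }
    lotteries = {
        "sure $100": [(1.0, 100)],
        "50:50 $200/$0": [(0.5, 200), (0.5, 0)],
    }
    for attitude, u in curves.items():
        eu = {name: sum(p * u(x) for p, x in ticket) for name, ticket in lotteries.items()}
        print(attitude, "->", max(eu, key=eu.get))
    # risk-averse -> the sure $100; risk-seeking -> the gamble; the risk-neutral
    # agent gets expected utility 100 either way, so max() breaks the tie arbitrarily.

    # The beach/mountains point: if u(beach) == u(mountains) == v, the 50:50
    # vacation ticket has expected utility 0.5*v + 0.5*v == v, so indifference
    # to the ticket is forced, whatever v is.
    v = 7.0
    print(0.5 * v + 0.5 * v == v)  # True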
“Human-values-derived utility function” is a vague and wooly concept—too vague to be of much use, IMHO.