In VNM utility theory, we assign utility to outcomes, where an outcome is a complete description of what happens, and expected utility to lotteries, where a lottery is a probability distribution over outcomes. They are measured in the same units, but they are not the same thing [...]
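Concretely, on the standard formulation, utility is a function on outcomes and expected utility is derived from it for lotteries:

```latex
u : O \to \mathbb{R}, \qquad EU(L) = \sum_{o \in O} L(o)\, u(o)
```

where $O$ is the set of outcomes and $L(o)$ is the probability the lottery $L$ assigns to outcome $o$.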
Type (I), as best I understand it, seems to consist of assigning utility to a lottery. It’s not so much an axiom violation as a category error.
I am indeed suggesting that an agent can assign utility, not merely expected utility, to a lottery. Note that in this sentence “utility” does not have its technical meaning(s) but simply means raw preference. With that caveat, that may be a better way of putting it than anything I’ve said so far.
You can call that a category error, but I just don’t see the mistake, other than that it doesn’t fit the VNM theory; and that would be a circular argument for its irrationality in this context.
Your point about f*ing human brains gets at my True Rejection, so thanks. And I read the conversation with kilobug. As a result I have a new idea of where you may be coming from, about which I will quote Luke’s decision theory FAQ:
Peterson (2009, ch. 4) explains:
In the indirect approach, which is the dominant approach, the decision maker does not prefer a risky act to another because the expected utility of the former exceeds that of the latter. Instead, the decision maker is asked to state a set of preferences over a set of risky acts… Then, if the set of preferences stated by the decision maker is consistent with a small number of structural constraints (axioms), it can be shown that her decisions can be described as if she were choosing what to do by assigning numerical probabilities and utilities to outcomes and then maximising expected utility...
[In contrast] the direct approach seeks to generate preferences over acts from probabilities and utilities directly assigned to outcomes. In contrast to the indirect approach, it is not assumed that the decision maker has access to a set of preferences over acts before he starts to deliberate.
The axiomatic decision theories listed in section 8.2 all follow the indirect approach. These theories, it might be said, cannot offer any action guidance because they require an agent to state its preferences over acts “up front.” But an agent that states its preferences over acts already knows which act it prefers, so the decision theory can’t offer any action guidance not already present in the agent’s own stated preferences over acts.
Emphasis added. It sounds to me like you favor a direct approach. For you, utility is not an as-if: it is a fundamentally real, interval-scalable quality of our lives. In this scheme, the angst I feel while taking a risk is something I can assign a utility to, then shut up and (re-)calculate the expected utilities. Yes?
If you favor a direct approach, I wonder why you even care to defend the VNM axioms, or what role they play for you.
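(For reference, the representation theorem behind the indirect approach is the VNM theorem: if a preference relation $\succeq$ over lotteries satisfies the four axioms, then there is a utility function $u$ on outcomes, unique up to positive affine transformation, such that

```latex
L_1 \succeq L_2 \iff \sum_{o} L_1(o)\, u(o) \ \ge\ \sum_{o} L_2(o)\, u(o)
```

so the agent chooses as if maximising expected utility, whether or not any such $u$ is psychologically real.)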
I am indeed suggesting that an agent can assign utility, not merely expected utility, to a lottery.
I am suggesting that this is equivalent to suggesting that two points can be parallel. It may be true for your special definition of point, but it’s not true for mine, and it’s not true for the definition the theorems refer to.
Yes, in the real world the lottery is part of the outcome, but that can be factored in by assigning utility to the outcomes; we don’t need to change our definition of utility when the existing one works (reading the rest of your post, I now see you already understand this).
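Here is a minimal sketch of that factoring move; the outcomes, utilities, and angst penalty below are invented purely for illustration:

```python
# Illustrative sketch: fold the experience of risk ("angst") into the
# outcome descriptions, so the standard expected-utility machinery
# applies unchanged. All numbers here are made up for the example.

# Outcomes are complete descriptions, so "winning after a risky gamble"
# and "winning for sure" are different outcomes with different utilities.
UTILITY = {
    "win, having taken a risk":  9.0,   # 10 for winning, minus 1 for angst
    "lose, having taken a risk": -1.0,  # 0 for losing, minus 1 for angst
    "sure thing":                 5.0,
}

def expected_utility(lottery):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * UTILITY[o] for o, p in lottery.items())

risky = {"win, having taken a risk": 0.5, "lose, having taken a risk": 0.5}
safe  = {"sure thing": 1.0}

print(expected_utility(risky))  # 4.0 -- the angst is already priced in
print(expected_utility(safe))   # 5.0 -- so the safe option wins here
```

No new notion of utility is needed: once “having taken a risk” is part of the outcome description, ordinary expected-utility maximisation prices the angst in.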
It sounds to me like you favour a direct approach. For you, utility is not an as-if: it is a fundamentally real, interval-scalable quality of our lives.
I cannot see anything I have said to suggest I believe this. Interpreted descriptively (as a statement about how people actually make decisions), I think it is utter garbage.
Interpreted prescriptively, I think I might believe it. I would at least probably say that while I like the fact that the VNM axioms imply EU theory, I think I would consider EU the obviously correct way to do things even if they did not.
In this scheme, the angst I feel while taking a risk is something I can assign a utility to, then shut up and (re-)calculate the expected utilities.
Yes.
Granted, if decision angst is often playing a large part in your decisions, and in particular costing you other benefits, I would strongly suggest you work on finding ways to get around this. Rightly or wrongly, yelling “stop being so irrational!” at my brain has sometimes worked here for me. I am almost certain there are better techniques.
I wonder why you even care to defend the VNM axioms, or what role they play for you.
I defend them because I think they are correct. What more reason should be required?
Interpreted prescriptively, I think I might believe it. I would at least probably say that while I like the fact that the VNM axioms imply EU theory, I think I would consider EU the obviously correct way to do things even if they did not.
So let me rephrase my earlier, poorly phrased question about what role the VNM axioms play for you. Sometimes (especially when it comes to “rationality”) an “axiom” is held to be obvious, even indubitable: the principle of non-contradiction is often viewed in this light. At other times, say when formulating a mathematical model of an advanced physics theory, the axioms are anything but obvious, but they are endorsed because they seem to work: they are the result of an inference to the best explanation.
I’m wondering, then, whether your view is more like (A) than like (B) below.
(A) Rationality is a sort of attractor in mind-space, and people come closer and closer to being describable by EU theory the more rational they are. Since the VNM axioms are obeyed in these cases, that tends to show that rationality includes following those axioms.
(B) Obviously only a mad person would violate the Axiom of Independence knowing full well they were doing so.
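(For reference, the standard statement of the Axiom of Independence: for all lotteries $A$, $B$, $C$ and every $p \in (0, 1]$,

```latex
A \succeq B \iff pA + (1-p)C \succeq pB + (1-p)C
```

i.e., mixing both sides with a common third lottery must not reverse the preference.)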
Granted, if decision angst is often playing a large part in your decisions, and in particular costing you other benefits, I would strongly suggest you work on finding ways to get around this. Rightly or wrongly, yelling “stop being so irrational!” at my brain has sometimes worked here for me. I am almost certain there are better techniques.
And now we are back to my True Rejection, namely: I don’t think it’s irrational to take decision-angst into account, or to avoid it by avoiding risk rather than just getting psychotherapy so that one can buck up and keep a stiff upper lip. It’s not Spock-like, but it’s not irrational.