Right. And the thing is, if one were to argue that humans are thereby irrational, I would disagree. (Which is to say, I would not assent to defining rationality as constituting, or necessarily containing, adherence to the VNM axioms.)
I tentatively agree. The decision system I tend to model an idealised me as having contains an extra level of abstraction: it generalises the VNM axioms, and the decision theory built on utility maximisation, into something that does allow the kind of system you are advocating (and which I don’t consider intrinsically irrational).
Simply put, if instead of having preferences over world-histories you have preferences over probability distributions of world-histories, then doing the same math and reasoning gives you a different but still clearly defined and abstractly consequentialist way of interacting with lotteries. The agent is then doing something other than maximising the mean of utility; it could, in effect, be maximising the mean subject to satisficing on a maximum probability of utility falling below some threshold (see the sketch below).
That is how an inherently and coherently risk-averse agent (and similar non-mean optimisers) would work.
Such agents are coherent. It doesn’t matter much whether we call them irrational or not. If that is what they want to do, then so be it.
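To make that concrete, here is a minimal sketch in Python of one such rule. Everything in it (the names `choose`, `u_min`, `p_max`, the finite-lottery representation) is an illustrative assumption rather than anything fixed by the discussion: the agent maximises mean utility only among lotteries that keep the probability of utility falling below `u_min` at or under `p_max`.

```python
# Hypothetical sketch of a "maximise the mean, subject to satisficing on
# tail risk" decision rule. Lotteries are finite: lists of
# (probability, utility) pairs.

def mean_utility(lottery):
    return sum(p * u for p, u in lottery)

def tail_probability(lottery, u_min):
    # Total probability of ending up with utility below u_min.
    return sum(p for p, u in lottery if u < u_min)

def choose(lotteries, u_min, p_max):
    admissible = [l for l in lotteries
                  if tail_probability(l, u_min) <= p_max]
    if not admissible:
        # Nothing satisfices; fall back to minimising tail risk.
        return min(lotteries, key=lambda l: tail_probability(l, u_min))
    return max(admissible, key=mean_utility)

# The risky lottery has the higher mean (16 vs 10) but a 20% chance of
# utility below 0, breaching the 10% cap, so the safe lottery is chosen.
safe = [(1.0, 10.0)]
risky = [(0.8, 30.0), (0.2, -40.0)]
assert choose([safe, risky], u_min=0.0, p_max=0.1) == safe
```

Note that such an agent can strictly prefer a lottery with lower mean utility, which is exactly the behaviour a pure expected-utility maximiser cannot exhibit.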
Incidentally, I suspect the axiom I would end up rejecting is continuity (axiom 3), but don’t quote me on that.
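For reference, continuity (in one common numbering) states that for any lotteries ranked $A \succ B \succ C$, some mixture of the best and worst is exactly as good as the middle one:

$$A \succ B \succ C \;\implies\; \exists\, p \in (0,1) \text{ such that } pA + (1-p)C \sim B.$$

Rejecting it permits, for example, lexical preferences in which no probability of the best outcome compensates for any risk of the worst.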
That does seem to be the most likely axiom to be rejected. At least that has been my intuition when I’ve considered how plausible non-expected-utility maximisers seem to think.